A beginner-friendly, hands-on project for learning the fundamentals of Generative AI with LangChain and popular LLM providers. Each folder is a self-contained learning module that progressively builds your understanding — from invoking your first LLM to building a fully functional chatbot.
## Modules

| Module | Key Concepts |
|---|---|
| LLMs | Invoking a large language model for the first time |
| ChatModels | Using Chat Models with OpenAI, HuggingFace (cloud & local) |
| EmbeddedModels | Text embeddings, document embeddings & semantic similarity |
| Prompts | Prompt Templates, saving/loading templates, Streamlit UI |
| Chatbot | LangChain messaging, ChatPromptTemplate, MessagesPlaceholder & CLI chatbot |
| StructuredOutput | Structured LLM responses using TypedDict & Pydantic schemas |
| OutputParsers | StrOutputParser, JsonOutputParser, StructuredOutputParser & chains |
| Chains | Simple, Sequential, Parallel & Conditional chains with RunnableBranch |
| Runnables | Why Runnables exist — the old-way problem & the unified .invoke() solution |
| RunnablesV2 | Hands-on Runnable components — Sequence, Parallel, Passthrough, Lambda & Branch |
## Project Structure

```text
genai_1/
├── LLMS/
│   └── llm_demo.py                  # Basic LLM invocation with GPT-3.5-Turbo
├── ChatModels/
│   ├── openaiChatDemo.py            # Chat with OpenAI GPT-4
│   ├── huggingfaceChatDemo.py       # Chat using HuggingFace cloud endpoint
│   └── chatModelLocal.py            # Chat using a local HuggingFace model
├── EmbeddedModels/
│   ├── embeddingOpenai.py           # Generate embeddings for a single query
│   ├── embeddingOpenaiDoc.py        # Generate embeddings for multiple documents
│   └── embeddingDocSimilarity.py    # Document similarity search with cosine similarity
├── prompts/
│   ├── promptGenerator.py           # Create & save a reusable PromptTemplate
│   └── promptUi.py                  # Streamlit UI — Research Paper Summarizer
├── chatbot/
│   ├── messages.py                  # LangChain built-in messaging (System, Human, AI)
│   ├── chatbot.py                   # Interactive CLI chatbot with chat history
│   ├── chatPromptTemplate.py        # ChatPromptTemplate with system & human messages
│   ├── messagePlaceholder.py        # MessagesPlaceholder for injecting chat history
│   └── chatHistory.txt              # Sample chat history for MessagesPlaceholder demo
├── structuredOutput/
│   ├── typeDictDemo.py              # Basic TypedDict usage example
│   ├── structuredOutputTypeDict.py  # Structured output with TypedDict schema
│   ├── pydanticDemo.py              # Basic Pydantic BaseModel usage example
│   └── structuredOutputPydantic.py  # Structured output with Pydantic schema
├── outputParsers/
│   ├── strOutputParserDemo.py       # Chaining prompts without a parser
│   ├── strOutputParser.py           # StrOutputParser with LangChain chains
│   ├── jsonOutputParser.py          # JsonOutputParser for JSON responses
│   └── structureOutputParser.py     # StructuredOutputParser with ResponseSchema
├── chains/
│   ├── simpleChain.py               # Basic prompt | model | parser chain
│   ├── sequentialChain.py           # Multi-step sequential chain (report → summary)
│   ├── parallelChain.py             # RunnableParallel — run branches concurrently & merge
│   └── conditionalChain.py          # RunnableBranch — route on sentiment classification
├── runnables/
│   ├── simpleLLM.py                 # Old-way LLM usage (pre-Runnable, using .predict())
│   ├── chainsProblem.ipynb          # The problem — inconsistent interfaces (.predict, .format, .run)
│   └── chainsSolution.ipynb         # The Runnable solution — unified .invoke() & runnableConnector
├── runnablesV2/
│   ├── runnableSequence.py          # RunnableSequence — multi-step chain (joke → explain)
│   ├── runnableParallel.py          # RunnableParallel — tweet & LinkedIn post concurrently
│   ├── runnablePassthrough.py       # RunnablePassthrough — pass data through unchanged
│   ├── runnableLambda.py            # RunnableLambda — wrap Python functions as Runnables
│   └── runnableBranch.py            # RunnableBranch — conditional summarize by word count
├── requirements.txt
├── template.json                    # Saved prompt template (generated by promptGenerator.py)
└── .env                             # API keys (not committed)
```
## Prerequisites

- Python 3.10+
- API keys for one or more providers (see API Keys below)

## Setup

1. Clone the repository

   ```bash
   git clone https://github.com/<your-username>/genai_1.git
   cd genai_1
   ```

2. Create & activate a virtual environment

   ```bash
   python -m venv venv

   # Windows
   venv\Scripts\activate

   # macOS / Linux
   source venv/bin/activate
   ```

3. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

4. Set up environment variables

   Create a `.env` file in the project root and add your API keys:

   ```bash
   OPENAI_API_KEY=your_openai_api_key
   HUGGINGFACEHUB_API_TOKEN=your_huggingface_api_token
   ```
## 1. LLMs

File: `LLMS/llm_demo.py`

The simplest starting point — invoke OpenAI's gpt-3.5-turbo with a single prompt and print the response.

```bash
python LLMS/llm_demo.py
```
## 2. ChatModels

Files: `ChatModels/`

Learn how to use Chat Models from multiple providers:

| File | What It Does |
|---|---|
| `openaiChatDemo.py` | Chat with OpenAI GPT-4 |
| `huggingfaceChatDemo.py` | Chat via the HuggingFace Inference API (cloud) |
| `chatModelLocal.py` | Chat using a local HuggingFace model (TinyLlama) |

```bash
python ChatModels/openaiChatDemo.py
python ChatModels/huggingfaceChatDemo.py
python ChatModels/chatModelLocal.py
```
## 3. EmbeddedModels

Files: `EmbeddedModels/`

Understand how text is converted into numerical vectors and how to find the most relevant document for a given query.

| File | What It Does |
|---|---|
| `embeddingOpenai.py` | Generate an embedding vector for a single query |
| `embeddingOpenaiDoc.py` | Generate embeddings for multiple documents |
| `embeddingDocSimilarity.py` | Find the most similar document using cosine similarity |

```bash
python EmbeddedModels/embeddingOpenai.py
python EmbeddedModels/embeddingDocSimilarity.py
```
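The ranking in `embeddingDocSimilarity.py` comes down to one formula: cosine similarity between the query vector and each document vector. A dependency-free sketch of that step (toy 3-dimensional vectors stand in for real embedding vectors, which have hundreds or thousands of dimensions; the documents and numbers here are made up):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for document embeddings
docs = {
    "cricket": [0.9, 0.1, 0.0],
    "football": [0.8, 0.2, 0.1],
    "cooking": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]  # embedding of e.g. a sports-related question

# Rank documents by similarity to the query, highest first
ranked = sorted(docs, key=lambda name: cosine_similarity(query, docs[name]),
                reverse=True)
print(ranked[0])  # the most relevant document
```

Identical vectors score 1.0 and orthogonal vectors score 0.0, which is why the highest-scoring document is the most relevant one.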
## 4. Prompts

Files: `prompts/`

Learn how to create structured, reusable Prompt Templates and build a simple web interface:

- `promptGenerator.py` — Define a prompt template with input variables and save it as `template.json`
- `promptUi.py` — A Streamlit web app that loads the saved template and lets you summarize research papers interactively

```bash
# Generate the template
python prompts/promptGenerator.py

# Launch the Streamlit UI
streamlit run prompts/promptUi.py
```
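The save/load round trip is the key idea here: one script serializes a template, another loads and fills it. LangChain handles this with `PromptTemplate.save()` and `load_prompt()`; a dependency-free sketch of the same idea (the file name `template_demo.json` and the field layout are illustrative, not LangChain's exact on-disk schema):

```python
import json

# Define a template with named input variables (mirrors a PromptTemplate)
template = {
    "input_variables": ["paper", "style", "length"],
    "template": "Summarize the paper {paper} in a {style} style, in {length} detail.",
}

# Save it for reuse (promptGenerator.py does this via PromptTemplate.save)
with open("template_demo.json", "w") as f:
    json.dump(template, f, indent=2)

# Later (e.g. in the Streamlit UI), load it and fill in the variables
with open("template_demo.json") as f:
    loaded = json.load(f)

prompt = loaded["template"].format(paper="Attention Is All You Need",
                                   style="beginner-friendly", length="medium")
print(prompt)
```

Separating the template from the code that fills it is what lets the UI reuse one prompt definition for every paper.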
## 5. Chatbot

Files: `chatbot/`

Build a conversational AI chatbot that remembers context, and learn how LangChain handles messages:

- `messages.py` — Understand LangChain's built-in message types (`SystemMessage`, `HumanMessage`, `AIMessage`)
- `chatbot.py` — A fully interactive CLI chatbot that maintains chat history across turns
- `chatPromptTemplate.py` — Use `ChatPromptTemplate` with system & human message placeholders
- `messagePlaceholder.py` — Use `MessagesPlaceholder` to inject prior chat history from a file
- `chatHistory.txt` — Sample conversation data used by the placeholder demo

```bash
# Understand messaging
python chatbot/messages.py

# ChatPromptTemplate demo
python chatbot/chatPromptTemplate.py

# MessagesPlaceholder demo
python chatbot/messagePlaceholder.py

# Run the chatbot (type "exit" to quit)
python chatbot/chatbot.py
```
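What makes `chatbot.py` remember context is simple: every turn appends to a history list, and the whole list is sent to the model each time. A sketch of that loop with a stub in place of the real model call:

```python
# Chat history as (role, content) pairs — LangChain models these roles with
# SystemMessage / HumanMessage / AIMessage objects.
history = [("system", "You are a helpful assistant.")]

def fake_llm(messages):
    """Stub standing in for a real model call; echoes the last user turn."""
    last_user = [content for role, content in messages if role == "human"][-1]
    return f"You said: {last_user}"

def chat_turn(user_input):
    history.append(("human", user_input))
    reply = fake_llm(history)       # real code sends the full history to the LLM
    history.append(("ai", reply))   # the reply joins the history, so context accumulates
    return reply

chat_turn("Hello!")
chat_turn("What did I just say?")
print(len(history))  # system + 2 * (human + ai) = 5 messages
```

Because the full history is resent on every turn, the model can refer back to earlier messages without any special memory mechanism.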
## 6. StructuredOutput

Files: `structuredOutput/`

Learn how to get structured, predictable responses from LLMs instead of free-form text. Define a schema, and LangChain ensures the model's output conforms to it.

- `typeDictDemo.py` — Understand Python's `TypedDict` for defining typed dictionaries
- `structuredOutputTypeDict.py` — Use `TypedDict` + `Annotated` to define a review schema and extract structured data from a product review
- `pydanticDemo.py` — Understand Pydantic's `BaseModel` for data validation
- `structuredOutputPydantic.py` — Use Pydantic `BaseModel` + `Field` to define a review schema with field descriptions

```bash
# TypedDict basics
python structuredOutput/typeDictDemo.py

# Structured output with TypedDict
python structuredOutput/structuredOutputTypeDict.py

# Pydantic basics
python structuredOutput/pydanticDemo.py

# Structured output with Pydantic
python structuredOutput/structuredOutputPydantic.py
```
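The schema half of this module needs no LLM at all. A `TypedDict` with `Annotated` field descriptions is the kind of schema `structuredOutputTypeDict.py` hands to the model (the fields below are illustrative, not necessarily the repo's exact ones):

```python
from typing import Annotated, TypedDict, get_type_hints

class Review(TypedDict):
    """Schema for structured review extraction (illustrative fields)."""
    summary: Annotated[str, "A brief one-line summary of the review"]
    sentiment: Annotated[str, "Either 'positive' or 'negative'"]
    pros: Annotated[list[str], "Things the reviewer liked"]

# A structured LLM response would be a plain dict conforming to this schema:
result: Review = {
    "summary": "Great battery, mediocre camera.",
    "sentiment": "positive",
    "pros": ["battery life", "build quality"],
}

# The field descriptions are introspectable — this is how a library can
# read them and include them in the instructions it sends to the model.
hints = get_type_hints(Review, include_extras=True)
print(hints["sentiment"].__metadata__[0])
```

The descriptions attached via `Annotated` are what guide the model toward valid values, e.g. restricting `sentiment` to two options.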
## 7. OutputParsers

Files: `outputParsers/`

Learn how to parse and transform LLM responses using LangChain's output parsers, and how to chain multiple prompts together:

- `strOutputParserDemo.py` — Chain two prompts sequentially (generate text → summarize) without using a parser
- `strOutputParser.py` — Use `StrOutputParser` to build a clean LangChain chain (template | model | parser)
- `jsonOutputParser.py` — Use `JsonOutputParser` to get structured JSON output from the model
- `structureOutputParser.py` — Use `StructuredOutputParser` with `ResponseSchema` to define expected output fields

```bash
# Chaining prompts without a parser
python outputParsers/strOutputParserDemo.py

# StrOutputParser with chains
python outputParsers/strOutputParser.py

# JsonOutputParser
python outputParsers/jsonOutputParser.py

# StructuredOutputParser with ResponseSchema
python outputParsers/structureOutputParser.py
```
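An output parser is essentially a post-processing step on the model's raw text. A simplified stand-in for the JSON case: strip any markdown fence the model wrapped around its answer, then `json.loads` it (the real `JsonOutputParser` does more, such as handling partial output while streaming):

```python
import json

def parse_json_output(text: str) -> dict:
    """Minimal stand-in for JsonOutputParser: fence-strip, then parse."""
    cleaned = text.strip()
    if cleaned.startswith("```"):
        # Drop the opening fence line (e.g. ```json) and the closing fence
        cleaned = cleaned.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(cleaned)

# Typical raw LLM response with a markdown fence around the JSON
raw = '```json\n{"name": "Ada Lovelace", "born": 1815}\n```'
data = parse_json_output(raw)
print(data["name"], data["born"])
```

Because the parser returns a real `dict` rather than a string, downstream steps in a chain can index into fields directly.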
## 8. Chains

Files: `chains/`

Learn how to compose LangChain chains — from a basic single-step chain to advanced parallel and conditional routing:

| File | What It Does |
|---|---|
| `simpleChain.py` | Basic prompt → model → parser chain |
| `sequentialChain.py` | Sequential chain — generate a report, then summarize it |
| `parallelChain.py` | `RunnableParallel` — run notes & quiz branches concurrently, then merge |
| `conditionalChain.py` | `RunnableBranch` — classify sentiment and route to a positive/negative handler |

```bash
# Simple chain
python chains/simpleChain.py

# Sequential chain (report → summary)
python chains/sequentialChain.py

# Parallel chain (notes + quiz → merged doc)
python chains/parallelChain.py

# Conditional chain (sentiment-based routing)
python chains/conditionalChain.py
```
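The shape of `parallelChain.py` is easy to see without any LLM: the same input feeds two independent branches, and their results are merged. A plain-function sketch (the branch bodies are stubs standing in for LLM prompts; `RunnableParallel` additionally runs the branches concurrently, while this sketch runs them one after another):

```python
def make_notes(text: str) -> str:
    # Stand-in for an LLM prompt like "Generate short notes from: {text}"
    return f"NOTES({text[:20]}...)"

def make_quiz(text: str) -> str:
    # Stand-in for "Generate quiz questions from: {text}"
    return f"QUIZ({text[:20]}...)"

def parallel_chain(text: str) -> dict:
    # Like RunnableParallel: each key maps to a branch, and every
    # branch receives the same input
    branches = {"notes": make_notes, "quiz": make_quiz}
    return {name: fn(text) for name, fn in branches.items()}

def merge(result: dict) -> str:
    # Final step: merge both branch outputs into one document
    return result["notes"] + "\n" + result["quiz"]

doc = merge(parallel_chain("Support vector machines are supervised models."))
print(doc)
```

The merge step is just another link in the chain, which is why the parallel block composes cleanly with everything downstream.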
## 9. Runnables

Files: `runnables/`

Understand the motivation behind Runnables — the core abstraction in modern LangChain. These files walk through the problem the LangChain team faced and how they solved it:

| File | What It Does |
|---|---|
| `simpleLLM.py` | The old-way LLM app — uses `.predict()` & manual `.format()` (no unified interface) |
| `chainsProblem.ipynb` | The problem — `fakeLLM`, `fakePromptTemplate`, `fakeLLMChain` each have different APIs |
| `chainsSolution.ipynb` | The solution — a `Runnable` ABC with a unified `.invoke()` & `runnableConnector` |

The Core Idea:

- Problem (`chainsProblem.ipynb`): In early LangChain, every component had a different method — LLMs used `.predict()`, prompts used `.format()`, chains used `.run()`. This made chaining components inconsistent and hard to compose.
- Solution (`chainsSolution.ipynb`): Define a `Runnable` abstract base class where every component implements `.invoke()`. A `runnableConnector` can then pipe the output of one Runnable into the next — enabling the modern `prompt | model | parser` chain syntax.

```bash
# Old-way LLM usage
python runnables/simpleLLM.py

# Open the notebooks to explore the problem & solution:
#   chainsProblem.ipynb  — inconsistent interfaces
#   chainsSolution.ipynb — Runnable with unified .invoke()
```
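The solution notebook's core pattern fits in a few lines. This sketch mirrors it with fake components (the class names follow the `runnableConnector` idea described above, capitalized per Python convention; the notebooks' exact identifiers may differ):

```python
from abc import ABC, abstractmethod

class Runnable(ABC):
    """Every component exposes the same method — that is the whole fix."""
    @abstractmethod
    def invoke(self, input_data):
        ...

class FakePromptTemplate(Runnable):
    def __init__(self, template):
        self.template = template
    def invoke(self, input_data):   # was .format() in the old API
        return self.template.format(**input_data)

class FakeLLM(Runnable):
    def invoke(self, input_data):   # was .predict() in the old API
        return f"LLM response to: {input_data}"

class RunnableConnector(Runnable):
    """Pipe each Runnable's output into the next — the idea behind |."""
    def __init__(self, steps):
        self.steps = steps
    def invoke(self, input_data):
        for step in self.steps:
            input_data = step.invoke(input_data)
        return input_data

chain = RunnableConnector([FakePromptTemplate("Tell a joke about {topic}"),
                           FakeLLM()])
print(chain.invoke({"topic": "cricket"}))
```

Because the connector is itself a `Runnable`, chains nest inside other chains for free — the property that makes `prompt | model | parser` composition possible.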
## 10. RunnablesV2

Files: `runnablesV2/`

Now that you understand why Runnables exist (module 9), this module puts each Runnable component to work in practice:

| File | What It Does |
|---|---|
| `runnableSequence.py` | `RunnableSequence` — chain steps together (generate a joke → explain it) |
| `runnableParallel.py` | `RunnableParallel` — generate a tweet & a LinkedIn post concurrently |
| `runnablePassthrough.py` | `RunnablePassthrough` — pass data through unchanged alongside parallel transforms |
| `runnableLambda.py` | `RunnableLambda` — wrap a custom Python function (word counter) as a Runnable |
| `runnableBranch.py` | `RunnableBranch` — conditionally summarize a report if it exceeds 300 words |
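The routing logic in `runnableBranch.py`, stripped down to plain Python (the real file uses LangChain's `RunnableBranch`, which takes (condition, runnable) pairs plus a default branch; the summarizer here is a stub):

```python
def summarize(report: str) -> str:
    # Stand-in for an LLM summarization step
    return " ".join(report.split()[:30]) + " ..."

def passthrough(report: str) -> str:
    # Short reports are returned unchanged (the RunnablePassthrough role)
    return report

def branch(report: str) -> str:
    # Like RunnableBranch: the first matching (condition, runnable) pair
    # wins, and passthrough acts as the default branch
    conditions = [(lambda r: len(r.split()) > 300, summarize)]
    for condition, runnable in conditions:
        if condition(report):
            return runnable(report)
    return passthrough(report)

short_report = "Quarterly numbers look stable."
long_report = "word " * 400

print(branch(short_report) == short_report)  # short report passes through
print(len(branch(long_report).split()))      # long report gets summarized
```

More (condition, runnable) pairs can be appended to route on other criteria, e.g. sentiment, as `conditionalChain.py` in module 8 does.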
```bash
# RunnableSequence — joke → explanation
python runnablesV2/runnableSequence.py

# RunnableParallel — tweet + LinkedIn post
python runnablesV2/runnableParallel.py

# RunnablePassthrough — preserve original output
python runnablesV2/runnablePassthrough.py

# RunnableLambda — custom function in a chain
python runnablesV2/runnableLambda.py

# RunnableBranch — conditional summarization
python runnablesV2/runnableBranch.py
```

## API Keys

| Provider | Environment Variable | Get Your Key |
|---|---|---|
| OpenAI | `OPENAI_API_KEY` | platform.openai.com |
| HuggingFace | `HUGGINGFACEHUB_API_TOKEN` | huggingface.co/settings/tokens |
## Tech Stack

- LangChain — LLM application framework
- OpenAI — GPT-3.5 / GPT-4 models & embeddings
- HuggingFace — Open-source models (cloud & local)
- Pydantic — Data validation & structured output schemas
- Streamlit — Rapid web UI prototyping
- scikit-learn — Cosine similarity calculations
- python-dotenv — Environment variable management
## License

This project is open source and available for learning purposes.
⭐ If this repo helped you learn something new, consider giving it a star!