# Local LLM Chat App (Ollama + Llama 3 + LangChain)
A simple, fast, and clean AI chatbot built using:

- Ollama (local LLM runner)
- Llama 3 model
- LangChain LCEL
- Streamlit UI
This project shows how to run a real offline LLM chatbot with a modern, chat-style interface.
## Features
- Runs Llama 3 locally using Ollama
- Clean chat UI built with Streamlit
- Modern LangChain LCEL pipeline (`prompt | llm | parser`)
- Session-based chat history
- Sidebar with model selection
- Instant, smooth responses
- Safe environment variable handling using `.env`
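The LCEL pipeline listed above chains components with the `|` operator. As a rough mental model, here is a toy sketch in plain Python; the `Step` class and the fake prompt/llm/parser below are hypothetical stand-ins, not the real LangChain Runnables, and exist only to show how `|` composes stages left to right.

```python
# Toy illustration of the LCEL "prompt | llm | parser" composition pattern.
# These classes are hypothetical stand-ins for LangChain Runnables.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Chaining two steps yields a new step that runs them in sequence,
        # mirroring how LCEL Runnables compose with the | operator.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Fake stages: format a prompt, produce a mock model message, extract text.
prompt = Step(lambda question: f"You are a helpful assistant. Q: {question}")
llm = Step(lambda text: {"content": f"(model reply to: {text})"})
parser = Step(lambda msg: msg["content"])

chain = prompt | llm | parser
print(chain.invoke("What is Ollama?"))
```

In the real app the stages are a `ChatPromptTemplate`, a Llama 3 chat model served by Ollama, and an output parser, but the composition idea is the same.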
## Project Structure

```
local-llm-chat-app/
│
├── app.py
├── README.md
├── requirements.txt
├── .env.example
└── .gitignore
```
## Installation

### 1. Create a virtual environment (optional but recommended)

```bash
python -m venv venv
source venv/bin/activate   # Mac/Linux
venv\Scripts\activate      # Windows
```
### 2. Install dependencies

```bash
pip install -r requirements.txt
```
## Environment Variables
Copy `.env.example` and rename it to `.env`, then add your keys:

```
LANGCHAIN_API_KEY=your_langchain_api_key_here
LANGCHAIN_PROJECT=your_project_name_here
```

(These are optional; the app works even without LangSmith.)
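For illustration, here is a minimal stdlib-only sketch of what a `.env` loader does (the real app would use `load_dotenv()` from the `python-dotenv` package); the `load_env` helper below is a simplified stand-in, not the library's implementation.

```python
import os
from pathlib import Path

def load_env(path=".env"):
    """Simplified stand-in for python-dotenv's load_dotenv():
    reads KEY=VALUE lines into os.environ without overwriting
    variables that are already set."""
    env_file = Path(path)
    if not env_file.exists():
        return  # missing .env is fine; the keys are optional
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())

# Usage sketch: the app can then read the optional LangSmith keys.
load_env()
api_key = os.getenv("LANGCHAIN_API_KEY")  # None if not configured
```

Because the keys are read with `os.getenv`, the app simply gets `None` when tracing is not configured and keeps working.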
## Install & Run Ollama
Install Ollama from: https://ollama.com

Pull the llama3 model:

```bash
ollama pull llama3
```

Or pull a smaller model instead:

```bash
ollama pull gemma:2b
```

Start Streamlit:

```bash
streamlit run app.py
```
Your LLM chatbot will open in the browser.
## Technologies Used
- Python
- LangChain
- LangChain Ollama integration
- Streamlit
- Llama 3 (via Ollama)
- dotenv (environment management)
## Author

**Shehjad Patel**
AI Engineer | LangChain | LLMs | Streamlit | Ollama