A fully local AI chatbot built using Streamlit and Ollama, delivering a ChatGPT-like experience with real-time streaming, zero API cost, and complete privacy.
- 🔐 Fully Offline – No API keys or internet required
- ⚡ Real-Time Streaming – Token-by-token responses
- 💬 Chat Memory – Maintains conversation history
- 🎨 Clean UI – Built with Streamlit chat components
- 🧠 Local LLM – Powered by Ollama running kimi-k2.5
- Frontend/UI: Streamlit
- Backend: Python
- LLM Runtime: Ollama
- Model: kimi-k2.5
.
├── app.py # Main Streamlit application
├── requirements.txt
└── README.md
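Given the stack listed above, the requirements.txt would contain just the two Python dependencies (the exact package set is an assumption based on that stack; version pins are omitted):

```txt
streamlit
ollama
```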
Download and install from: https://ollama.com
Then pull the model:
ollama pull kimi-k2.5

Clone the repository:

git clone https://github.com/your-username/kimi-local-chat.git
cd kimi-local-chat

Install dependencies and launch the app:

pip install -r requirements.txt
streamlit run app.py

- User enters input via the Streamlit UI
- Message is appended to the session-state chat history
- Full history is sent to the local Ollama model
- Model generates the response in streaming mode
- UI updates token by token
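The streaming step above can be sketched without a running Ollama server by standing in a stub generator for the model's token stream. In the real app the stream would come from `ollama.chat(model="kimi-k2.5", messages=..., stream=True)` and be rendered with Streamlit's chat components; the `token_stream` iterator here is a hypothetical placeholder:

```python
def stream_reply(token_stream, history):
    """Consume a token stream, collecting tokens as they arrive and
    appending the completed assistant reply to the chat history."""
    reply_parts = []
    for token in token_stream:
        reply_parts.append(token)   # the UI would update here, token by token
    reply = "".join(reply_parts)
    history.append({"role": "assistant", "content": reply})
    return reply

# Stand-in for the model's streamed output (hypothetical):
tokens = iter(["Hello", ", ", "world", "!"])
history = [{"role": "user", "content": "Say hello"}]
print(stream_reply(tokens, history))
```

The same accumulate-then-store shape is what lets the UI show partial output while still persisting only the finished message in chat memory.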
+-------------------+
| User Input UI |
| (Streamlit Chat) |
+---------+---------+
|
v
+-------------------+
| Session State |
| (Chat History) |
+---------+---------+
|
v
+-------------------+
| Ollama API |
| (Local Model) |
+---------+---------+
|
v
+-------------------+
| Streaming Output |
| Token-by-Token |
+---------+---------+
|
v
+-------------------+
| Streamlit Display |
+-------------------+
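The "Session State (Chat History)" box in the diagram can be sketched as a plain list of role/content messages; in Streamlit this list would live in `st.session_state`. The trimming policy (keep only the last N messages so the prompt sent to Ollama stays bounded) is an illustrative assumption, not taken from the app:

```python
def add_message(messages, role, content, max_messages=20):
    """Append a chat message and drop the oldest entries beyond
    max_messages, bounding the history passed to the model."""
    messages.append({"role": role, "content": content})
    del messages[:-max_messages]  # no-op while the history is short
    return messages

chat = []
add_message(chat, "user", "Hi")
add_message(chat, "assistant", "Hello! How can I help?")
print(len(chat))  # → 2
```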
- Personal AI assistant
- Offline chatbot applications
- Learning LLM integrations
- Privacy-focused AI tools
- 🎙️ Voice input (Whisper)
- 📚 RAG (Document Q&A)
- 🌐 Multi-model support
- 📱 Mobile-friendly UI
Pull requests are welcome! For major changes, please open an issue first.
If you like this project, give it a ⭐ on GitHub!