A sophisticated Telegram bot that seamlessly integrates with the Ollama API to provide AI-powered conversations with multiple personas, conversation memory, and customizable settings.
- 🤖 Multiple AI Personas: Switch between different conversation styles (assistant, creative, technical, casual, uncensored)
- 💾 Conversation Memory: Maintains context throughout your chat session
- ⚙️ Customizable Settings: Adjust conversation history length and other parameters
- 🔄 Data Persistence: Automatically saves conversations and settings
- 🛡️ Robust Error Handling: Graceful handling of API errors and timeouts
- 🎯 User-Friendly Interface: Intuitive menu system with inline keyboards
- 🔧 Modular Architecture: Clean, maintainable codebase structure
- Node.js (v18 or higher)
- Ollama installed and running locally
- A Telegram bot token from @BotFather
- Clone the repository:

```bash
git clone https://github.com/AsbDaryaee/Ollama-Telegram-Bot.git
cd telegram-ollama-bot
```

- Install dependencies:

```bash
npm install
```

- Configure your environment: edit `config/config.js` with your settings:

```js
module.exports = {
  BOT_TOKEN: "your-telegram-bot-token",
  OLLAMA_URL: "http://localhost:11434",
  MODEL_NAME: "llama2",
  MAX_HISTORY: 10,
  REQUEST_TIMEOUT: 30000,
};
```

- Ensure Ollama is running:

```bash
ollama serve
```

- Start the bot:

```bash
npm start
```

Available commands:

- `/start` - Initialize the bot and show the main menu
- `/stop` - Stop the current conversation and clear history
- `/menu` - Display the main menu at any time
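The three commands above can be sketched as a small router that maps an incoming message to an action; `routeCommand` is a hypothetical helper for illustration, not the project's actual code:

```javascript
// Hypothetical command router: maps raw Telegram message text to one of
// the bot's commands, treating anything else as a chat prompt.
function routeCommand(text) {
  const command = (text || "").trim().split(/\s+/)[0];
  switch (command) {
    case "/start": return { action: "start" }; // initialize and show main menu
    case "/stop":  return { action: "stop" };  // stop conversation, clear history
    case "/menu":  return { action: "menu" };  // re-display the main menu
    default:       return { action: "chat", prompt: text };
  }
}
```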
- 💬 Chat with AI - Start or continue a conversation
- 🎭 Change Persona - Switch between different AI personalities
- ⚙️ Settings - Adjust bot configuration
- ℹ️ Help - Get usage instructions
- 🚪 Exit - Close the menu
- 👨‍💼 Assistant: Professional and helpful responses
- 🎨 Creative: Imaginative and artistic responses
- 🔧 Technical: Detailed technical explanations
- 😊 Casual: Friendly and conversational tone
- 🔓 Uncensored: Direct responses without content filtering
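Under the hood, a persona is essentially a different system prompt sent ahead of the conversation. A minimal sketch, with hypothetical prompt wording (the project's actual prompts live in `bot_data/system_prompts.json`):

```javascript
// Hypothetical persona table: each persona maps to a system prompt.
const PERSONAS = {
  assistant:  "You are a professional, helpful assistant.",
  creative:   "You are an imaginative, artistic companion.",
  technical:  "You give detailed, precise technical explanations.",
  casual:     "You chat in a friendly, conversational tone.",
  uncensored: "You answer directly and without content filtering.",
};

// Unknown persona names fall back to the default assistant.
function systemPromptFor(persona) {
  return PERSONAS[persona] || PERSONAS.assistant;
}
```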
- Max History: Control how many previous messages the AI remembers (1-50)
- Model Selection: Choose different Ollama models (if available)
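Enforcing the Max History setting amounts to keeping only the most recent messages before each request; a sketch with a hypothetical `trimHistory` helper:

```javascript
// Hypothetical history trimmer: keeps only the most recent messages so
// the prompt sent to Ollama stays bounded.
function trimHistory(messages, maxHistory) {
  const limit = Math.min(Math.max(maxHistory, 1), 50); // clamp to the 1-50 range
  return messages.slice(-limit); // keep the newest `limit` messages
}
```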
```
User: /start
Bot:  [Main menu with options]

User: [Clicks "💬 Chat with AI"]
Bot:  You're chatting with the Assistant persona. What would you like to know?

User: Explain quantum computing
Bot:  [Detailed explanation based on the current persona...]

User: [Clicks "🎭 Change Persona"]
Bot:  [Persona selection menu]

User: [Selects "🎨 Creative"]
Bot:  Persona changed to Creative! 🎨

User: Write a poem about AI
Bot:  [Creative poem about artificial intelligence...]
```
You can also use environment variables instead of editing the config file:

```bash
export BOT_TOKEN="your-telegram-bot-token"
export OLLAMA_URL="http://localhost:11434"
export MODEL_NAME="llama2"
export MAX_HISTORY="10"
```

The bot supports any model available in your Ollama installation. Popular options include:

- `llama2` - General-purpose model
- `codellama` - Code-focused model
- `mistral` - Fast and efficient model
- `neural-chat` - Conversational model

To use a different model, update the `MODEL_NAME` in your configuration.
The bot automatically manages data persistence:
- Conversations: Stored in `bot_data/conversations.json`
- System Prompts: Stored in `bot_data/system_prompts.json`
- Auto-save: Data is saved after each interaction
- Graceful Shutdown: Ensures data integrity when stopping the bot
The bot includes comprehensive error handling for:
- API Connection Issues: Automatic retry with user notification
- Model Loading: Clear messages when models are unavailable
- Timeout Handling: Configurable request timeouts
- Data Corruption: Automatic recovery and backup creation
- Network Issues: Graceful degradation and user feedback
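The configurable timeout can be sketched as a generic wrapper that races a request against a timer; `withTimeout` is a hypothetical helper mirroring the `REQUEST_TIMEOUT` setting, not the project's actual code:

```javascript
// Hypothetical timeout wrapper: rejects if the underlying request
// takes longer than `ms` milliseconds (cf. REQUEST_TIMEOUT in config).
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error("Request timed out")), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```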
```bash
node ./index.js
```

- Handlers: Process different types of Telegram updates
- Keyboards: Generate inline keyboard layouts
- Utils: Reusable utility functions
- Data: Manage persistent storage operations
- Config: Centralized configuration management
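Tying the pieces together, a request body for Ollama's `/api/chat` endpoint could be assembled roughly as follows; `buildChatRequest` is a hypothetical helper for illustration, not the project's actual code:

```javascript
// Hypothetical sketch of the payload POSTed to {OLLAMA_URL}/api/chat:
// persona system prompt, trimmed history, then the new user message.
function buildChatRequest(model, systemPrompt, history, userMessage) {
  return {
    model,
    stream: false, // ask for a single JSON response instead of a token stream
    messages: [
      { role: "system", content: systemPrompt }, // active persona
      ...history,                                // conversation memory
      { role: "user", content: userMessage },
    ],
  };
}
```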
- Bot not responding:
  - Verify Ollama is running: `curl http://localhost:11434/api/tags`
  - Check bot token validity
  - Review console logs for errors
- Memory issues:
  - Reduce the `MAX_HISTORY` setting
  - Clear conversation data with `/stop`
- API timeouts:
  - Increase `REQUEST_TIMEOUT` in the config
  - Check Ollama model performance
- Permission errors:
  - Ensure write permissions for the `bot_data/` directory
  - Check file ownership and access rights
- Fork the repository
- Create a feature branch: `git checkout -b feature-name`
- Make your changes following the existing code style
- Test thoroughly with different personas and scenarios
- Submit a pull request with a clear description
This project is licensed under the MIT License - see the LICENSE file for details.
For issues and questions:
- Check the troubleshooting section above
- Review the Ollama documentation
- Open an issue in the repository
Note: This bot requires a local Ollama installation. Make sure Ollama is properly configured and running before starting the bot.