DabeerAzeez/ollama-improv

Ollama Improv Partner

A command-line interface for practicing improv comedy with a local AI model using Ollama. Collaborate with an AI improv partner that strictly follows the "Yes, and" principle in real-time dialogue.

Features

  • 🎭 AI improv partner responding with SHORT, punchy lines (max 2 sentences)
  • 🧠 Animated spinner thinking indicator with smooth braille character animation
  • 💬 Maintains conversation history for coherent multi-turn scenes
  • 🎯 Strict system prompts enforce stage partner perspective and dialogue-only responses
  • 🌈 Colored CLI output: Green for user input, Magenta for AI partner, Light blue for thinking state
  • 🔄 Easy scene restart with the RESTART keyword
  • ⚡ Real-time token streaming for live response display
  • ✅ Professional code structure with clear class/function organization

Prerequisites

  1. Ollama installed and running - Download from ollama.ai

    • Must have a local model available (configured as MODEL_NAME in main.py)
    • Start Ollama server before running: ollama serve
  2. Python 3.8+ with virtual environment activated

    # Windows PowerShell
    .\venv\Scripts\Activate.ps1

    # macOS/Linux
    source venv/bin/activate

Installation

  1. Install Python dependencies:

    pip install -r requirements.txt

    This installs: ollama, colorama

  2. Update MODEL_NAME in main.py (optional):

    • Default is set to improv-dolphin
    • Change the MODEL_NAME constant to match your local Ollama model

Usage

Starting the Application

python main.py

Example Conversation

🎭 OLLAMA IMPROV PARTNER 🎭
═══════════════════════════════════════════════════════════════════

Welcome to your AI Improv Partner!

How to use:
1. Type your opening line to start a new scene
2. I'll respond with a short improv line (max 2 sentences)
3. Keep the scene going back and forth
4. Type 'RESTART' at any time to start a fresh conversation
5. Type 'QUIT' to exit

───────────────────────────────────────────────────────────────────

You: I can't believe apples are so expensive these days.

Partner: (thinking ⠋)
Partner: (thinking ⠙)
Partner: I know! Just paid $20 for a block of cheese at the farmer's market.

You: That's outrageous!

Partner: (thinking ⠹)
Partner: We should start growing our own food. I'm converting my entire apartment into a garden.

You: RESTART

🎬 Scene reset! Start a new scene whenever you're ready.

Commands

  • RESTART - End current scene and start a fresh conversation
  • QUIT - Exit the application
  • Press Ctrl+C - Emergency exit

How It Works

System Architecture

User Input
    ↓
User Message Added to Conversation History
    ↓
Start Animated Thinking Spinner (background thread)
    ↓
API Call to Ollama Model with System Constraints
    ↓
First Token Arrives → Stop Spinner, Stream Response in Real-Time
    ↓
Display Each Token As It Arrives
    ↓
Response Added to History → Ready for Next Input
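The streaming step in the flow above can be sketched as follows. This is a minimal illustration rather than the repository's actual code: `stream_response()` is a helper invented for this sketch, written to consume any iterable shaped like the `ollama` Python client's streaming chunks.

```python
import sys

def stream_response(chunks):
    """Print streamed chunks token by token and return the full reply.

    `chunks` is any iterable of dicts shaped like the ollama client's
    streaming output: {"message": {"content": "..."}}.
    """
    reply = []
    for chunk in chunks:
        token = chunk["message"]["content"]
        sys.stdout.write(token)   # display each token as soon as it arrives
        sys.stdout.flush()
        reply.append(token)
    sys.stdout.write("\n")
    return "".join(reply)
```

In the real application the iterable would come from the client, e.g. `ollama.chat(model=MODEL_NAME, messages=history, stream=True)`, and the returned string would be appended to the conversation history.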

Key Components

  1. ThinkingAnimator (Class) - Manages animated thinking spinner with braille characters in a daemon thread
  2. initialize_conversation() - Creates conversation history with strict system prompt
  3. get_ai_response() - Handles real-time streaming to Ollama, manages animator lifecycle
  4. chat_loop() - Main interaction loop handling user commands (RESTART, QUIT)
  5. main() - Orchestrates app startup and scene management
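A daemon-thread spinner like the one described in component 1 might look like the sketch below. The class name matches the README, but the method names and details here are illustrative, not the repository's actual implementation.

```python
import itertools
import sys
import threading
import time

class ThinkingAnimator:
    """Show an animated braille spinner until stop() is called."""

    FRAMES = "⠋⠙⠹⠸⠼⠴⠦⠧⠇⠏"

    def __init__(self, label="Partner: (thinking "):
        self._label = label
        self._stop_event = threading.Event()
        self._thread = None

    def start(self):
        self._stop_event.clear()
        # daemon=True so a stuck spinner never blocks interpreter shutdown
        self._thread = threading.Thread(target=self._spin, daemon=True)
        self._thread.start()

    def _spin(self):
        for frame in itertools.cycle(self.FRAMES):
            if self._stop_event.is_set():
                break
            sys.stdout.write(f"\r{self._label}{frame})")
            sys.stdout.flush()
            time.sleep(0.1)  # update every 100 ms

    def stop(self):
        self._stop_event.set()
        if self._thread is not None:
            self._thread.join()
        sys.stdout.write("\r")  # clear the line for the real response
        sys.stdout.flush()
```

The `threading.Event` makes `stop()` safe to call from the main thread while `_spin()` is mid-cycle; the join guarantees the spinner line is cleared before the first response token is printed.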

Implementation Highlights

  • Real-time Streaming: Each token from the AI is displayed immediately using sys.stdout.write() and flush()
  • Animated Thinking: Background daemon thread updates spinner every 100ms with smooth braille characters
  • System Prompt Enforcement: Strict constraints are passed to the model on every request via the system role
  • Scene Partner Perspective: System prompt emphasizes dialogue-only, no meta-commentary, different character
  • Colored Output: Terminal colors distinguish user (green), partner (magenta), and thinking state (light blue)
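The color scheme maps onto plain ANSI escape codes, which `colorama` wraps for Windows compatibility. A minimal sketch (the actual code would use colorama's `Fore` constants; `colored` is a helper written for this example):

```python
# ANSI escape codes: green for user, magenta for partner, light blue for thinking
GREEN, MAGENTA, LIGHTBLUE, RESET = "\033[32m", "\033[35m", "\033[94m", "\033[0m"

def colored(text, color):
    """Wrap text in a color code and reset afterwards."""
    return f"{color}{text}{RESET}"

user_prompt = colored("You: ", GREEN)
partner_prompt = colored("Partner: ", MAGENTA)
```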

Configuration

Edit these constants at the top of main.py to customize behavior:

MODEL_NAME = "improv-dolphin"  # Your local model name
RESTART_KEYWORD = "RESTART"     # Keyword to reset the scene
MAX_RESPONSE_LENGTH = 150       # Max chars (for reference, currently not enforced)

To modify the AI's behavior, edit the system prompt in the initialize_conversation() function.
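An illustrative version of what that function might contain (the exact prompt wording lives in main.py; this sketch only shows the shape, a history list seeded with a `system` message):

```python
def initialize_conversation():
    """Return a fresh conversation history seeded with the strict system prompt."""
    system_prompt = (
        "You are a scene partner in improv comedy. Always follow the "
        "'Yes, and' principle: accept whatever your partner establishes "
        "and build on it. Respond ONLY with in-character dialogue, at most "
        "2 sentences. No narration, no meta-commentary, and play a "
        "different character than your partner."
    )
    return [{"role": "system", "content": system_prompt}]
```

Because the history starts with this message and every request sends the full history, the constraints reach the model on each turn.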

Troubleshooting

"Error connecting to Ollama model"

  • Ensure Ollama is running: ollama serve
  • Check that MODEL_NAME in main.py matches an installed model
  • Verify the model exists: ollama list
  • Check network connectivity to localhost:11434
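The last two checks can be scripted by probing Ollama's /api/tags endpoint, which lists installed models. This sketch assumes the default server address; `check_ollama` is a helper written for this example, not part of the project.

```python
import json
import urllib.request

def check_ollama(base_url="http://localhost:11434"):
    """Return the list of installed model names, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=3) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except OSError:
        return None

models = check_ollama()
if models is None:
    print("Ollama is not reachable - run `ollama serve` first.")
```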

AI responses are too long or don't follow constraints

  • The system prompt requests a maximum of 2 sentences and dialogue-only responses, but a prompt cannot guarantee compliance
  • If the model doesn't comply, try a different model or one fine-tuned for instruction-following
  • Dolphin and Mistral models tend to follow system prompts well

Thinking spinner doesn't appear smooth

  • This is a terminal rendering issue, not a code bug
  • Works best in modern terminals (Windows Terminal, iTerm2, etc.)
  • If generation seems slow rather than the animation choppy, run the model with ollama run <model> --verbose to inspect response timing

Code Structure & Style

The code follows professional Python standards:

  • Organized sections: Imports → Constants → Classes → Helper Functions → Main Functions → Entry Point
  • ThinkingAnimator class: Encapsulates threading and animation logic
  • Clear separation: Library code vs. application logic
  • Comprehensive docstrings: All functions and classes documented
  • PEP8 compliant: Standard formatting and naming conventions
  • Type hints: Function signatures include return types

Future Enhancements

  • Save conversation history to file
  • Support multiple improv game types
  • Add user statistics (scenes completed, etc.)
  • Scene suggestions based on curriculum
  • Web UI version

License

Open source - feel free to modify and improve!

Support

For issues or questions:

  1. Verify Ollama is running
  2. Check the troubleshooting section
  3. Review error messages in the console output
