Open Council - Multi-Agent LLM Debate Orchestrator using LangGraph + LiteLLM.
Run a council of AI models in your terminal to debate, analyze, and stress-test ideas.
Open Council is a CLI-first orchestrator where multiple agents collaborate: some build the plan, others attack the risks, and a final judge synthesizes a decisive answer.
Think:
- Research assistant that cross-checks itself
- Devil's advocate for your architecture
- AI council that debates before answering
All from a single command:
```bash
council --mode odin
```

Optional transparency flag:

```bash
council --mode odin --show-drafts
```

You ask:
"Design a scalable RAG system for real-time updates."
Open Council:
- Worker models propose practical architectures (indexing, retrieval, caching)
- Critic models surface failure modes (latency, consistency, cost)
- Judge model resolves conflict into one actionable verdict
You get:
- A structured recommendation
- The key trade-off to manage
- Immediate next steps
Most LLM tools are single-model chats that break outright when their one provider goes down.
Open Council is built for real-world reliability:
- Multi-agent debate before final answer
- Deterministic graph workflows using LangGraph (not prompt spaghetti)
- Automatic fallback routing: Groq -> OpenRouter -> Gemini -> local Ollama
- Resilient CLI UX with setup guidance and graceful interrupt handling
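As a rough illustration of the fallback idea (not the actual `open_council.core.llm` implementation, and with illustrative model names), a tiered chain can be tried in order until one provider answers:

```python
import asyncio

# Ordered provider preference, mirroring the Groq -> OpenRouter -> Gemini ->
# Ollama chain above. Model names here are illustrative placeholders.
FALLBACK_CHAIN = [
    "groq/llama-3.1-70b-versatile",
    "openrouter/meta-llama/llama-3.1-70b-instruct",
    "gemini/gemini-1.5-flash",
    "ollama/llama3.1",
]

async def complete_with_fallback(prompt, call, chain=FALLBACK_CHAIN):
    """Try each provider in order; return the first successful response."""
    last_error = None
    for model in chain:
        try:
            return await call(model, prompt)
        except Exception as err:  # rate limit, outage, bad key, etc.
            last_error = err
    raise RuntimeError(f"all providers failed: {last_error}")

# Demo with a stub standing in for a real LiteLLM completion call:
async def stub_call(model, prompt):
    if model.startswith("groq/"):
        raise ConnectionError("Groq unavailable")
    return f"{model} answered"

result = asyncio.run(complete_with_fallback("Design a RAG system", stub_call))
print(result)  # the OpenRouter entry answers once Groq fails
```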
Standard AI Agents: one answer from one model.
Open Council: multiple perspectives, explicit trade-offs, and a final synthesis designed for high-stakes decisions and system design reviews.
Open Council operates through specialized agentic graphs:
- Odin (Executive Mode): [Available in MVP] Parallel workers (Muninn and Huginn) + Odin judge synthesis.
- Artemis (Academic Mode): [Coming Soon] Iterative citation-heavy research loops.
- Leviathan (Devil's Advocate): [Coming Soon] Aggressive architecture and risk stress-testing.
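The Odin shape (two parallel workers feeding a judge) can be sketched in plain asyncio; this is an illustration of the control flow only, with stubbed agents, not the project's LangGraph graph:

```python
import asyncio

async def muninn(question: str) -> str:
    # Thesis/constructor worker: proposes a plan (stubbed here).
    return f"plan: {question}"

async def huginn(question: str) -> str:
    # Antithesis/deconstructor worker: attacks the risks (stubbed here).
    return f"risks: {question}"

async def odin(question: str, drafts: list[str]) -> str:
    # Judge: synthesizes the parallel drafts into one verdict (stubbed here).
    return f"verdict on '{question}' from {len(drafts)} drafts"

async def run_odin_mode(question: str) -> str:
    # Workers run in parallel; the judge sees both drafts.
    drafts = await asyncio.gather(muninn(question), huginn(question))
    return await odin(question, list(drafts))

answer = asyncio.run(run_odin_mode("microservices vs monolith"))
print(answer)
```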
```bash
curl -fsSL https://aayushbhaskar.github.io/OpenCouncil/install.sh | bash
council --mode odin
```

This installs Open Council under ~/.open-council-app and links the `council` command into ~/.local/bin.
If `council` is not found after install, either add ~/.local/bin to your PATH:

```bash
export PATH="$HOME/.local/bin:$PATH"
```

or run the binary directly:

```bash
~/.local/bin/council --mode odin
```

To update to the latest version, re-run the installer:

```bash
curl -fsSL https://aayushbhaskar.github.io/OpenCouncil/install.sh | bash
```

Open Council also performs a lightweight, best-effort startup check and shows an update hint when your local install is behind origin/main.
Optional startup update controls:

- `OPEN_COUNCIL_UPDATE_CHECK=0` disables the startup check
- `OPEN_COUNCIL_AUTO_UPDATE=1` enables opt-in auto-update on startup when behind

In-app configuration commands:

- `/config` to view current runtime flags and the config file path
- `/config set OPEN_COUNCIL_AUTO_UPDATE 1` to enable auto-update
- `/config set OPEN_COUNCIL_UPDATE_CHECK 0` to disable startup checks
- `--show-drafts` enables worker-draft printing before the final answer in modes that support draft exposure.
- `/show-drafts` shows the current status in chat.
- `/show-drafts on` or `/show-drafts off` toggles draft visibility for the current session.
Odin also supports optional node-specific model overrides:
- `MUNINN_MODEL` (thesis/constructor worker)
- `HUGINN_MODEL` (antithesis/deconstructor worker)
- `ODIN_MODEL` (final synthesis judge)
If these are unset, Odin keeps the default provider fallback chain (Groq -> OpenRouter -> Gemini -> Ollama) per node.
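One plausible way to resolve these overrides (a sketch; the variable names come from above, but the resolution logic and chain contents are assumptions, not the project's code):

```python
import os

# Per-node override variables named in the README above.
NODE_ENV_VARS = {
    "muninn": "MUNINN_MODEL",
    "huginn": "HUGINN_MODEL",
    "odin": "ODIN_MODEL",
}

def resolve_models(node: str, default_chain: list[str]) -> list[str]:
    """Use an explicit per-node override if set, else the default chain."""
    override = os.environ.get(NODE_ENV_VARS[node], "").strip()
    return [override] if override else list(default_chain)

chain = ["groq/llama-3.1-70b-versatile", "ollama/llama3.1"]  # illustrative
os.environ["ODIN_MODEL"] = "openrouter/x"  # hypothetical override value
print(resolve_models("odin", chain))    # the override wins for the judge
print(resolve_models("muninn", chain))  # workers keep the fallback chain
```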
Run:

```bash
council --mode odin
```

Then ask:
Analyze microservices vs monolith for my startup.
Example response shape:
The Verdict: Start with a modular monolith to ship faster and defer distributed complexity.
The Critical Trade-off: Initial velocity now vs migration cost later.
The Path Forward:
1) Define strict module boundaries and contracts today.
2) Instrument core performance paths and set scaling thresholds.
3) Extract the first service only when measured load exceeds those thresholds.
- Odin mode LangGraph pipeline with Muninn + Huginn workers and Odin judge
- Async LiteLLM routing with Groq -> OpenRouter -> Gemini -> Ollama fallback (`open_council.core.llm`)
- Interactive CLI REPL with `/exit` and `/quit`
- Session draft visibility controls: `--show-drafts` and `/show-drafts on|off`
- In-chat mode command: `/mode` (list) and `/mode <name>` (switch; Odin wired in MVP)
- In-chat config command: `/config` and `/config set <KEY> <VALUE>` for update flags
- Optional per-node Odin model overrides (`MUNINN_MODEL`, `HUGINN_MODEL`, `ODIN_MODEL`)
- OpenRouter support via `OPENROUTER_API_KEY` + `OPENROUTER_MODEL`
- Graceful Ctrl+C handling (first press warns, second exits cleanly)
- First-run setup wizard using `~/.open-council/.env` (temporary local `.env` fallback supported)
- Ollama readiness checks (binary, server, model) with actionable guidance
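The double-press Ctrl+C behavior can be approximated with a stateful SIGINT handler; this is a sketch of the pattern, not the project's actual handler:

```python
import signal
import sys

class InterruptGuard:
    """First Ctrl+C warns; a second Ctrl+C exits cleanly."""

    def __init__(self):
        self.warned = False

    def __call__(self, signum, frame):
        if not self.warned:
            self.warned = True
            print("Interrupted. Press Ctrl+C again to exit.")
        else:
            print("Exiting.")
            sys.exit(0)

guard = InterruptGuard()
signal.signal(signal.SIGINT, guard)  # install for the REPL session
```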
Install Ollama from Ollama Downloads, then:

```bash
# 1) Start the local Ollama server
ollama serve

# 2) Pull the configured fallback model (default)
ollama pull llama3.1
```

If your configured model differs, pull the exact model name from `OLLAMA_MODEL` (for example, `OLLAMA_MODEL=ollama/llama3.1` maps to `ollama pull llama3.1`).
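The mapping from `OLLAMA_MODEL` to the `ollama pull` name is just a prefix strip; a tiny helper makes the rule explicit (illustrative, not the project's code):

```python
def ollama_pull_name(litellm_model: str) -> str:
    """Map a LiteLLM-style id like 'ollama/llama3.1' to the name
    that `ollama pull` expects ('llama3.1')."""
    prefix = "ollama/"
    if litellm_model.startswith(prefix):
        return litellm_model[len(prefix):]
    return litellm_model

print(ollama_pull_name("ollama/llama3.1"))  # llama3.1
```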
If ~/.open-council/.env is missing, Open Council launches a first-run wizard to collect keys and run provider readiness checks before chat starts.
```bash
git clone https://github.com/aayushbhaskar/OpenCouncil.git
cd OpenCouncil
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
pytest -q tests/test_llm_client.py
council --mode odin
```

Open Council is designed for graceful degradation:
- Network throttling via a strict `asyncio.Semaphore`
- Tiered provider fallback using LiteLLM
- Deterministic orchestration with typed state in LangGraph
- ReAct-style Odin worker phases: reason -> search query generation -> DDG search -> Jina extraction -> refine -> draft
- Worker-controlled retrieval: Muninn and Huginn decide when web search is necessary, then gather only bounded evidence
- UI shows phase progression for retrieval/reasoning steps without dumping raw search payloads
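Semaphore-based throttling in its minimal form looks like this (the cap of 2 is illustrative, not the project's actual limit; the point is that parallel workers share a bounded pool of in-flight requests):

```python
import asyncio

MAX_CONCURRENT = 2  # illustrative cap, not the project's actual limit

async def throttled_call(i, sem, stats):
    async with sem:  # blocks while MAX_CONCURRENT calls are in flight
        stats["active"] += 1
        stats["peak"] = max(stats["peak"], stats["active"])
        await asyncio.sleep(0.01)  # stand-in for a provider request
        stats["active"] -= 1

async def main():
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    stats = {"active": 0, "peak": 0}
    # Six "workers" contend for two slots.
    await asyncio.gather(*(throttled_call(i, sem, stats) for i in range(6)))
    return stats["peak"]

peak = asyncio.run(main())
print(f"peak concurrent calls: {peak}")  # never exceeds the cap
```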
Local checkpointing is planned for a later phase (SQLite-backed, not wired in MVP yet).
- Phase 1: Resilient MVP (Odin mode, LiteLLM routing, Rich CLI)
- Phase 2: Deep reasoning (Artemis mode, SQLite memory, web tools)
- Phase 3: Enterprise scale (Leviathan mode, local vector memory, cloud backends)
- Phase 4: Workstation layer (Ariadne mode, secure local file workflows)
