Broadside-AI is a CLI-first Python tool for parallel LLM aggregation. It fans a task out to multiple independent runs, gathers the outputs, and produces one final result that works well in scripts, CI, and other automation.
It is intentionally narrow:
- no inter-agent messaging
- no workflow DAGs
- no persistent multi-step state
- no crew hierarchy or planner/reviewer role system
That constraint is the product. Broadside-AI is for cases where "ask several times, then combine the signal" is useful, but a full orchestration framework would be overkill.
Broadside-AI is strongest when parallel attempts add value:
- code review, where multiple passes catch different issues
- classification and extraction, where structured outputs can be merged
- comparison and analysis, where consensus matters
- generation, where diversity creates better raw material
Committed benchmark snapshots in benchmarks/results/ currently show:
- 2.52x average speedup vs sequential with Anthropic Claude Sonnet 4
- 2.26x average speedup vs sequential with Ollama cloud Nemotron
- 1.07x average speedup on a modest local CPU with gemma3:1b
```shell
pip install broadside-ai
```
Or, to install it as an isolated CLI tool:
```shell
pipx install broadside-ai
```
Optional extras for cloud backends:
```shell
pip install "broadside-ai[anthropic]"
pip install "broadside-ai[openai]"
pip install "broadside-ai[all]"
```
After installing, verify the CLI works:
```shell
broadside-ai --help
```
Windows: if `broadside-ai` is not on PATH
Use the module entrypoint instead:
```shell
py -3.11 -m broadside_ai --help
```
Use the same Python launcher version you installed Broadside-AI into. On
machines with multiple Python installs, `py -3` may point at a different
interpreter and fail to find the package.
If you prefer the console-script form, install with pipx so the command is
added to a CLI-friendly location.
Install from source
Clone the repo or download the GitHub ZIP, then install:
```shell
cd Broadside-AI
pip install .
```
To install backend extras from a source checkout:
```shell
pip install ".[anthropic]"
pip install ".[openai]"
pip install ".[all]"
```
Broadside-AI needs a real backend before `run` can do useful work. The easiest
way to avoid a frustrating first try is:
- Confirm the install worked.
- Set up one backend.
- Run a single prompt.
```shell
broadside-ai --help
```
Broadside-AI supports Ollama, Anthropic, and OpenAI-compatible APIs. For a first run, local Ollama is the least setup-heavy option.
Install Ollama, then pull a local model:
```shell
ollama pull gemma3:1b
```
Now run Broadside-AI:
```shell
broadside-ai run --prompt "Write a pitch for a dotfile manager" --n 3 --model gemma3:1b
```
That should print one synthesized result to stdout.
Examples below that reference repository files such as RELEASE.md,
tasks/..., or benchmarks/... assume you are in a checkout of this repo.
A plain pip install broadside-ai does not place those files in your current
working directory, so use your own local files or create a task YAML first.
`run` prints only the synthesized result to stdout by default, which makes it
easy to compose with other tools:
```shell
broadside-ai run --prompt "Summarize this changelog" --n 3 > summary.txt
broadside-ai run --prompt "Write a pitch for a dotfile manager" --n 3 --model gemma3:1b
```
Files are written only when you ask for them with `--save` or `--output`.
For project-specific tasks, pass the source material in with `--context-file`
instead of relying on a bare prompt. Broadside-AI will append those files to the
task sent to every branch.
```shell
broadside-ai run \
  --prompt "Plan Broadside-AI's next PyPI release as a concise checklist" \
  --context-file RELEASE.md \
  --context-file pyproject.toml \
  --context-file .github/workflows/publish.yml
```
That works much better for repo operations than an ungrounded prompt like
"Plan out a PyPI project release", which usually produces a generic tutorial.
Install Ollama, sign in, and pull the default cloud model:
```shell
ollama signin
ollama pull nemotron-3-super:cloud
broadside-ai run --prompt "Write a pitch for a dotfile manager" --n 3
```
Execution defaults are tuned for user success:
- cloud backends and Ollama cloud models run in parallel by default
- local Ollama models run sequentially by default
Override with --parallel or --sequential when needed.
```shell
ollama pull gemma3:1b
broadside-ai run --prompt "Write a pitch for a dotfile manager" --n 3 --model gemma3:1b
```
Set `ANTHROPIC_API_KEY` in your shell first, then run:
```shell
pip install "broadside-ai[anthropic]"
broadside-ai run --prompt "Review this design" --n 3 --backend anthropic
```
Set `OPENAI_API_KEY` in your shell first, then run:
```shell
pip install "broadside-ai[openai]"
broadside-ai run --prompt "Compare these options" --n 3 --backend openai
```
For OpenAI-compatible providers, set `OPENAI_BASE_URL` and pass `--model`.
Choose the synthesis strategy based on the kind of output you want:
- `llm`: one direct final answer for the user or a downstream tool
- `consensus`: an analysis of agreements, disagreements, and unique claims
- `voting`: aggregation for discrete answers or majority positions
- `weighted_merge`: algorithmic merge for structured JSON-like outputs
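For intuition, `voting`-style aggregation over discrete answers can be sketched in a few lines of plain Python. This is a conceptual illustration, not the library's implementation; `majority_vote` is a hypothetical helper:

```python
from collections import Counter

def majority_vote(answers):
    """Pick the most common discrete answer across branches.

    Conceptual sketch of a voting-style synthesis; ties are broken by
    whichever answer Counter happens to return first.
    """
    counts = Counter(answers)
    winner, votes = counts.most_common(1)[0]
    return winner, votes / len(answers)

# Five branches classify the same ticket; "billing" wins 3 of 5.
label, share = majority_vote(["billing", "billing", "bug", "billing", "bug"])
```

A real strategy also has to decide what counts as "the same" answer (case, whitespace, synonyms), which this sketch ignores.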
Use --json-output for scripts and subprocess integrations:
```shell
broadside-ai run tasks/ticket_classification.yaml --n 5 --synthesis weighted_merge --json-output
```
The JSON payload always includes:
`schema_version`, `status`, `prompt`, `backend`, `model`, `mode`, `requested_strategy`, `strategy`, `result`, `parsed_result`, `raw_outputs`, `gather`, and `saved_to`.
`gather` includes `n_requested`, `n_completed`, `n_failed`, `n_parsed`,
`total_tokens`, and `wall_clock_ms`.
schema_version is included so other tools can depend on the payload shape.
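A minimal consumer sketch for that payload follows. The values below, including the `schema_version` number, are invented for illustration; only the key names come from the list above:

```python
import json

# Illustrative payload with the documented top-level keys; values are made up.
payload = json.loads("""{
  "schema_version": 1,
  "status": "ok",
  "prompt": "Classify this ticket",
  "backend": "ollama",
  "model": "gemma3:1b",
  "mode": "parallel",
  "requested_strategy": "weighted_merge",
  "strategy": "weighted_merge",
  "result": "...",
  "parsed_result": {"label": "billing"},
  "raw_outputs": [],
  "gather": {"n_requested": 5, "n_completed": 5, "n_failed": 0,
             "n_parsed": 5, "total_tokens": 1234, "wall_clock_ms": 800},
  "saved_to": null
}""")

# Guard on the schema version before trusting the rest of the shape.
assert "schema_version" in payload
stats = payload["gather"]
success_rate = stats["n_completed"] / stats["n_requested"]
```

In a real script you would read the payload from the CLI's stdout rather than a literal string.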
```shell
broadside-ai run tasks/code_review.yaml --n 3 --save
broadside-ai run tasks/code_review.yaml --n 3 --output artifacts/review-run
```
Saved runs go under:
```
broadside_ai_output/{model}/{topic}_{timestamp}/
```
```shell
broadside-ai validate-task my_task.yaml
```
Validation exits 0 when every file is valid and 1 when any file fails.
If a task provides output_schema, Broadside-AI asks every branch to return
valid JSON and parses the results through the full pipeline.
That enables weighted_merge, an algorithmic synthesis strategy that:
- makes zero LLM calls on the happy path
- merges numeric fields with weighted averages
- merges strings with majority vote
- merges lists by majority presence
- uses `confidence` as weight metadata, not as an output field
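To make those merge rules concrete, here is a conceptual sketch in plain Python. It is a simplified assumption of how such a merge can work, not Broadside-AI's actual code: numbers are averaged with `confidence` as the weight, strings take the majority value, and list items survive when a majority of branches include them.

```python
from collections import Counter

def weighted_merge(branches):
    """Merge parsed branch outputs; each branch carries a 'confidence' weight.

    Conceptual sketch only: numbers -> confidence-weighted average,
    strings -> majority vote, lists -> items present in a majority of branches.
    """
    weights = [b.get("confidence", 1.0) for b in branches]
    merged = {}
    keys = {k for b in branches for k in b if k != "confidence"}
    for key in keys:
        values = [b[key] for b in branches if key in b]
        if all(isinstance(v, (int, float)) for v in values):
            ws = [w for b, w in zip(branches, weights) if key in b]
            merged[key] = sum(v * w for v, w in zip(values, ws)) / sum(ws)
        elif all(isinstance(v, list) for v in values):
            counts = Counter(item for v in values for item in set(v))
            merged[key] = sorted(i for i, c in counts.items() if c > len(values) / 2)
        else:
            merged[key] = Counter(values).most_common(1)[0][0]
    return merged

branches = [
    {"label": "billing", "score": 0.9, "tags": ["urgent", "invoice"], "confidence": 0.8},
    {"label": "billing", "score": 0.7, "tags": ["invoice"], "confidence": 0.6},
    {"label": "bug", "score": 0.5, "tags": ["invoice", "vip"], "confidence": 0.2},
]
merged = weighted_merge(branches)
```

Here `label` resolves to "billing" by majority, `score` to the confidence-weighted average 0.775, and `tags` to `["invoice"]`, the only item present in more than half the branches.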
Example:
```shell
broadside-ai run tasks/ticket_classification.yaml --n 5 --synthesis weighted_merge --json-output
```
You can also stop early when enough branches have arrived or agreed:
```shell
broadside-ai run tasks/ticket_classification.yaml --n 5 --early-stop 3 --agreement 0.66
```
Quick reference:
```shell
broadside-ai run tasks/code_review.yaml --n 3 --synthesis consensus --save
broadside-ai run tasks/ticket_classification.yaml --n 5 --synthesis weighted_merge --json-output > ticket.json
broadside-ai validate-task tasks/_template.yaml
```
Python API usage:
```python
from broadside_ai import EarlyStop, Task, run_sync

task = Task(
    prompt="Classify this support message.",
    output_schema={
        "label": "string",
        "confidence": "float",
        "reasoning": "string",
    },
)
result = run_sync(
    task,
    n=5,
    backend="ollama",
    synthesis_strategy="weighted_merge",
    early_stop=EarlyStop(min_complete=3, agreement_threshold=0.66),
)
print(result.result)
print(result.parsed_result)
```
Async usage:
```python
from broadside_ai import Task, run

task = Task(prompt="Summarize the tradeoffs of SQLite vs PostgreSQL for analytics.")
result = await run(task, n=3, backend="ollama")
print(result.result)
```
Task -> Scatter -> Gather -> Synthesize
- `Task`: prompt, optional context, optional output schema
- `scatter()`: run the task across `n` independent branches
- `gather()`: normalize outputs, parse structured results, and compute stats
- `synthesize()`: collapse outputs with `llm` for a direct answer, `consensus` for analysis, `voting` for discrete choices, or `weighted_merge` for structured data
- `run()`: convenience wrapper for the full pipeline
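The Task -> Scatter -> Gather -> Synthesize shape can be sketched with plain asyncio. The branch and synthesis functions here are stand-ins, not Broadside-AI's internals:

```python
import asyncio

async def branch(task: str, i: int) -> str:
    # Stand-in for one independent LLM call; a real branch would hit a backend.
    await asyncio.sleep(0)
    return f"answer-{i} to: {task}"

async def scatter_gather(task: str, n: int) -> list[str]:
    # Scatter: launch n independent branches; gather: collect all outputs in order.
    return await asyncio.gather(*(branch(task, i) for i in range(n)))

def synthesize(outputs: list[str]) -> str:
    # Trivial stand-in for a synthesis step: join the branch outputs.
    return " | ".join(outputs)

outputs = asyncio.run(scatter_gather("summarize X", 3))
final = synthesize(outputs)
```

The key property the real pipeline shares with this sketch is that branches never talk to each other; only the gather and synthesis steps see all outputs.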
```shell
pip install -e ".[dev]"
make test
make lint
make typecheck
make release-check
```
Repository docs:
Broadside-AI is at v0.1.0 (first public release). The CLI interface,
JSON output schema, and Python API (run, run_sync, Task, EarlyStop)
are considered stable for this release. Synthesis strategies and backend
options may expand in future versions. Breaking changes before v1.0 will
be noted in release notes.