Every model gives you
a different answer.
Get the right one.
Stop copy-pasting prompts across Claude, Codex, and OpenCode. Stop comparing outputs in different tabs. Concilium runs them all in parallel, has them peer-review each other's work, and gives you one validated answer — in a single interface.

The Problem
You already know one model isn't enough. So you open multiple terminals, paste the same prompt into Claude, Codex, and OpenCode, then spend 20 minutes reading and comparing their outputs. There has to be a better way.
How It Works
Parallel Execution
Send one prompt and three agents start working at the same time. No more opening multiple terminals or browser tabs.
» Claude, Codex, and OpenCode run in isolated subprocesses. You watch all three stream output simultaneously in the same window.
Blind Review
Instead of you reading and comparing outputs, multiple juror models do it for you. Anonymously. With no bias.
» Responses are labeled A, B, C. Jurors evaluate correctness, edge-case handling, and code quality. You skip the manual comparison entirely.
Synthesis
A Chairman model merges the strongest parts of each solution into one final answer. Better than any single model alone.
» The result combines the best architecture decisions, error handling, and implementation details from all three agents.
Why Concilium?
- ✕ Copy-pasting the same prompt into 3 different tools.
- ✕ Switching between browser tabs, terminals, and apps.
- ✕ Reading 3 long outputs and comparing them manually.
- ✕ No way to know which answer has the fewest bugs.
- ✓ One prompt, three agents, all running at the same time.
- ✓ A single desktop app with a unified interface.
- ✓ Automatic blind peer-review ranks the best response.
- ✓ Synthesized output validated by adversarial consensus.
Watch Concilium in Action
See how Concilium orchestrates multiple LLMs to reach consensus on complex coding tasks.
Full demo walkthrough • 2:15
Built-in Analytics
Every council run generates detailed telemetry. Concilium captures token usage, costs, timing, and rankings — then surfaces them in a full analytics dashboard so you can make informed decisions about your AI workflow.
Token Usage Breakdown
Track input and output tokens for every agent, juror, and chairman. Grouped bar charts show exactly where your tokens go, split by model across the entire pipeline.
Cost Analysis
Total spend, average cost per run, cost per 1k tokens, and the most expensive model — all at a glance. Cost efficiency rankings show which models give the best value for money.
Performance & Rankings
Win rates show which models get ranked #1 most often. Average ranking tables, quality-per-dollar efficiency scores, and total #1 counts help you pick the best agents.
Model Comparison & History
Full comparison table with runs, tokens, cost, average time, and average rank per model. Plus a sortable run history with status, prompt, duration, and cost for every council.
Why Analytics Matter
Know exactly which models give the best answers per dollar.
Stage timing reveals where the pipeline slows down.
Data-driven decisions on which agents to enable.
See how your usage, costs, and quality evolve run-over-run.
All analytics are computed locally from your run history. No data ever leaves your machine.
Open Source
matiasdaloia/concilium
MIT License
The entire codebase is open source. Browse the code, report issues, or contribute features.
Get Involved
- → Report bugs
Found something broken? Open an issue.
- → Submit PRs
Fix a bug or add a feature.
- → Request features
Have an idea? Start a discussion.
- → Join discussions
Shape the future of collective AI.
Get Started
Concilium is open source and ready to build. Clone the repo, configure your environment, and build the app.
Prerequisites
- Node.js 18+ installed
- macOS 12+ (Apple Silicon or Intel) or Linux
- At least one CLI agent:
claude, codex, or opencode
Clone the repository
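For example, using the repository path from the project's GitHub page:

```shell
git clone https://github.com/matiasdaloia/concilium.git
cd concilium
```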
Configure Environment
Create a .env file in the desktop directory with your OpenRouter API key.
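A minimal sketch of the file; the exact variable name is an assumption based on the OpenRouter requirement, so check the repository for the real keys:

```shell
# desktop/.env — OPENROUTER_API_KEY is an assumed variable name;
# consult the repository's docs or .env.example for the exact keys.
OPENROUTER_API_KEY=your-openrouter-key-here
```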
Install & Build
This will install dependencies and package the application into the out/ directory.
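A plausible command sequence, assuming standard npm scripts — the actual script names live in the repository's package.json:

```shell
# Assumed npm scripts; verify against package.json before running.
npm install      # install dependencies
npm run make     # package the application into out/
```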
Run from anywhere
Link the CLI to your PATH, then launch Concilium from any project directory.
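A hypothetical sketch of the linking step — the actual mechanism may differ, so treat these commands as illustrative:

```shell
# Hypothetical: expose the CLI on your PATH via npm's global symlink.
npm link

# Then launch from any project directory.
cd ~/my-project
concilium
```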
Or specify a path directly:
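For instance (the argument form is an assumption; the CLI may use a flag instead):

```shell
# Hypothetical usage: pass the project directory as an argument.
concilium /path/to/your/project
```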
Alternative: manual launch
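On macOS, something like the following — the bundle path is illustrative and depends on your platform and architecture:

```shell
# Open the packaged app straight from the out/ directory.
# The exact bundle name varies by platform/arch; check out/ after building.
open out/Concilium-darwin-arm64/Concilium.app
```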