Every model gives you a different answer.
Get the right one.

Stop copy-pasting prompts across Claude, Codex, and OpenCode. Stop comparing outputs in different tabs. Concilium runs them all in parallel, has them peer-review each other's work, and gives you one validated answer — in a single interface.

3 agents, 1 answer
Open source · MIT
macOS & Linux
00
Your Current Workflow

The Problem

You already know one model isn't enough. So you open multiple terminals, paste the same prompt into Claude, Codex, and OpenCode, then spend 20 minutes reading and comparing their outputs. There has to be a better way.

Terminal 1 · claude
$ claude "implement auth..."
Thinking...
## JWT approach with refresh tokens...
Terminal 2 · codex
$ codex "implement auth..."
Running...
## Session-based with Redis store...
Terminal 3 · opencode
$ opencode "implement auth..."
Processing...
## OAuth2 with PKCE flow...
Then you have to...
1
Read all 3 outputs
Context-switch between terminals
2
Compare manually
Spot differences in approach
3
Decide which is best
Hope you picked right
4
Miss the edge cases
No peer review, no validation
01
Three-Stage Consensus

How It Works

01

Parallel Execution

Send one prompt and three agents start working at the same time. No more opening multiple terminals or browser tabs.

» Claude, Codex, and OpenCode run in isolated subprocesses. You watch all three stream output simultaneously in the same window.

opencode
codex
claude
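The fan-out in Stage 1 boils down to spawning each agent CLI in its own subprocess and awaiting all of them together. A minimal Node.js sketch of the idea — the exact flags and invocations are illustrative assumptions, not necessarily what Concilium ships:

```typescript
import { spawn } from "node:child_process";

// Run one agent CLI in its own subprocess and collect its stdout.
function runAgent(cmd: string, args: string[]): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args);
    let out = "";
    child.stdout.on("data", (chunk) => (out += chunk));
    child.on("error", reject);
    child.on("close", (code) =>
      code === 0 ? resolve(out.trim()) : reject(new Error(`${cmd} exited with ${code}`))
    );
  });
}

// Stage 1: fire all three agents at once and wait for every answer.
// (Flags shown are illustrative.)
async function stageOne(prompt: string): Promise<string[]> {
  return Promise.all([
    runAgent("claude", ["-p", prompt]),
    runAgent("codex", ["exec", prompt]),
    runAgent("opencode", ["run", prompt]),
  ]);
}
```

Because the three `runAgent` calls start before any of them is awaited, the slowest agent sets the wall-clock time for the whole stage, not the sum of all three.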
02

Blind Review

Instead of you reading and comparing outputs, multiple juror models do it for you. Anonymously. With no bias.

» Responses are labeled A, B, C. Jurors evaluate correctness, edge-case handling, and code quality. You skip the manual comparison entirely.

JUROR_A
JUROR_B
JUROR_C
JUROR_D
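The anonymization step can be sketched as a shuffle plus relabeling, with a private key kept to map verdicts back to agents afterwards. The shapes and names below are illustrative assumptions, not Concilium's actual internals:

```typescript
interface Anonymized {
  label: string; // "A", "B", "C", ...
  text: string;
}

// Shuffle the responses and assign neutral labels so jurors
// can't tell which agent wrote what.
function anonymize(responses: Map<string, string>): {
  blind: Anonymized[];
  key: Map<string, string>; // label -> original agent, never shown to jurors
} {
  const entries = [...responses.entries()];
  // Fisher-Yates shuffle removes any ordering bias.
  for (let i = entries.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [entries[i], entries[j]] = [entries[j], entries[i]];
  }
  const blind: Anonymized[] = [];
  const key = new Map<string, string>();
  entries.forEach(([agent, text], idx) => {
    const label = String.fromCharCode(65 + idx); // 65 = "A"
    blind.push({ label, text });
    key.set(label, agent);
  });
  return { blind, key };
}
```

Only the `blind` list goes into the juror prompts; the `key` is used after the verdicts come back to credit the right agent.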
03

Synthesis

A Chairman model merges the strongest parts of each solution into one final answer — often stronger than any single model alone.

» The result combines the best architecture decisions, error handling, and implementation details from all three agents.

CHAIRMAN · SYNTHESIZING
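Before the Chairman writes the final answer, the juror verdicts have to be combined into a single ordering. One simple way to do that is an average-rank score — a sketch under that assumption (the real pipeline may weight jurors differently):

```typescript
// Each juror submits an ordering of labels, best first,
// e.g. ["B", "A", "C"]. Rank 1 is best; lower average wins.
function averageRanks(rankings: string[][]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const order of rankings) {
    order.forEach((label, idx) => {
      totals.set(label, (totals.get(label) ?? 0) + idx + 1);
    });
  }
  const avg = new Map<string, number>();
  for (const [label, sum] of totals) avg.set(label, sum / rankings.length);
  return avg;
}
```

With two jurors voting `["A", "B", "C"]` and `["B", "A", "C"]`, A and B tie at 1.5 and C averages 3 — a signal the Chairman can use to weight which solution anchors the synthesis.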
02
Better answers, less work

Why Concilium?

Without Concilium
  • Copy-pasting the same prompt into 3 different tools.
  • Switching between browser tabs, terminals, and apps.
  • Reading 3 long outputs and comparing them manually.
  • No way to know which answer has the fewest bugs.
~25 min per prompt · high cognitive load · error-prone
With Concilium
  • One prompt, three agents, all running at the same time.
  • A single desktop app with a unified interface.
  • Automatic blind peer-review ranks the best response.
  • Synthesized output validated by adversarial consensus.
~3 min per prompt · fully automated · peer-validated
3 Parallel Agents
N Blind Reviewers
1 Validated Answer

Watch Concilium in Action

See how Concilium orchestrates multiple LLMs to reach consensus on complex coding tasks.

Full demo walkthrough • 2:15

03
Data-Driven Decisions

Built-in Analytics

Every council run generates detailed telemetry. Concilium captures token usage, costs, timing, and rankings — then surfaces them in a full analytics dashboard so you can make informed decisions about your AI workflow.

Token Usage Breakdown

Track input and output tokens for every agent, juror, and chairman. Grouped bar charts show exactly where your tokens go, split by model across the entire pipeline.

Per-model input vs output · Grouped bar charts · Total token tracking

Cost Analysis

Total spend, average cost per run, cost per 1k tokens, and the most expensive model — all at a glance. Cost efficiency rankings show which models give the best value for money.

Cost per run · Cost per 1k tokens · Cost efficiency ranking · Spend over time
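The cost-per-1k-tokens metric is a straightforward fold over the run history. A sketch of the computation — the record shape and field names here are illustrative, not Concilium's actual telemetry schema:

```typescript
// One telemetry record per model invocation (illustrative shape).
interface RunRecord {
  model: string;
  tokens: number;  // input + output tokens
  costUsd: number; // spend for this invocation
}

// Total spend for a model, normalized to cost per 1,000 tokens.
function costPer1kTokens(runs: RunRecord[], model: string): number {
  const mine = runs.filter((r) => r.model === model);
  const tokens = mine.reduce((sum, r) => sum + r.tokens, 0);
  const cost = mine.reduce((sum, r) => sum + r.costUsd, 0);
  return tokens === 0 ? 0 : (cost / tokens) * 1000;
}
```

Ranking models by this number is what produces the cost-efficiency table: a model that spends $0.04 over 2,000 tokens lands at $0.02 per 1k.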

Performance & Rankings

Win rates show which models get ranked #1 most often. Average ranking tables, quality-per-dollar efficiency scores, and total #1 counts help you pick the best agents.

Win rate tracking · Average ranking table · Quality/cost efficiency

Model Comparison & History

Full comparison table with runs, tokens, cost, average time, and average rank per model. Plus a sortable run history with status, prompt, duration, and cost for every council.

Model comparison table · Execution time bars · Sortable run history

Why Analytics Matter

Optimize spend

Know exactly which models give the best answers per dollar.

Find bottlenecks

Stage timing reveals where the pipeline slows down.

Compare models

Data-driven decisions on which agents to enable.

Track over time

See how your usage, costs, and quality evolve run-over-run.

All analytics are computed locally from your run history. No data ever leaves your machine.

Analytics Dashboard
20 runs · 6 days
Total Runs
20
95% success rate
Total Tokens
125.5k
81.8k in · 43.7k out
Total Cost
$3.16
$0.158 avg/run
Avg Duration
38.2s
per run (Stage 1)
Models Used
3
3 providers
Token Usage by Model
Input vs output tokens per model: claude · opencode · codex
Average Stage Timing
Time distribution across pipeline stages
Stage 1 — Agents: 38.2s
Stage 2+3 — Council: 19.8s
Total: 58.0s
sample data for demonstration · all data stays local
00
MIT Licensed

Open Source

matiasdaloia/concilium

MIT License

The entire codebase is open source. Browse the code, report issues, or contribute features.

Star on GitHub

Get Involved

  • Report bugs

    Found something broken? Open an issue.

  • Submit PRs

    Fix a bug or add a feature.

  • Request features

    Have an idea? Start a discussion.

  • Join discussions

    Shape the future of collective AI.

Open an Issue
04
Run it locally in minutes

Get Started

Concilium is open source and ready to build. Clone the repo, configure your environment, and build the app.

1

Prerequisites

  • Node.js 18+ installed
  • macOS 12+ (Apple Silicon or Intel) or Linux
  • At least one CLI agent: claude, codex, or opencode
2

Clone the repository

$ git clone https://github.com/matiasdaloia/concilium.git
3

Configure Environment

Create a .env file in the desktop directory with your OpenRouter API key.

$ cd concilium/desktop
$ echo "OPENROUTER_API_KEY=sk-or-..." > .env
4

Install & Build

$ npm install
$ npm run build

This will install dependencies and package the application into the out/ directory.

5

Run from anywhere

Link the CLI to your PATH, then launch Concilium from any project directory.

$ npm link
$ cd ~/my-project
$ concilium

Or specify a path directly:

$ concilium ./backend

Alternative: manual launch

macOS
$ open out/Concilium-darwin-arm64/Concilium.app
Linux
$ ./out/Concilium-linux-x64/concilium