mco by mco-org

Orchestration layer for parallel AI coding agents

Created 1 month ago
270 stars

Top 95.1% on SourcePulse

Project Summary

Summary

MCO (Multi-CLI Orchestrator) addresses the limitation of single-perspective AI coding agents by providing a neutral orchestration layer. It enables developers to dispatch prompts to multiple AI agents (e.g., Claude, Codex, Gemini) in parallel, synthesizing consensus from their diverse outputs. This empowers users to work like a Tech Lead, leveraging a team of AI agents for more comprehensive code reviews, bug hunting, and architectural analysis.

How It Works

MCO fans out prompts to selected agent CLIs concurrently, employing a wait-all execution model. Its core innovation lies in the Consensus Engine, which aggregates results, deduplicates identical findings across agents, and calculates agreement ratios and confidence scores. This allows for a synthesized view of agent findings, moving beyond simple deduplication to a robust analysis of collective intelligence. Advanced modes like --debate and --divide further refine analysis or distribute workloads.
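The aggregation step described above can be sketched in a few lines. The field names (agreement_ratio, consensus_level) and the three-tier levels come from the README; the thresholds and merging logic below are illustrative assumptions, not MCO's actual implementation.

```python
from collections import Counter

def aggregate(findings_by_agent: dict[str, list[str]]) -> list[dict]:
    """Merge findings from several agents, scoring each by cross-agent agreement.

    Hypothetical sketch: thresholds (2/3, 1/3) are assumptions.
    """
    n_agents = len(findings_by_agent)
    counts = Counter()
    for findings in findings_by_agent.values():
        # Deduplicate within a single agent before counting cross-agent votes.
        counts.update(set(findings))

    merged = []
    for finding, votes in counts.items():
        ratio = votes / n_agents
        level = ("confirmed" if ratio >= 2 / 3
                 else "needs-verification" if ratio >= 1 / 3
                 else "unverified")
        merged.append({
            "finding": finding,
            "agreement_ratio": ratio,
            "consensus_level": level,
        })
    # Highest-agreement findings first.
    return sorted(merged, key=lambda f: -f["agreement_ratio"])
```

A finding reported by all dispatched agents would surface at the top as "confirmed", while a finding unique to one agent would be flagged for verification rather than silently merged.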

Quick Start & Requirements

  • Install: npm i -g @tt-a1i/mco, or clone the repo and run python3 -m pip install -e . from the checkout.
  • Prerequisites: Python 3 on PATH. Requires installed and authenticated CLIs for supported agents (Claude Code, Codex CLI, Gemini CLI, OpenCode, Qwen Code). Optional: pip install mco[memory] for persistent memory features.
  • Links: Demo video (Bilibili)

Highlighted Details

  • Parallel Fan-out: Dispatches tasks to multiple agents simultaneously, waiting for all to complete.
  • Universal Integration: Works with various IDEs and agents (Claude Code, Cursor, Copilot, etc.) or plain shells via a self-describing CLI.
  • Agent-to-Agent Orchestration: Enables agents to dispatch tasks to other agents through MCO.
  • Consensus Engine: Generates agreement_ratio, consensus_score, and consensus_level (confirmed, needs-verification, unverified) for merged findings.
  • Advanced Review Modes: --debate for a challenge round, --divide files|dimensions for workload distribution.
  • Custom Agent Registry: Supports custom ACP-compatible binaries and Ollama-backed models via configuration files.
  • Cross-Session Memory: Optional --memory flag enables persistent institutional knowledge, agent scoring, and finding lifecycle tracking via evermemos-mcp.
  • Flexible Output: Supports report, markdown-pr, sarif (for CI/CD integration), and json formats.
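For the json output format, a merged finding might look like the following. The three consensus fields (agreement_ratio, consensus_score, consensus_level) are documented above; the surrounding keys and values are assumed for illustration and are not MCO's real schema.

```python
import json

# Assumed example of one merged finding in json output mode.
finding = {
    "finding": "Possible race condition in cache invalidation",
    "file": "src/cache.py",                  # assumed key
    "agents": ["claude", "gemini"],          # 2 of 3 dispatched agents reported it
    "agreement_ratio": 0.67,
    "consensus_score": 0.74,                 # assumed to blend agreement with confidence
    "consensus_level": "needs-verification",
}
print(json.dumps(finding, indent=2))
```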

Maintenance & Community

The README does not detail specific community channels, active contributors, or sponsorship information.

Licensing & Compatibility

  • License: MIT.
  • Compatibility: Permissive MIT license allows for commercial use and integration into closed-source projects.

Limitations & Caveats

The project requires the underlying AI agent CLIs to be installed and properly authenticated. Persistent memory features necessitate the EVERMEMOS_API_KEY environment variable. The --debate and --divide coordination modes are mutually exclusive.
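The mutual exclusivity of the two coordination modes could be modeled as shown below. This is a minimal sketch assuming an argparse-style CLI; MCO's actual argument handling is not described in the summary.

```python
import argparse

# Hypothetical sketch: --debate and --divide as a mutually exclusive group.
parser = argparse.ArgumentParser(prog="mco")
mode = parser.add_mutually_exclusive_group()
mode.add_argument("--debate", action="store_true",
                  help="run a challenge round after the initial reviews")
mode.add_argument("--divide", choices=["files", "dimensions"],
                  help="split the workload across agents")

# Selecting one mode parses cleanly; combining both is rejected by argparse.
args = parser.parse_args(["--debate"])
```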

Health Check

  • Last Commit: 3 weeks ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 21
  • Issues (30d): 1
  • Star History: 93 stars in the last 30 days

Explore Similar Projects

vibe-kanban by BloopAI
Kanban board for AI coding agents
Starred by Tobi Lutke (Cofounder of Shopify), Kevin Hou (Head of Product Engineering at Windsurf), and 9 more.
Top 1.8% · 25k stars · Created 10 months ago · Updated 1 day ago