open-multi-agent by JackChen-me

Orchestration framework for production-grade AI agent teams

Created 4 days ago

3,035 stars

Top 15.6% on SourcePulse

Project Summary

Open Multi-Agent is a production-grade, model-agnostic orchestration framework designed for building collaborative AI agent teams. It addresses the complexity of coordinating multiple AI agents by automating task scheduling, dependency management, and inter-agent communication. This framework benefits developers and researchers by simplifying the creation of sophisticated AI workflows that leverage diverse LLMs and specialized agent roles.

How It Works

The core of Open Multi-Agent lies in its flexible agent definition, allowing teams composed of agents with distinct roles, tools, and even different LLM providers (e.g., Anthropic, OpenAI). Agents communicate via a message bus and can share memory. Task execution is managed through a Task DAG Scheduler, which resolves dependencies topologically, enabling independent tasks to run in parallel. The framework's model-agnostic design allows seamless integration of various LLMs within a single team. Execution occurs in-process within a Node.js environment, minimizing overhead.
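The scheduling behavior described above can be sketched as a small "topological wave" scheduler: each wave runs every task whose dependencies are already satisfied, in parallel. This is an illustrative TypeScript sketch of the technique only, not the framework's actual API; `Task` and `runDag` are invented names for this example.

```typescript
// Minimal task-DAG scheduler sketch (illustrative; not the library's API).
// Tasks declare dependencies by id; independent tasks run concurrently.

type Task = {
  id: string;
  deps: string[];               // ids of tasks that must finish first
  run: () => Promise<string>;   // the task's async work (e.g. an agent call)
};

async function runDag(tasks: Task[]): Promise<Map<string, string>> {
  const results = new Map<string, string>();
  const pending = new Set(tasks.map((t) => t.id));

  while (pending.size > 0) {
    // A task is ready when all of its dependencies have produced results.
    const ready = tasks.filter(
      (t) => pending.has(t.id) && t.deps.every((d) => results.has(d))
    );
    if (ready.length === 0) {
      throw new Error("cycle or missing dependency in task graph");
    }

    // Every ready task in this wave executes in parallel.
    await Promise.all(
      ready.map(async (t) => {
        results.set(t.id, await t.run());
        pending.delete(t.id);
      })
    );
  }
  return results;
}
```

For example, a `research` task could fan out to independent `summarize` and `critique` tasks, which then feed a final `report` task; the middle two would run in the same wave, concurrently.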

Quick Start & Requirements

  • Primary install: npm install @jackchen_me/open-multi-agent
  • Prerequisites: Environment variables ANTHROPIC_API_KEY (required) and OPENAI_API_KEY (optional). A Node.js environment is necessary.
  • Links: Usage is demonstrated through the code examples in the README.
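Putting the steps above together, a typical setup might look like the following; the key values are placeholders, not real credentials.

```shell
# Install the package into a Node.js project.
npm install @jackchen_me/open-multi-agent

# Required: Anthropic key. Optional: OpenAI key for mixed-provider teams.
export ANTHROPIC_API_KEY="sk-ant-..."   # placeholder value
export OPENAI_API_KEY="sk-..."          # placeholder value, optional
```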

Highlighted Details

  • Supports multi-agent teams with distinct roles, system prompts, and toolkits.
  • Task DAG scheduling automatically manages dependencies and parallel execution.
  • Model-agnostic architecture allows mixing LLMs from different providers (e.g., Claude, GPT) in one team.
  • Extensible custom tool definition using Zod schemas for robust input validation.
  • Provides streaming output capabilities for agent responses.

Maintenance & Community

Contributions are welcomed via issues, feature requests, and PRs, particularly in areas like LLM Adapters (Ollama, llama.cpp, vLLM, Gemini), example workflows, and documentation. No specific community links (e.g., Discord, Slack) or contributor details are provided in the README.

Licensing & Compatibility

Licensed under the MIT license, which is permissive for commercial use and integration into closed-source projects.

Limitations & Caveats

Because execution happens in-process within a single Node.js runtime, highly resource-intensive workloads or deployment architectures that require process isolation may hit limits. The README's open call for contributions also suggests that LLM adapter support and example coverage are still maturing.

Health Check

  • Last Commit: 1 day ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 15
  • Issues (30d): 26

Star History

3,289 stars in the last 4 days
