MoA by togethercomputer

LLM orchestration framework for enhanced performance using multiple LLMs

created 1 year ago
2,787 stars

Top 17.5% on sourcepulse

Project Summary

Mixture-of-Agents (MoA) is a framework for enhancing LLM performance by orchestrating multiple specialized agents. It targets researchers and developers seeking to improve response quality and achieve state-of-the-art results, particularly with open-source models, by leveraging collective intelligence.

How It Works

MoA employs a layered architecture in which each layer consists of multiple LLM agents. The agents in a layer process the input independently, and their outputs are aggregated and passed through subsequent refinement layers before a final aggregator produces the response. This approach harnesses the diverse strengths of different models, yielding performance gains over individual models, even substantially larger ones.
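The layered flow described above can be sketched as follows. This is a minimal illustration of the Mixture-of-Agents idea using toy deterministic "agents" in place of real LLM calls; the function names and prompt format here are illustrative assumptions, not the repository's actual API.

```python
# Sketch of MoA-style layered aggregation with placeholder agents.
from typing import Callable, List

Agent = Callable[[str], str]  # an agent maps a prompt to a response

def run_layer(agents: List[Agent], prompt: str, prior: List[str]) -> List[str]:
    """Each agent sees the user prompt plus the previous layer's outputs."""
    if prior:
        context = "\n".join(f"[ref {i}] {r}" for i, r in enumerate(prior))
        prompt = f"{context}\n\nUser: {prompt}"
    return [agent(prompt) for agent in agents]

def mixture_of_agents(layers: List[List[Agent]],
                      aggregator: Agent, prompt: str) -> str:
    outputs: List[str] = []
    for agents in layers:
        outputs = run_layer(agents, prompt, outputs)
    # A final aggregator synthesizes the last layer's responses.
    context = "\n".join(outputs)
    return aggregator(f"{context}\n\nUser: {prompt}")

# Toy agents: each tags the last line of its prompt so the flow is visible.
def make_agent(name: str) -> Agent:
    return lambda p: f"{name}: {p.splitlines()[-1]}"

layers = [[make_agent("llama"), make_agent("qwen")],  # proposer layer
          [make_agent("mixtral")]]                    # refinement layer
answer = mixture_of_agents(layers, make_agent("final"), "What is MoA?")
```

In the real framework the agents would be API calls to different hosted models, but the aggregation structure is the same: later layers condition on earlier layers' candidate responses.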

Quick Start & Requirements

  • Install: pip install together
  • API Key: Requires TOGETHER_API_KEY environment variable.
  • Demo: pip install -r requirements.txt, then python bot.py for the interactive CLI demo.
  • Evaluation: Requires TOGETHER_API_KEY and OPENAI_API_KEY. Setup involves installing dependencies for AlpacaEval, MT-Bench, and FLASK.
  • Links: Overview, Quickstart, CLI Demo, Evaluation
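Put together, the setup steps above amount to (the API key value is a placeholder):

```shell
# Install the Together client library
pip install together

# The demo and evaluation scripts read the key from the environment
export TOGETHER_API_KEY=your_key_here   # placeholder, use your own key

# From the repository root: install demo dependencies, then run the CLI demo
pip install -r requirements.txt
python bot.py
```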

Highlighted Details

  • Achieves 65.1% on AlpacaEval 2.0 using only open-source models, outperforming GPT-4 Omni (57.5%).
  • Outperforms Qwen1.5-110B-Chat and GPT-4 Omni on specific FLASK evaluation dimensions like correctness and factuality.
  • Supports multi-layer refinement for improved response quality.
  • Includes scripts to reproduce results for AlpacaEval, MT-Bench, and FLASK benchmarks.

Maintenance & Community

  • Developed by Together AI.
  • Acknowledges Meta AI, Mistral AI, Microsoft, Alibaba Cloud, and Databricks for the base models.
  • Credits LMSYS and KAIST AI for evaluation benchmarks.

Licensing & Compatibility

  • Licensed under Apache 2.0.
  • Permissive license suitable for commercial use and integration into closed-source projects.

Limitations & Caveats

The project relies on the Together API for model inference, which may incur costs and requires an API key. While it showcases strong performance on benchmarks, real-world applicability may vary based on specific use cases and the chosen agent configurations.

Health Check

  • Last commit: 6 months ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 58 stars in the last 90 days
