MoA by togethercomputer

LLM orchestration framework for enhanced performance using multiple LLMs

Created 1 year ago
2,824 stars

Top 16.9% on SourcePulse

View on GitHub
Project Summary

Mixture-of-Agents (MoA) is a framework for enhancing LLM performance by orchestrating multiple specialized agents. It targets researchers and developers seeking to improve response quality and achieve state-of-the-art results, particularly with open-source models, by leveraging collective intelligence.

How It Works

MoA employs a layered architecture in which each layer consists of multiple LLM agents. Agents in a layer process the input (along with any outputs from the previous layer), and their responses are aggregated, potentially through several refinement layers, to produce a final, higher-quality answer. By combining the diverse strengths of different models, MoA can outperform any single model, including larger proprietary ones.
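The layering can be sketched in a few lines against the Together Python SDK (pip install together). This is a minimal illustration of the idea rather than the repository's implementation: the model names, the single proposer layer, and the aggregation prompt are assumptions chosen for brevity.

    # Minimal Mixture-of-Agents sketch: one proposer layer plus one aggregation step.
    # Model names and prompts are illustrative assumptions, not the repo's exact configuration.
    import os
    from together import Together

    client = Together(api_key=os.environ["TOGETHER_API_KEY"])

    REFERENCE_MODELS = [                         # proposer agents (layer 1), assumed names
        "Qwen/Qwen1.5-72B-Chat",
        "mistralai/Mixtral-8x22B-Instruct-v0.1",
    ]
    AGGREGATOR_MODEL = "Qwen/Qwen1.5-110B-Chat"  # final-layer aggregator, assumed name

    def ask(model: str, prompt: str) -> str:
        """Run a single chat completion against the Together API."""
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
        )
        return resp.choices[0].message.content

    def mixture_of_agents(user_prompt: str) -> str:
        # Layer 1: each reference model proposes an answer independently.
        proposals = [ask(m, user_prompt) for m in REFERENCE_MODELS]
        # Final layer: the aggregator synthesizes the proposals into one response.
        aggregation_prompt = (
            "You are given several candidate responses to the same question. "
            "Synthesize them into a single, higher-quality answer.\n\n"
            + "\n\n".join(f"Candidate {i + 1}:\n{p}" for i, p in enumerate(proposals))
            + f"\n\nQuestion: {user_prompt}"
        )
        return ask(AGGREGATOR_MODEL, aggregation_prompt)

    if __name__ == "__main__":
        print(mixture_of_agents("Explain why the sky is blue in two sentences."))

Adding more refinement layers amounts to feeding each layer's outputs back in as context for the next set of agents before the final aggregation.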

Quick Start & Requirements

  • Install: pip install together
  • API Key: Requires the TOGETHER_API_KEY environment variable (a minimal connectivity check is sketched after this list).
  • Demo: pip install -r requirements.txt, then python bot.py to launch the interactive CLI demo.
  • Evaluation: Requires TOGETHER_API_KEY and OPENAI_API_KEY. Setup involves installing dependencies for AlpacaEval, MT-Bench, and FLASK.
  • Links: Overview, Quickstart, CLI Demo, Evaluation
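Before launching the demo, it can help to confirm that the key is picked up and that a single completion succeeds. The snippet below is a hedged sketch: the model name is an arbitrary choice, not one prescribed by the repository.

    # Connectivity check for the Together API; the model name is an illustrative assumption.
    import os
    from together import Together

    api_key = os.environ.get("TOGETHER_API_KEY")
    if not api_key:
        raise SystemExit("Set TOGETHER_API_KEY first, e.g. export TOGETHER_API_KEY=...")

    client = Together(api_key=api_key)
    resp = client.chat.completions.create(
        model="mistralai/Mixtral-8x7B-Instruct-v0.1",  # any chat model available on Together works
        messages=[{"role": "user", "content": "Reply with the single word: ready"}],
    )
    print(resp.choices[0].message.content)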

Highlighted Details

  • Achieves 65.1% on AlpacaEval 2.0 using only open-source models, outperforming GPT-4 Omni (57.5%).
  • Outperforms Qwen1.5-110B-Chat and GPT-4 Omni on specific FLASK evaluation dimensions like correctness and factuality.
  • Supports multi-layer refinement for improved response quality.
  • Includes scripts to reproduce results for AlpacaEval, MT-Bench, and FLASK benchmarks.

Maintenance & Community

  • Developed by Together AI.
  • Acknowledges contributions from Meta AI, Mistral AI, Microsoft, Alibaba Cloud, and Databricks for base models.
  • Credits LMSYS and KAIST AI for evaluation benchmarks.

Licensing & Compatibility

  • Licensed under Apache 2.0.
  • Permissive license suitable for commercial use and integration into closed-source projects.

Limitations & Caveats

The project relies on the Together API for model inference, which requires an API key and may incur costs. While it shows strong results on benchmarks, real-world performance will vary with the use case and the chosen agent configuration.

Health Check

  • Last commit: 8 months ago
  • Responsiveness: Inactive
  • Pull requests (30d): 0
  • Issues (30d): 0
  • Star history: 23 stars in the last 30 days