llm-consortium by irthomasthomas

Plugin for the `llm` CLI tool that orchestrates multiple LLMs

Created 9 months ago
366 stars

Top 76.9% on SourcePulse

1 Expert Loves This Project
Project Summary

This project provides a plugin for the `llm` CLI that orchestrates multiple Large Language Models (LLMs) for complex problem-solving. It addresses the challenge of leveraging the diverse strengths of different LLMs by enabling iterative refinement and consensus-building among models, benefiting users who need robust, validated outputs for intricate tasks.

How It Works

The plugin sends each prompt to a configurable set of models, optionally running multiple instances of each. The responses are then synthesized and a confidence score is calculated. If the confidence falls below the configured threshold, or the minimum number of iterations has not yet been reached, an arbiter model refines the synthesis and prepares the next iteration; this repeats until the confidence threshold is met or the maximum number of iterations is reached. This iterative consensus approach aims to overcome individual model limitations and produce higher-quality, more reliable results.
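
The loop below is a minimal, illustrative sketch of this consensus cycle, not the plugin's actual implementation: the member models, the arbiter prompt, the `CONFIDENCE:` convention, and the thresholds are all assumptions made for the example. It relies only on the `llm` library's documented Python API (`llm.get_model(...)` and `.prompt(...)`).

```python
import llm

# Illustrative configuration -- model names, counts, and thresholds are assumptions.
MEMBERS = {"claude-3-sonnet-20240229": 1, "gpt-4": 1}  # model name -> instance count
ARBITER = "claude-3-opus-20240229"
CONFIDENCE_THRESHOLD = 0.8
MIN_ITERATIONS = 1
MAX_ITERATIONS = 3


def gather_responses(prompt: str) -> list[str]:
    """Send the prompt to every member model, running the requested number of instances."""
    responses = []
    for name, count in MEMBERS.items():
        model = llm.get_model(name)
        for _ in range(count):
            responses.append(model.prompt(prompt).text())
    return responses


def synthesize(prompt: str, responses: list[str], previous: str | None) -> tuple[str, float]:
    """Ask the arbiter to merge the candidate answers and self-report a confidence in [0, 1]."""
    arbiter = llm.get_model(ARBITER)
    joined = "\n\n---\n\n".join(responses)
    context = f"Previous synthesis:\n{previous}\n\n" if previous else ""
    reply = arbiter.prompt(
        f"{context}Original question:\n{prompt}\n\nCandidate answers:\n{joined}\n\n"
        "Merge these into a single best answer, then finish with a line "
        "'CONFIDENCE: <number between 0 and 1>'."
    ).text()
    synthesis, _, tail = reply.rpartition("CONFIDENCE:")
    try:
        return synthesis.strip(), float(tail.strip())
    except ValueError:
        return reply.strip(), 0.0  # arbiter ignored the convention; force another iteration


def run_consortium(prompt: str) -> str:
    """Iterate until the confidence threshold is met or the iteration budget is exhausted."""
    synthesis, confidence = None, 0.0
    for i in range(MAX_ITERATIONS):
        responses = gather_responses(prompt)
        synthesis, confidence = synthesize(prompt, responses, synthesis)
        if i + 1 >= MIN_ITERATIONS and confidence >= CONFIDENCE_THRESHOLD:
            break
    return synthesis


if __name__ == "__main__":
    print(run_consortium("Outline a migration plan from a monolith to microservices."))
```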

Quick Start & Requirements

  • Install `llm` via `uv tool install llm` or `pipx install llm`.
  • Install the plugin: `llm install llm-consortium`.
  • Requires access to the configured LLM APIs (e.g., OpenAI, Google AI).
  • Default models include `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `gpt-4`, and `gemini-pro`.
  • Official documentation: https://github.com/irthomasthomas/llm-consortium
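
As a quick sanity check that API access is in place, a default model can be called directly through the `llm` Python API before involving the consortium. This is a hedged example, not a step from the plugin's documentation; the model name is one of the defaults listed above, and keys are configured with the standard `llm keys set <provider>` command.

```python
import llm

# Assumes an OpenAI key has already been configured, e.g. via `llm keys set openai`.
model = llm.get_model("gpt-4")
print(model.prompt("Reply with the single word: ready").text())
```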

Highlighted Details

  • Supports specifying multiple instances per model (e.g., `gpt-4o:2`); see the parsing sketch after this list.
  • Configurable confidence thresholds and iteration limits.
  • Advanced arbitration and synthesis capabilities.
  • Database logging of all interactions via SQLite.
  • Ability to save and reuse consortium configurations as custom models.
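
As a point of reference for the instance syntax mentioned above, a `model:count` spec could be interpreted along these lines. The helper name and the default count of 1 are assumptions for illustration, not the plugin's internal API.

```python
def parse_model_spec(spec: str) -> tuple[str, int]:
    """Split a 'model' or 'model:count' spec into (model name, instance count)."""
    name, sep, count = spec.rpartition(":")
    if sep and count.isdigit():
        return name, int(count)
    return spec, 1  # no count given -> a single instance


assert parse_model_spec("gpt-4o:2") == ("gpt-4o", 2)
assert parse_model_spec("claude-3-sonnet-20240229") == ("claude-3-sonnet-20240229", 1)
```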

Maintenance & Community

  • Latest release: v0.3.1, introducing flexible model instance allocation and improved configuration management.
  • Developed within the llm ecosystem.

Licensing & Compatibility

  • MIT License.
  • Compatible with commercial use and closed-source applications as per MIT license terms.

Limitations & Caveats

Effectiveness depends on the quality of the underlying LLM APIs and on the configuration of parameters such as confidence thresholds and iteration counts. The default models require API access and incur the associated usage costs.

Health Check

  • Last commit: 3 weeks ago
  • Responsiveness: 1 day
  • Pull requests (30d): 0
  • Issues (30d): 1
  • Star history: 8 stars in the last 30 days

Explore Similar Projects

Starred by Edward Sun (Research Scientist at Meta Superintelligence Lab), Shizhe Diao (author of LMFlow; Research Scientist at NVIDIA), and 2 more.

ama_prompting by HazyResearch

  • 0% · 547 stars
  • Language model prompting strategy research paper
  • Created 3 years ago; updated 2 years ago
  • Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Pawel Garbacki (cofounder of Fireworks AI), and 4 more.

alpaca_farm by tatsu-lab

  • 0.1% · 826 stars
  • RLHF simulation framework for accessible instruction-following/alignment research
  • Created 2 years ago; updated 1 year ago