llm-consortium by irthomasthomas

Plugin for the `llm` CLI tool that orchestrates multiple LLMs

created 7 months ago
272 stars

Top 95.5% on sourcepulse

Project Summary

This project provides a plugin for the llm package that orchestrates multiple Large Language Models (LLMs) for complex problem-solving. It leverages the complementary strengths of different models through iterative refinement and consensus-building, benefiting users who need robust, validated outputs for intricate tasks.

How It Works

The system orchestrates multiple LLMs by sending each prompt to a configurable set of models, optionally running multiple instances of each. The responses are synthesized and a confidence score is calculated. If the confidence falls below the threshold, or the minimum iteration count has not yet been reached, an arbiter model refines the synthesis and prepares the prompt for the next iteration; this repeats until the confidence target is met or the maximum number of iterations is reached. This iterative consensus approach aims to overcome individual model limitations and produce higher-quality, more reliable results.
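
A minimal sketch of this loop, assuming the `llm` Python API (`llm.get_model`, `model.prompt`) for the model calls; the member and arbiter model choices, the prompt wording, and the `parse_confidence` helper are illustrative assumptions, not the plugin's actual implementation:

```python
# Illustrative sketch of the iterative consensus loop described above.
# Model names, thresholds, prompt wording, and parse_confidence are assumptions.
import llm


def parse_confidence(text: str) -> float:
    """Hypothetical helper: pull a 0-1 confidence score off the final line."""
    try:
        return float(text.strip().splitlines()[-1])
    except ValueError:
        return 0.0


def run_consortium(prompt: str,
                   members=("claude-3-sonnet-20240229", "gpt-4", "gemini-pro"),
                   arbiter="claude-3-opus-20240229",
                   confidence_threshold=0.8,
                   max_iterations=3) -> str:
    current_prompt = prompt
    synthesis = ""
    for _ in range(max_iterations):
        # 1. Fan the prompt out to every member model.
        responses = [llm.get_model(m).prompt(current_prompt).text() for m in members]

        # 2. Ask the arbiter to synthesize the answers and rate its confidence.
        synthesis = llm.get_model(arbiter).prompt(
            "Synthesize these answers into one response and put a confidence "
            "score between 0 and 1 on the final line:\n\n" + "\n---\n".join(responses)
        ).text()

        # 3. Stop once the confidence threshold is met; otherwise refine and retry.
        if parse_confidence(synthesis) >= confidence_threshold:
            break
        current_prompt = f"{prompt}\n\nPrevious synthesis to improve:\n{synthesis}"
    return synthesis
```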

Quick Start & Requirements

  • Install `llm` via `uv tool install llm` or `pipx install llm`.
  • Install the plugin: `llm install llm-consortium`.
  • Requires access to the configured LLM APIs (e.g., OpenAI, Google AI); a quick connectivity check is sketched after this list.
  • Default models include `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `gpt-4`, and `gemini-pro`.
  • Official documentation: https://github.com/irthomasthomas/llm-consortium
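
Before running a consortium, it can help to confirm that at least one configured model responds. This is a generic check using the `llm` Python API rather than a plugin command, and it assumes the relevant API key has already been set (for example with `llm keys set openai`):

```python
# Smoke test that a configured model is reachable via the llm Python API.
# Assumes the matching API key is already configured for this model.
import llm

model = llm.get_model("gpt-4")  # one of the plugin's default models
response = model.prompt("Reply with the single word: ready")
print(response.text())          # prints the model's reply if the key works
```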

Highlighted Details

  • Supports specifying multiple instances per model (e.g., `gpt-4o:2`); a parsing sketch follows this list.
  • Configurable confidence thresholds and iteration limits.
  • Advanced arbitration and synthesis capabilities.
  • Database logging of all interactions via SQLite.
  • Ability to save and reuse consortium configurations as custom models.
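
A small sketch of how the `model:count` instance syntax and a saved consortium configuration could be represented; the field names and defaults below are assumptions for illustration, not the plugin's actual storage format:

```python
# Illustrative only: parse "model:count" specs like "gpt-4o:2" and bundle the
# settings a saved consortium configuration would need. Field names are assumed.
from dataclasses import dataclass


def parse_model_spec(spec: str) -> tuple[str, int]:
    """Split 'gpt-4o:2' into ('gpt-4o', 2); a bare name defaults to one instance."""
    name, _, count = spec.partition(":")
    return name, int(count) if count else 1


@dataclass
class ConsortiumConfig:
    models: dict[str, int]                   # model name -> instance count
    arbiter: str = "claude-3-opus-20240229"
    confidence_threshold: float = 0.8
    max_iterations: int = 3


config = ConsortiumConfig(
    models=dict(parse_model_spec(s) for s in ["gpt-4o:2", "gemini-pro"]),
)
print(config.models)  # {'gpt-4o': 2, 'gemini-pro': 1}
```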

Maintenance & Community

  • Latest release: v0.3.1, introducing flexible model instance allocation and improved configuration management.
  • Developed within the llm ecosystem.

Licensing & Compatibility

  • MIT License.
  • Compatible with commercial use and closed-source applications as per MIT license terms.

Limitations & Caveats

Effectiveness depends on the quality of the underlying LLM APIs and on configuration choices such as confidence thresholds and iteration counts. The default models require API access, and each run incurs usage costs across every model and iteration.

Health Check

  • Last commit: 1 week ago
  • Responsiveness: 1 week
  • Pull Requests (30d): 3
  • Issues (30d): 0
  • Star History: 55 stars in the last 90 days
