OpenAOE by InternLM

LLM group chat framework for parallel LLM outputs

Created 1 year ago
321 stars

Top 84.5% on SourcePulse

Project Summary

OpenAOE is a framework designed for simultaneous interaction with multiple Large Language Models (LLMs), enabling users to send a single prompt and receive parallel responses. It targets LLM researchers, evaluators, and developers, simplifying the process of comparing and utilizing various commercial and open-source LLMs.

How It Works

OpenAOE facilitates "LLM Group Chat" (LGC) by providing a unified interface to interact with diverse LLMs. It supports both single-model serial responses and multi-model parallel responses. The framework abstracts away the complexities of integrating different LLM APIs, allowing users to configure and access models via a config-template.yaml file, which specifies backend and frontend settings.
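The multi-model parallel-response pattern described above can be sketched with a generic asyncio fan-out (an illustration of the concept, not OpenAOE's actual code; `query_model` and the model names are hypothetical stand-ins for the per-provider adapters):

```python
import asyncio

# Hypothetical stand-in for a per-provider API call; OpenAOE's real
# adapters wrap each vendor's API behind a common interface.
async def query_model(model: str, prompt: str) -> str:
    await asyncio.sleep(0.01)  # simulate network latency
    return f"[{model}] reply to: {prompt}"

async def group_chat(models: list[str], prompt: str) -> dict[str, str]:
    # Fan the single prompt out to every model concurrently and
    # collect the parallel responses keyed by model name.
    replies = await asyncio.gather(*(query_model(m, prompt) for m in models))
    return dict(zip(models, replies))

if __name__ == "__main__":
    out = asyncio.run(group_chat(["gpt-4", "gemini-pro"], "Hello"))
    for model, reply in out.items():
        print(f"{model}: {reply}")
```

Because the requests run concurrently, total latency is roughly that of the slowest model rather than the sum of all of them, which is what makes side-by-side comparison practical.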

Quick Start & Requirements

  • Install: pip install -U openaoe or via Docker (docker pull opensealion/openaoe:latest).
  • Run: openaoe -f /path/to/your/config-template.yaml (or docker run ...).
  • Prerequisites: Python >= 3.9. Requires API keys for commercial LLMs.
  • Configuration: A config-template.yaml file is necessary to define LLM access details.
  • Docs: English, 中文
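A minimal sketch of what such a configuration file might contain (the keys below are illustrative assumptions; consult the project's shipped config-template.yaml for the real schema):

```yaml
# Illustrative only -- key names are hypothetical; the actual schema
# is defined by the config-template.yaml shipped with OpenAOE.
backend:
  openai:
    api_key: "sk-..."     # your commercial-API credential
frontend:
  port: 10099             # port the Web UI listens on
```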

Highlighted Details

  • Supports parallel responses from multiple LLMs with a single prompt.
  • Integrates with commercial LLM APIs (GPT-3.5, GPT-4, Gemini-Pro, etc.) and open-source LLMs (via LMDeploy, Ollama).
  • Provides both backend APIs and a Web UI.
  • Recent updates include support for Gemma-7b, Qwen-7b (via Ollama), and Mistral-7b.

Maintenance & Community

  • Project is actively developed with recent commits and releases.
  • Open source with a call for contributions and feature additions.

Licensing & Compatibility

  • The repository does not explicitly state a license in the provided README. Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

The README does not specify a license, which may impact commercial adoption. Users are responsible for obtaining and configuring API keys for commercial models. The config-template.yaml requires manual population with sensitive API access data.

Health Check

  • Last Commit: 3 months ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 5 stars in the last 30 days

Explore Similar Projects

Starred by Andrej Karpathy (Founder of Eureka Labs; formerly at Tesla, OpenAI; author of CS 231n), Gabriel Almeida (Cofounder of Langflow), and 2 more.

torchchat by pytorch

  • Top 0.1% · 4k stars
  • PyTorch-native SDK for local LLM inference across diverse platforms
  • Created 1 year ago; updated 1 week ago