llm by graniet

Orchestrate LLM, agent, and voice workflows with a unified Rust API

Created 1 year ago
277 stars

Top 93.6% on SourcePulse

Project Summary

graniet/llm is a Rust library and CLI tool that unifies and orchestrates diverse Large Language Model (LLM) and AI backends. It targets developers building complex, multi-step AI workflows, offering a single, extensible API for services such as OpenAI, Claude, Gemini, Ollama, and ElevenLabs. This simplifies integration and enables advanced features such as speech processing, vision, and agentic reasoning, reducing the complexity of managing multiple AI services.

How It Works

The project utilizes a builder pattern for intuitive configuration, abstracting over numerous LLM providers and AI capabilities through unified traits. This design allows developers to seamlessly chain different backends, implement function calling, handle vision and speech inputs/outputs, and serve workflows via a REST API. Its core advantage lies in abstracting away provider-specific complexities, enabling flexible, multi-provider AI orchestration within a single Rust project.
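The trait-plus-builder design described above can be sketched in a few lines of self-contained Rust. All names here (ChatProvider, LlmBuilder, the stub backends) are illustrative stand-ins, not the crate's actual API; the point is only that the call site stays identical while the provider is swapped at build time.

```rust
// Illustrative sketch (not the crate's real API): one trait per capability,
// a builder that picks the provider, and an unchanged call site.

trait ChatProvider {
    fn complete(&self, prompt: &str) -> String;
}

struct OpenAiStub;
struct OllamaStub;

impl ChatProvider for OpenAiStub {
    fn complete(&self, prompt: &str) -> String {
        format!("[openai] echo: {prompt}")
    }
}

impl ChatProvider for OllamaStub {
    fn complete(&self, prompt: &str) -> String {
        format!("[ollama] echo: {prompt}")
    }
}

struct LlmBuilder {
    backend: String,
}

impl LlmBuilder {
    fn new() -> Self {
        Self { backend: "openai".into() }
    }

    // Selecting a different provider is a one-line change.
    fn backend(mut self, name: &str) -> Self {
        self.backend = name.into();
        self
    }

    // The builder erases the concrete type behind the shared trait.
    fn build(self) -> Box<dyn ChatProvider> {
        match self.backend.as_str() {
            "ollama" => Box::new(OllamaStub),
            _ => Box::new(OpenAiStub),
        }
    }
}

fn main() {
    let llm = LlmBuilder::new().backend("ollama").build();
    println!("{}", llm.complete("hello"));
}
```

Because every backend sits behind the same trait object, chaining providers or swapping a cloud model for a local Ollama one leaves downstream code untouched.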

Quick Start & Requirements

  • CLI Installation: cargo install llm
  • Library Integration: Add llm = { version = "1.2.4", features = [...] } to Cargo.toml.
  • Prerequisites: Rust toolchain. API keys are required for cloud-based LLM providers.
  • Resources: Basic usage requires a Rust environment; LLM inference demands appropriate hardware.
  • Documentation: Examples are available in the project's repository.

Highlighted Details

  • Broad Backend Support: Integrates OpenAI, Anthropic, Ollama, DeepSeek, xAI, Phind, Groq, Google, Cohere, Mistral, Hugging Face, and ElevenLabs.
  • Advanced Workflow Features: Supports multi-step chains, function calling, vision, reasoning, structured output, speech-to-text, and text-to-speech.
  • Unified API & Builder Pattern: Simplifies configuration and interaction across diverse LLM services.
  • REST API Serving: Enables exposing LLM backends or chains via an OpenAI-standard REST API.
  • Agentic Capabilities: Facilitates building reactive agents with shared memory and configurable triggers.
  • Evaluation Framework: Includes tools for evaluating and comparing LLM outputs, supporting parallel execution.
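As a rough illustration of the REST-serving bullet above: a client of any OpenAI-standard endpoint sends a chat-completions payload like the one below. The field names and the `/v1/chat/completions` path follow OpenAI's published convention; the model name and port are placeholders, and nothing here is taken from this crate's own routes.

```rust
// Builds a minimal OpenAI-style chat-completions request body using std only.
// Assumes prompt/model contain no characters needing JSON escaping.
fn chat_request_body(model: &str, prompt: &str) -> String {
    format!(
        r#"{{"model":"{model}","messages":[{{"role":"user","content":"{prompt}"}}]}}"#
    )
}

fn main() {
    // A client would POST this body to an OpenAI-compatible server,
    // e.g. http://localhost:PORT/v1/chat/completions (placeholder address).
    let body = chat_request_body("gpt-4o-mini", "Summarize this repo");
    println!("{body}");
}
```

Serving an OpenAI-standard API means existing OpenAI client libraries can talk to a locally hosted backend or chain without modification.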

Maintenance & Community

The project explicitly notes that the current implementation is distinct from a previously archived crate of the same name. No specific community channels (e.g., Discord, Slack) or details on maintainers/sponsors are provided in the README text.

Licensing & Compatibility

The license type and any associated compatibility notes for commercial use or closed-source linking are not specified in the provided README content.

Limitations & Caveats

Certain features, such as advanced memory support (e.g., sliding window, shared memory), are marked "soon" or under development. Users migrating from, or seeking history for, the previously archived crate of the same name should review the distinction between the two carefully.

Health Check

  • Last Commit: 2 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 9
  • Issues (30d): 1
  • Star History: 9 stars in the last 30 days
