req_llm by agentjido

Elixir SDK for standardized LLM API interaction

Created 4 months ago
355 stars

Top 79.1% on SourcePulse

Project Summary

ReqLLM provides a unified, idiomatic Elixir interface for interacting with diverse Large Language Model (LLM) APIs. It standardizes requests and responses across numerous providers, simplifying integration for Elixir developers building AI applications by abstracting away provider-specific complexities.

How It Works

The library uses a two-layer architecture: a high-level API (inspired by the Vercel AI SDK) for common tasks, and a low-level Req plugin API for full HTTP control. It treats the OpenAI Chat Completions format as a baseline and uses provider-specific callbacks to adapt non-compatible APIs, ensuring a consistent developer experience across providers.
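The high-level layer described above can be sketched roughly as follows. This is a hedged sketch, not authoritative usage: the function names `ReqLLM.generate_text/2` and `ReqLLM.text/1` and the `"provider:model"` string format are drawn from the library's Vercel-style API but may differ between releases, so consult the project's documentation for the exact calls.

```elixir
# Hedged sketch: function names and the "provider:model" spec are assumptions
# based on the library's documented high-level API; verify against your version.
# Requires the relevant provider API key to be configured (e.g. via .env).
{:ok, response} =
  ReqLLM.generate_text("anthropic:claude-3-5-haiku", "Write a haiku about OTP")

# The canonical response struct abstracts the provider-specific payload;
# a helper extracts the plain text.
IO.puts(ReqLLM.text(response))
```

The low-level layer, by contrast, attaches the provider as a Req plugin so the full Req request/response pipeline (retries, instrumentation, custom headers) remains under the caller's control.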

Quick Start & Requirements

Installation follows the standard Elixir project setup (add the dependency, then mix deps.get). API keys can be supplied at several levels, from a per-request override down to a .env file. Elixir and standard development tooling are the only prerequisites. A Discord community is available for support.
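Concretely, the setup step amounts to a one-line dependency in mix.exs; the version constraint below is a sketch pinned to the release candidate the project currently ships, so check Hex for the latest version.

```elixir
# In mix.exs — add req_llm to your dependency list, then run `mix deps.get`.
# The "~> 1.0.0-rc.6" constraint matches the current release candidate.
def deps do
  [
    {:req_llm, "~> 1.0.0-rc.6"}
  ]
end
```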

Highlighted Details

  • Supports 45+ providers and 665+ models via an auto-synced registry with metadata.
  • Features a canonical data model and supports multi-modal content parts (text, image URLs, tool calls).
  • Offers high-level (Vercel-AI style) and low-level (Req plugin) client layers.
  • Enables structured object generation with schema validation and OpenAI-native structured outputs.
  • Provides production-grade streaming via Finch with concurrent metadata collection.
  • Includes detailed usage and cost tracking (tokens, USD) per response.
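The structured-output and usage-tracking features in the list above might be combined as in the following sketch. The keyword-list schema shape and the `object`/`usage` response fields are assumptions modeled on the library's documented structured-output support, and the model string is illustrative; exact names may differ in your installed version.

```elixir
# Hedged sketch: schema format, generate_object/3, and the response fields
# shown here are assumptions — verify against the ReqLLM docs.
schema = [
  name: [type: :string, required: true, doc: "Person's name"],
  age: [type: :pos_integer, doc: "Age in years"]
]

{:ok, response} =
  ReqLLM.generate_object("openai:gpt-4o-mini", "Extract: Ada is 36.", schema)

# response.object should hold the schema-validated map,
# and response.usage the per-call token counts and USD cost.
IO.inspect(response.object)
IO.inspect(response.usage)
```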

Maintenance & Community

The project is in release-candidate status (v1.0.0-rc.6) and actively seeking community feedback. A Discord server is available. Code quality is maintained with Dialyzer and Credo, backed by an extensive fixture-based test suite covering 130+ models across 10 providers, regularly refreshed against live APIs.

Licensing & Compatibility

Licensed under the Apache License, Version 2.0, permitting commercial use and integration into closed-source applications.

Limitations & Caveats

Streaming uses Finch directly because Req's support for server-sent events (SSE) is limited. Default Finch pools use HTTP/1 to work around a Finch bug (#265) affecting large HTTP/2 request bodies. The project is in release-candidate status, so the API may change before 1.0.0.

Health Check

  • Last Commit: 6 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 45
  • Issues (30d): 22

Star History

34 stars in the last 30 days
