ReqLLM (agentjido): Elixir SDK for standardized LLM API interaction
ReqLLM provides a unified, idiomatic Elixir interface for interacting with diverse Large Language Model (LLM) APIs. It standardizes requests and responses across numerous providers, simplifying integration for Elixir developers building AI applications by abstracting away provider-specific complexities.
How It Works
The architecture has two layers: a high-level API, inspired by the Vercel AI SDK, for common tasks, and a low-level Req plugin API for full HTTP control. OpenAI Chat Completions serves as the baseline wire format, and provider-specific callbacks adapt non-compatible APIs, ensuring a consistent developer experience across providers.
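For illustration, a minimal sketch of the two layers follows; the function names, the "provider:model" spec string, and the return shapes are assumptions drawn from the design described above, not confirmed signatures.

    # High-level layer: one call for a common task (Vercel AI SDK-style).
    {:ok, response} =
      ReqLLM.generate_text(
        "anthropic:claude-3-5-sonnet",
        "Summarize this changelog in two sentences."
      )

    # Low-level layer: attach the plugin to a plain Req request for full HTTP
    # control (attach/2 follows the usual Req plugin convention; treat the
    # option names here as assumptions).
    request =
      Req.new(base_url: "https://api.openai.com/v1")
      |> ReqLLM.attach(model: "openai:gpt-4o-mini")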
Quick Start & Requirements
Installation follows standard Elixir project setup: add the dependency and run mix deps.get. API keys are resolved through layered configuration, from per-request overrides down to .env files. Elixir and the usual development toolchain are the only prerequisites, and a Discord community is available for support.
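As a sketch, adding the dependency might look like the following; the :req_llm package name is inferred from the project name (an assumption), and the version pins the release candidate noted under Maintenance & Community.

    # mix.exs: add the dependency, then run `mix deps.get`.
    defp deps do
      [
        {:req_llm, "~> 1.0.0-rc.6"}
      ]
    end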
Highlighted Details
Maintenance & Community
The project is in release candidate status (v1.0.0-rc.6) and is actively seeking community feedback; a Discord server is available. Code quality is enforced with Dialyzer and Credo, backed by an extensive fixture-based test suite covering 130+ models across 10 providers that is regularly refreshed against live APIs.
Licensing & Compatibility
Licensed under the Apache License, Version 2.0, permitting commercial use and integration into closed-source applications.
Limitations & Caveats
Streaming bypasses Req and uses Finch directly because of Req's limitations with server-sent events (SSE). Default Finch pools use HTTP/1 to work around a Finch bug (#265) with large HTTP/2 request bodies. The project is still a release candidate, so API changes are possible before 1.0.0.