SDK for RAG-based LLM web app creation
AutoLLM is a Python library for rapidly deploying Retrieval-Augmented Generation (RAG) based LLM web applications and APIs. It targets developers who want to simplify building LLM-powered applications by unifying access to over 100 LLMs and more than 20 vector databases behind a streamlined RAG engine with FastAPI integration.
How It Works
AutoLLM abstracts the complexities of LLM orchestration by providing a unified API over multiple LLM providers (Hugging Face, Ollama, Azure, VertexAI, Bedrock) and vector stores (LanceDB, Qdrant). Its core AutoQueryEngine class lets users configure a RAG pipeline in a single line of code, hiding data loading, chunking, embedding, vector storage, and LLM interaction. The library also offers a one-line conversion of a query engine into a FastAPI application for easy deployment.
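The stages that such a query engine hides can be illustrated with a toy, self-contained sketch. Everything below is conceptual stand-in code, not AutoLLM's implementation: real pipelines use learned embedding models, a vector database, and an actual LLM call where this sketch only assembles the prompt.

```python
# Conceptual RAG pipeline sketch: chunk -> embed -> store -> retrieve -> prompt.
# Toy components only; not AutoLLM's actual code.

def chunk(text, size=40):
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(piece):
    """Toy bag-of-words 'embedding': a lowercase word-count dict."""
    counts = {}
    for word in piece.lower().split():
        word = word.strip("?.,!")
        counts[word] = counts.get(word, 0) + 1
    return counts

def similarity(a, b):
    """Overlap score between two bag-of-words vectors."""
    return sum(min(v, b.get(k, 0)) for k, v in a.items())

def build_index(documents):
    """Toy 'vector store': a list of (embedding, chunk) pairs."""
    return [(embed(c), c) for doc in documents for c in chunk(doc)]

def retrieve(index, query, k=2):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: similarity(q, pair[0]), reverse=True)
    return [c for _, c in ranked[:k]]

def answer(index, query):
    """Assemble the prompt an LLM would receive (the LLM call itself is omitted)."""
    context = "\n".join(retrieve(index, query))
    return f"Context:\n{context}\n\nQuestion: {query}"

index = build_index([
    "AutoLLM unifies access to many LLM providers.",
    "Vector stores hold chunk embeddings for retrieval.",
])
prompt = answer(index, "Which stores hold embeddings?")
```

The point of the sketch is the division of labor: a library like AutoLLM bundles all of these steps behind one configuration call, so application code only ever sees the final query interface.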
Quick Start & Requirements
Install with pip install autollm, or pip install autollm[readers] to include the optional data readers.

Highlighted Details
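The calling pattern the library advertises, one line to configure the engine and one line to expose it as a web app, can be sketched with stand-in classes. AutoQueryEngine.from_defaults is the entry point named in the description above, but this stub's internals and the to_web_app helper are hypothetical illustrations, not AutoLLM's real API.

```python
# Stand-in sketch of the facade pattern AutoLLM describes.
# StubQueryEngine and to_web_app are hypothetical, not library code.

class StubQueryEngine:
    """Minimal engine: holds documents and returns a canned answer."""

    def __init__(self, documents):
        self.documents = documents

    @classmethod
    def from_defaults(cls, documents):
        # One call hides data loading, chunking, embedding, and storage.
        return cls(documents)

    def query(self, question):
        return f"({len(self.documents)} docs indexed) answer to: {question}"

def to_web_app(engine):
    """One-line conversion idea: wrap the engine in a request handler."""
    def handle(request):  # request: {"query": "..."}
        return {"answer": engine.query(request["query"])}
    return handle

engine = StubQueryEngine.from_defaults(documents=["doc one", "doc two"])
app = to_web_app(engine)
response = app({"query": "What does AutoLLM do?"})
```

In AutoLLM itself the second step produces a FastAPI application rather than a bare handler, so the resulting object can be served directly with an ASGI server such as uvicorn.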
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The AGPL 3.0 license may impose significant obligations for commercial use, particularly for network-accessed services, requiring source code disclosure.
Last updated about 1 year ago; the project is marked inactive.