Agentic RAG app deployable in your own cloud infrastructure
This project provides an easy-to-use, self-hostable platform for implementing Agentic Retrieval-Augmented Generation (RAG) in enterprise environments. It aims to offer a configuration experience similar to OpenAI's custom GPTs, but with the flexibility and control of deploying within your own cloud infrastructure via Docker. The primary audience is developers and IT professionals looking to integrate advanced RAG capabilities without the vendor lock-in of cloud-specific solutions.
How It Works
RAGapp is built using LlamaIndex, a popular framework for building LLM applications with data. It leverages Docker for deployment, allowing users to run it in their own cloud environments. The system supports connecting to hosted AI models (OpenAI, Gemini) and local models via Ollama, offering flexibility in model selection. Configuration is managed through an Admin UI, simplifying the setup of RAG pipelines.
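For example, a local setup that pairs RAGapp with Ollama might look like the sketch below. The ollama/ollama image, its default port 11434, and the llama3 model name are standard Ollama details rather than anything documented by RAGapp; the model endpoint itself is selected in the Admin UI.

# Start a local Ollama server (11434 is Ollama's default port)
docker run -d --name ollama -p 11434:11434 ollama/ollama

# Pull a model into the running Ollama container (llama3 is only an example)
docker exec ollama ollama pull llama3

# Start RAGapp, then point it at the Ollama endpoint from the Admin UI
# (on Linux, reaching the host may require --add-host=host.docker.internal:host-gateway)
docker run -p 8000:8000 ragapp/ragapp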
Quick Start & Requirements
Run the Docker image:

docker run -p 8000:8000 ragapp/ragapp

Then open the Admin UI at http://localhost:8000/admin to configure.

Highlighted Details
Maintenance & Community
The frontend code is retrieved dynamically from create-llama, so contributors must run make build-frontends before committing changes.
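A plausible contributor loop, assuming the repository URL below (the URL is an assumption, not taken from this page):

# Clone the repository (URL assumed), then regenerate the frontends before committing
git clone https://github.com/ragapp/ragapp.git
cd ragapp
make build-frontends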
Licensing & Compatibility
Limitations & Caveats
The project's source code is dynamically retrieved from create-llama, requiring a make build-frontends step before committing changes. Authentication is not included by default and is expected to be handled by an external API gateway. Authorization features are planned for later versions.
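Until built-in authorization lands, one simple hardening step is to publish the port only on loopback, so that nothing but a co-located gateway or reverse proxy can reach the app; this is a minimal sketch using standard Docker port-binding syntax:

# Bind RAGapp to 127.0.0.1 only; an API gateway in front handles authentication
docker run -p 127.0.0.1:8000:8000 ragapp/ragapp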