Streamlit app for Retrieval Augmented Generation (RAG) with txtai
This project provides a Streamlit application for Retrieval Augmented Generation (RAG), enabling users to combine search and Large Language Models (LLMs) for data-driven insights. It supports both traditional Vector RAG and a novel Graph RAG approach, making it suitable for researchers and developers looking to enhance LLM accuracy with custom data.
How It Works
The application leverages the txtai library to implement RAG. Vector RAG uses vector search to retrieve the most relevant documents, which are then fed into an LLM prompt. Graph RAG extends this by incorporating knowledge graphs, allowing context generation through graph path traversals. This graph-based approach can improve factual accuracy and provide richer context by exploring relationships between concepts.
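At the library level, the Vector RAG flow described above can be reproduced with a few lines of txtai. This is a minimal sketch, not the app's exact implementation; the sample documents, model name, and prompt wording are placeholders.

```python
# Minimal Vector RAG sketch with txtai (illustrative only; the documents,
# model name, and prompt format are assumptions, not the app's configuration)
from txtai import Embeddings, LLM

# Sample documents to index
documents = [
    "txtai is an all-in-one embeddings database",
    "Retrieval Augmented Generation combines search with LLM prompts",
    "Knowledge graphs enable context generation via graph path traversals",
]

# Build a vector index that also stores document content
embeddings = Embeddings(content=True)
embeddings.index(documents)

# Retrieve the most relevant documents for a question
question = "What does RAG combine?"
context = "\n".join(row["text"] for row in embeddings.search(question, 3))

# Feed the retrieved context into an LLM prompt
llm = LLM("TheBloke/Mistral-7B-OpenOrca-AWQ")  # assumed model, swap as needed
prompt = f"Answer the question using only the context below.\n\nQuestion: {question}\n\nContext:\n{context}"
print(llm(prompt))
```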
Quick Start & Requirements
Run with Docker: docker run -d --gpus=all -it -p 8501:8501 neuml/rag
Or run locally: pip install -r requirements.txt, then streamlit run rag.py
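For the Graph RAG mode described in How It Works, the app builds context from graph path traversals rather than a flat top-N vector search. The sketch below shows that idea at the txtai library level; it assumes a txtai version with graph search support and uses placeholder data and models, and is not the app's exact code.

```python
# Minimal Graph RAG sketch with txtai (illustrative only; assumes graph search
# support in recent txtai releases and placeholder documents/models)
from txtai import Embeddings, LLM

documents = [
    "txtai is an all-in-one embeddings database",
    "Embeddings databases power semantic search",
    "Semantic search retrieves context for RAG",
    "RAG pairs retrieved context with an LLM prompt",
]

# Build a graph-enabled vector index
embeddings = Embeddings(content=True, graph={"approximate": False, "minscore": 0.5})
embeddings.index(documents)

# Run a graph search: the result is a graph built from path traversals
question = "How does txtai relate to RAG?"
graph = embeddings.search(question, 10, graph=True)

# Rank nodes by centrality and use their text as the LLM context
context = "\n".join(graph.attribute(node, "text") for node in list(graph.centrality().keys())[:5])

llm = LLM("TheBloke/Mistral-7B-OpenOrca-AWQ")  # assumed model, swap as needed
print(llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```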
Highlighted Details
Graph/topic queries can be run using the # prefix.
Maintenance & Community
The project is actively developed by the neuml organization. Further details on community and roadmap are not explicitly provided in the README.
Licensing & Compatibility
The project's licensing is not explicitly stated in the README. Compatibility for commercial use or closed-source linking would require clarification.
Limitations & Caveats
AWQ models are restricted to x86-64 architectures. The README does not detail specific performance benchmarks or known limitations of the Graph RAG implementation.