Local RAG implementation using Ollama
This project provides a simplified, local implementation of Retrieval-Augmented Generation (RAG) using Ollama for LLM inference and local data sources. It targets users who want to build AI applications with private data without relying on cloud services, offering a straightforward setup for querying documents and emails.
How It Works
The system leverages Ollama to run LLMs and embedding models locally. It supports RAG over uploaded documents (PDF, TXT, JSON) via upload.py, and over email data via collect_emails.py. The core retrieval logic lives in localrag.py and emailrag2.py, which can optionally rewrite user queries for improved retrieval accuracy.
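The repository's own code is not reproduced here, but the flow it describes is straightforward. The sketch below is a minimal illustration under stated assumptions: it embeds a couple of text chunks with mxbai-embed-large, optionally rewrites the query with llama3, retrieves the best match by cosine similarity, and answers with the retrieved context. It assumes the ollama Python package is installed (pip install ollama) and both models have been pulled; the chunk texts and prompts are placeholders, not the project's actual data or prompts.

```python
import ollama

def embed(text: str) -> list[float]:
    # mxbai-embed-large is served locally by Ollama (default: http://localhost:11434)
    return ollama.embeddings(model="mxbai-embed-large", prompt=text)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

# Tiny in-memory "vector store": one embedding per document chunk.
chunks = [
    "Ollama runs large language models entirely on local hardware.",
    "Retrieval-Augmented Generation looks up relevant context before answering.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

query = "how does rag work"

# Optional query rewriting: ask the chat model for a cleaner search query.
rewritten = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": f"Rewrite this as a precise search query: {query}"}],
)["message"]["content"]

# Retrieve the single most similar chunk and answer with it as context.
q_vec = embed(rewritten)
context = max(index, key=lambda item: cosine(q_vec, item[1]))[0]
answer = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": f"Context: {context}\n\nQuestion: {query}"}],
)
print(answer["message"]["content"])
```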
Quick Start & Requirements
- Install dependencies: pip install -r requirements.txt
- Pull models: ollama pull llama3 (or another chat model) and ollama pull mxbai-embed-large
- Document RAG: run python upload.py to ingest files, then python localrag.py to query them
- Email RAG: run python collect_emails.py to fetch mail, then python emailrag2.py to query it; email credentials are read from a .env file (see the sketch below)
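The exact variable names the scripts read from .env are not documented here, so the keys below are hypothetical placeholders; check the repository's README for the real ones. This is a sketch of the usual python-dotenv pattern:

```python
# Hypothetical .env contents (the repo's actual key names may differ):
#   EMAIL_ADDRESS=you@gmail.com
#   EMAIL_APP_PASSWORD=<16-character Gmail app password>
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # copies key=value pairs from .env into os.environ
address = os.environ["EMAIL_ADDRESS"]            # hypothetical key name
app_password = os.environ["EMAIL_APP_PASSWORD"]  # hypothetical key name
```

Highlighted Details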
The chat model can be swapped at runtime via a --model flag (e.g., python localrag.py --model mistral), as sketched below.
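The repository's actual argument handling is not reproduced here; the following is a sketch of how such a flag is typically wired up with argparse:

```python
import argparse
import ollama

# Sketch of a --model flag; the repo's real CLI code may differ.
parser = argparse.ArgumentParser(description="Query a local RAG pipeline")
parser.add_argument("--model", default="llama3", help="Ollama chat model to use")
args = parser.parse_args()

# e.g. `python localrag.py --model mistral` selects Mistral instead of Llama 3.
reply = ollama.chat(
    model=args.model,
    messages=[{"role": "user", "content": "Say hello."}],
)
print(reply["message"]["content"])
```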
Maintenance & Community
The project is associated with the AllAboutAI YouTube channel. Links to tutorials are provided.
Licensing & Compatibility
The repository does not explicitly state a license. Compatibility for commercial or closed-source use is not specified.
Limitations & Caveats
The project is presented as "SuperEasy" and "100% Local," but the lack of explicit licensing and detailed compatibility information may pose adoption challenges for commercial or closed-source applications. The email RAG feature requires specific setup for Gmail app passwords.