wzdavid/ThinkRAG: Local LLM RAG system for laptop deployment, enabling local knowledge Q&A
ThinkRAG is a locally deployable Retrieval-Augmented Generation (RAG) system designed for efficient Q&A over private knowledge bases on a laptop. It targets professionals, researchers, and students who want an offline, privacy-preserving AI assistant, offering optimized handling of Chinese-language data and flexible model integration.
How It Works
Built on LlamaIndex and Streamlit, ThinkRAG employs a modular architecture. It supports various LLMs via OpenAI-compatible APIs and local deployments through Ollama. For data processing, it utilizes SpacyTextSplitter for enhanced Chinese text segmentation and BAAI embedding/reranking models for improved relevance. The system offers a development mode with local file storage and an optional production mode leveraging Redis and LanceDB for persistent storage and vector indexing.
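To make the architecture concrete, here is a minimal sketch of how these pieces compose in LlamaIndex. This is not ThinkRAG's actual source: the model name, data directory, chunk sizes, and the use of SentenceSplitter (standing in for the project's SpacyTextSplitter) are all illustrative assumptions.

```python
# Minimal sketch of a ThinkRAG-style pipeline: local LLM via Ollama,
# BAAI embeddings, and chunked indexing over local documents.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

# Local LLM served by Ollama; any locally pulled model name works here.
Settings.llm = Ollama(model="qwen2", request_timeout=120.0)
# BAAI embedding model, matching the family ThinkRAG uses for relevance.
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-large-zh-v1.5")

# Load documents, split them into chunks, and build an in-memory vector index
# (this corresponds to the development mode with local storage).
documents = SimpleDirectoryReader("data").load_data()
splitter = SentenceSplitter(chunk_size=512, chunk_overlap=64)
index = VectorStoreIndex.from_documents(documents, transformations=[splitter])

# Query the private knowledge base entirely offline.
response = index.as_query_engine().query("What does this document cover?")
print(response)

# Production-mode sketch (assumed wiring): swap the in-memory store for
# LanceDB-backed persistent vector storage.
# from llama_index.core import StorageContext
# from llama_index.vector_stores.lancedb import LanceDBVectorStore
# storage_context = StorageContext.from_defaults(
#     vector_store=LanceDBVectorStore(uri="./lancedb")
# )
# index = VectorStoreIndex.from_documents(
#     documents, storage_context=storage_context, transformations=[splitter]
# )
```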
Quick Start & Requirements
1. Install dependencies: pip3 install -r requirements.txt
2. Download embedding models (e.g., BAAI/bge-large-zh-v1.5) and reranking models into the local models directory; see docs/HowToDownloadModels.md for detailed download instructions (one possible approach is sketched after this list).
3. Provide API keys (e.g., OPENAI_API_KEY, DEEPSEEK_API_KEY) via environment variables or through the application interface.
4. Launch the app: streamlit run app.py
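The authoritative download steps are in docs/HowToDownloadModels.md. As one possible approach (an assumption, not the documented procedure), the models can be fetched with huggingface_hub; the reranker repo id and directory layout below are illustrative guesses.

```python
# Hedged sketch: fetch the embedding and reranking models into the local
# models directory. Consult docs/HowToDownloadModels.md for the real steps.
from huggingface_hub import snapshot_download

# Embedding model named in the quick start.
snapshot_download(
    repo_id="BAAI/bge-large-zh-v1.5",
    local_dir="models/BAAI/bge-large-zh-v1.5",
)
# Reranking model (assumed choice from the BAAI reranker family).
snapshot_download(
    repo_id="BAAI/bge-reranker-large",
    local_dir="models/BAAI/bge-reranker-large",
)
```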
Maintenance & Community
The project is open-source and welcomes contributions. Links to community channels or roadmaps are not explicitly provided in the README.
Licensing & Compatibility
Limitations & Caveats
The system is not currently recommended on Windows due to unresolved issues, and Ollama must be pinned to the older 0.3.3 release for compatibility.
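Since the Ollama version pin matters, a quick check can confirm what the local daemon is running. This sketch assumes Ollama is serving on its default port and that the requests package is installed.

```python
# Hedged sketch: verify the locally running Ollama matches the pinned 0.3.3.
import requests

resp = requests.get("http://localhost:11434/api/version")
print(resp.json())  # expected for compatibility: {"version": "0.3.3"}
```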