RAG-based chatbot for custom knowledge bases
RAG-GPT provides a comprehensive solution for building intelligent customer service systems using Retrieval-Augmented Generation (RAG) and Large Language Models (LLMs). It targets developers and businesses seeking to quickly deploy a customizable chatbot with a user-friendly interface and the ability to learn from diverse knowledge bases.
How It Works
The system leverages a Flask backend to integrate various LLMs (OpenAI, ZhipuAI, DeepSeek, Moonshot, Ollama) and supports multiple knowledge base types, including websites, isolated URLs, and local files. It employs a RAG architecture for contextually relevant answers, with options for query preprocessing and result reranking to enhance accuracy.
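To illustrate the retrieve, rerank, and generate flow described above, here is a minimal, self-contained Python sketch. It is not RAG-GPT's actual implementation: the knowledge base, scoring, and function names (retrieve, rerank, build_prompt) are illustrative assumptions, and the final LLM call is replaced by printing the assembled prompt.

```python
# Minimal, self-contained sketch of the retrieve -> rerank -> generate flow.
# Names and scoring are illustrative; RAG-GPT's real pipeline uses an
# embedding store and the configured LLM backend instead.

KNOWLEDGE_BASE = [
    "RAG-GPT supports websites, isolated URLs, and local files as knowledge sources.",
    "The admin console lets you pick the LLM used for answering questions.",
    "Answers are grounded in retrieved passages to reduce hallucination.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Toy retriever: rank passages by word overlap with the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(p.lower().split())), p) for p in KNOWLEDGE_BASE]
    return [p for score, p in sorted(scored, reverse=True)[:top_k] if score > 0]

def rerank(query: str, passages: list[str]) -> list[str]:
    """Toy reranker: prefer shorter passages among those retrieved."""
    return sorted(passages, key=len)

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the chosen LLM."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    question = "Which knowledge sources does RAG-GPT support?"
    prompt = build_prompt(question, rerank(question, retrieve(question)))
    print(prompt)  # In the real system, this prompt goes to the configured LLM.
```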
Quick Start & Requirements
Clone the repository with git clone, configure .env with your API keys and model names, then run docker-compose up --build, or run python3 rag_gpt_app.py directly (after pip install -r requirements.txt and python3 create_sqlite_db.py).
Highlighted Details
Maintenance & Community
The project is hosted on GitHub under the open-kf organization. Specific contributor or community links (Discord, Slack, roadmap) are not detailed in the README.
Licensing & Compatibility
The README does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.
Limitations & Caveats
The README notes that only gpt-3.5-turbo is currently selectable in the admin console's LLM options, with plans for expansion. DeepSeek and Moonshot require ZhipuAI's Embedding API, as they do not provide their own embedding endpoints.
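The embedding constraint above can be expressed as simple provider routing: chat requests go to the selected LLM, while embeddings fall back to ZhipuAI for providers without an embedding API. The sketch below is a hypothetical illustration, not RAG-GPT's actual configuration schema; the provider sets and function name are assumptions.

```python
# Hypothetical sketch of the embedding-provider fallback described above.
# Provider names and the assumption that OpenAI, ZhipuAI, and Ollama expose
# their own embeddings are illustrative, not RAG-GPT's actual config.

CHAT_PROVIDERS = {"openai", "zhipuai", "deepseek", "moonshot", "ollama"}
HAS_OWN_EMBEDDINGS = {"openai", "zhipuai", "ollama"}

def embedding_provider_for(llm_name: str) -> str:
    """Return which provider should serve embeddings for a given chat LLM."""
    llm = llm_name.lower()
    if llm not in CHAT_PROVIDERS:
        raise ValueError(f"Unknown LLM provider: {llm_name}")
    # DeepSeek and Moonshot have no embedding endpoint, so ZhipuAI is used.
    return llm if llm in HAS_OWN_EMBEDDINGS else "zhipuai"

if __name__ == "__main__":
    for name in ("OpenAI", "DeepSeek", "Moonshot"):
        print(name, "->", embedding_provider_for(name))
```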