Tool for LLM-powered document interaction
IncarnaMind allows users to query personal documents (PDF, TXT) using various LLMs, including OpenAI's GPT series, Anthropic's Claude, and local open-source models like Llama2. It addresses limitations in traditional RAG by offering adaptive chunking and multi-document querying, enabling more precise and context-aware information retrieval.
How It Works
IncarnaMind employs a "Sliding Window Chunking" mechanism for adaptive data segmentation, balancing fine-grained and coarse-grained information access. This is coupled with an "Ensemble Retriever" to enhance both semantic understanding and precise retrieval across multiple documents. This approach aims to overcome the limitations of fixed chunking and single-document querying found in many RAG systems.
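The two mechanisms above can be sketched in a few lines. This is an illustrative toy, not IncarnaMind's implementation: the window and stride sizes are arbitrary, and reciprocal rank fusion (RRF) stands in here for however the Ensemble Retriever actually combines its rankers.

```python
# Toy sketch of the two mechanisms described above -- NOT IncarnaMind's actual code.
# 1) Sliding-window chunking: overlapping windows give the retriever both
#    fine-grained and coarse-grained views of the same passage.
# 2) Ensemble retrieval: approximated with reciprocal rank fusion (RRF),
#    a common way to merge semantic and keyword rankings across documents.

def sliding_window_chunks(tokens, window_size=256, stride=128):
    """Split `tokens` into overlapping windows, advancing `stride` tokens at a time."""
    if window_size <= 0 or stride <= 0:
        raise ValueError("window_size and stride must be positive")
    chunks = []
    for start in range(0, len(tokens), stride):
        chunks.append(tokens[start:start + window_size])
        if start + window_size >= len(tokens):
            break  # last window already reaches the end of the document
    return chunks

def rrf_fuse(rankings, k=60):
    """Merge several ranked lists of chunk ids via reciprocal rank fusion."""
    scores = {}
    for ranking in rankings:
        for rank, chunk_id in enumerate(ranking):
            scores[chunk_id] = scores.get(chunk_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Overlapping chunks from a 10-token "document":
print(sliding_window_chunks(list(range(10)), window_size=4, stride=2))
# -> [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]

# Fusing a semantic ranking with a keyword ranking over chunks from three documents:
semantic = ["doc2:chunk5", "doc1:chunk0", "doc3:chunk2"]
keyword = ["doc1:chunk0", "doc3:chunk2", "doc2:chunk5"]
print(rrf_fuse([semantic, keyword])[0])  # "doc1:chunk0" -- ranked high by both
```

Because the windows overlap (stride smaller than window size), a fact split across a chunk boundary in one window is intact in the next, which is what lets the retriever trade off granularity without a fixed chunk size.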
Quick Start & Requirements
Install the dependencies with pip install -r requirements.txt, and optionally llama-cpp-python with CUDA or Metal support for running local models. Add your API keys to configparser.ini and place your documents in the /data directory. Ingest the documents with python docs2db.py, then start chatting with python main.py.
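Since the setup step involves editing configparser.ini, a minimal sketch of what the API-key entries might look like follows. The section and key names here are assumptions for illustration; use the template file that ships with the repository.

```ini
; Hypothetical configparser.ini fragment -- section and key names are assumptions;
; consult the actual file in the repository for the real field names.
[tokens]
OPENAI_API_KEY = sk-...
ANTHROPIC_API_KEY = sk-ant-...
```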
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
Citation functionality is not yet implemented but is planned for a future release. The current version has limited asynchronous capabilities. OCR support and a frontend UI are also listed as upcoming features.