bibinprathap/VeritasGraph: Sovereign Graph RAG framework for secure, on-premise AI
Top 95.9% on SourcePulse
VeritasGraph is an enterprise-grade Graph RAG framework designed to overcome the context-blindness of traditional vector-based RAG systems. It offers secure, on-premise AI capabilities with verifiable attribution, targeting enterprises and researchers who require transparent and controllable AI solutions. The primary benefit is enhanced reasoning and data provenance, moving beyond simple similarity matching to true understanding of information connections.
How It Works
VeritasGraph uniquely combines hierarchical tree navigation (similar to a Table of Contents) with the semantic reasoning power of knowledge graphs. This hybrid approach allows users to navigate documents like a human would, following structured outlines, while simultaneously leveraging deep semantic connections and enabling multi-hop reasoning across disparate pieces of information. The framework constructs a knowledge graph from ingested documents, ensuring that every generated answer is traceable back to its source with 100% verifiable attribution.
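The multi-hop reasoning with source attribution described above can be sketched in plain Python. This is an illustrative toy, not the VeritasGraph API: the graph, edge facts, and document names below are invented for the example. The key idea is that every edge carries the document that asserted it, so each derived fact returns with its full provenance chain.

```python
# Illustrative sketch (not the VeritasGraph API): multi-hop traversal of a
# tiny knowledge graph where each edge remembers its source document, so
# every answer is traceable back to provenance.
from collections import deque

# Each edge: subject -> list of (relation, object, source_document)
GRAPH = {
    "VeritasGraph": [("is_a", "Graph RAG framework", "readme.md")],
    "Graph RAG framework": [("enables", "multi-hop reasoning", "docs/overview.md")],
    "multi-hop reasoning": [("improves", "answer provenance", "docs/attribution.md")],
}

def multi_hop(start, max_hops=3):
    """Breadth-first traversal returning each reachable fact together
    with the chain of source documents that supports it."""
    results = []
    queue = deque([(start, [], [])])  # (node, relation path, source chain)
    while queue:
        node, path, sources = queue.popleft()
        if len(path) >= max_hops:
            continue
        for relation, obj, doc in GRAPH.get(node, []):
            fact = (node, relation, obj)
            results.append({"fact": fact, "sources": sources + [doc]})
            queue.append((obj, path + [fact], sources + [doc]))
    return results

facts = multi_hop("VeritasGraph")
for f in facts:
    print(f["fact"], "<-", f["sources"])
```

A vector-only retriever would score each chunk independently; here the third fact arrives with all three supporting documents attached, which is the attribution property the framework advertises.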
Quick Start & Requirements
Installation is straightforward:
pip install veritasgraph
An interactive demo can be launched with:
veritasgraph demo --mode=lite
This lite mode requires no local GPU and uses cloud APIs (OpenAI/Anthropic). For privacy and offline use, local mode requires Ollama and approximately 8GB RAM. Production-ready full mode necessitates Docker and Neo4j. Links to video demonstrations and tutorials are provided within the README.
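The three deployment modes can be summarized as a small dispatch table. Everything below is a hypothetical helper written for this summary (the function and dictionary names are assumptions, not part of the VeritasGraph package); it only encodes the requirements stated above: lite uses cloud APIs, local needs Ollama and ~8GB RAM, full needs Docker and Neo4j.

```python
# Hypothetical helper (names are assumptions, not the VeritasGraph API):
# encodes the three modes described above and falls back to the cloud-backed
# lite mode when the machine lacks the RAM a heavier mode requires.
MODE_REQUIREMENTS = {
    "lite":  {"backend": "cloud",        "min_ram_gb": 0},   # OpenAI/Anthropic APIs
    "local": {"backend": "ollama",       "min_ram_gb": 8},   # private, offline
    "full":  {"backend": "docker+neo4j", "min_ram_gb": 64},  # production
}

def select_backend(mode, available_ram_gb):
    """Pick the backend for the requested mode, falling back to 'lite'
    (cloud) when available RAM is below the mode's stated minimum."""
    req = MODE_REQUIREMENTS.get(mode)
    if req is None:
        raise ValueError(f"unknown mode: {mode!r}")
    if available_ram_gb < req["min_ram_gb"]:
        return MODE_REQUIREMENTS["lite"]["backend"]
    return req["backend"]

print(select_backend("local", available_ram_gb=16))  # enough RAM -> "ollama"
print(select_backend("full", available_ram_gb=32))   # too little -> "cloud"
```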
Maintenance & Community
The project has received recognition at the International Conference on Applied Science and Future Technology (ICASF 2025), indicating research engagement. Specific community channels like Discord or Slack are not explicitly mentioned in the README.
Licensing & Compatibility
VeritasGraph is released under the permissive MIT License, making it suitable for commercial use and integration into closed-source applications without significant restrictions.
Limitations & Caveats
Full on-premise deployment requires substantial hardware: a CPU with 16+ cores, 64GB+ RAM (128GB recommended), and a high-end NVIDIA GPU with 24GB+ VRAM. Switching embedding models requires re-indexing all documents, since stored vectors are only comparable to queries encoded by the same model. The lite mode relies on external cloud APIs, which may be unacceptable for strict data-sovereignty requirements.
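The re-indexing caveat is easy to see concretely. The sketch below is generic (not VeritasGraph code): cosine similarity between an index vector produced by one embedding model and a query vector produced by another is undefined when the dimensions differ, and meaningless even when they happen to match, which is why the whole corpus must be re-embedded.

```python
# Why switching embedding models forces re-indexing: stored vectors must
# share the dimensionality (and embedding space) of the query encoder.
import math

def cosine(a, b):
    if len(a) != len(b):
        raise ValueError(
            f"dimension mismatch: index vector is {len(a)}-d, query vector "
            f"is {len(b)}-d; re-index the corpus with the new model"
        )
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

old_index_vec = [0.1, 0.9, 0.3]       # toy 3-d embedding from the "old" model
new_query_vec = [0.2, 0.8, 0.1, 0.5]  # toy 4-d embedding from the "new" model

try:
    cosine(old_index_vec, new_query_vec)
except ValueError as e:
    print(e)  # the comparison is rejected outright
```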