Graph RAG implementation using local models via Ollama
This repository adapts Microsoft's GraphRAG for local LLM inference via Ollama, targeting developers and researchers who want to build knowledge graphs and perform question answering on private datasets without relying on cloud APIs. It offers a cost-effective alternative for complex RAG tasks.
How It Works
The core approach extends GraphRAG by integrating it with Ollama, a local LLM runner. Ollama serves both the language model used for text generation and the embedding model used for document indexing. The system builds a graph-based index by first extracting entities and relationships to form a knowledge graph, then generating community summaries for clusters of related entities. This lets it answer "global" questions about the entire dataset, a task traditional RAG struggles with, by producing partial responses from the community summaries and combining them into a final answer.
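As an illustration of that flow, here is a minimal sketch following upstream GraphRAG's CLI conventions; the `./ragtest` root directory and the exact flags are assumptions, so check this repo's README for the authoritative commands:

```sh
# Build the graph index: extract entities/relationships, then summarize communities
python -m graphrag.index --root ./ragtest

# Ask a "global" question; partial answers derived from the community
# summaries are combined into a final response
python -m graphrag.query --root ./ragtest --method global "What are the top themes in the dataset?"
```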
Quick Start & Requirements
- Install with `pip install -e .` within a conda environment (Python 3.10 recommended).
- Requires one LLM (e.g., `mistral`) and one embedding model (e.g., `nomic-embed-text`) pulled via Ollama.
- Configure `settings.yaml` with local Ollama endpoints.
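A minimal setup sketch under those requirements; the environment name is illustrative, and the `curl` sanity checks assume Ollama's default endpoint, `http://localhost:11434`:

```sh
# Python 3.10 environment and an editable install of this repo
conda create -n graphrag-ollama python=3.10 -y
conda activate graphrag-ollama
pip install -e .

# Pull one chat model and one embedding model via Ollama
ollama pull mistral
ollama pull nomic-embed-text

# Sanity-check both models before pointing settings.yaml at these endpoints
curl http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "Hello", "stream": false}'
curl http://localhost:11434/api/embeddings -d '{"model": "nomic-embed-text", "prompt": "Hello"}'
```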
Maintenance & Community
The project welcomes community contributions. It cites the original Microsoft GraphRAG repository and Ollama as key dependencies.
Licensing & Compatibility
The repository's license is not explicitly stated in the README. Compatibility for commercial use or closed-source linking would require clarification of the licensing terms.
Limitations & Caveats
The README recommends Python 3.10 specifically for installation, suggesting potential compatibility issues with other versions. It also states that the "global" query method is the only supported query method.