Ayanami0730: Agentic RAG for scalable, multi-hop question answering
Summary
A-RAG is an advanced Retrieval-Augmented Generation (RAG) framework designed to overcome the limitations of static RAG systems by enabling LLMs to autonomously control retrieval. It targets researchers and developers building sophisticated multi-hop question-answering systems, offering improved accuracy and scalability by leveraging LLM reasoning for dynamic information retrieval.
How It Works
A-RAG operates on three core principles: autonomous strategy selection, iterative execution, and interleaved tool use within a ReAct-like loop. It exposes hierarchical retrieval interfaces—keyword search, semantic search, and chunk reading—directly to the LLM. This allows the agent to dynamically adapt its retrieval strategy across different granularities (keyword, sentence, chunk) based on task characteristics, enabling more efficient and context-aware information gathering compared to traditional Graph RAG or predefined Workflow RAG paradigms.
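The interleaved tool use described above can be sketched as a minimal ReAct-like loop. This is an illustrative toy, not the project's actual API: the tool names (`keyword_search`, `semantic_search`, `read_chunk`), the scripted agent policy, and the toy corpus are all assumptions standing in for the LLM's autonomous decisions and a real index.

```python
# Minimal sketch of an agentic retrieval loop in the style A-RAG describes.
# Tool names and interfaces are hypothetical; a scripted plan stands in for
# the LLM's autonomous strategy selection.
from dataclasses import dataclass, field

# Toy corpus: chunk_id -> text (a real system would use an indexed store)
CORPUS = {
    "c1": "Marie Curie won the Nobel Prize in Physics in 1903.",
    "c2": "Marie Curie also won the Nobel Prize in Chemistry in 1911.",
}

def keyword_search(query: str) -> list[str]:
    """Keyword granularity: chunk ids containing any query term."""
    terms = query.lower().split()
    return [cid for cid, text in CORPUS.items()
            if any(t in text.lower() for t in terms)]

def semantic_search(query: str) -> list[str]:
    """Sentence/semantic granularity: stand-in for embedding search,
    here approximated by token-overlap ranking."""
    q = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda cid: -len(q & set(CORPUS[cid].lower().split())))
    return ranked[:1]

def read_chunk(chunk_id: str) -> str:
    """Chunk granularity: fetch full text for the agent to reason over."""
    return CORPUS[chunk_id]

TOOLS = {"keyword_search": keyword_search,
         "semantic_search": semantic_search,
         "read_chunk": read_chunk}

@dataclass
class Agent:
    """Scripted policy standing in for the LLM's dynamic tool choice."""
    plan: list[tuple[str, str]]
    evidence: list[str] = field(default_factory=list)

    def run(self) -> list[str]:
        for tool_name, arg in self.plan:      # act: pick a tool + argument
            result = TOOLS[tool_name](arg)    # observe: tool output
            if tool_name == "read_chunk":
                self.evidence.append(result)  # reason: accumulate evidence
        return self.evidence

agent = Agent(plan=[("keyword_search", "Nobel Chemistry"),
                    ("read_chunk", "c2")])
print(agent.run())
# → ['Marie Curie also won the Nobel Prize in Chemistry in 1911.']
```

In the real framework the loop would iterate until the LLM decides it has enough evidence, rather than following a fixed plan; the point here is only the shape of the interface, where retrieval tools at different granularities are callable by the agent mid-reasoning.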
Quick Start & Requirements
Install with `uv sync --extra full` or `pip install -e ".[full]"`; `uv` is recommended. Requires an embedding model (e.g., `Qwen/Qwen3-Embedding-0.6B`), an OpenAI API key, and a compatible OpenAI API endpoint.
Maintenance & Community
Contributions are welcome. The project is associated with the authors of arXiv:2602.03442. Specific community channels and active-maintainer information are not detailed in the README.
Licensing & Compatibility
Released under the MIT License, permitting commercial use and integration into closed-source projects.
Limitations & Caveats
The roadmap indicates planned support for additional benchmarks and LLM providers (Anthropic, Gemini), suggesting current implementation is primarily focused on OpenAI-compatible APIs. Features like ablation studies and visualization tools are also listed as future work.