Python library for agentic RAG from GitHub projects
Top 90.1% on sourcepulse
Llama-github is a Python library that enables LLM chatbots, AI agents, and auto-dev solutions to perform agentic Retrieval-Augmented Generation (RAG) over public GitHub repositories. It retrieves relevant code snippets, issues, and repository metadata to build context for coding questions, with the goal of streamlining AI-driven development.
How It Works
The library uses LLM-powered question analysis to decompose complex queries and generate effective search strategies. It then retrieves highly relevant code, issues, and repository information from GitHub, caching repositories across threads to improve efficiency and reduce API token consumption. Finally, it combines the retrieved GitHub data with LLM reasoning to generate comprehensive, contextually relevant answers. The architecture uses asynchronous programming for concurrent request handling and supports flexible LLM integration.
Quick Start & Requirements
pip install llama-github
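After installation, usage looks roughly like the following. The `GithubRAG` entry point and `retrieve_context` method reflect the project's README, but treat the exact names and parameters as assumptions and verify them against the current documentation; the credentials are placeholders.

```python
try:
    # Entry point per the project's README; treat the import path as an assumption.
    from llama_github import GithubRAG
    HAVE_LLAMA_GITHUB = True
except ImportError:
    HAVE_LLAMA_GITHUB = False

def build_context(question: str):
    """Retrieve GitHub-derived context for a coding question.

    The keys below are placeholders; real tokens are required at runtime,
    and the parameter names are assumptions based on the README.
    """
    rag = GithubRAG(
        github_access_token="<your-github-token>",
        openai_api_key="<your-openai-key>",
    )
    return rag.retrieve_context(question)
```

The returned context is intended to be fed into your own LLM prompt alongside the user's question.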
Limitations & Caveats
In professional mode, context generation for a single query can take roughly one minute. The library also depends on external API keys: OpenAI (required) and Jina (optional).
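Because professional-mode context generation can take on the order of a minute, callers may want to bound the wait. A stdlib sketch of a timeout wrapper follows; the `generate_context` coroutine is a hypothetical stand-in for the slow call, not a llama-github API.

```python
import asyncio

async def generate_context(query: str) -> str:
    """Hypothetical stand-in for a slow, professional-mode retrieval call."""
    await asyncio.sleep(0.01)  # real calls may take ~60 s
    return f"context for: {query}"

async def generate_with_timeout(query: str, timeout_s: float = 90.0):
    # Bound the wait; return None instead of hanging past the deadline.
    try:
        return await asyncio.wait_for(generate_context(query), timeout=timeout_s)
    except asyncio.TimeoutError:
        return None

answer = asyncio.run(generate_with_timeout("how does caching work?"))
```

A caller can then fall back to a non-professional mode, or surface a "still working" message, when `None` comes back.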