jgravelle: Token-efficient code exploration for AI agents
Top 36.0% on SourcePulse
jCodeMunch MCP addresses the high token costs and inefficiencies AI agents face when exploring codebases. By indexing repositories once using Abstract Syntax Tree (AST) parsing, it enables precise, token-efficient retrieval of specific code symbols, drastically reducing costs and improving AI performance for tasks like code understanding and refactoring. It is designed for AI agents and developers seeking to optimize AI-driven code analysis.
How It Works
jCodeMunch leverages the tree-sitter parser to build a structured index of code symbols (functions, classes, methods, constants) from a codebase. This index allows MCP-compatible agents to query and retrieve exact code elements via stable symbol IDs and O(1) byte-offset seeking, rather than processing entire files. This approach provides precision context, significantly cutting down on token consumption and latency compared to traditional methods of scanning raw files.
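jCodeMunch's internals aren't shown on this page; as an illustrative sketch of the same idea, Python's built-in ast module (standing in here for tree-sitter, which handles many languages) can build an index mapping symbol names to byte spans, so a later lookup seeks straight to one symbol instead of rereading the whole file. The source text and helper names below are hypothetical:

```python
import ast

# Toy "repository" source; the real tool indexes whole codebases via tree-sitter.
SOURCE = '''\
def greet(name):
    return f"Hello, {name}!"

class Counter:
    """A tiny counter."""
    def increment(self, n=1):
        self.value = getattr(self, "value", 0) + n
'''

def build_index(source: str) -> dict[str, tuple[int, int]]:
    """Index symbols as name -> (byte_offset, byte_length)."""
    src_bytes = source.encode("utf-8")
    # Precompute the byte offset at which each line starts.
    line_starts = [0]
    for line in src_bytes.splitlines(keepends=True):
        line_starts.append(line_starts[-1] + len(line))

    index: dict[str, tuple[int, int]] = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            start = line_starts[node.lineno - 1] + node.col_offset
            end = line_starts[node.end_lineno - 1] + node.end_col_offset
            index[node.name] = (start, end - start)
    return index

def fetch_symbol(source: str, index: dict, symbol_id: str) -> str:
    """Return exactly one symbol's source text via its byte span."""
    offset, length = index[symbol_id]
    return source.encode("utf-8")[offset:offset + length].decode("utf-8")

index = build_index(SOURCE)
# An agent asking for "greet" receives only its two lines, not the whole file.
print(fetch_symbol(SOURCE, index, "greet"))
```

The indexing pass is paid once per repository; every subsequent retrieval is a dictionary lookup plus a byte-range slice, which is what keeps per-query token cost proportional to the symbol rather than the file.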
Quick Start & Requirements
Install with pip install jcodemunch-mcp. The uvx tool is recommended for running the server from an MCP client. Optional integrations are configured via environment variables: GitHub (GITHUB_TOKEN), Anthropic (ANTHROPIC_API_KEY), or local LLMs (OPENAI_API_BASE, OPENAI_MODEL). Contact: j@gravelle.us.
Highlighted Details
Anonymous token-savings data is shared with j.gravelle.us (opt-out available).
Maintenance & Community
The project shows active development with recent updates addressing dependency pinning, community features, model pricing, security fixes, and expanded language support. Community engagement includes anonymous sharing of token savings data.
Licensing & Compatibility
This repository is licensed under a dual-use agreement. It is free for non-commercial use (personal, educational, research, hobby). Commercial use is defined broadly to include business environments, for-profit organizations, product integration, and revenue-generating services, and requires a separate paid commercial license from the author.
Limitations & Caveats
jCodeMunch MCP is not intended for LSP diagnostics or completions, general editing workflows, real-time file watching, or cross-repository global indexing. Integration with local LLMs requires careful configuration, potentially involving pre-loading models and adjusting timeouts to prevent client-side timeouts during model inference.