elusznik: Code execution bridge for LLM agents
Top 95.9% on SourcePulse
This project provides a bridge for executing Python code within isolated, rootless containers, specifically designed to reduce the context window bloat associated with traditional Model Context Protocol (MCP) servers. It targets LLM agents and developers who need to leverage numerous MCP tools without incurring high token costs or compromising security. The primary benefit is a drastic reduction in prompt token usage (from ~30K to ~200 tokens) while enabling native Python data science capabilities and robust code execution sandboxing.
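The sandboxing mentioned above rests on rootless-container hardening. As a rough sketch (an assumption for illustration, not the bridge's actual code), a locked-down one-shot invocation might be assembled like this; the flag names are real Podman options, while the image, script, and the no-network choice are placeholders:

```python
# Sketch: building a hardened, rootless `podman run` command of the kind the
# project describes. The image and script are placeholders; --network=none is
# an assumed default, not confirmed by the project docs.
def hardened_run_command(image: str, script: str) -> list[str]:
    """Build a locked-down, one-shot container command for executing code."""
    return [
        "podman", "run", "--rm",
        "--cap-drop=ALL",                        # drop every Linux capability
        "--security-opt", "no-new-privileges",   # block privilege escalation
        "--read-only",                           # read-only root filesystem
        "--network=none",                        # assumed: no network access
        image,
        "python", "-c", script,
    ]

cmd = hardened_run_command("python:3.12-slim", "print('hello from the sandbox')")
```

Building the argument list explicitly (rather than a shell string) avoids quoting pitfalls when the script is LLM-generated.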
How It Works
The bridge implements a "discovery-first" architecture, inspired by Anthropic's and Cloudflare's approaches. Instead of exposing hundreds of individual MCP tool schemas to the LLM, it exposes a single run_python tool. The LLM then writes Python code that dynamically discovers, hydrates schemas for, and calls other MCP servers. This approach leverages rootless containers (Podman/Docker) for security, ensuring code execution is isolated with minimal privileges. Tool schemas are fetched on demand, keeping the LLM's context window consistently small regardless of the number of proxied servers.
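The discovery-first flow above can be sketched in miniature. The helper names (`list_servers`, `load_schema`, `call_tool`) and the in-memory registry are assumptions for illustration, not the bridge's actual API; the point is that schemas are hydrated lazily, only for tools the generated code actually touches:

```python
# Illustrative sketch of the "discovery-first" pattern: the LLM sees only a
# single run_python tool, and its generated code discovers servers and fetches
# schemas on demand. Registry contents and helper names are hypothetical.
SERVERS = {
    "github": {"search_issues": {"params": {"query": "str"}}},
    "fs": {"read_file": {"params": {"path": "str"}}},
}

def list_servers() -> list[str]:
    """Discovery step: cheap to expose, carries no tool schemas."""
    return sorted(SERVERS)

def load_schema(server: str, tool: str) -> dict:
    """Hydration step: fetch one tool schema only when it is needed."""
    return SERVERS[server][tool]

def call_tool(server: str, tool: str, **kwargs):
    """Invocation step: validates the tool exists, then echoes the call."""
    load_schema(server, tool)
    return {"server": server, "tool": tool, "args": kwargs}

# Code an LLM might submit through the single run_python tool:
result = call_tool("fs", "read_file", path="README.md")
```

Because only `list_servers` and the tools actually called enter the context, prompt size stays flat no matter how many servers are proxied.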
Quick Start & Requirements
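Proxied servers are declared in mcp_config.json (referenced in the steps below). A minimal hypothetical example, following the common MCP client config shape; the server name and command are placeholders, and the exact schema the bridge expects may differ (see GUIDE.md):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  }
}
```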
- Dependency management uses uv: run uv sync.
- Run directly from Git: uvx --from git+https://github.com/elusznik/mcp-server-code-execution-mode mcp-server-code-execution-mode run.
- Proxied server definitions are supplied via mcp_config.json.
- Documentation: README.md, GUIDE.md, ARCHITECTURE.md, HISTORY.md, STATUS.md.
Highlighted Details
- Containers run with --cap-drop=ALL, a read-only filesystem, and no-new-privileges for enterprise-grade security.
Maintenance & Community
The primary community interaction point is the GitHub repository. No specific details regarding maintainers, sponsorships, or dedicated community channels (like Discord/Slack) are provided in the README.
Licensing & Compatibility
Limitations & Caveats
Automated testing, observability features (logging, metrics), policy controls, and runtime diagnostics are currently in progress. Support for discovering server definitions from individual agent configuration files (e.g., .claude.json) is postponed, with ~/MCPs/*.json being the recommended location. Self-server recursion requires explicit configuration (MCP_BRIDGE_ALLOW_SELF_SERVER=1).