mcp-server-code-execution-mode by elusznik

Code execution bridge for LLM agents

Created 1 month ago
268 stars

Top 95.9% on SourcePulse

Project Summary

This project provides a bridge for executing Python code within isolated, rootless containers, specifically designed to reduce the context window bloat associated with traditional Model Context Protocol (MCP) servers. It targets LLM agents and developers who need to leverage numerous MCP tools without incurring high token costs or compromising security. The primary benefit is a drastic reduction in prompt token usage (from ~30K to ~200 tokens) while enabling native Python data science capabilities and robust code execution sandboxing.

How It Works

The bridge implements a "discovery-first" architecture, inspired by Anthropic's and Cloudflare's approaches. Instead of exposing hundreds of individual MCP tool schemas to the LLM, it exposes a single run_python tool. The LLM then writes Python code that dynamically discovers, hydrates schemas for, and calls other MCP servers. This approach leverages rootless containers (Podman/Docker) for security, ensuring code execution is isolated with minimal privileges. Tool schemas are fetched on demand, keeping the LLM's context window consistently small regardless of the number of proxied servers.
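
To make the pattern concrete, the sketch below uses the official MCP Python SDK to show the discover-then-call flow the bridge automates from inside its sandbox. The bridge's actual in-sandbox helpers are not documented in this summary, and mcp-server-fetch and its fetch tool are stand-ins chosen purely for illustration:

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Spawn a stdio MCP server -- the kind of server the bridge proxies.
        params = StdioServerParameters(command="uvx", args=["mcp-server-fetch"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # Hydrate tool schemas on demand rather than preloading
                # every schema into the LLM's context window.
                tools = await session.list_tools()
                print([tool.name for tool in tools.tools])
                result = await session.call_tool("fetch", {"url": "https://example.com"})
                print(result.content)

    asyncio.run(main())

Because discovery happens in code rather than in the prompt, adding more proxied servers costs no additional context tokens.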

Quick Start & Requirements

  • Prerequisites: macOS or Linux, Python 3.14+, Podman or Docker.
  • Installation: Use uv for dependency management: uv sync.
  • Launch: uvx --from git+https://github.com/elusznik/mcp-server-code-execution-mode mcp-server-code-execution-mode run.
  • Agent Registration: Configure the bridge in your agent's MCP settings file (e.g., mcp_config.json); a sketch follows this list.
  • Documentation: README.md, GUIDE.md, ARCHITECTURE.md, HISTORY.md, STATUS.md.
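
For illustration, a minimal mcp_config.json entry matching the launch command above might look like the following. The "mcpServers" key and file layout follow the convention used by common MCP clients; your agent's exact format and file location may differ:

    {
      "mcpServers": {
        "code-execution-mode": {
          "command": "uvx",
          "args": [
            "--from",
            "git+https://github.com/elusznik/mcp-server-code-execution-mode",
            "mcp-server-code-execution-mode",
            "run"
          ]
        }
      }
    }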

Highlighted Details

  • Zero-Context Discovery: Reduces LLM context overhead to approximately 200 tokens, irrespective of the number of MCP servers configured.
  • Rootless Container Sandbox: Executes code within containers using --cap-drop=ALL, a read-only filesystem, and no-new-privileges for enterprise-grade security (see the sketch after this list).
  • Native Data Science: Supports Python libraries like pandas, numpy, and scikit-learn directly within the isolated environment.
  • Universal MCP Proxying: Capable of proxying any standard input/output (stdio) MCP server.
  • Code-First Execution: Enables an LLM to handle a complex workflow in a single Python script that combines discovery, logic, and execution, in contrast to multi-step, "chatty" tool-calling interactions.
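
The sandbox flags above map onto a container invocation along these lines. This is a minimal sketch, not the bridge's actual implementation: the image name, the --network=none default, and the timeout are assumptions for illustration.

    import subprocess

    def run_sandboxed(script: str, engine: str = "podman") -> str:
        """Run a Python script in a locked-down, rootless container (sketch)."""
        cmd = [
            engine, "run", "--rm", "--interactive",
            "--cap-drop=ALL",                        # drop every Linux capability
            "--security-opt", "no-new-privileges",   # block privilege escalation
            "--read-only",                           # read-only root filesystem
            "--network=none",                        # assumption: no network by default
            "python:3.14-slim",                      # hypothetical runtime image
            "python", "-",                           # read the script from stdin
        ]
        result = subprocess.run(cmd, input=script, capture_output=True,
                                text=True, timeout=60)
        return result.stdout

    print(run_sandboxed("print('hello from the sandbox')"))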

Maintenance & Community

The primary community interaction point is the GitHub repository. No specific details regarding maintainers, sponsorships, or dedicated community channels (like Discord/Slack) are provided in the README.

Licensing & Compatibility

  • License: GPLv3.
  • Compatibility: As a GPLv3-licensed project, it is subject to strong copyleft provisions, which may affect its use in closed-source or proprietary applications.

Limitations & Caveats

Automated testing, observability features (logging, metrics), policy controls, and runtime diagnostics are currently in progress. Support for discovering server definitions from individual agent configuration files (e.g., .claude.json) is postponed, with ~/MCPs/*.json being the recommended location. Self-server recursion requires explicit configuration (MCP_BRIDGE_ALLOW_SELF_SERVER=1).
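
As a sketch of the recommended layout, a server definition in the discovery directory might look like this hypothetical ~/MCPs/fetch.json. The exact schema the bridge expects is not documented in this summary; the one-server-per-file shape with command/args keys below is an assumption:

    {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }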

Health Check

  • Last Commit: 1 week ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 2
  • Issues (30d): 5
  • Star History: 257 stars in the last 30 days
