LLM bridge for Model Context Protocol (MCP) servers
This project provides a bridge between Model Context Protocol (MCP) servers and OpenAI-compatible Large Language Models (LLMs). It enables LLMs to interact with MCP-compliant tools by translating between MCP's tool specifications and OpenAI's function-calling interface, allowing seamless integration of local or cloud-based LLMs with MCP-enabled systems.
How It Works
The bridge acts as a bidirectional translation layer. It converts MCP tool specifications into OpenAI function schemas so the LLM can see which tools are available; when the LLM invokes a function, the bridge maps that invocation back to an MCP tool execution, processes the result, and returns it to the LLM. This standardizes tool interaction, so any OpenAI-compatible LLM can use MCP tools.
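To make the mapping concrete, here is a minimal sketch in Python of both directions of the translation. The MCP field names (name, description, inputSchema, tools/call) and OpenAI's function-calling format are standard; the helper functions themselves are illustrative, not the bridge's actual internals.

import json

def mcp_tool_to_openai_function(tool: dict) -> dict:
    # MCP describes a tool as {name, description, inputSchema};
    # OpenAI's function-calling API expects {name, description, parameters}.
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool.get("inputSchema", {"type": "object", "properties": {}}),
        },
    }

def openai_call_to_mcp_request(tool_call: dict) -> dict:
    # OpenAI returns arguments as a JSON string; MCP expects a
    # tools/call request with a structured arguments object.
    return {
        "method": "tools/call",
        "params": {
            "name": tool_call["function"]["name"],
            "arguments": json.loads(tool_call["function"]["arguments"]),
        },
    }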
Quick Start & Requirements
Requires uv for package management. Install it with:
curl -LsSf https://astral.sh/uv/install.sh | sh
Then clone the repository and set up the environment:
git clone https://github.com/bartolli/mcp-llm-bridge.git
cd mcp-llm-bridge
uv venv
source .venv/bin/activate
uv pip install -e .
Finally, create the test database:
python -m mcp_llm_bridge.create_test_db
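With the environment ready, configuration roughly follows the pattern below. This is a sketch: BridgeConfig, LLMConfig, and the mcp_llm_bridge.config module path are assumed from the project's description, so check the repository's README for the exact API.

import os
from mcp import StdioServerParameters
from mcp_llm_bridge.config import BridgeConfig, LLMConfig  # assumed module path

config = BridgeConfig(
    # Launch an MCP server over stdio that serves the test database created above
    mcp_server_params=StdioServerParameters(
        command="uvx",
        args=["mcp-server-sqlite", "--db-path", "test.db"],
        env=None,
    ),
    # Point the bridge at any OpenAI-compatible endpoint
    llm_config=LLMConfig(
        api_key=os.getenv("OPENAI_API_KEY"),
        model_name="gpt-4o",
        base_url=None,  # set this to use a local endpoint instead
    ),
)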
Highlighted Details
Supports OpenAI models (e.g., gpt-4o) and local OpenAI-compatible endpoints such as Ollama. The example MCP server is launched via the uvx command.
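Because Ollama exposes an OpenAI-compatible API at /v1, the bridge's LLM side can target it by changing only the base URL. A brief sketch with the standard openai client (the model name is one the README mentions; the prompt is illustrative):

from openai import OpenAI

# Ollama serves an OpenAI-compatible API at localhost:11434/v1;
# the api_key is required by the client but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="mistral-nemo:12b-instruct-2407-q8_0",
    messages=[{"role": "user", "content": "What tools are available?"}],
)
print(response.choices[0].message.content)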
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The README notes that mistral-nemo:12b-instruct-2407-q8_0 handles complex queries more effectively than other tested models such as llama3.2:3b-instruct-fp16. LM Studio compatibility is described as likely to work but has not been explicitly tested.