mcp-client-for-ollama by jonigl

A TUI client for local LLMs to interact with MCP servers

Created 5 months ago
303 stars

Top 88.1% on SourcePulse

Project Summary

This project provides a text-based user interface (TUI) client for interacting with Model Context Protocol (MCP) servers via Ollama, enabling local LLMs to call external tools. It targets developers working with local LLMs, offering a streamlined way to manage tool-use workflows from the terminal without writing integration code.

How It Works

The client acts as a bridge between Ollama's local LLM instances and external MCP servers. It presents a rich terminal interface built with Rich and Prompt Toolkit and supports multiple transport protocols (STDIO, SSE, Streamable HTTP) for server connections. Key features include dynamic model switching, real-time streaming responses, and fine-grained configuration of LLM parameters and system prompts.
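
A minimal sketch of this bridging pattern, assuming the official mcp and ollama Python packages (illustrative only, not the project's actual code; the server command, model name, and prompt are placeholders):

    import asyncio
    import ollama  # official Ollama Python client
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Placeholder: any local MCP server launched over STDIO.
    server = StdioServerParameters(command="python", args=["my_mcp_server.py"])

    async def main():
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = (await session.list_tools()).tools
                # Re-advertise the MCP tools in Ollama's function-call schema.
                ollama_tools = [{
                    "type": "function",
                    "function": {
                        "name": t.name,
                        "description": t.description or "",
                        "parameters": t.inputSchema,
                    },
                } for t in tools]
                resp = ollama.chat(
                    model="qwen3",  # placeholder: any installed tool-capable model
                    messages=[{"role": "user", "content": "What time is it?"}],
                    tools=ollama_tools,
                    options={"temperature": 0.2, "num_ctx": 8192},  # tunable LLM parameters
                )
                # Execute whatever tool calls the model requested, via MCP.
                for call in resp.message.tool_calls or []:
                    result = await session.call_tool(
                        call.function.name, arguments=call.function.arguments)
                    print(result.content)

    asyncio.run(main())

The client layers its TUI, streaming output, and multi-server management on top of this same request/execute loop.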

Quick Start & Requirements

  • Requirements: Python 3.10+, Ollama running locally, UV package manager.
  • Installation: Install from PyPI (pip install --upgrade ollmcp) or run it in one step with uvx ollmcp; installation from source is also supported.
  • Links: Python Installation, Ollama Installation.
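
As a preflight check, a short sketch, assuming the official ollama Python package, that verifies both requirements before launching the client:

    import sys
    import ollama  # pip install ollama

    assert sys.version_info >= (3, 10), "ollmcp requires Python 3.10+"

    try:
        resp = ollama.list()  # raises if the Ollama daemon is unreachable
        print("Ollama is running. Installed models:")
        print(resp)
    except Exception as err:
        print("Could not reach Ollama (default: localhost:11434). Is it running?")
        print(err)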

Highlighted Details

  • Multi-Server & Transport Support: Connect to numerous MCP servers simultaneously via local scripts or URLs (SSE, Streamable HTTP).
  • Human-in-the-Loop (HIL): Enables manual approval of tool executions for added safety and control (see the sketch after this list).
  • Advanced Configuration: Fine-tune over 15 LLM parameters, customize system prompts, and manage context window settings.
  • "Thinking Mode": Facilitates advanced reasoning with visible thought processes for supported models.
  • Performance Metrics: Displays detailed token counts and generation timings post-query.
  • Dynamic Model Switching: Seamlessly switch between installed Ollama models without restarting the client.
  • Development Features: Includes server hot-reloading and shell autocompletion via Typer.
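
The human-in-the-loop gate mentioned above amounts to pausing before each tool execution. A hypothetical sketch of the pattern (ollmcp provides its own interactive prompt for this):

    def approve(tool_name: str, arguments: dict) -> bool:
        """Ask the user before a tool runs; refuse by default."""
        print(f"Model requested: {tool_name}({arguments})")
        return input("Execute this tool call? [y/N] ").strip().lower() == "y"

    # Inside the tool-call loop from the earlier sketch:
    # if approve(call.function.name, call.function.arguments):
    #     result = await session.call_tool(call.function.name,
    #                                      arguments=call.function.arguments)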

Maintenance & Community

The project is maintained by jonigl. The README does not mention community channels (such as Discord or Slack) or a roadmap.

Licensing & Compatibility

Licensed under the permissive MIT License, allowing for commercial use and integration with closed-source applications.

Limitations & Caveats

Requires a local Ollama instance. The "thinking mode" feature works only with models that support it. Auto-discovery depends on a specific configuration file path, which may not match every user's setup.
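
For reference, MCP clients conventionally read the "mcpServers" configuration shape shown below. A hypothetical Python snippet that writes one (the exact file path ollmcp auto-discovers is setup-specific and not reproduced here; server name and command are placeholders):

    import json
    import pathlib

    # Standard MCP server-configuration shape.
    config = {
        "mcpServers": {
            "example-server": {
                "command": "python",
                "args": ["my_mcp_server.py"],
            }
        }
    }
    pathlib.Path("mcp-servers.json").write_text(json.dumps(config, indent=2))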

Health Check

  • Last Commit: 3 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 17
  • Issues (30d): 4
  • Star History: 77 stars in the last 30 days
