CLI tool for LLM prompting with Model Context Protocol (MCP) client
Top 52.3% on SourcePulse
This CLI tool enables users to interact with Large Language Models (LLMs) and Model Context Protocol (MCP) compatible servers directly from their terminal. It supports various LLM providers (OpenAI, Groq, local models) and integrates with MCP servers for functionalities like web search and YouTube summarization, offering a versatile alternative to desktop clients.
How It Works
The client uses a configuration file (`~/.llm/config.json`) to manage LLM provider details, system prompts, and MCP server integrations. It supports piping text and image input, using prompt templates for common tasks, and dynamically calling external tools (MCP servers) based on user queries. Tool execution can be confirmed manually or bypassed with a flag, and intermediate tool messages can be suppressed for cleaner script output.
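The configuration schema itself is not shown in this summary, so the following is only a rough sketch of what a minimal `~/.llm/config.json` might contain; every field name here (`systemPrompt`, `llm`, `provider`, `model`, `api_key`, `mcpServers`) is an assumption, and the project's README is the authoritative reference.

```json
{
  "systemPrompt": "You are a concise terminal assistant.",
  "llm": {
    "provider": "openai",
    "model": "gpt-4o-mini",
    "api_key": "YOUR_OPENAI_API_KEY"
  },
  "mcpServers": {}
}
```

Swapping the `provider` and `model` values is how the same client would point at Groq or a local model, per the provider support described above.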
Quick Start & Requirements
pip install mcp-client-cli
- A `~/.llm/config.json` file for LLM and MCP server configuration.
- `pngpaste` for image clipboard support on macOS (`brew install pngpaste`).
- `xclip` for clipboard support on Linux (`sudo apt install xclip`).
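Assuming the package installs an `llm` command (verify against the project README), a first run might look like the sketch below.

```sh
# Assumes the installed command is `llm`; check the project README if not.
llm "Explain the Model Context Protocol in two sentences"

# Pipe text from another command or file as context for the prompt.
cat notes.md | llm "Summarize this document"

# Flag names for skipping tool confirmations or hiding intermediate tool
# output are not listed in this summary; see `llm --help` for the options.
```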
Highlighted Details
Uses `uvx` or `npx` for tools like Brave Search and YouTube summarization.
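As a sketch only, a Brave Search server launched on demand through `npx` might be registered in the `mcpServers` section roughly as follows; `@modelcontextprotocol/server-brave-search` is the reference MCP server for Brave Search, but the surrounding keys (`command`, `args`, `env`) are assumptions about this client's config schema rather than documented fields.

```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "YOUR_BRAVE_API_KEY" }
    }
  }
}
```

Because `npx` fetches and runs the package on demand, no separate install step is needed for such tools.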
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
Clipboard support on Linux requires the `xclip` utility to be installed. Image handling via the clipboard on macOS is optional and requires `pngpaste`.