claudish by MadAppGang

Claude Code proxy for any AI model

Created 1 month ago
361 stars

Top 77.8% on SourcePulse

Project Summary

Claudish is a CLI tool that extends Claude Code beyond Anthropic models, routing its requests through a unified local proxy to a wide range of AI backends. It targets developers and power users who want flexibility in their AI coding agent, offering access to over 100 models via OpenRouter, direct Gemini and OpenAI APIs, and local LLMs.

How It Works

Claudish acts as a local proxy server, translating Claude Code's native Anthropic API requests into formats compatible with various AI providers. It uses a prefix-based routing system (e.g., g/ for Gemini, oai/ for OpenAI, ollama/ for local models) to direct traffic. This approach allows Claude Code to interact seamlessly with models hosted on OpenRouter, Google Gemini, OpenAI, or local inference servers like Ollama, vLLM, and LM Studio, all while maintaining 1:1 compatibility with the Claude Code communication protocol.
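
As a rough sketch of the prefix routing described above: the prefixes come from this summary, while the specific model names after each prefix are illustrative assumptions, not confirmed defaults. Running via npx without installation is confirmed; the exact flags for selecting a model are not covered here, so none are shown.

```bash
# Prefix-based routing convention (prefixes per the summary; the model names
# after each prefix are illustrative assumptions, not confirmed defaults):
#   g/gemini-1.5-pro       -> Google Gemini API
#   oai/gpt-4o             -> OpenAI API
#   ollama/llama3          -> local Ollama server (vLLM/LM Studio also supported)
#   <openrouter-model-id>  -> OpenRouter (100+ models)

# Run without installing (confirmed in the summary):
npx claudish
```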

Quick Start & Requirements

  • Installation: Available via shell script (curl ... | bash), Homebrew (brew install claudish), npm (npm install -g claudish), or Bun (bun install -g claudish). Can also be run without installation using npx or bunx.
  • Prerequisites: The Claude Code CLI must be installed. Most providers (OpenRouter, Gemini, OpenAI) require an API key, and a placeholder ANTHROPIC_API_KEY is also needed to prevent Claude Code's API-key dialogs.
  • Setup: If API keys or models are not supplied, claudish prompts for them interactively (see the sketch after this list).
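
A minimal setup sketch based on the items above. Only the placeholder ANTHROPIC_API_KEY is named in this summary; the provider key variable name (OPENROUTER_API_KEY) is an assumption used for illustration.

```bash
# Install via any one of the documented options
brew install claudish        # Homebrew
npm install -g claudish      # npm
bun install -g claudish      # Bun

# At least one provider API key (variable name below is an assumption)
export OPENROUTER_API_KEY="..."

# Placeholder key so Claude Code does not show its API-key dialogs
export ANTHROPIC_API_KEY="placeholder"
```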

Highlighted Details

  • Multi-Provider Support: Seamlessly integrates with OpenRouter (100+ models), Google Gemini, OpenAI, and local models (Ollama, LM Studio, vLLM, MLX).
  • Universal Compatibility: Runs via npx or bunx without installation, supporting Node.js and Bun.
  • Agent Support: Enables the use of specialized AI agents within Claude Code, with features for delegation and file-based instruction patterns.
  • Monitor Mode: Proxies to the real Anthropic API to log all traffic for debugging and understanding Claude Code's internal workings.
  • Thinking Translation Model: Intelligently maps Claude Code's thinking budget controls to various provider-specific parameters, ensuring consistent reasoning behavior across different models.
  • Context Scaling: Dynamically adjusts token accounting to support very large context windows (2M+ tokens) while maintaining Claude Code's auto-compaction behavior.

Maintenance & Community

The README does not provide specific details on notable contributors, sponsorships, or community channels like Discord/Slack.

Licensing & Compatibility

  • License: MIT License.
  • Compatibility: Permissive license suitable for commercial use and integration with closed-source projects.

Limitations & Caveats

The Claude Code CLI must be installed separately. Most providers need API keys, and a placeholder ANTHROPIC_API_KEY must be set for seamless operation. Monitor mode requires a real Anthropic API key and incurs charges on the Anthropic account.

Health Check

  • Last Commit: 4 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 2
  • Issues (30d): 12
  • Star History: 250 stars in the last 30 days
