WindsurfAPI by dwgx

LLM API proxy with dual OpenAI/Anthropic compatibility

Created 2 weeks ago


669 stars

Top 50.1% on SourcePulse

Project Summary

WindsurfAPI is a Node.js proxy that provides unified access to more than 100 AI models (including OpenAI, Anthropic, Gemini, DeepSeek, Grok, Qwen, Kimi, and GLM) via both OpenAI- and Anthropic-compatible APIs. It targets developers integrating diverse LLMs into applications or IDEs, simplifying API management with features such as account pooling, rate limiting, and failover.

How It Works

A local Node.js HTTP server exposes OpenAI (/v1/chat/completions) and Anthropic (/v1/messages) endpoints. It translates incoming requests to Windsurf's gRPC protocol, forwarding them to a local Language Server (LS) binary that communicates with the Windsurf cloud service. Key features include account pooling, rate limiting, failover, and stripping upstream model identities. It supports IDE agent workflows by relaying tool_use and tool_result between the model and client applications.
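Because the proxy speaks the standard OpenAI wire format, a plain HTTP client works against it. Below is a minimal sketch using Node's built-in fetch; the port (3003, matching the dashboard URL), the WINDSURF_API_URL variable name, and the Bearer-token authentication style are assumptions, not details verified against the repository:

```javascript
// Sketch of a client call to the OpenAI-compatible endpoint.
// BASE_URL port and the auth header style are assumptions.
const BASE_URL = process.env.WINDSURF_API_URL ?? "http://localhost:3003";
const API_KEY = process.env.API_KEY ?? "change-me";

// Build an OpenAI-style chat completion payload.
function buildChatRequest(model, userText) {
  return {
    model,
    messages: [{ role: "user", content: userText }],
    stream: false,
  };
}

// Send the request and return the assistant's reply text.
async function chat(model, userText) {
  const res = await fetch(`${BASE_URL}/v1/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify(buildChatRequest(model, userText)),
  });
  if (!res.ok) throw new Error(`Upstream error: HTTP ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Usage: `chat("gpt-4o-mini", "Hello").then(console.log);` — gpt-4o-mini being one of the models available on free accounts.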

Quick Start & Requirements

  • Installation: Clone the repository, run bash setup.sh for configuration, and start the service with node src/index.js. An update.sh script handles updates.
  • Prerequisites: Node.js, git, bash. The Windsurf Language Server binary requires separate installation via bash install-ls.sh or manual download.
  • Configuration: Environment variables (e.g., PORT, API_KEY, LS_BINARY_PATH) are managed via a .env file.
  • Account Setup: Windsurf accounts must be added via the web dashboard (supporting Google/GitHub OAuth or credentials) or by using API tokens.
  • Access: A dashboard is available at http://<your-ip>:3003/dashboard.
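A minimal .env sketch using the variable names mentioned above; the values shown are illustrative placeholders, not defaults documented by the project:

```ini
# Port the proxy listens on (3003 matches the dashboard URL above)
PORT=3003
# Key clients must present when calling the API endpoints
API_KEY=change-me
# Path to the separately installed Windsurf Language Server binary
LS_BINARY_PATH=/opt/windsurf/language_server
```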

Highlighted Details

  • Supports 107 models, including various versions of Claude, GPT, Gemini, DeepSeek, Grok, Qwen, Kimi, and GLM.
  • Features zero npm dependencies, relying on Node.js built-ins and a custom protobuf implementation.
  • Simultaneously provides OpenAI (/v1/chat/completions) and Anthropic (/v1/messages) API endpoints.
  • Includes account pooling, rotation, rate limiting, failover, and a dashboard for login and administration.
  • Distinguishes chat API functionality from IDE agent capabilities; file operations are client-side.
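Since the same server also exposes /v1/messages, an Anthropic-style call can be sketched as follows. The headers follow the public Anthropic convention (x-api-key, anthropic-version); whether this proxy expects exactly those headers, and the port, are assumptions:

```javascript
// Sketch of a call to the Anthropic-compatible endpoint on the
// same server. Header names and port are assumptions based on the
// standard Anthropic client convention.
const BASE_URL = process.env.WINDSURF_API_URL ?? "http://localhost:3003";
const API_KEY = process.env.API_KEY ?? "change-me";

// Build an Anthropic Messages API payload (the Anthropic schema
// requires max_tokens).
function buildMessagesRequest(model, userText) {
  return {
    model,
    max_tokens: 1024,
    messages: [{ role: "user", content: userText }],
  };
}

async function createMessage(model, userText) {
  const res = await fetch(`${BASE_URL}/v1/messages`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": API_KEY,
      "anthropic-version": "2023-06-01",
    },
    body: JSON.stringify(buildMessagesRequest(model, userText)),
  });
  if (!res.ok) throw new Error(`Upstream error: HTTP ${res.status}`);
  const data = await res.json();
  // Anthropic responses carry an array of content blocks.
  return data.content.map((block) => block.text ?? "").join("");
}
```

The dual-endpoint design means existing OpenAI and Anthropic client code can point at the same local server without modification beyond the base URL.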

Maintenance & Community

An update.sh script facilitates easy updates, and recent commit activity suggests active maintenance by the author. No explicit community channels (e.g., Discord, Slack) are listed in the README.

Licensing & Compatibility

Nominally licensed under MIT. However, a prominent "Solemn Statement" strictly prohibits commercial use, resale, paid deployment, or acting as a proxy without explicit written permission, severely restricting commercial adoption despite the MIT license.

Limitations & Caveats

Commercial use is heavily restricted by the author's explicit terms, overriding the MIT license. Access to most of the 107 supported models requires a Windsurf Pro subscription; free accounts are limited to gpt-4o-mini and gemini-2.5-flash. The system proxies chat and tool-use interactions; file system operations are executed by client-side IDE agents, not the API itself.

Health Check

  • Last Commit: 4 hours ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 8
  • Issues (30d): 40
  • Star History: 676 stars in the last 15 days

Explore Similar Projects

Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems") and David Cramer (cofounder of Sentry).

  • llmgateway by theopenco — LLM API gateway for unified provider access. 1k stars; created 1 year ago; updated 4 hours ago.