Proxy server for AI code completion and chat
This repository provides a proxy service to reroute requests from AI coding assistants like GitHub Copilot to various backend LLM providers, including OpenAI, DeepSeek, Siliconflow, and local Ollama instances. It targets developers seeking flexibility in their AI coding tools, enabling them to leverage different models and APIs without vendor lock-in.
How It Works
The project acts as a local HTTP server that intercepts requests from IDE plugins (VSCode, JetBrains) and forwards them to configured LLM APIs. It supports both chat completions and code generation (via codex endpoints), allowing customization of API endpoints, models, and authentication. Configuration is managed through a config.json file or environment variables, offering granular control over API usage and model selection.
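As a rough illustration, a config.json for such a proxy might map the listen address and an upstream provider's endpoint, credentials, and model names. The field names below are assumptions for the sketch, not the project's documented schema:

```json
{
  "bind": "127.0.0.1:8181",
  "api_base": "https://api.deepseek.com/v1",
  "api_key": "YOUR_API_KEY",
  "chat_model": "deepseek-chat",
  "codex_model": "deepseek-coder"
}
```

Since the project also reads environment variables, one would expect each of these fields to be overridable from the environment, which is convenient for keeping API keys out of the config file.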
Quick Start & Requirements
The proxy listens on a local address by default (http://127.0.0.1:8181). Run the provided scripts for IDE patching (VSCode) or follow the configuration steps for JetBrains.
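Once the proxy is running, a quick sanity check is to send it an OpenAI-style chat completion request. This is a hypothetical probe that assumes the proxy exposes an OpenAI-compatible /v1/chat/completions route at its default address; the model name is a placeholder:

```python
import json
import urllib.request

# Hypothetical probe: assumes an OpenAI-compatible chat endpoint
# on the proxy's default local address.
req = urllib.request.Request(
    "http://127.0.0.1:8181/v1/chat/completions",
    data=json.dumps({
        "model": "deepseek-chat",  # placeholder; use a model from your config
        "messages": [{"role": "user", "content": "Say hello"}],
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```

A successful response indicates the proxy is reachable and forwarding to the configured backend; the IDE plugin can then be pointed at the same address.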
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The codex_max_tokens functionality is noted as not working perfectly and has been removed.