Unified API proxy for multiple LLMs
Gemini-CLI-2-API is a Node.js proxy server that unifies access to multiple large language model APIs, including Gemini, OpenAI, and Claude, and exposes them through a single OpenAI-compatible local endpoint. It targets developers who need to integrate several LLMs into their applications or workflows without juggling disparate API formats and authentication methods.
How It Works
The project employs a modular architecture built on the Adapter and Strategy design patterns. A central HTTP server handles incoming requests and identifies the target LLM provider from configuration or request headers. Provider-specific adapters then translate each request into the format expected by the backend API (e.g., Gemini, OpenAI, Claude) and convert the response back into an OpenAI-compatible format. This decoupling simplifies adding new model providers.
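The adapter idea can be sketched as follows. This is a minimal illustration, not the project's actual code: the class and field names (`GeminiAdapter`, `toProviderRequest`, `toOpenAIResponse`) are hypothetical, though the Gemini `contents`/`parts` and OpenAI `choices`/`messages` shapes follow the respective public API formats.

```javascript
// Hypothetical adapter: translates an OpenAI-style chat request into a
// Gemini-style payload and a Gemini-style response back into an
// OpenAI-compatible completion. Names are illustrative, not the project's.
class GeminiAdapter {
  // OpenAI-style { messages: [...] } -> Gemini-style { contents: [...] }
  toProviderRequest(openaiReq) {
    return {
      contents: openaiReq.messages.map((m) => ({
        role: m.role === "assistant" ? "model" : "user",
        parts: [{ text: m.content }],
      })),
    };
  }

  // Gemini-style { candidates: [...] } -> OpenAI-style chat completion
  toOpenAIResponse(geminiRes) {
    return {
      object: "chat.completion",
      choices: geminiRes.candidates.map((c, i) => ({
        index: i,
        message: { role: "assistant", content: c.content.parts[0].text },
        finish_reason: "stop",
      })),
    };
  }
}

// Strategy selection: the server picks an adapter per request,
// e.g. based on configuration or a request header.
const adapters = { gemini: new GeminiAdapter() };
const adapter = adapters["gemini"];

const providerReq = adapter.toProviderRequest({
  messages: [{ role: "user", content: "Hello" }],
});
console.log(providerReq.contents[0].parts[0].text); // "Hello"
```

A new provider is supported by registering one more adapter in the lookup table; the server's routing and translation pipeline stays unchanged.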
Quick Start & Requirements
- Install dependencies: `npm install`
- Edit the `config.json` file or use command-line arguments to specify the model provider, API keys, ports, and other settings.
- Start the server: `node src/api-server.js`
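As a sketch, a configuration might look like the following. The field names here are illustrative assumptions, not the project's actual schema; consult the repository's README for the real keys.

```json
{
  "port": 3000,
  "provider": "gemini",
  "apiKeys": {
    "gemini": "YOUR_GEMINI_API_KEY"
  }
}
```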
Highlighted Details
Maintenance & Community
The project is actively maintained by justlovemaki. Further community interaction details are not explicitly provided in the README.
Licensing & Compatibility
The project is licensed under the GNU General Public License v3 (GPLv3). This license is copyleft, meaning distributed derivative works must also be released under GPLv3. Commercial use or linking with closed-source applications may require careful consideration due to the GPLv3's strong copyleft provisions.
Limitations & Caveats
The GPLv3 license may restrict use in proprietary software. While the proxy supports Gemini CLI's OAuth authentication, the original Gemini CLI's built-in command functions are not available through it.