gcli2api by su-kaka

Gemini API proxy for OpenAI and Gemini formats

Created 2 months ago
1,180 stars

Top 33.0% on SourcePulse

View on GitHub
Project Summary

This project provides a bridge between Google's Gemini models and clients that expect either OpenAI-compatible or native Gemini request formats. It is aimed at individual developers, researchers, and non-profit organizations who want to use Gemini's capabilities within existing OpenAI-based workflows or through Gemini's native API, behind a single unified interface.

How It Works

The core of gcli2api is a web server that exposes two groups of API endpoints: one compatible with OpenAI's chat completions and models endpoints, and one that mirrors the native Gemini API. Incoming requests are detected and converted automatically between OpenAI's messages format and Gemini's contents structure. The proxy supports several authentication methods, including Bearer tokens and API keys, and provides a fallback mechanism for streaming responses. A web-based authentication interface simplifies OAuth credential management.
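
For illustration, the same prompt can be sent through either surface. This is a minimal sketch, assuming a local deployment on port 7861 with PASSWORD set to pwd (matching the Docker example under Quick Start) and assuming the configured password is accepted as the Bearer token / API key; endpoint paths and headers follow the ones listed under Highlighted Details.

    # OpenAI-compatible endpoint: the prompt travels in "messages"
    curl http://localhost:7861/v1/chat/completions \
      -H "Authorization: Bearer pwd" \
      -H "Content-Type: application/json" \
      -d '{"model": "gemini-2.5-pro", "messages": [{"role": "user", "content": "Hello"}]}'

    # Native Gemini endpoint: the same prompt travels in "contents"
    curl http://localhost:7861/v1/models/gemini-2.5-pro:generateContent \
      -H "x-goog-api-key: pwd" \
      -H "Content-Type: application/json" \
      -d '{"contents": [{"role": "user", "parts": [{"text": "Hello"}]}]}'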

Quick Start & Requirements

Installation is supported across Termux, Windows, Linux, and Docker.

  • Termux: curl -o termux-install.sh "https://raw.githubusercontent.com/su-kaka/gcli2api/refs/heads/master/termux-install.sh" && chmod +x termux-install.sh && ./termux-install.sh
  • Windows: iex (iwr "https://raw.githubusercontent.com/su-kaka/gcli2api/refs/heads/master/install.ps1" -UseBasicParsing).Content
  • Linux: curl -o install.sh "https://raw.githubusercontent.com/su-kaka/gcli2api/refs/heads/master/install.sh" && chmod +x install.sh && ./install.sh
  • Docker: docker run -d --name gcli2api --network host -e PASSWORD=pwd -e PORT=7861 -v $(pwd)/data/creds:/app/creds ghcr.io/su-kaka/gcli2api:latest

Prerequisites include a Google account for OAuth authentication and, for the containerized option, Docker. Setup is generally quick, with instructions provided for each platform.
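
After installation, a quick smoke test can confirm the proxy is reachable. The sketch below assumes the service listens on localhost:7861 with PASSWORD=pwd (as in the Docker example) and that the OpenAI-compatible models listing is exposed at /v1/models; adjust host, port, and password to your deployment.

    # List the models exposed through the OpenAI-compatible surface
    curl http://localhost:7861/v1/models \
      -H "Authorization: Bearer pwd"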

Highlighted Details

  • Supports both OpenAI-compatible (/v1/chat/completions) and native Gemini endpoints (/v1/models/{model}:generateContent).
  • Automatic format detection and conversion between OpenAI messages and Gemini contents.
  • Multiple authentication methods supported (Bearer Token, x-goog-api-key, URL parameters).
  • Intelligent credential management with automatic rotation and load balancing across multiple Google OAuth credentials (see the sketch after this list).
  • All supported models (e.g., gemini-2.5-pro) offer a 1M-token context window.
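
As a rough sketch of supplying multiple credentials to the Docker deployment above: the container mounts ./data/creds at /app/creds, so each OAuth credential JSON placed in that directory is available for rotation. The filenames below are purely illustrative, and whether the proxy picks up new files without a restart is an assumption.

    # Illustrative only: filenames are made up; the proxy rotates and
    # load-balances across whatever OAuth credential files it has loaded.
    ls data/creds/
    # account-a.json  account-b.json  account-c.json
    docker restart gcli2api   # assumption: restart so newly added files are loaded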

Maintenance & Community

Information on maintainers, community channels (like Discord/Slack), or a roadmap is not explicitly detailed in the README.

Licensing & Compatibility

The project is licensed under the Cooperative Non-Commercial License (CNC-1.0). The license prohibits commercial use, use by companies with annual revenue above $1 million USD, use by venture-backed or publicly traded companies, and offering the software as a paid service. Permitted uses include personal learning, education, academic research, and use by non-profit organizations.

Limitations & Caveats

The OAuth authentication flow is currently limited to localhost access. For remote deployments, users must authenticate locally first, then upload the generated credential file. The license is highly restrictive, preventing any form of commercial application or use by larger corporations.

Health Check

  • Last Commit: 2 weeks ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 7
  • Issues (30d): 25

Star History

422 stars in the last 30 days
