API proxy for OpenAI to Google Gemini protocol conversion
This project provides a proxy server that translates requests from the OpenAI API protocol to the Google Gemini Pro protocol. It allows applications designed for OpenAI's API to seamlessly interact with Google's Gemini models, supporting chat completions and embeddings. The target audience includes developers who want to leverage Gemini's capabilities within existing OpenAI-based workflows or applications.
How It Works
The proxy acts as an intermediary, receiving requests formatted for OpenAI's API (e.g., /v1/chat/completions, /v1/embeddings). It then maps these requests, including model names and parameters, to their corresponding Gemini API equivalents. The core advantage is enabling compatibility without modifying applications that are already integrated with the OpenAI API specification.
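As a rough illustration, the translation can be sketched for a simplified subset of both protocols. This is not the proxy's actual code; field names follow the public OpenAI chat-completions and Gemini generateContent request formats, and the real mapping handles many more parameters.

```python
def openai_to_gemini(openai_body: dict) -> dict:
    """Sketch: translate an OpenAI /v1/chat/completions body into a
    Gemini generateContent body. OpenAI's 'assistant' role maps to
    Gemini's 'model' role; messages become 'contents' with text parts."""
    contents = []
    for msg in openai_body["messages"]:
        role = "model" if msg["role"] == "assistant" else "user"
        contents.append({"role": role, "parts": [{"text": msg["content"]}]})
    body = {"contents": contents}

    # Optional sampling parameters map onto Gemini's generationConfig.
    config = {}
    if "temperature" in openai_body:
        config["temperature"] = openai_body["temperature"]
    if "max_tokens" in openai_body:
        config["maxOutputTokens"] = openai_body["max_tokens"]
    if config:
        body["generationConfig"] = config
    return body
```

For example, a two-message OpenAI conversation with max_tokens set becomes a Gemini body whose second content entry has role "model" and whose generationConfig carries maxOutputTokens.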
Quick Start & Requirements
docker run --restart=unless-stopped -it -d -p 8080:8080 --name gemini zhu327/gemini-openai-proxy:latest
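Once the container is running, clients talk to it exactly as they would to OpenAI's API, only with the base URL pointed at the proxy. A minimal sketch using only the Python standard library (the endpoint path comes from the OpenAI specification; the API key value and the assumption that a Google AI Studio key goes in the Authorization header are illustrative):

```python
import json
import urllib.request

# Hypothetical placeholder; the proxy expects the key it will use for Gemini.
API_KEY = "YOUR_GOOGLE_API_KEY"

# Standard OpenAI-format payload, sent to the proxy instead of api.openai.com.
payload = {
    "model": "gpt-4",  # the proxy maps this to a Gemini model
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the request against the running container.
```

Because only the base URL and key change, existing OpenAI SDK integrations can typically be repointed the same way without code changes.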
Highlighted Details
Maps OpenAI model names to Gemini equivalents (e.g., gpt-4 to gemini-1.5-flash-002, and gpt-4-vision-preview to gemini-1.5-flash-latest or gemini-1.5-pro-latest). Model mapping can be disabled by setting the environment variable DISABLE_MODEL_MAPPING=1, so that Gemini model names can be requested directly.
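The mapping behavior can be sketched as a simple lookup table with an environment-variable bypass. This is an illustrative reconstruction, not the proxy's source; the actual table in zhu327/gemini-openai-proxy is larger and partly configurable.

```python
import os

# Sketch of the name-mapping table; entries beyond these two are assumptions.
MODEL_MAP = {
    "gpt-4": "gemini-1.5-flash-002",
    "gpt-4-vision-preview": "gemini-1.5-flash-latest",
}

def resolve_model(openai_model: str) -> str:
    """Return the Gemini model to use for an incoming OpenAI model name."""
    # With DISABLE_MODEL_MAPPING=1, the requested name is passed through,
    # so clients can ask for Gemini models directly.
    if os.environ.get("DISABLE_MODEL_MAPPING") == "1":
        return openai_model
    return MODEL_MAP.get(openai_model, openai_model)
```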
Limitations & Caveats
Google AI Studio now offers its own official OpenAI-compatible API endpoint, which may remove the need for this proxy; the project's README itself recommends using the official compatibility API.