API proxy for OpenAI-style LLM access
This project provides a unified API gateway for various large language models (LLMs), abstracting away differences in their APIs and allowing users to interact with them using a single, OpenAI-compatible interface. It's designed for developers and users who need to integrate multiple LLMs into their applications or manage API key distribution efficiently.
How It Works
The project acts as a reverse proxy, accepting requests in the OpenAI API format and routing them to the appropriate LLM backend based on configuration. It supports various LLM providers (OpenAI, Azure OpenAI, Claude, Gemini, Kimi, Zhipu AI, Xunfei Spark, Qwen) and allows for custom parameter mapping and load balancing across multiple API keys or models.
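For concreteness, here is a minimal, hypothetical sketch of such a configuration, written as a Python script that emits model-config.json. The field names (token, type, api_key, and so on) are illustrative assumptions rather than the project's documented schema; the repository README defines the real format.

```python
import json

# Hypothetical configuration: one client-facing token per backend entry.
# Field names here are assumptions for illustration only -- the project's
# actual schema is defined in its README.
config = [
    {
        "token": "sk-local-demo-1",   # key that clients present to the proxy
        "type": "openai",             # which provider adapter to route to
        "api_base": "https://api.openai.com/v1",
        "api_key": "sk-real-openai-key",
        "model": "gpt-3.5-turbo",
    },
    {
        "token": "sk-local-demo-2",   # a second entry for another provider;
        "type": "zhipu-api",          # multiple entries can also point at the
        "api_key": "your-zhipu-key",  # same provider to spread load across
        "model": "glm-4",             # several API keys
    },
]

# Write the file the proxy expects at /app/model-config.json
# (or at the path supplied on the command line).
with open("model-config.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2, ensure_ascii=False)
```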
Quick Start & Requirements
Run with Docker:

docker run -d -p 8090:8090 --name openai-style-api \
  -e ADMIN-TOKEN=admin \
  -v /path/to/your/model-config.json:/app/model-config.json \
  tianminghui/openai-style-api

Or run from source:

git clone https://github.com/tian-minghui/openai-style-api.git
cd openai-style-api
pip install -r requirements.txt
python open-api.py
A model-config.json file (see the sketch above) is required for API key and model configuration. Once the server is running, any OpenAI-compatible client can talk to it, as shown below.
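The snippet below is a hedged usage sketch using the official openai Python SDK purely as a generic OpenAI-compatible client. The port comes from the docker command above; the /v1 route prefix is an assumption based on the OpenAI API convention, and the api_key is a token defined in model-config.json, not a real OpenAI key.

```python
# pip install openai
from openai import OpenAI

# Assumptions: proxy on localhost:8090 (per the docker command above),
# conventional /v1 routes, and a token configured in model-config.json.
client = OpenAI(base_url="http://localhost:8090/v1", api_key="sk-local-demo-1")

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model name configured for this token
    messages=[{"role": "user", "content": "Hello from the proxy!"}],
)
print(resp.choices[0].message.content)
```

Because the proxy speaks the OpenAI wire format, existing SDKs and tools should work unchanged once pointed at its base URL.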
Highlighted Details
Each configured backend is exposed under its own model_name, so clients select a provider simply by setting the standard OpenAI model field.
Maintenance & Community
The project is maintained by a single developer, with a note encouraging community contributions (issues and PRs). No specific community channels or roadmap links are provided in the README.
Licensing & Compatibility
No license is explicitly stated in the README, so terms for commercial use or closed-source integration are unclear; use is also subject to the terms of the upstream LLM APIs the proxy fronts.
Limitations & Caveats
The developer notes that model updates may not be timely due to limited personal bandwidth. Claude API support is pending testing, as the developer is still on the API waitlist. Some providers, such as Baidu Wenxin Yiyan, are not yet supported.