API package for multi-provider LLM requests (GPT-4.1, Gemini 2.5, Deepseek R1)
Top 0.3% on sourcepulse
This repository provides a Python package and Docker image for accessing a variety of large language models (LLMs) and image generation models, serving as a proof of concept for a multi-provider API. It targets developers and researchers who want flexible access to AI models beyond any single proprietary API, and it offers features such as load balancing and request timeouts.
How It Works
The project aggregates access to numerous LLM providers, abstracting away individual API complexities. It utilizes a client-server architecture, with a FastAPI backend providing an OpenAI-compatible API endpoint and a web UI. The core advantage lies in its dynamic provider selection and fallback mechanisms, enabling users to access a wide range of models through a unified interface.
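As a hedged sketch of what this looks like in practice (assuming the local server is running on its default port and that no real API key is required), any OpenAI-compatible client can simply be pointed at the local endpoint:

from openai import OpenAI

# Point the standard OpenAI client at the local g4f server.
# http://localhost:1337/v1 is the default address mentioned in this summary;
# adjust it if you run the server on a different port.
client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")

# The model name is illustrative; available identifiers depend on which
# providers are currently reachable.
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Summarize what this project does."}],
)
print(response.choices[0].message.content)

Because the endpoint mimics the OpenAI API surface, existing tooling can usually be reused with no change beyond the base URL.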
Quick Start & Requirements
Install the Python package with pip:
pip install -U g4f[all]
Or pull and run the prebuilt Docker image:
docker pull hlohaus789/g4f
docker run ... (see the README for the full command)
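Once the package is installed, requests go through its OpenAI-style client. The sketch below assumes the g4f client interface from the project's documentation; the model name is illustrative, and the package falls back to whichever provider currently responds:

from g4f.client import Client

# The client selects a working provider automatically and falls back on failure,
# so no API key or provider configuration is needed for a first test.
client = Client()

response = client.chat.completions.create(
    model="gpt-4.1",  # illustrative; see the README for currently supported model names
    messages=[{"role": "user", "content": "Explain load balancing in one sentence."}],
)
print(response.choices[0].message.content)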
Highlighted Details
The FastAPI backend exposes an OpenAI-compatible API endpoint, served locally at http://localhost:1337/v1 by default.
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The project is explicitly described as a "proof of concept" and relies on unofficial access methods to various AI providers, which may be unstable or subject to change without notice. Some providers may require specific browser installations or configurations.