duck2api: a simple API server for LLM access
Duck2api provides a unified API endpoint for various large language models, acting as a proxy to simplify integration for developers. It supports models like Claude-3-Haiku, Llama-3.3-70b, Mixtral-8x7b, and GPT-4o-mini, offering a single interface to interact with different LLM providers.
How It Works
Duck2api functions as a reverse proxy, forwarding requests to specified LLM APIs. It abstracts away the complexities of individual model integrations, presenting a consistent API structure. This approach allows users to switch between different LLM backends without modifying their client applications, promoting flexibility and reducing vendor lock-in.
Quick Start & Requirements
Build from source:

git clone https://github.com/aurora-develop/duck2api
cd duck2api
go build -o duck2api
chmod +x ./duck2api
./duck2api

Or run with Docker:

docker run -d --name duck2api -p 8080:8080 ghcr.io/aurora-develop/duck2api:latest

Or create a docker-compose.yml and run docker-compose up -d.
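The text references a docker-compose.yml without showing one. A minimal sketch, assuming the same image and port as the docker run command above:

```yaml
services:
  duck2api:
    image: ghcr.io/aurora-develop/duck2api:latest
    container_name: duck2api
    restart: unless-stopped
    ports:
      - "8080:8080"
```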
Test the API:

curl --location 'http://<your-server-ip>:8080/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "stream": true
  }'
A web UI is available at http://<your-server-ip>:8080/web.
Highlighted Details

The service exposes /v1/chat/completions as its main API endpoint, accepting OpenAI-style chat completion requests.

Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The project notes that gpt-3.5-turbo is no longer supported because the DuckDuckGo API removed it. The README does not detail performance benchmarks or advanced configuration options beyond environment variables.