Proxy API for AI challenges, channeling existing solutions to client applications
This project provides a unified API proxy for AI challenges in text, audio, image, and video understanding and generation, consolidating well-known solutions behind a single interface for easy client access. It is aimed at developers and researchers who want one entry point to these AI functionalities, with the long-term goal of enabling offline, on-device AI capabilities, particularly for the SUSI project.
How It Works
The API is structured around challenge fields (text, audio, image, video) and mirrors existing provider API definitions for consistency. It includes drop-in replacements for specific OpenAI API endpoints, so existing clients can switch over with minimal changes. The project's ultimate aim is to replace these proxy functions with self-hosted AI models, a goal that recent advances in transformer models make feasible.
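To make the proxying idea concrete, the sketch below shows one way such a forwarding endpoint could look. This is not the project's actual implementation; the framework (Flask), route path, port, and upstream URL are assumptions chosen only to illustrate how an OpenAI-style request can be accepted and relayed.

# Minimal illustrative sketch (assumed, not the project's code): accept an
# OpenAI-style chat request and forward it to the upstream provider, so that
# existing OpenAI clients can point at the proxy instead.
import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
OPENAI_URL = "https://api.openai.com/v1/chat/completions"  # assumed upstream

@app.route("/v1/chat/completions", methods=["POST"])
def chat_completions():
    # Forward the client's JSON body unchanged, adding the server-side API key.
    upstream = requests.post(
        OPENAI_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json=request.get_json(),
        timeout=60,
    )
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run(port=8080)

Keeping the proxy's routes identical to the provider's is what allows them to be swapped later for self-hosted models without breaking clients.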
Quick Start & Requirements
Install dependencies and start the server with your OpenAI API key:

pip3 install -r requirements.txt
python3 src/main.py --openai_api_key <OPENAI-API-KEY>

Alternatively, build and run with Docker:

docker build -t susi_api .
docker run -d -p 8080:8080 -e OPENAI_API_KEY=<apikey> --name susi_api susi_api

Run the voice test:

cd test/src && ./test_voice.sh
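Once the server or container is running on port 8080, an existing OpenAI client can in principle be pointed at it, given the drop-in replacement endpoints described above. The sketch below assumes an OpenAI-compatible /v1 base path and model name, neither of which is confirmed in this summary; check the project's documentation for the endpoints it actually mirrors.

# Hedged usage sketch: point the official openai Python client at the local proxy.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",   # assumed proxy base path
    api_key="not-needed-by-the-proxy",     # the proxy holds the real key server-side
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[{"role": "user", "content": "Hello from the SUSI API proxy"}],
)
print(response.choices[0].message.content)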
Highlighted Details
Maintenance & Community
No specific contributors, sponsorships, or community links (Discord/Slack, roadmap) are mentioned in the README.
Licensing & Compatibility
The README does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.
Limitations & Caveats
The project currently requires an OpenAI API key, though future versions plan to offer open replacements. The README does not detail specific AI models used or performance benchmarks.
Last activity: 1 year ago; the repository is marked inactive.