Self-hosted API for OpenAI's stateful Assistants API
This project provides a self-hosted, OpenAI Assistants API-compatible backend, enabling users to run custom models, implement custom RAG, or deploy Assistants on-premise. It targets developers and researchers seeking greater control and flexibility over their AI assistant deployments, offering full compatibility with OpenAI's SDKs.
How It Works
The backend leverages OpenAI's official OpenAPI specification to auto-generate API route definitions and types. This ensures wire-level compatibility with OpenAI's SDKs, allowing seamless switching between the official API and the self-hosted version by simply changing the baseURL. It uses Postgres (via Prisma) for data storage, Redis (via BullMQ) for task queuing, S3-compatible storage for files, and Hono for serving the REST API.
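For example, the official openai Node SDK can be pointed at a self-hosted deployment by overriding its baseURL. The host, port, and /v1 path below are illustrative assumptions, not values documented by the project:

```ts
import OpenAI from 'openai'

// Point the official SDK at the self-hosted backend instead of api.openai.com.
// The URL is a placeholder; use wherever your server is actually listening.
const openai = new OpenAI({
  apiKey: 'placeholder', // the SDK requires a value, even if the backend ignores it
  baseURL: 'http://localhost:3000/v1',
})
```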
Quick Start & Requirements
Install dependencies with pnpm install and run pnpm generate, then start the two processes: npx tsx src/server (API) and npx tsx src/runner (task queue). Postgres, Redis, and S3-compatible storage (described under How It Works) must be available to the backend.
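Once both processes are running, the standard Assistants flow from the official SDK should work unchanged. The sketch below assumes the client configuration shown earlier (a local baseURL) and a model name your backend can serve; both are illustrative, not values prescribed by the project:

```ts
import OpenAI from 'openai'

// Illustrative client; apiKey and baseURL are placeholders for a local deployment.
const openai = new OpenAI({
  apiKey: 'placeholder',
  baseURL: 'http://localhost:3000/v1',
})

async function main() {
  // Create an assistant (the model name is an assumption; use one your backend supports).
  const assistant = await openai.beta.assistants.create({
    model: 'gpt-4o-mini',
    instructions: 'You are a helpful assistant.',
  })

  // Start a thread, add a user message, and kick off a run.
  const thread = await openai.beta.threads.create()
  await openai.beta.threads.messages.create(thread.id, {
    role: 'user',
    content: 'Hello from the self-hosted backend!',
  })
  let run = await openai.beta.threads.runs.create(thread.id, {
    assistant_id: assistant.id,
  })

  // Poll until the runner process has finished handling the run.
  while (run.status === 'queued' || run.status === 'in_progress') {
    await new Promise((resolve) => setTimeout(resolve, 1000))
    run = await openai.beta.threads.runs.retrieve(thread.id, run.id)
  }

  // Print the thread's messages, including the assistant's reply.
  const messages = await openai.beta.threads.messages.list(thread.id)
  console.log(JSON.stringify(messages.data, null, 2))
}

main()
```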
Highlighted Details
The built-in code_interpreter tool (currently missing; see Limitations) may eventually be supported via open-interpreter.
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
Support for the built-in code_interpreter tool and for non-text files with the retrieval tool is currently missing. The retrieval implementation is basic and returns full file contents rather than relevant chunks. Resource IDs do not use OpenAI's prefix format.