OrionStarAI: AI proxy server for unified LLM provider access
DeepV Code Server (deepx-mini-server) is a lightweight AI proxy server designed to unify access to multiple Large Language Model (LLM) providers, including Google Vertex AI and OpenRouter. It offers a standardized API interface for the DeepV Code Client ecosystem, simplifying integration and accelerating development by abstracting away provider-specific complexities.
How It Works
Built with Node.js and TypeScript, this server acts as an intermediary, forwarding requests to various LLM backends. It provides a unified chat interface supporting both streaming and non-streaming responses, automatically converting diverse provider outputs into a consistent Google AI format. This approach streamlines client development by offering a single, predictable API endpoint, regardless of the underlying LLM service. A built-in mock JWT login interface facilitates development and testing.
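The output conversion described above can be sketched in TypeScript. This is illustrative code, not the project's actual implementation: the type shapes are assumptions based on the publicly documented OpenAI-compatible format (used by OpenRouter) and the Google AI `generateContent` response format the README says responses are normalized to.

```typescript
// Sketch: normalizing an OpenAI-style chat completion into a
// Google AI (Gemini)-style response shape. Field names are assumptions
// modeled on the two providers' public response formats.

interface OpenAIStyleResponse {
  choices: { message: { role: string; content: string } }[];
}

interface GoogleAIStyleResponse {
  candidates: { content: { role: "model"; parts: { text: string }[] } }[];
}

function toGoogleAIFormat(resp: OpenAIStyleResponse): GoogleAIStyleResponse {
  return {
    // Each provider "choice" becomes a Google-style "candidate".
    candidates: resp.choices.map((choice) => ({
      content: { role: "model", parts: [{ text: choice.message.content }] },
    })),
  };
}

// Example: a single-choice provider response.
const unified = toGoogleAIFormat({
  choices: [{ message: { role: "assistant", content: "Hello from the proxy" } }],
});
console.log(unified.candidates[0].content.parts[0].text);
```

Because the client only ever sees the normalized shape, swapping the backing provider requires no client-side changes.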
Quick Start & Requirements
- Install dependencies with `npm install`.
- Create a `.env` file with `OPENROUTER_API_KEY` and `VERTEX_CREDENTIALS_PATHS`.
- Run `npm run dev` for development; for production, deployment via PM2, Docker, or Systemd is recommended.

Highlighted Details
Maintenance & Community
The project welcomes forks and pull requests. Specific community channels (e.g., Discord, Slack) or a roadmap are not detailed in the README. The project is associated with Google Cloud Certified Partners.
Licensing & Compatibility
Limitations & Caveats
The login interface ships with mock authentication and hardcoded test data, which must be removed or anonymized before production use. Security best practices include never committing environment variables or credential files to version control. Debug logs may contain sensitive data and should be handled carefully.
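As one way to keep environment variables and credentials out of version control, a minimal `.gitignore` sketch is shown below; the specific file and directory names are assumptions based on the quick-start steps, not taken from the project.

```gitignore
# Environment variables (API keys, provider settings)
.env
.env.*

# Vertex AI service-account credential files (assumed location)
credentials/

# Debug logs, which may contain sensitive request/response data
*.log
```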