done-hub by deanxv

Enhanced AI model API gateway

Created 4 months ago
268 stars

Top 95.6% on SourcePulse

Summary

deanxv/done-hub is a fork of the one-hub AI model aggregation platform. It targets users and developers managing multiple AI service backends, offering expanded channel support (Claude, VertexAI, Gemini), more flexible configuration, and new features such as invite codes and detailed analytics, with the aim of providing a more robust, feature-rich proxy solution.

How It Works

This project builds upon the one-hub architecture, introducing significant modifications and additions. Key enhancements include native routing support for Claude (via ClaudeCode) and Gemini (via GeminiCli) within VertexAI channels, enabling direct access to specific models and to features such as VEO video generation and image generation compatible with OpenAI interfaces. It also refactors the system information module and adds granular control over channel parameters, model naming, and regional endpoints for VertexAI.
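
Because the gateway exposes an OpenAI-compatible API surface, models routed through different channels (Claude, Gemini via VertexAI, and others) can be reached with a single client. The sketch below is illustrative only, assuming the standard OpenAI-compatible chat endpoint; the gateway address, token, and model identifier are placeholders that depend on your own deployment and channel configuration.

```python
# Minimal sketch: calling a model routed through a done-hub deployment via
# its OpenAI-compatible API. base_url, api_key, and model are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3000/v1",  # assumed gateway address
    api_key="sk-your-done-hub-token",     # token issued by the gateway
)

# The same client can target Claude, Gemini, or other channels simply by
# changing the model name, provided a matching channel is configured.
response = client.chat.completions.create(
    model="claude-3-5-sonnet-20241022",   # hypothetical model identifier
    messages=[{"role": "user", "content": "Summarize what an AI API gateway does."}],
)
print(response.choices[0].message.content)
```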

Quick Start & Requirements

Deployment involves replacing the original one-hub Docker image with deanxv/done-hub. The project maintains database compatibility, allowing direct migration from the original version. Specific prerequisites are inherited from the base one-hub project; consult its documentation for details.

  • Primary install/run command: replace the Docker image tag with deanxv/done-hub (a post-deployment smoke test is sketched after this list).
  • Prerequisites: inherited from one-hub (likely Docker and a database). Consult the original project docs.
  • Links: original project docs: https://one-hub-doc.vercel.app/
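
Once the replacement image is running, a quick smoke test can confirm the gateway is serving its OpenAI-compatible API. This is a sketch under assumptions: the host, port, and token are placeholders, and the /v1/models listing endpoint is assumed to behave as in the upstream project.

```python
# Hypothetical post-deployment smoke test for a done-hub instance.
# Host, port, and token are placeholders for your own deployment.
import requests

BASE_URL = "http://localhost:3000"   # assumed gateway address
TOKEN = "sk-your-done-hub-token"     # token created in the gateway UI

resp = requests.get(
    f"{BASE_URL}/v1/models",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

# Print the model identifiers the gateway currently exposes.
for model in resp.json().get("data", []):
    print(model.get("id"))
```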

Highlighted Details

  • Expanded channel support: Native routing for Claude (ClaudeCode) and VertexAI (GeminiCli, ClaudeCode), including multi-region support and random selection.
  • Advanced Gemini integration: Supports native video generation (VEO models) and image generation (gemini-2.0-flash-preview) via the /gemini endpoint, compatible with OpenAI chat interfaces (an illustrative request is sketched after this list).
  • Enhanced configuration: Case-insensitive model names, unified request/response model naming, ability to remove parameters from channel extra args, and variable replacement in channel BaseURLs.
  • New features: Invite code system, batch user/channel grouping, configurable billing for empty replies, and detailed analytics (RPM/TPM/CPM, recharge stats).
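
As an illustration of the Gemini image-generation path, the sketch below sends a request through the gateway's OpenAI-compatible chat interface. It is not a definitive usage pattern: the gateway address and token are placeholders, and whether a deployment routes this model via /v1/chat/completions or the native /gemini endpoint, and how image data is encoded in the response, depends on channel configuration.

```python
# Illustrative sketch only: requesting the image-capable Gemini preview model
# through the gateway's OpenAI-compatible chat interface. Address, token, and
# routing behaviour are assumptions about a typical deployment.
import requests

BASE_URL = "http://localhost:3000"   # assumed gateway address
TOKEN = "sk-your-done-hub-token"     # token issued by the gateway

resp = requests.post(
    f"{BASE_URL}/v1/chat/completions",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "model": "gemini-2.0-flash-preview",  # image-capable model named in the README
        "messages": [
            {"role": "user", "content": "Generate a simple line drawing of a lighthouse."}
        ],
    },
    timeout=120,
)
resp.raise_for_status()

# Inspect the raw response; any image data may be returned inline (e.g. as
# base64 content), depending on the upstream implementation.
print(resp.json())
```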

Maintenance & Community

The project is presented as a community effort (the "AI Wave" community) and is a derivative of one-hub. Specific maintenance details, active contributors, and dedicated community channels (such as Discord or Slack) are not described in the provided text beyond references to the original project's documentation and community.

Licensing & Compatibility

The license is not explicitly stated in the provided README. As a derivative work of one-hub, its licensing status depends on the original project's license and any modifications made. Users should verify the licensing terms for both the base project and this fork before integration, especially for commercial use.

Limitations & Caveats

The README focuses on additions and fixes, offering limited insight into potential regressions or unsupported features compared to the latest one-hub release. The lack of explicit licensing information presents an adoption blocker for users requiring clear legal terms. Documentation primarily points to the original one-hub project, requiring users to infer compatibility and specific operational details.

Health Check

  • Last commit: 1 week ago
  • Responsiveness: Inactive
  • Pull requests (30d): 2
  • Issues (30d): 6
  • Star history: 53 stars in the last 30 days
