ztx888
Web UI for advanced LLM interaction and management
Top 48.7% on SourcePulse
This project enhances the official Open WebUI by providing deep Chinese localization, advanced model billing and usage statistics, and multi-architecture support. It targets users seeking a refined Chinese experience, precise cost control, and seamless deployment on diverse hardware, especially ARM devices.
How It Works
Built upon Open WebUI, this version integrates novel features like OpenAI's Responses API for streaming thought processes and native Gemini SDK support for parameters and tool usage. It introduces a fine-grained billing system (per-call/token, input/output pricing) and offers optimized ARM64 builds, addressing specific pain points for local AI deployment.
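The billing model described above combines a flat per-call fee with separate input and output token rates. As a minimal sketch (the rates, field names, and structure here are illustrative assumptions, not the fork's actual billing configuration):

```python
from dataclasses import dataclass

@dataclass
class ModelPricing:
    # Hypothetical rates for illustration; the fork's real billing
    # settings and currency may differ.
    per_call: float          # flat fee charged per request
    input_per_1k: float      # price per 1,000 input (prompt) tokens
    output_per_1k: float     # price per 1,000 output (completion) tokens

def usage_cost(p: ModelPricing, input_tokens: int, output_tokens: int) -> float:
    """Combine a flat per-call fee with separate input/output token pricing."""
    return (p.per_call
            + input_tokens / 1000 * p.input_per_1k
            + output_tokens / 1000 * p.output_per_1k)

pricing = ModelPricing(per_call=0.001, input_per_1k=0.002, output_per_1k=0.006)
print(round(usage_cost(pricing, input_tokens=500, output_tokens=250), 6))  # → 0.0035
```

Charging input and output tokens at different rates mirrors how most upstream LLM providers price their APIs, which is what makes per-user cost accounting meaningful.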
Quick Start & Requirements
Docker images are provided for x86_64 (latest) and ARM64 (latest-arm64) architectures; Docker Compose is also supported. Alternatively, install via pip with pip install open-webui-leon and manage the process with pm2 start "open-webui serve".
Highlighted Details
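A Compose deployment might look like the sketch below. The image repository is a placeholder (the listing does not name it), and the port mapping and data volume follow upstream Open WebUI conventions, which this fork may or may not keep:

```yaml
# Sketch only: replace <your-registry> with the fork's published image.
# Tag "latest" targets x86_64; ARM hosts should use "latest-arm64".
services:
  open-webui:
    image: <your-registry>/open-webui-leon:latest
    ports:
      - "3000:8080"
    volumes:
      - open-webui-data:/app/backend/data
volumes:
  open-webui-data:
```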
Supports OpenAI's /v1/responses API, enabling real-time streaming of model thought processes. Native Gemini SDK integration covers thinking_budget, native web search, and tool calls.
Maintenance & Community
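A streaming /v1/responses endpoint delivers output as server-sent events. The sketch below parses a canned sample of such a stream; the event names and JSON fields are assumptions modeled on OpenAI's Responses API and may differ in this fork:

```python
import json

# Hypothetical SSE lines as a /v1/responses endpoint might stream them.
# Event types and payload fields are assumptions, not verified against the fork.
sample_stream = [
    'event: response.output_text.delta',
    'data: {"type": "response.output_text.delta", "delta": "Hello"}',
    'event: response.output_text.delta',
    'data: {"type": "response.output_text.delta", "delta": ", world"}',
    'event: response.completed',
    'data: {"type": "response.completed"}',
]

def collect_text(lines):
    """Accumulate text deltas from a stream of SSE lines into the final answer."""
    parts = []
    for line in lines:
        if not line.startswith('data: '):
            continue
        payload = json.loads(line[len('data: '):])
        if payload.get('type') == 'response.output_text.delta':
            parts.append(payload['delta'])
    return ''.join(parts)

print(collect_text(sample_stream))  # → Hello, world
```

In a real client the lines would arrive incrementally over HTTP, letting the UI render partial text (including thought-process summaries) as it is generated.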
The project syncs regularly with the upstream Open WebUI main branch. Feedback is welcomed via GitHub Issues and Discussions. No specific community channels (like Discord/Slack) or contributor details beyond the original author are listed.
Licensing & Compatibility
Limitations & Caveats
Users on ARM devices must explicitly select the -arm64 Docker image to avoid "exec format error". Pip installations require uninstalling any existing official open-webui package to prevent conflicts.
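To avoid the "exec format error" described above, the image tag can be chosen from the host architecture. A minimal sketch (tag names follow the listing; the echoed pull command is illustrative since the listing does not name the image repository):

```shell
# Select the Docker image tag matching the host CPU architecture.
arch="$(uname -m)"
case "$arch" in
  aarch64|arm64) tag="latest-arm64" ;;  # ARM hosts must use the -arm64 image
  *)             tag="latest" ;;        # x86_64 default
esac
echo "use image tag: $tag"
```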