# Open WebUI: Self-hosted AI platform for local LLM deployment
Open WebUI provides a user-friendly, self-hosted AI platform for interacting with large language models (LLMs). It targets users who want a feature-rich, offline-capable interface for managing and conversing with various LLM runners, including Ollama and OpenAI-compatible APIs, with built-in RAG capabilities.
## How It Works
The platform leverages a Docker-first deployment strategy, offering official images for various configurations including Ollama integration and CUDA acceleration. It supports OpenAI-compatible APIs, allowing flexible integration with services like LMStudio, GroqCloud, and Mistral. Key features include granular permissions, a responsive PWA for mobile, extensive Markdown/LaTeX support, and native Python function calling for custom tool integration.
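To illustrate the OpenAI-compatible integration, the endpoint and API key are typically supplied as environment variables when the container starts. The sketch below is an assumption-laden example, not official documentation: it uses the `OPENAI_API_BASE_URL` and `OPENAI_API_KEY` variables and GroqCloud's OpenAI-compatible endpoint; verify the exact variable names against the project's docs for your version.

```bash
# Sketch: point Open WebUI at an OpenAI-compatible backend (here GroqCloud)
# instead of a local Ollama instance. The variable names OPENAI_API_BASE_URL
# and OPENAI_API_KEY are assumptions to be checked against the docs.
docker run -d -p 3000:8080 \
  -e OPENAI_API_BASE_URL=https://api.groq.com/openai/v1 \
  -e OPENAI_API_KEY=your_secret_key \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```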
## Quick Start & Requirements
Install via pip and start the server:

```bash
pip install open-webui
open-webui serve
```
Or run with Docker (for Ollama running on the host):

```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```

With Nvidia CUDA acceleration:

```bash
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:cuda
```
The volume mount (`-v open-webui:/app/backend/data`) is crucial for data persistence.
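Because all state lives in that named volume, a standard Docker volume backup works. The following is a general Docker pattern sketched for this setup, not an official Open WebUI procedure:

```bash
# Archive the contents of the open-webui volume into the current directory.
docker run --rm \
  -v open-webui:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/open-webui-data.tar.gz -C /data .
```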
## Highlighted Details
## Maintenance & Community
## Licensing & Compatibility
## Limitations & Caveats

- The `:dev` Docker tag is for unstable, bleeding-edge features and may contain bugs.
- Reaching an Ollama instance on the host from inside the container may require adjusting Docker networking (e.g., `--network=host`); a sketch of that variant follows this list.
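Assuming Ollama listens on the host's default port 11434, a host-network run might look like the sketch below. With `--network=host` the `-p` port mapping no longer applies, so the UI is reached on the container's internal port 8080 (the same port mapped by `-p 3000:8080` above). The `OLLAMA_BASE_URL` variable is an assumption to verify against the project's docs.

```bash
# Sketch: host-networking variant for connectivity issues with a host-side
# Ollama server. OLLAMA_BASE_URL is assumed to be the supported variable for
# pointing the UI at Ollama.
docker run -d --network=host \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
# The UI is then available at http://localhost:8080.
```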