CLI tool for local LLM stack orchestration
Harbor is a containerized toolkit designed to simplify the setup and management of local Large Language Model (LLM) environments. It targets developers and researchers who need to quickly experiment with various LLM backends, frontends, and related services like RAG, TTS, and image generation, offering a unified CLI for effortless orchestration.
How It Works
Harbor leverages Docker and a custom CLI to manage a catalog of LLM-related services. It pre-configures and connects popular inference engines (Ollama, vLLM, TGI), frontends (Open WebUI, LibreChat), and tools (SearXNG for RAG, ComfyUI for image generation, Speaches for voice) with minimal user intervention. The CLI allows users to start, stop, and manage these services with simple commands, abstracting away complex Docker Compose configurations.
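The workflow described above can be sketched as a short CLI session. The subcommands and service names shown here are illustrative assumptions drawn from the description, not verified against Harbor's actual command surface.

```shell
# Hypothetical session illustrating the orchestration style described above.
# Subcommands and service names (vllm, webui) are illustrative assumptions.

harbor up                # start the default stack (e.g. an inference backend plus a frontend)
harbor up vllm webui     # start a specific backend/frontend combination
harbor ps                # list running services
harbor logs webui        # tail logs for a single service
harbor down              # stop the whole stack
```

Each command would map to one or more Docker Compose operations behind the scenes, which is the abstraction the paragraph above refers to.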
Quick Start & Requirements
pip install harbor-llm
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The project is positioned as a helper for local development environments, not a production deployment solution. Users already comfortable with Docker and Linux administration may find its abstractions unnecessary. The README also warns about the security implications of exposing services to the internet via the built-in tunneling feature.