Self-hosted AI package for local LLMs and low-code development
This project provides a comprehensive, self-hosted AI development environment using Docker Compose, integrating Ollama for local LLMs, Open WebUI for a chat interface, Supabase for database and vector storage, and n8n for low-code workflow automation. It targets developers and power users looking to build and manage local AI applications efficiently, offering a unified platform with over 400 integrations.
How It Works
The package leverages Docker Compose to orchestrate multiple AI services, including Ollama, Supabase, Open WebUI, Flowise, Langfuse, SearXNG, and Caddy. A Python script (start_services.py) simplifies deployment and GPU configuration (Nvidia, AMD, CPU, or none). Supabase acts as a central hub for data, vector storage, and authentication, while n8n provides a visual interface for building complex AI workflows with pre-built AI nodes and integrations.
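The --profile flag maps onto Docker Compose profiles, which gate the GPU-specific service variants on or off. A minimal sketch of how a launcher like start_services.py might assemble the command (the function name and structure are illustrative, not the script's actual code):

```python
import subprocess

VALID_PROFILES = {"gpu-nvidia", "gpu-amd", "cpu", "none"}

def build_compose_command(profile: str) -> list[str]:
    """Assemble a `docker compose` invocation for a hardware profile.

    Hypothetical helper -- the real start_services.py may differ.
    """
    if profile not in VALID_PROFILES:
        raise ValueError(f"unknown profile: {profile!r}")
    cmd = ["docker", "compose"]
    if profile != "none":
        # Compose only starts services tagged with the selected profile.
        cmd += ["--profile", profile]
    return cmd + ["up", "-d"]

def start(profile: str) -> None:
    # Runs the stack detached; raises CalledProcessError on failure.
    subprocess.run(build_compose_command(profile), check=True)

print(build_compose_command("gpu-nvidia"))
```

With `--profile none`, no profile flag is passed at all, matching the use case of an externally hosted Ollama instance.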
Quick Start & Requirements
python start_services.py --profile <gpu-nvidia|gpu-amd|cpu|none>
Requires a .env file with secrets for n8n, Supabase, and Langfuse.
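An illustrative .env fragment (variable names are typical for these services but may differ from the kit's actual template; generate your own random secrets):

```shell
# n8n credential encryption (names illustrative -- check the kit's .env template)
N8N_ENCRYPTION_KEY=<random-32+-char-string>
N8N_USER_MANAGEMENT_JWT_SECRET=<random-string>

# Supabase Postgres -- avoid "@" in the password (see Limitations & Caveats)
POSTGRES_PASSWORD=<strong-password-without-@>
JWT_SECRET=<random-40+-char-string>

# Langfuse
LANGFUSE_SALT=<random-string>
```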
Limitations & Caveats
The starter kit is designed for proof-of-concept projects and may require customization for production environments. GPU support on Mac/Apple Silicon is limited to CPU or external Ollama instances. Ensure no "@" symbol is present in Supabase Postgres passwords to avoid connection issues.
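The "@" caveat exists because connection URIs use "@" to separate the credentials from the host, so a literal "@" inside the password makes the URI ambiguous to some parsers. The kit's rule is simply to avoid the character; where the connecting client supports it, percent-encoding is an alternative. A small standard-library illustration (the kit itself may assemble its connection strings differently):

```python
from urllib.parse import quote

password = "p@ssw0rd"  # example password containing "@"
raw_url = f"postgresql://postgres:{password}@db:5432/postgres"
# Some URI parsers split credentials from the host at the first "@",
# misreading the host as "ssw0rd@db" -- hence the "no @" rule.

# Percent-encoding the password leaves only one literal "@" in the URL.
safe_url = f"postgresql://postgres:{quote(password, safe='')}@db:5432/postgres"
print(safe_url)
```

Note that raw_url above contains two "@" characters, which is exactly the ambiguity the caveat warns about.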