Local AI environment via Docker Compose template
This open-source Docker Compose template provides a quick setup for a local, self-hosted AI and low-code development environment. Curated by n8n, it targets developers and users looking to build secure AI workflows locally, offering a comprehensive toolkit with n8n, Ollama, Qdrant, and PostgreSQL.
How It Works
The kit leverages Docker Compose to orchestrate multiple AI components. It integrates n8n for low-code workflow automation, Ollama for running local LLMs, Qdrant as a vector store for AI data, and PostgreSQL for data management. This combination allows users to build applications like AI agents, secure document summarizers, and enhanced communication bots without relying on external cloud services.
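The orchestration described above can be sketched as a Compose file along these lines. This is an illustrative sketch only; the kit's actual docker-compose.yml differs in detail (it adds hardware profiles, named volumes, and an init step that pulls a default model), and the credentials shown are placeholders:

```yaml
# Illustrative sketch -- not the kit's real compose file.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme   # placeholder; the kit reads credentials from .env
  qdrant:
    image: qdrant/qdrant
    ports: ["6333:6333"]            # Qdrant REST API
  ollama:
    image: ollama/ollama
    ports: ["11434:11434"]          # Ollama's default API port
  n8n:
    image: n8nio/n8n
    ports: ["5678:5678"]            # n8n editor UI
    depends_on: [postgres, ollama, qdrant]
```

Because everything runs as local containers, workflows built in n8n can call Ollama and Qdrant over the internal Docker network without any data leaving the machine.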
Quick Start & Requirements
Clone the repository:

```
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git && cd self-hosted-ai-starter-kit
```

Then start the stack with the Docker Compose profile that matches your hardware:

```
docker compose --profile gpu-nvidia up   # Nvidia GPU
docker compose --profile gpu-amd up      # AMD GPU (Linux)
docker compose up                        # Mac / Apple Silicon
docker compose --profile cpu up          # CPU only
```

Once the containers are running, the n8n editor is available at http://localhost:5678/.
The initial LLM download (e.g., Llama 3.2) may take some time.
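Once the stack is up, any HTTP client can exercise the local Ollama API directly. The sketch below assumes the kit maps Ollama to its default port 11434 and that a model tagged "llama3.2" has been pulled; the live HTTP call is left commented out so the snippet runs even without the stack:

```python
import json

# Assumptions: Ollama is reachable at localhost:11434 (its default port)
# and the "llama3.2" model has already been downloaded.
# /api/generate is Ollama's standard one-shot generation endpoint.
def build_generate_payload(model: str, prompt: str) -> str:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

payload = build_generate_payload("llama3.2", "Summarize: n8n orchestrates local AI tools.")
print(payload)

# With the stack running, the actual call would look like:
# from urllib import request
# req = request.Request("http://localhost:11434/api/generate",
#                       data=payload.encode(),
#                       headers={"Content-Type": "application/json"})
# print(request.urlopen(req).read().decode())
```

The same endpoint is what n8n workflow nodes talk to under the hood, so this is a quick way to verify the LLM container is healthy before building workflows on top of it.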
Limitations & Caveats
The starter kit is designed for proof-of-concept projects and is not fully optimized for production use. Mac users cannot expose Apple Silicon GPUs to Docker containers, so Ollama runs on the CPU inside Docker on macOS; running Ollama natively on the host is a common workaround.