local-ai-packaged by coleam00

Self-hosted AI package for local LLMs and low-code development

Created 10 months ago
3,456 stars

Top 14.0% on SourcePulse

Project Summary

This project provides a comprehensive, self-hosted AI development environment using Docker Compose, integrating Ollama for local LLMs, Open WebUI for a chat interface, Supabase for database and vector storage, and n8n for low-code workflow automation. It targets developers and power users looking to build and manage local AI applications efficiently, offering a unified platform with over 400 integrations.

How It Works

The package leverages Docker Compose to orchestrate multiple AI services, including Ollama, Supabase, Open WebUI, Flowise, Langfuse, SearXNG, and Caddy. A Python script (start_services.py) simplifies deployment and selects a hardware profile (Nvidia GPU, AMD GPU, CPU-only, or none). Supabase acts as a central hub for data, vectors, and authentication, while n8n provides a visual interface for building complex AI workflows with pre-built AI nodes and integrations.
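The script's actual internals aren't reproduced here, but the profile-to-Compose mapping can be sketched roughly as follows. This is a minimal illustration, not the real start_services.py: the function names and the `-p localai` project name are assumptions; only the profile flag values come from the documented CLI.

```python
import subprocess

# Illustrative sketch only: profile names match the documented CLI flags,
# but the command construction is an assumption, not the script's real code.
VALID_PROFILES = {"gpu-nvidia", "gpu-amd", "cpu", "none"}


def compose_command(profile: str) -> list[str]:
    """Build a `docker compose up` invocation for the chosen hardware profile."""
    if profile not in VALID_PROFILES:
        raise ValueError(f"unknown profile: {profile!r}")
    cmd = ["docker", "compose", "-p", "localai"]
    if profile != "none":
        # Compose profiles gate which services (e.g. GPU-enabled Ollama) start.
        cmd += ["--profile", profile]
    return cmd + ["up", "-d"]


def start_services(profile: str) -> None:
    """Run the assembled command; requires Docker to be installed."""
    subprocess.run(compose_command(profile), check=True)
```

The key design point is that a single Compose file can describe all hardware variants, with `--profile` selecting which service definitions actually run.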

Quick Start & Requirements

  • Install: Clone the repository, navigate to the directory, and run python start_services.py --profile <gpu-nvidia|gpu-amd|cpu|none>.
  • Prerequisites: Python, Git, Docker/Docker Desktop.
  • Configuration: Requires setting environment variables in a .env file for n8n, Supabase, and Langfuse secrets.
  • Docs: Original Local AI Starter Kit, Cole's Guide
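Since a missing secret typically surfaces only as a cryptic container failure, a small pre-flight check can save a restart cycle. The variable names below are assumptions based on typical n8n/Supabase/Langfuse setups; the repository's .env.example is the authoritative list.

```python
import os

# Assumed variable names for illustration; consult the repository's
# .env.example for the authoritative set of required secrets.
REQUIRED_VARS = [
    "N8N_ENCRYPTION_KEY",             # n8n credential encryption
    "N8N_USER_MANAGEMENT_JWT_SECRET",  # n8n user management
    "POSTGRES_PASSWORD",              # Supabase Postgres
    "JWT_SECRET",                     # Supabase auth
    "LANGFUSE_SALT",                  # Langfuse
]


def missing_vars(env: dict) -> list:
    """Return required variables that are absent or empty in `env`."""
    return [v for v in REQUIRED_VARS if not env.get(v)]


if __name__ == "__main__":
    missing = missing_vars(dict(os.environ))
    if missing:
        raise SystemExit(f"Set these in your .env file: {', '.join(missing)}")
```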

Highlighted Details

  • Bundles n8n, which ships with over 400 integrations and pre-built AI components.
  • Includes Supabase for database, vector store, and authentication.
  • Supports local LLMs via Ollama with GPU acceleration options.
  • Provides Open WebUI for a ChatGPT-like interface.
  • Adds Flowise for no/low-code AI agent building.
  • Incorporates Qdrant for high-performance vector storage.
  • Features SearXNG for private internet search and Caddy for managed HTTPS.
  • Includes Langfuse for LLM observability.

Maintenance & Community

  • Curated by n8n-io and coleam00.
  • Community forum available via oTTomator Think Tank.
  • GitHub Kanban board for feature implementation and bug tracking.

Licensing & Compatibility

  • Licensed under Apache License 2.0.
  • Compatible with commercial use and closed-source linking.

Limitations & Caveats

The starter kit is designed for proof-of-concept projects and may require customization for production environments. Docker-based GPU acceleration is unavailable on Mac/Apple Silicon, so those users must run on CPU or point the stack at an Ollama instance running natively on the host. Ensure the Supabase Postgres password contains no "@" symbol, as it causes connection issues.
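The "@" restriction exists because the password is interpolated into a postgres:// connection URI, where an unescaped "@" prematurely terminates the credentials portion. A small sanity check (illustrative, not part of the kit) catches this before deployment:

```python
# Illustrative helper, not part of the kit: an unescaped "@" inside a
# postgres://user:password@host URI makes parsers treat everything after
# it as the host, corrupting the connection string.
def is_safe_postgres_password(password: str) -> bool:
    """Reject passwords that would corrupt a postgres:// connection URI."""
    return bool(password) and "@" not in password
```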

Health Check

  • Last commit: 1 week ago
  • Responsiveness: 1 day
  • Pull requests (30d): 3
  • Issues (30d): 21
  • Star history: 109 stars in the last 30 days

Explore Similar Projects

Starred by Yaowei Zheng (author of LLaMA-Factory), Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), and 7 more.

fragments by e2b-dev

Top 0.2% on SourcePulse, 6k stars
Next.js template for AI-generated apps
Created 1 year ago; updated 3 weeks ago
Starred by Tobi Lutke (cofounder of Shopify), Andrej Karpathy (founder of Eureka Labs; formerly at Tesla and OpenAI; author of CS 231n), and 27 more.

open-webui by open-webui

Top 0.5% on SourcePulse, 120k stars
Self-hosted AI platform for local LLM deployment
Created 2 years ago; updated 1 day ago