eclaire by eclaire-labs

AI assistant for your private data

Created 1 month ago
371 stars

Top 76.2% on SourcePulse

View on GitHub
Project Summary

Local-first, open-source AI assistant designed to unify and manage personal data across tasks, notes, documents, photos, and bookmarks. It offers a privacy-focused, self-hosted alternative to closed ecosystems, enabling users to organize, search, and automate their digital life using local AI models. The project targets power users and developers seeking control over their data and AI interactions.

How It Works

Eclaire employs a modular, layered architecture separating frontend, backend API, and background workers. It leverages local AI models for privacy and broad compatibility, supporting various LLM backends like llama.cpp, vLLM, and Ollama via an OpenAI-compatible API. Data is unified from diverse sources and stored in PostgreSQL, with assets managed locally. AI capabilities include content understanding, search, OCR, and automation, with AI conversations providing source citations.
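Because the backends are exposed through an OpenAI-compatible API, any standard client can talk to whichever model server is configured. A minimal sketch, assuming a llama.cpp llama-server listening on localhost:8080 (the endpoint URL and model name here are illustrative assumptions, not taken from the README):

```python
import json
import urllib.request


def build_chat_request(model: str, messages: list[dict]) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {"model": model, "messages": messages, "temperature": 0.2}


payload = build_chat_request(
    "local-model",  # placeholder; depends on which backend/model you run
    [{"role": "user", "content": "Summarize my notes tagged 'travel'."}],
)

# Sending the request is commented out since it needs a running backend:
# req = urllib.request.Request(
#     "http://localhost:8080/v1/chat/completions",  # assumed llama-server address
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Swapping vLLM or Ollama in place of llama-server should only require changing the base URL and model name, since all three speak the same OpenAI-compatible protocol.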

Quick Start & Requirements

  • Installation: recommended via Docker (run npm run setup:prod, then docker compose up); for development, run npm run setup:dev, then npm run dev.
  • Prerequisites: Node.js ≥ 22, Docker Desktop, PM2, PostgreSQL ≥ 17.5, Redis ≥ 8. AI/ML backends like llama.cpp/llama-server and docling-serve are required. Development mode additionally needs LibreOffice, Poppler Utils, GraphicsMagick/ImageMagick, Ghostscript, and libheif.
  • Setup: AI model downloads can take 5-10 minutes. Resource requirements scale with model size.
  • Links: Official documentation, contributing guide, and API details are referenced but direct URLs are not provided in the README.

Highlighted Details

  • Cross-platform support for macOS, Linux, and Windows.
  • Privacy-first design with local model execution and data storage by default.
  • Unified data management across tasks, notes, documents, photos, and bookmarks.
  • Features AI-powered search, classification, OCR, automation, and conversational chat with tool-calling capabilities.
  • Provides an OpenAI-compatible REST API for integration.
  • Supports a wide range of LLM backends and models (text and vision), with hardware acceleration options (NVIDIA CUDA, Apple MLX).
  • Integrations include Telegram, GitHub, and Reddit.
  • Offers Mobile & PWA support.

Maintenance & Community

The project is under active, pre-release development with frequent updates and evolving APIs. Contributions are welcomed via the Contributing Guide. Support and issue tracking are managed through GitHub Issues. No dedicated community channels like Discord or Slack are mentioned.

Licensing & Compatibility

The specific open-source license is not explicitly stated in the provided README. Users should exercise caution regarding commercial use or integration into closed-source projects until a license is clarified. The project is designed for self-hosting and includes security warnings against direct public internet exposure.

Limitations & Caveats

The project is in pre-release status: expect frequent updates, potential breaking changes, and evolving APIs, so back up your data regularly. The system is not hardened for direct internet exposure and should sit behind additional security layers such as a VPN or reverse proxy. Setup involves managing several infrastructure services and potentially complex AI backend installations.
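As one option for that extra layer, a minimal nginx reverse-proxy sketch might look like the following (the hostname, certificate paths, and local port 3000 are assumptions for illustration, not values from the README):

```nginx
server {
    listen 443 ssl;
    server_name eclaire.example.com;                      # hypothetical hostname

    ssl_certificate     /etc/ssl/certs/eclaire.pem;      # your certificate
    ssl_certificate_key /etc/ssl/private/eclaire.key;    # your private key

    location / {
        proxy_pass http://127.0.0.1:3000;                # assumed local app port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Pairing this with basic auth or a VPN keeps the self-hosted instance off the open internet, in line with the project's own security warnings.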

Health Check
Last Commit

1 day ago

Responsiveness

Inactive

Pull Requests (30d)
1
Issues (30d)
0
Star History
376 stars in the last 30 days

