moltis by moltis-org

Personal AI gateway for local-first, multi-modal assistance

Created 2 weeks ago

1,094 stars

Top 34.6% on SourcePulse

View on GitHub
Project Summary

Moltis is a personal AI gateway designed as a single, self-contained Rust binary, offering a local-first alternative to cloud-based AI assistants. It targets engineers, researchers, and power users seeking a flexible, controllable platform for interacting with multiple Large Language Models (LLMs) and integrating AI capabilities into their workflows. The primary benefit is a unified, no-runtime, no-npm solution that runs entirely on the user's machine, providing features like long-term memory, sandboxed execution, and multi-channel access.

How It Works

Moltis functions as a local gateway server, built with Rust and the Axum framework, exposing an HTTP and WebSocket API. It orchestrates interactions between clients (Web UI, Telegram, API) and various LLM providers through a trait-based architecture. Core components include an agent runner that manages LLM calls, tool execution within sandboxed Docker or Apple containers, and session persistence via SQLite and JSONL files. It supports real-time token streaming, sub-agent delegation, and an extensible hook system for lifecycle event management, enabling deep customization and control over AI agent behavior.
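The trait-based provider architecture described above can be sketched as follows. This is a hypothetical illustration, not the actual Moltis API: the `Provider` trait, `Completion` struct, and `LocalEcho` backend are invented names, and the real implementation streams tokens asynchronously rather than returning a complete string.

```rust
/// A chat completion returned by any backend (illustrative, simplified).
struct Completion {
    text: String,
}

/// Each LLM backend (OpenAI, Copilot, a local model, ...) implements one
/// trait, so the agent runner can treat all providers interchangeably.
trait Provider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> Completion;
}

/// A stub backend standing in for a local model.
struct LocalEcho;

impl Provider for LocalEcho {
    fn name(&self) -> &str {
        "local-echo"
    }
    fn complete(&self, prompt: &str) -> Completion {
        Completion { text: format!("echo: {prompt}") }
    }
}

fn main() {
    // The gateway would hold providers behind `dyn Provider`,
    // selecting one per request or per session.
    let providers: Vec<Box<dyn Provider>> = vec![Box::new(LocalEcho)];
    let reply = providers[0].complete("hello");
    println!("[{}] {}", providers[0].name(), reply.text);
}
```

The benefit of this design is that adding a new LLM backend means implementing one trait, with no changes to the agent runner or the channel layer.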

Quick Start & Requirements

Installation is streamlined, with four options:

  • macOS/Linux one-liner: curl -fsSL https://www.moltis.org/install.sh | sh
  • Homebrew: brew install moltis-org/tap/moltis
  • Docker: docker pull ghcr.io/moltis-org/moltis:latest
  • Cargo: cargo install moltis --git https://github.com/moltis-org/moltis

A Rust toolchain (1.91+) is required only when building from source. Docker is required for the sandboxed command execution feature.

Highlighted Details

  • Multi-provider LLM Support: Integrates with OpenAI Codex, GitHub Copilot, and local LLMs.
  • Real-time Streaming: Delivers token streaming for responsive user experiences.
  • Communication Channels: Supports Telegram, a built-in Web UI, and an extensible channel abstraction.
  • Persistence & Memory: Manages conversation history via SQLite and JSONL, with embeddings-powered long-term memory.
  • Sandboxed Execution: Runs user commands within isolated Docker or Apple containers.
  • Extensibility: Features a robust hook system for lifecycle events and an extensible skill system.
  • Web Browsing & Voice: Includes web search capabilities with SSRF protection and multi-provider TTS/STT support.
  • Security: Implements authentication (password, passkey), WebSocket origin validation, and tool result sanitization.
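The hook system for lifecycle events mentioned above can be sketched roughly as follows. All names here (`Hook`, `Event`, the event variants, `ToolCounter`) are illustrative assumptions, not the actual Moltis interfaces:

```rust
/// Lifecycle events the runner emits (illustrative subset).
#[derive(Debug, Clone, PartialEq)]
enum Event {
    SessionStarted(String),
    ToolInvoked(String),
}

/// A hook observes lifecycle events; the runner calls each
/// registered hook, in order, whenever an event fires.
trait Hook {
    fn on_event(&mut self, event: &Event);
}

/// Example hook: counts how many tools a session has invoked.
struct ToolCounter {
    count: usize,
}

impl Hook for ToolCounter {
    fn on_event(&mut self, event: &Event) {
        if let Event::ToolInvoked(_) = event {
            self.count += 1;
        }
    }
}

fn main() {
    let mut hooks: Vec<Box<dyn Hook>> = vec![Box::new(ToolCounter { count: 0 })];
    let events = [
        Event::SessionStarted("s1".into()),
        Event::ToolInvoked("web_search".into()),
    ];
    for event in &events {
        for hook in hooks.iter_mut() {
            hook.on_event(event);
        }
    }
}
```

A hook registry like this lets users inject custom behavior (logging, rate limiting, notifications) at well-defined points without modifying the core agent loop.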

Maintenance & Community

The project is actively developed, indicated by CI/CD pipelines (GitHub Actions) and code quality metrics (Codecov, CodSpeed). A Discord community is available for support and discussion at https://discord.gg/t873en2E.

Licensing & Compatibility

Moltis is released under the MIT license. This permissive license allows broad adoption, including commercial use and integration into closed-source projects; the only notable obligation is preserving the copyright and license notice.

Limitations & Caveats

Sandboxed execution requires mounting the Docker or OrbStack socket, which grants Moltis full access to the host's container runtime, so users should review and trust the source code before enabling it. The default self-signed TLS certificates must be manually added to the system's trust store to avoid browser warnings. While SSRF protection is in place for web fetching, complex network setups may still require careful configuration.

Health Check

  • Last commit: 1 day ago
  • Responsiveness: Inactive
  • Pull requests (30d): 123
  • Issues (30d): 51
  • Star history: 1,123 stars in the last 20 days

Explore Similar Projects

Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems") and Yaowei Zheng (author of LLaMA-Factory).

AstrBot by AstrBotDevs

  • 3.7% · 16k stars
  • LLM chatbot/framework for multiple platforms
  • Created 3 years ago · Updated 16 hours ago