RCLI by RunanywhereAI

Local voice AI pipeline for macOS

Created 1 week ago

944 stars

Top 38.8% on SourcePulse

Summary

RCLI is an on-device, voice-first AI assistant for macOS, engineered for Apple Silicon. It delivers a complete Speech-to-Text (STT), Large Language Model (LLM), and Text-to-Speech (TTS) pipeline, enabling voice control of macOS, local document querying via RAG, and natural language interaction without cloud reliance or API keys.

How It Works

RCLI executes a full STT+LLM+TTS pipeline natively on Apple Silicon, powered by the proprietary MetalRT GPU inference engine for sub-200ms end-to-end latency. It integrates local RAG for document Q&A using a hybrid retrieval approach and supports LLM-native tool calling for executing 38 macOS actions via AppleScript and shell commands. This on-device architecture prioritizes privacy and offline functionality.
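The stage order described above can be sketched as a simple dispatch loop. All names below are illustrative, not RCLI's actual API; in the real project, engines such as Whisper, Qwen, and Piper (driven by MetalRT or llama.cpp) would sit behind these callables, and tool handlers would invoke AppleScript or shell commands.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class VoicePipeline:
    stt: Callable[[bytes], str]    # audio -> transcript
    llm: Callable[[str], dict]     # transcript -> {"text": ..., "tool": ..., "args": ...}
    tts: Callable[[str], bytes]    # reply text -> audio
    tools: dict = field(default_factory=dict)  # tool name -> handler

    def handle(self, audio: bytes) -> bytes:
        transcript = self.stt(audio)
        result = self.llm(transcript)
        # If the model requested a tool call (e.g. a macOS action),
        # run it and fold the result into the spoken reply.
        if result.get("tool") in self.tools:
            tool_out = self.tools[result["tool"]](result.get("args", {}))
            result["text"] = f"{result['text']} {tool_out}".strip()
        return self.tts(result["text"])

# Toy stand-ins to show the data flow end to end.
pipe = VoicePipeline(
    stt=lambda audio: audio.decode(),
    llm=lambda text: {"text": "Opening", "tool": "open_app", "args": {"name": "Safari"}},
    tts=lambda text: text.encode(),
    tools={"open_app": lambda args: f"{args['name']}."},
)
print(pipe.handle(b"open safari"))  # b'Opening Safari.'
```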

Quick Start & Requirements

  • Installation: Via curl script (curl -fsSL https://raw.githubusercontent.com/RunanywhereAI/RCLI/main/install.sh | bash) or Homebrew (brew tap RunanywhereAI/rcli && brew install rcli).
  • Prerequisites: macOS 13+ on Apple Silicon. MetalRT engine requires M3 or later; M1/M2 Macs automatically fall back to llama.cpp.
  • Setup: Run rcli setup to download AI models (~1GB, one-time).
  • Docs/Demos: Blog posts detailing MetalRT performance and the FastVoice pipeline are available at runanywhere.ai.

Highlighted Details

  • On-Device Pipeline: Full STT (Zipformer/Whisper) + LLM (Qwen/Llama) + TTS (Piper/Kokoro) running locally.
  • macOS Actions: Control 38 macOS functions (apps, media, system, communication) via voice or text.
  • Local RAG: Hybrid vector + BM25 retrieval for document Q&A with ~4ms latency over 5K+ chunks.
  • MetalRT Engine: Proprietary GPU inference engine for Apple Silicon, achieving sub-200ms latency and up to 550 tok/s LLM throughput.
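The hybrid vector + BM25 retrieval mentioned above implies some way of merging two ranked lists. RCLI's fusion method is not documented in this summary; the sketch below uses reciprocal rank fusion (RRF), a common choice for combining dense and keyword rankings, with toy inputs.

```python
def rrf(rankings, k=60):
    """Fuse several ranked lists of chunk ids into one ordering.

    Each list contributes 1 / (k + rank + 1) per id; ids ranked highly
    by either retriever float to the top of the fused list.
    """
    scores = {}
    for ranked in rankings:
        for rank, chunk_id in enumerate(ranked):
            scores[chunk_id] = scores.get(chunk_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc3", "doc1", "doc7"]   # nearest-neighbor order (toy data)
bm25_hits   = ["doc3", "doc9", "doc1"]   # keyword-match order (toy data)

print(rrf([vector_hits, bm25_hits]))  # ['doc3', 'doc1', 'doc9', 'doc7']
```

`doc3` wins because both retrievers rank it first; `doc1` beats `doc9` by appearing in both lists rather than just one.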

Maintenance & Community

Developed by RunAnywhere, Inc. The README does not specify community channels (e.g., Discord, Slack) or a public roadmap.

Licensing & Compatibility

  • RCLI Core: MIT License, permitting commercial use and closed-source linking.
  • MetalRT Engine: Proprietary license from RunAnywhere, Inc. Licensing inquiries can be directed to founder@runanywhere.ai.
  • Compatibility: While the RCLI core is permissively licensed, MetalRT's proprietary license may restrict how the full pipeline can be integrated or redistributed.

Limitations & Caveats

The MetalRT engine requires Apple M3 or later; M1/M2 Macs fall back to llama.cpp, which may reduce throughput and latency. Tool-calling reliability can degrade as conversation context accumulates, so clearing context periodically is recommended. MetalRT's proprietary license requires separate licensing for commercial use or integration beyond personal use.
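The context-accumulation caveat above reflects a general pattern in local LLM tool calling: bounding the history the model sees. How RCLI manages context internally is not documented here; this is only an illustrative sketch of keeping the system prompt plus the most recent turns.

```python
def trim_context(messages, max_turns=6):
    """Keep the first system message (if any) and the last `max_turns` messages."""
    system = [m for m in messages if m["role"] == "system"][:1]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]

# Toy history: one system prompt followed by ten user turns.
history = [{"role": "system", "content": "You control macOS."}]
history += [{"role": "user", "content": f"turn {i}"} for i in range(10)]

trimmed = trim_context(history, max_turns=4)
print(len(trimmed))  # 5: the system prompt plus the 4 most recent turns
```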

Health Check
Last Commit

1 day ago

Responsiveness

Inactive

Pull Requests (30d)
17
Issues (30d)
7
Star History
981 stars in the last 9 days
