RunanywhereAI: Local voice AI pipeline for macOS
New!
Top 38.8% on SourcePulse
Summary
RCLI is an on-device, voice-first AI assistant for macOS, engineered for Apple Silicon. It delivers a complete Speech-to-Text (STT), Large Language Model (LLM), and Text-to-Speech (TTS) pipeline, enabling voice control of macOS, local document querying via RAG, and natural language interaction without cloud reliance or API keys.
How It Works
RCLI executes a full STT+LLM+TTS pipeline natively on Apple Silicon, powered by the proprietary MetalRT GPU inference engine for sub-200ms end-to-end latency. It integrates local RAG for document Q&A using a hybrid retrieval approach and supports LLM-native tool calling for executing 38 macOS actions via AppleScript and shell commands. This on-device architecture prioritizes privacy and offline functionality.
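The STT + LLM + TTS loop with tool dispatch described above can be sketched as follows. Every class, function, and the tool registry here is a hypothetical stand-in for illustration, not RCLI's actual API; the real pipeline runs on-device models rather than the placeholder stubs used below.

```python
from dataclasses import dataclass, field


@dataclass
class VoicePipeline:
    # Registry mapping tool names to macOS actions (hypothetical stand-in
    # for RCLI's AppleScript/shell tool-calling layer).
    tools: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def register_tool(self, name, fn):
        self.tools[name] = fn

    def transcribe(self, audio: bytes) -> str:
        # Placeholder: a real pipeline would run an on-device STT model here.
        return audio.decode("utf-8")

    def generate(self, prompt: str) -> dict:
        # Placeholder: a real pipeline would query a local LLM. Here we
        # emulate tool-call detection with a trivial keyword match.
        for name in self.tools:
            if name in prompt:
                return {"type": "tool_call", "name": name}
        return {"type": "text", "content": f"echo: {prompt}"}

    def speak(self, text: str) -> str:
        # Placeholder for TTS synthesis; returns text to keep the sketch simple.
        return text

    def handle(self, audio: bytes) -> str:
        # One end-to-end turn: STT -> LLM -> (tool call or text) -> TTS.
        text = self.transcribe(audio)
        self.history.append(text)
        result = self.generate(text)
        if result["type"] == "tool_call":
            return self.speak(self.tools[result["name"]]())
        return self.speak(result["content"])
```

Usage: register a tool such as `pipe.register_tool("open_safari", lambda: "Opening Safari")`, then each `pipe.handle(audio)` call either executes the matched tool or speaks the model's text reply.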
Quick Start & Requirements
Install via the curl script (curl -fsSL https://raw.githubusercontent.com/RunanywhereAI/RCLI/main/install.sh | bash) or Homebrew (brew tap RunanywhereAI/rcli && brew install rcli). llama.cpp is required (used as the inference fallback on pre-M3 Macs). Run rcli setup to download AI models (~1GB, one-time).
Highlighted Details
Maintenance & Community
Developed by RunAnywhere, Inc. The README does not specify community channels (e.g., Discord, Slack) or a public roadmap.
Licensing & Compatibility
Limitations & Caveats
MetalRT engine requires Apple M3 or later; M1/M2 Macs use a llama.cpp fallback, potentially impacting performance. Tool-calling reliability may degrade with accumulated conversation context; clearing context is recommended for optimal performance. MetalRT's proprietary license necessitates separate licensing for commercial use or integration beyond personal use.
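Since tool-calling reliability degrades as conversation context accumulates, a client can cap history with a sliding window instead of clearing it entirely. A minimal sketch of that idea, using a hypothetical helper rather than any RCLI command:

```python
# Sliding-window context trimming: keep only the most recent turns so the
# prompt sent to the local LLM stays short. Hypothetical helper, not RCLI code.
def trim_context(history, max_turns=8):
    """Return the most recent max_turns entries of the conversation history."""
    return history[-max_turns:]


# Example: a 20-turn history is reduced to its last 8 turns.
history = [("user", f"message {i}") for i in range(20)]
trimmed = trim_context(history)
```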