Local LLM server for Apple Silicon
Top 32.5% on SourcePulse
Oosaurus is a native, Apple Silicon-only local LLM server designed to maximize performance on M-series Macs. It offers an OpenAI-compatible API, enabling seamless integration with existing tools and workflows, and provides a user-friendly SwiftUI interface for managing models and monitoring system resources.
How It Works
Oosaurus leverages Apple's MLX framework for optimized performance on Apple Silicon, utilizing MLXLLM for efficient LLM execution. It features a SwiftNIO server for handling requests and Server-Sent Events for low-latency token streaming. Key architectural choices include session reuse via KV cache for faster multi-turn conversations and automatic handling of chat templates from model configurations for accurate prompt formatting.
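Because the chat endpoint follows the OpenAI streaming convention, any SSE-capable client can consume tokens as they are generated. Below is a minimal Swift sketch, assuming the default port 8080, an OpenAI-style "data: ... [DONE]" stream, and a hypothetical mlx-community model id (substitute one you have downloaded):

```swift
import Foundation

// Sketch: stream a chat completion from Oosaurus and print tokens as they arrive.
// Assumes the default port 8080; the model id below is a hypothetical example.
func streamChat(prompt: String) async throws {
    var request = URLRequest(url: URL(string: "http://localhost:8080/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = [
        "model": "mlx-community/Llama-3.2-3B-Instruct-4bit",  // hypothetical; use an id from the Model Manager
        "stream": true,
        "messages": [["role": "user", "content": prompt]]
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    // The response is Server-Sent Events: lines of "data: {json}" ending with "data: [DONE]".
    let (bytes, _) = try await URLSession.shared.bytes(for: request)
    for try await line in bytes.lines {
        guard line.hasPrefix("data: ") else { continue }
        let payload = String(line.dropFirst(6))
        if payload == "[DONE]" { break }
        if let chunk = try JSONSerialization.jsonObject(with: Data(payload.utf8)) as? [String: Any],
           let choices = chunk["choices"] as? [[String: Any]],
           let delta = choices.first?["delta"] as? [String: Any],
           let token = delta["content"] as? String {
            print(token, terminator: "")
        }
    }
    print()
}
```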
Quick Start & Requirements
Build and run the Oosaurus target in Xcode. Configure the port in the UI (default 8080). Download models via the Model Manager.
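Once the server is running, a quick way to verify it is reachable is to list the installed models. A minimal Swift sketch, assuming the default port 8080 and an OpenAI-style response shaped like {"data": [{"id": "..."}]}:

```swift
import Foundation

// Sketch: confirm the server is up by listing installed models (GET /v1/models).
struct ModelList: Decodable {
    struct Model: Decodable { let id: String }
    let data: [Model]
}

func installedModels() async throws -> [String] {
    // Assumes the default port configured in the UI.
    let url = URL(string: "http://localhost:8080/v1/models")!
    let (body, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(ModelList.self, from: body).data.map(\.id)
}
```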
Highlighted Details
OpenAI-compatible endpoints: /v1/models and /v1/chat/completions (streaming and non-streaming). Models are sourced from the mlx-community collection on Hugging Face.
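For clients that do not need token-by-token output, the same chat endpoint can be called without streaming. A short Swift sketch under the same assumptions (default port 8080, OpenAI-style response body); pass a model id reported by /v1/models:

```swift
import Foundation

// Sketch: non-streaming chat completion; the whole reply arrives in one JSON body.
struct ChatResponse: Decodable {
    struct Choice: Decodable {
        struct Message: Decodable { let role: String; let content: String }
        let message: Message
    }
    let choices: [Choice]
}

func complete(model: String, prompt: String) async throws -> String {
    var request = URLRequest(url: URL(string: "http://localhost:8080/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = [
        "model": model,
        "stream": false,
        "messages": [["role": "user", "content": prompt]]
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)
    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(ChatResponse.self, from: data).choices.first?.message.content ?? ""
}
```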
Maintenance & Community
Maintained by the Dinoki team (dinoki.ai). Community contributions include wizardeur (first PR creator).
Licensing & Compatibility
The README does not specify a license, which may impact commercial use or closed-source linking.
Limitations & Caveats
The project is Apple Silicon only and does not support Intel Macs. The /transcribe endpoints are placeholders pending Whisper integration.