scouzi1966/afm: Local AI API and OCR for macOS
Top 100.0% on SourcePulse
Summary
This project addresses the need for local, private AI processing on macOS by exposing Apple's Foundation Models and other MLX models through a unified, OpenAI-compatible API. It targets AI enthusiasts and developers seeking to leverage on-device LLMs without Python or cloud dependencies, offering significant privacy and performance benefits.
How It Works
Built entirely in Swift, the project leverages Metal for maximum GPU acceleration on Apple Silicon. It provides an OpenAI-compatible API endpoint that can serve Apple's on-device Foundation Models or any Hugging Face MLX model. The system also functions as an API gateway, aggregating requests for other local LLM backends like Ollama and LM Studio, and includes Vision OCR capabilities for image and PDF text extraction.
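Because the server speaks the OpenAI chat-completions dialect, any OpenAI-style client can talk to it. The sketch below uses only the Python standard library; the base URL, port, and model name are illustrative assumptions, not values taken from the project's documentation.

```python
import json
import urllib.request

# Assumed local endpoint; check the repository for the server's actual
# default host, port, and API prefix.
BASE_URL = "http://localhost:9999/v1"


def build_chat_request(prompt, model="foundation"):
    """Build an OpenAI-style chat-completions payload.

    The model identifier "foundation" is a hypothetical placeholder for
    Apple's on-device Foundation Model.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def send_chat_request(payload, base_url=BASE_URL):
    """POST the payload to the local server (requires the server running)."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Build (but do not send) a request, to show the wire format.
payload = build_chat_request("Summarize this repo in one sentence.")
print(json.dumps(payload, indent=2))
```

Because the request shape matches OpenAI's, existing SDKs and tools can usually be pointed at the local server just by overriding their base URL.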
Quick Start & Requirements
Installation via Homebrew (`brew install scouzi1966/afm/afm`) is recommended. Pip (`pip install macafm`) and building from source are also supported.
Highlighted Details
Vision OCR for images and PDFs is exposed through the `afm vision` command.
Maintenance & Community
The project welcomes contributions via pull requests. While no dedicated community channels like Discord or Slack are listed, the GitHub repository serves as the primary hub for issues and development discussions. Related projects like Vesta AI Explorer and AFMTrainer are also mentioned.
Licensing & Compatibility
The project is licensed under the permissive MIT License, allowing for broad compatibility, including commercial use and linking within closed-source applications.
Limitations & Caveats
The Apple Foundation Model is a 3B-parameter model optimized for on-device performance, so it is not comparable to large cloud-hosted LLMs. The software requires macOS 26 or later with Apple Intelligence enabled. Token counting for the Foundation Model uses a word-based approximation, whereas proxied backends report accurate counts. The nightly build is currently synchronized with the stable release.