Arthur-Ficial/apfel: Command-line access to on-device Apple Intelligence LLMs
Summary
apfel provides command-line access to Apple's on-device Large Language Model (LLM) available on Apple Silicon Macs. It targets developers and power users seeking to leverage local, private AI capabilities without cloud dependencies or API costs. The primary benefit is enabling sophisticated AI interactions directly from the terminal or via an OpenAI-compatible server, enhancing productivity and privacy.
How It Works
The project utilizes Apple's FoundationModels framework (macOS 26+) to interface with the LLM pre-installed on Apple Silicon hardware. apfel acts as a wrapper, exposing this model through a pipe-friendly command-line interface and a local HTTP server. All inference is performed entirely on-device, ensuring data privacy and eliminating network latency or costs associated with cloud-based AI services. Its architecture supports advanced features like tool calling and integrates seamlessly with existing OpenAI SDKs.
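Because all traffic stays on localhost, the server can be exercised with nothing but the standard library. A minimal sketch of talking to it, assuming `apfel` is serving on localhost:11434 (per the README) and follows the usual OpenAI-compatible path `/v1/chat/completions` (an assumption, not confirmed by the README):

```python
import json
from urllib import error, request

# Assumption: apfel's server mirrors the OpenAI chat-completions route.
# The model name "apple-foundationmodel" is the single model the README lists.
payload = {
    "model": "apple-foundationmodel",
    "messages": [{"role": "user", "content": "Summarize this repo in one sentence."}],
}
req = request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
try:
    with request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
        # OpenAI-style response shape: choices[0].message.content
        print(body["choices"][0]["message"]["content"])
except (error.URLError, OSError):
    print("apfel server not reachable; start it first")
```

The same endpoint should also work with the official OpenAI SDKs by pointing their `base_url` at the local server.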
Quick Start & Requirements
Install via Homebrew:

brew tap Arthur-Ficial/tap
brew install Arthur-Ficial/tap/apfel

Or build from source:

git clone https://github.com/Arthur-Ficial/apfel.git
cd apfel
make install
See docs/install.md for further installation details.

Highlighted Details
- CLI flags: -f, JSON output (-o json), system prompts (-s), and quiet mode (-q).
- OpenAI-compatible server on localhost:11434, acting as a drop-in replacement for OpenAI API endpoints and compatible with official SDKs.
- apfel-gui: a native macOS SwiftUI application providing a graphical interface for chatting, debugging requests, and managing settings.
- demo/cmd: natural language to shell command conversion.

Maintenance & Community
No specific details regarding maintainers, community channels (e.g., Discord, Slack), or project roadmaps are provided in the README. The project appears to be maintained through standard GitHub development practices.
Licensing & Compatibility
Limitations & Caveats
The system is restricted to macOS 26+ on Apple Silicon, utilizing a single, non-configurable apple-foundationmodel. The context window is capped at 4096 tokens (input + output combined), approximately 3000 English words. Users may encounter false positives from Apple's built-in safety guardrails. On-device inference results in response times typically measured in seconds, and the model does not support embeddings or multi-modal (vision) inputs. Certain OpenAI API parameters and features, such as n>1 or embeddings, are explicitly unsupported.
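Since the 4096-token cap covers input and output combined, long prompts need a pre-flight check. A rough sketch, using the README's "~3000 English words per 4096 tokens" figure as a ~4/3 tokens-per-word heuristic (the real tokenizer may count differently):

```python
# Heuristic pre-flight check against apfel's 4096-token combined context window.
# Assumption: ~4/3 tokens per English word, derived from the README's
# "approximately 3000 English words" estimate.
def fits_context(prompt: str, reserve_for_output: int = 512, limit: int = 4096) -> bool:
    est_tokens = (len(prompt.split()) * 4) // 3
    return est_tokens + reserve_for_output <= limit

print(fits_context("word " * 100))   # → True  (well under the cap)
print(fits_context("word " * 5000))  # → False (far beyond ~3000 words)
```

Prompts that fail this check should be truncated or summarized before being sent, since the model cannot be swapped for one with a larger window.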