nicedreamzapp / Run Claude Code locally on Apple Silicon
Top 62.9% on SourcePulse
Summary
This project enables running Claude Code and other large language models entirely locally on Apple Silicon Macs, eliminating cloud dependencies and API fees. It targets users prioritizing privacy, offline capability, and cost savings, offering a full Claude Code experience powered by on-device AI.
How It Works
The core is a custom MLX server that directly interfaces with local models (Gemma, Llama 3.3, Qwen) using Apple's Metal GPU acceleration. By speaking the Anthropic API natively, it bypasses proxy latency, achieving significantly faster inference. The system supports various models optimized for different needs, from quick coding to complex reasoning.
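To illustrate what "speaking the Anthropic API natively" means, here is a minimal sketch of the Messages-API request body that an Anthropic-compatible server accepts. The model name is an illustrative assumption, not taken from the project's docs:

```python
import json

def build_messages_request(prompt: str,
                           model: str = "local-llama-3.3") -> dict:
    """Build a request body in the Anthropic Messages API format.

    A server that implements this format directly (rather than
    translating through an OpenAI-style proxy) can serve Claude Code
    without an extra conversion hop. The model identifier here is a
    hypothetical placeholder.
    """
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_messages_request("Refactor this function")
payload = json.dumps(req)  # what gets POSTed to the server's /v1/messages
```

In practice, Claude Code can be pointed at such a server by overriding its API base URL (for example via the `ANTHROPIC_BASE_URL` environment variable); the exact host and port depend on how the local server is configured.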
Quick Start & Requirements
Run `bash setup.sh` for a one-command install, or set up manually: clone the repository, create a Python 3.12+ virtualenv, download models with `scripts/download-and-import.sh`, and start the server with `scripts/start-mlx-server.sh`. Install Claude Code itself with `npm install -g @anthropic-ai/claude-code`.
Highlighted Details
Maintenance & Community
No specific community links (Discord/Slack) or detailed contributor information beyond the primary repository owner and model uploaders are present in the README.
Licensing & Compatibility
Limitations & Caveats
Strictly limited to Apple Silicon Macs. Larger models demand significant RAM (96GB+ recommended for Llama/Qwen). Local models may not match the advanced reasoning capabilities of top-tier cloud offerings. "Abliterated" models require responsible usage and adherence to upstream licenses.