mlxstudio by jjang-ai

Local AI desktop app for Apple Silicon Macs

Created 1 month ago
384 stars

Top 74.3% on SourcePulse

Project Summary

MLX Studio is a native macOS desktop application for running AI models locally on Apple Silicon, eliminating the need for Python, terminals, or configuration files. It provides a user-friendly interface for using LLMs, VLMs, and image generation models privately, so data never leaves the user's machine. This is a clear benefit for users who prioritize data security and on-device AI.

How It Works

Built upon Apple's MLX framework and the vMLX Engine, the application facilitates direct, local execution of AI models on Macs with Apple Silicon. It supports a vast ecosystem of models from HuggingFace and JANGQ-AI. A core innovation is JANG adaptive mixed-precision quantization, which intelligently applies different bit-widths to model layers, achieving superior performance and efficiency over standard MLX quantization methods.
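The JANG algorithm itself is not documented on this page, but the general idea behind adaptive mixed-precision quantization can be sketched in a few lines: assign a bit-width to each layer based on how sensitive that layer is to precision loss. The layer names, sensitivity scores, and thresholds below are purely illustrative, not the actual JANG method.

```python
# Illustrative sketch of adaptive mixed-precision quantization:
# map a (hypothetical) per-layer sensitivity score to a bit-width.
# Thresholds, scores, and layer names are made up for illustration;
# this is NOT the actual JANG algorithm.

def assign_bit_widths(sensitivity, low=0.3, high=0.7):
    """Map per-layer sensitivity scores in [0, 1] to bit-widths.

    Sensitive layers keep more precision (8-bit); robust layers are
    compressed harder (2-bit), with 4-bit in between.
    """
    plan = {}
    for layer, score in sensitivity.items():
        if score >= high:
            plan[layer] = 8   # precision-critical layer
        elif score >= low:
            plan[layer] = 4
        else:
            plan[layer] = 2   # tolerates aggressive quantization
    return plan

# Example: embedding and output layers often keep higher precision.
scores = {"embed": 0.9, "attn.0": 0.5, "mlp.0": 0.1, "lm_head": 0.8}
print(assign_bit_widths(scores))
# → {'embed': 8, 'attn.0': 4, 'mlp.0': 2, 'lm_head': 8}
```

The payoff of this kind of scheme is that the average bits-per-weight can approach the aggressive end (e.g. 2-bit) while the few precision-critical layers retain enough fidelity to protect benchmark accuracy.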

Quick Start & Requirements

  • Installation: Recommended: Download the latest DMG file and drag it to the Applications folder. Alternative: Install the vmlx inference engine via pip (using uv, pipx, or a virtual environment).
  • Prerequisites: macOS 14.0 Sonoma or later, Apple Silicon (M1/M2/M3/M4), 8 GB RAM (16 GB+ recommended for larger models). Model sizes vary from 1 GB to 50 GB.
  • Links: Source Code: github.com/jjang-ai/vmlx, MLX Models: huggingface.co/mlx-community, JANG Models: huggingface.co/JANGQ-AI, Website: vmlx.net.

Highlighted Details

  • Local & Private AI: Runs all AI models entirely on-device, ensuring complete data privacy and security.
  • API Compatibility: Offers OpenAI- and Anthropic-compatible API endpoints, allowing seamless integration with existing SDKs and tools.
  • Advanced Tooling & Agentic Workflows: Integrates over 30 built-in tools for file operations, web search, shell execution, and developer utilities, enabling complex agentic capabilities.
  • JANG Quantization: Features adaptive mixed-precision quantization (e.g., JANG_2L) that demonstrably outperforms standard MLX quantization on key benchmarks.
  • Extensive Model Support: Supports 65+ model families, including text, vision, mixture-of-experts, and hybrid SSM architectures, from HuggingFace and JANGQ-AI.
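Because the server exposes OpenAI-compatible endpoints, any OpenAI-style client should work against it. A minimal stdlib sketch of building such a request is below; the base URL, port, and model name are placeholders (this page does not document the actual server address), so check the app's server settings for the real values.

```python
# Minimal sketch of calling a local OpenAI-compatible endpoint.
# BASE_URL and the model name are placeholders, not documented values.
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1"  # placeholder; check the app's settings

def build_chat_request(model, prompt):
    """Build an OpenAI-style /chat/completions HTTP request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("some-local-model", "Hello!")
print(req.full_url)
# Actually sending requires the app's local server to be running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Existing OpenAI SDKs can typically be pointed at such a server by overriding their base URL, which is what makes this kind of compatibility useful for drop-in integration.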

Maintenance & Community

The project is developed by Jinho Jang of JANGQ AI. Support is available via Ko-fi.

Licensing & Compatibility

Licensed under the Apache License 2.0. This license permits commercial use and integration into closed-source applications.

Limitations & Caveats

The application is strictly limited to macOS 14.0+ and Apple Silicon hardware. Running larger models requires substantial RAM and disk space.

Health Check

  • Last Commit: 1 day ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 47
  • Star History: 372 stars in the last 30 days

Explore Similar Projects

Starred by Jasper Zhang (Cofounder of Hyperbolic), Addy Osmani (Head of Chrome Developer Experience at Google), and 3 more.

chatbox by chatboxai

0.4%
39k
Desktop client app for AI models/LLMs
Created 3 years ago
Updated 3 days ago
Starred by Tobi Lutke (Cofounder of Shopify), Andrej Karpathy (Founder of Eureka Labs; formerly at Tesla, OpenAI; author of CS 231n), and 27 more.

open-webui by open-webui

0.9%
131k
Self-hosted AI platform for local LLM deployment
Created 2 years ago
Updated 1 day ago