ChatMLX by johnmai-dev

macOS chat application using local LLMs

Created 1 year ago · 799 stars · Top 45.0% on sourcepulse

Project Summary

ChatMLX is a high-performance, open-source chat application for macOS, designed to run large language models locally. It targets macOS users seeking a private, secure, and versatile chat experience powered by Apple silicon and the MLX framework.

How It Works

ChatMLX leverages MLX, Apple's machine-learning framework optimized for Apple silicon, for high-performance local LLM inference. Because all computation and data stay on the user's device, privacy and security are preserved. It supports a variety of popular LLM architectures, offering flexibility in model choice.
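The README does not include code, but MLX-based inference from Swift typically goes through the MLXLMCommon/MLXLLM packages from ml-explore/mlx-swift-examples. The sketch below is illustrative rather than ChatMLX's actual implementation; the loadModel/ChatSession API and the model ID are assumptions based on those packages.

    import MLXLMCommon  // LLM utilities from ml-explore/mlx-swift-examples (assumed dependency)

    // Download (or reuse a cached copy of) a quantized model from the
    // Hugging Face mlx-community hub; the model ID is an illustrative example.
    let model = try await loadModel(id: "mlx-community/Llama-3.2-3B-Instruct-4bit")

    // ChatSession keeps multi-turn conversation state; respond(to:) runs
    // generation entirely on-device via MLX.
    let session = ChatSession(model)
    print(try await session.respond(to: "Why is the sky blue?"))

Since the model weights and generation all live in Apple silicon's unified memory, no prompt or response ever leaves the machine.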

Quick Start & Requirements

  • Install: Download the application bundle.
  • Prerequisites: macOS 14.0 or later.
  • Troubleshooting: If macOS blocks the unsigned app, clear its quarantine attribute by running xattr -cr /Applications/ChatMLX.app in Terminal.

Highlighted Details

  • Supports 39 major App Store languages.
  • Runs multiple LLM families, including Llama, OpenELM, Phi, Qwen, Starcoder, Cohere, and Gemma.
  • Emphasizes high performance through MLX and Apple silicon optimization.
  • Prioritizes user privacy and security by running LLMs locally.

Maintenance & Community

The project is open-source and welcomes contributions. No specific community channels or contributor details are provided in the README.

Licensing & Compatibility

The README does not explicitly state a license. Compatibility is limited to macOS 14.0+.

Limitations & Caveats

The application is unsigned by default, so users must manually clear the quarantine attribute (see Troubleshooting above) to get past Gatekeeper. The README also does not detail per-model performance or memory requirements.

Health Check

  • Last commit: 4 months ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 28 stars in the last 90 days
