LlamaChat by alexrozanski

macOS app for local LLM chats

Created 2 years ago
1,516 stars

Top 27.4% on SourcePulse

Project Summary

LlamaChat is a native macOS application for chatting locally with large language models such as LLaMA, Alpaca, and GPT4All. It targets macOS users who want to run these models on their own hardware rather than relying on cloud services, offering a convenient and private chat experience.

How It Works

LlamaChat is built on llama.cpp and llama.swift, which handle local inference. It accepts models either as raw PyTorch .pth checkpoints or in the optimized .ggml format, and ships with a built-in utility that converts PyTorch checkpoints to .ggml, streamlining setup. The app itself follows a modern macOS architecture based on MVVM, Combine, and Swift Concurrency.
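
As an illustration of that architecture, here is a minimal sketch of an MVVM-style chat view model wired up with Combine and Swift Concurrency. The ChatModel protocol and its predict method are placeholders standing in for whatever interface llama.swift actually exposes, not the real API.

    import Combine
    import Foundation

    // Placeholder for the inference interface (an assumption, not the llama.swift API).
    protocol ChatModel {
        func predict(_ prompt: String) async throws -> String
    }

    // MVVM-style view model using Combine's @Published and Swift Concurrency,
    // in the spirit of the architecture described above.
    @MainActor
    final class ChatViewModel: ObservableObject {
        @Published private(set) var messages: [String] = []
        @Published private(set) var isResponding = false

        private let model: ChatModel

        init(model: ChatModel) {
            self.model = model
        }

        // Appends the user's message and requests a reply from the model asynchronously.
        func send(_ prompt: String) {
            messages.append("You: \(prompt)")
            isResponding = true
            Task {
                defer { isResponding = false }
                if let reply = try? await model.predict(prompt) {
                    messages.append("Model: \(reply)")
                }
            }
        }
    }

A SwiftUI view would observe this view model and re-render as messages are published; the real app's types and wiring may differ.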

Quick Start & Requirements

  • Install: Download the .dmg from llamachat.app.
  • Build from Source: git clone https://github.com/alexrozanski/LlamaChat.git && cd LlamaChat && open LlamaChat.xcodeproj. Ensure the build configuration is set to Release; Debug builds are noticeably slower at inference.
  • Prerequisites: macOS 13 Ventura or later, Intel or Apple Silicon Mac.
  • Models: Users must obtain model files separately.

Highlighted Details

  • Supports LLaMA, Alpaca, and GPT4All models, with planned support for Vicuna and Koala.
  • Integrates llama.cpp and llama.swift for efficient local inference.
  • Includes in-app conversion of PyTorch models to .ggml format.
  • Persists chat history and lets you view the model's current context for debugging (a generic persistence sketch follows this list).
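
The chat-history persistence could look something like the following sketch, which serializes messages to JSON on disk with Codable. The types and file layout here are illustrative assumptions, not LlamaChat's actual storage code.

    import Foundation

    // Illustrative types only; LlamaChat's real storage layer may look different.
    struct StoredMessage: Codable {
        let sender: String
        let text: String
        let timestamp: Date
    }

    struct ChatHistoryStore {
        let fileURL: URL

        // Writes the full history to disk as JSON.
        func save(_ messages: [StoredMessage]) throws {
            let data = try JSONEncoder().encode(messages)
            try data.write(to: fileURL, options: .atomic)
        }

        // Loads the history, returning an empty array if nothing has been saved yet.
        func load() throws -> [StoredMessage] {
            guard FileManager.default.fileExists(atPath: fileURL.path) else { return [] }
            let data = try Data(contentsOf: fileURL)
            return try JSONDecoder().decode([StoredMessage].self, from: data)
        }
    }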

Maintenance & Community

The project is primarily maintained by alexrozanski. Contributions via pull requests and issues are welcome.

Licensing & Compatibility

LlamaChat is licensed under the MIT license, permitting commercial use and integration with closed-source applications.

Limitations & Caveats

The application requires macOS 13 Ventura or later. Debug builds run inference noticeably slower, so use a Release build. Model files must be sourced independently, and some may need to be converted with llama.cpp's conversion scripts before they are compatible.

Health Check

  • Last Commit: 2 years ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 3 stars in the last 30 days

Explore Similar Projects

Starred by Junyang Lin (Core Maintainer at Alibaba Qwen), Georgi Gerganov (Author of llama.cpp, whisper.cpp), and 1 more.

LLMFarm by guinmoon

Top 0.4% on SourcePulse · 2k stars
iOS/macOS app for local LLM inference
Created 2 years ago
Updated 1 month ago
Starred by Andrej Karpathy (Founder of Eureka Labs; formerly at Tesla, OpenAI; author of CS 231n), Gabriel Almeida (Cofounder of Langflow), and 2 more.

torchchat by pytorch

Top 0.1% on SourcePulse · 4k stars
PyTorch-native SDK for local LLM inference across diverse platforms
Created 1 year ago
Updated 1 week ago