FreedomGPT by ohmplatform

Electron/React app for local LLM execution with chat interface

Created 2 years ago
2,679 stars

Top 17.6% on SourcePulse

View on GitHub: https://github.com/ohmplatform/FreedomGPT
Project Summary

FreedomGPT provides a desktop application for running Large Language Models (LLMs) locally and privately on macOS and Windows. It targets users who prioritize offline operation and data privacy, offering a chat-based interface powered by Electron and React.

How It Works

The application leverages the llama.cpp C++ library for efficient LLM inference on local hardware, which keeps resource requirements modest and removes the network round-trips and data exposure of cloud-based services. The React frontend provides the chat interface, while Electron packages it as a cross-platform desktop application.
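
Below is a minimal, hypothetical sketch of how an Electron main process might drive a local llama.cpp binary and stream output back to the React renderer over IPC. The binary path, model file, flags, and channel names are assumptions for illustration, not FreedomGPT's actual implementation.

    // Hypothetical Electron main-process integration with a local llama.cpp binary.
    // Paths, flags, and IPC channel names are assumptions for illustration only.
    import { spawn } from "node:child_process";
    import { ipcMain } from "electron";

    const LLAMA_BIN = "/path/to/llama.cpp/main";  // assumed compiled llama.cpp binary
    const MODEL_PATH = "/path/to/model.bin";      // assumed locally downloaded model

    ipcMain.on("chat:prompt", (event, prompt: string) => {
      // Inference runs entirely on local hardware; nothing leaves the machine.
      const proc = spawn(LLAMA_BIN, ["-m", MODEL_PATH, "-p", prompt]);

      proc.stdout.on("data", (chunk: Buffer) => {
        // Stream generated text to the renderer as it is produced.
        event.sender.send("chat:token", chunk.toString());
      });

      proc.on("close", () => event.sender.send("chat:done"));
    });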

Quick Start & Requirements

  • Install: git clone --recursive https://github.com/ohmplatform/FreedomGPT.git freedom-gpt, then cd freedom-gpt and npx yarn install (see the consolidated commands after this list).
  • Prerequisites: Node.js, Yarn, Git, Make, G++, npm. Building llama.cpp requires CMake on Windows.
  • Setup: Building llama.cpp and installing dependencies may take several minutes.
  • Docs: https://github.com/ohmplatform/FreedomGPT
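
Run in sequence, the install steps listed above are:

    git clone --recursive https://github.com/ohmplatform/FreedomGPT.git freedom-gpt
    cd freedom-gpt
    npx yarn install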

Highlighted Details

  • Supports running LLMs locally and offline.
  • Integrates with llama.cpp for efficient inference.
  • Includes an optional mining earnings feature using XMRig.
  • Built with Electron and React for a desktop chat interface.

Maintenance & Community

  • Community support is available via a Discord server.
  • The project credits llama.cpp, Meta's LLaMA, and Chatbot UI.

Licensing & Compatibility

  • The project's license is not explicitly stated in the README, and it bundles components under various open-source licenses; commercial use or closed-source linking therefore requires a careful review of each dependency's terms.

Limitations & Caveats

Linux installation instructions are provided but may require manual setup of Node.js and Yarn. The mining feature requires manually placing the XMRig binary. The README does not detail which LLM models are supported or their performance characteristics.

Health Check

  • Last Commit: 1 year ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0

Star History

17 stars in the last 30 days

Explore Similar Projects

Starred by Elie Bursztein (Cybersecurity Lead at Google DeepMind), Tim J. Baek (Founder of Open WebUI), and 1 more.

harbor by av

1.3%
2k
CLI tool for local LLM stack orchestration
Created 1 year ago
Updated 3 days ago
Starred by Andrej Karpathy (Founder of Eureka Labs; formerly at Tesla, OpenAI; author of CS 231n), Gabriel Almeida (Cofounder of Langflow), and 2 more.

torchchat by pytorch

0.1%
4k
PyTorch-native SDK for local LLM inference across diverse platforms
Created 1 year ago
Updated 1 week ago
Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Vincent Weisser (Cofounder of Prime Intellect), and 7 more.

dalai by cocktailpeanut

0%
13k
Local LLM inference via CLI tool and Node.js API
Created 2 years ago
Updated 1 year ago