Electron/React app for local LLM execution with chat interface
FreedomGPT provides a desktop application for running Large Language Models (LLMs) locally and privately on macOS and Windows. It targets users who prioritize offline operation and data privacy, offering a chat-based interface powered by Electron and React.
How It Works
The application leverages the llama.cpp C++ library for efficient LLM execution on local hardware. This approach allows for lower latency and reduced resource consumption compared to cloud-based solutions. The React frontend provides a user-friendly chat interface, while Electron packages it as a cross-platform desktop application.
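The llama.cpp integration described above can be sketched roughly as follows. This is a minimal illustration only, assuming an Electron main process that shells out to a prebuilt llama.cpp binary with a local model file; the binary path, model path, and helper names are placeholders, not the app's actual wiring:

```typescript
// Sketch: driving a local llama.cpp binary from an Electron main process.
// Assumes llama.cpp has already been compiled and a model file downloaded.
import { spawn } from "child_process";

// Build the CLI argument list (-m model file, -p prompt, -n max tokens
// are standard llama.cpp options).
function llamaArgs(model: string, prompt: string, nTokens: number): string[] {
  return ["-m", model, "-p", prompt, "-n", String(nTokens)];
}

// Spawn the binary and collect the generated text from stdout.
// Everything stays on the local machine: no network request is made.
function runLlama(binary: string, model: string, prompt: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn(binary, llamaArgs(model, prompt, 128));
    let out = "";
    child.stdout.on("data", (chunk) => (out += chunk));
    child.on("error", reject);
    child.on("close", () => resolve(out));
  });
}
```

In the real application the chat UI would stream tokens to the renderer process as they arrive rather than waiting for the process to exit.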
Quick Start & Requirements
git clone --recursive https://github.com/ohmplatform/FreedomGPT.git freedom-gpt
cd freedom-gpt
npx yarn install
Building llama.cpp requires CMake on Windows. Compiling llama.cpp and installing dependencies may take several minutes.
Highlighted Details
Uses llama.cpp for efficient inference.
Maintenance & Community
Built on llama.cpp, Facebook's LLaMA, and Chatbot UI.
Licensing & Compatibility
Limitations & Caveats
Linux installation instructions are provided but may require manual setup of Node.js and Yarn. The mining feature requires manual placement of the XMRig binary. The specific LLM models supported and their performance characteristics are not detailed.