gpt4all by nomic-ai

Desktop app for local LLM inference, no GPU/API needed

Created 2 years ago
76,862 stars

Top 0.2% on SourcePulse

View on GitHub
Project Summary

GPT4All enables users to run large language models (LLMs) privately on everyday desktops and laptops without requiring GPUs or API calls. It targets individuals and developers seeking accessible, on-device AI capabilities, offering a user-friendly application and a Python client for seamless integration.

How It Works

GPT4All leverages optimized LLM implementations, notably integrating with llama.cpp for efficient inference. This approach allows models to run on standard CPUs, making LLMs accessible on a wide range of hardware. Recent updates include Vulkan support for NVIDIA and AMD GPUs, further enhancing performance for users with compatible graphics cards.
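
A minimal sketch of what this looks like from the Python client, assuming the gpt4all package is installed; the model file name below is illustrative, CPU inference is the default, and device="gpu" requests GPU offload where supported:

```python
from gpt4all import GPT4All

# CPU inference is the default; device="gpu" asks GPT4All to offload to a
# supported GPU (e.g. via Vulkan) when one is available.
# The model file name is illustrative and is downloaded on first use.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf", device="gpu")

print(model.generate("Summarize llama.cpp in one sentence.", max_tokens=64))
```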

Quick Start & Requirements

  • Install: pip install gpt4all (a usage sketch follows this list)
  • Prerequisites: Python 3.x. macOS requires Monterey 12.6+. Linux build is x86-64 only. Windows/Linux require Intel Core i3 2nd Gen / AMD Bulldozer or better. Windows ARM supports Snapdragon and SQ1/SQ2. Apple Silicon M-series recommended for macOS.
  • Resources: Models are typically 4-7GB.
  • Links: Website, Documentation, Discord
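
A minimal quick-start sketch with the Python client; the model name below is illustrative, and the file is downloaded to the local cache on first use:

```python
from gpt4all import GPT4All

# First run downloads the GGUF file (models are typically 4-7 GB).
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

# chat_session() keeps conversational context across generate() calls.
with model.chat_session():
    print(model.generate("Name three uses for an on-device LLM.", max_tokens=128))
```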

Highlighted Details

  • Supports GGUF model format, including quantizations like Q4_0 and Q4_1.
  • Features "LocalDocs" for private, local chat with user data.
  • Offers a Docker-based API server with an OpenAI-compatible HTTP endpoint (see the client sketch after this list).
  • Integrates with Langchain and Weaviate Vector Database.
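
Because the API server exposes an OpenAI-compatible endpoint, a standard OpenAI client can be pointed at it; the base URL, port, and model name below are assumptions and depend on how the server is deployed:

```python
from openai import OpenAI

# Point a stock OpenAI client at the locally running GPT4All API server.
# base_url, port, and model name are placeholders, not documented defaults.
client = OpenAI(base_url="http://localhost:4891/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Meta-Llama-3-8B-Instruct.Q4_0.gguf",
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(response.choices[0].message.content)
```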

Maintenance & Community

Nomic AI actively contributes to open-source projects like llama.cpp. The project has a Discord community for discussion and contributions.

Licensing & Compatibility

The project is open-source under the MIT license and permits commercial use.

Limitations & Caveats

The Linux build is restricted to x86-64 architecture. While CPU inference is supported, performance may vary significantly based on hardware.

Health Check

  • Last Commit: 5 months ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 6

Star History

237 stars in the last 30 days

Explore Similar Projects

Starred by Andrej Karpathy (Founder of Eureka Labs; formerly at Tesla, OpenAI; author of CS 231n), Gabriel Almeida (Cofounder of Langflow), and 2 more.

torchchat by pytorch

Top 0.1% on SourcePulse · 4k stars
PyTorch-native SDK for local LLM inference across diverse platforms
Created 1 year ago · Updated 1 month ago
Starred by Andrej Karpathy (Founder of Eureka Labs; formerly at Tesla, OpenAI; author of CS 231n), Anil Dash (Former CEO of Glitch), and 23 more.

llamafile by mozilla-ai

Top 0.3% on SourcePulse · 23k stars
Single-file LLM distribution and runtime via `llama.cpp` and Cosmopolitan Libc
Created 2 years ago · Updated 18 hours ago