minillm by kuleshov

Minimal system for running LLMs on consumer GPUs (research project)

created 2 years ago
918 stars

Top 40.5% on sourcepulse

View on GitHub
Project Summary

MiniLLM provides a minimal, Python-centric system for running large language models (LLMs) on consumer-grade NVIDIA GPUs. It targets researchers and power users seeking an accessible platform for experimentation with LLMs, focusing on efficient inference and alignment research.

How It Works

MiniLLM uses the GPTQ algorithm for model compression, significantly reducing GPU memory usage. This allows models of up to 170B parameters to run on hardware typically found in consumer setups. The system supports multiple LLM architectures, including LLaMA, BLOOM, and OPT, with a codebase designed for simplicity and ease of use.
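As a rough back-of-the-envelope illustration (not code from MiniLLM itself), the memory savings from 3-bit weight quantization can be estimated by comparing bytes per parameter; the overhead multiplier below is an assumed allowance for quantization metadata, not a measured figure:

```python
def model_memory_gb(n_params: float, bits_per_weight: float,
                    overhead: float = 1.1) -> float:
    """Estimate GPU memory needed for model weights alone.

    `overhead` is a rough multiplier for quantization metadata
    (per-group scales and zero points) and framework allocations --
    an illustrative assumption, not a MiniLLM benchmark.
    """
    bytes_total = n_params * bits_per_weight / 8
    return bytes_total * overhead / 1024**3

# A 13B-parameter model:
fp16 = model_memory_gb(13e9, 16)  # roughly 27 GB: beyond most consumer cards
q3 = model_memory_gb(13e9, 3)     # roughly 5 GB: fits a consumer GPU
print(f"fp16: {fp16:.1f} GB, 3-bit: {q3:.1f} GB")
```

The same arithmetic explains why 3-bit GPTQ is the enabling step: weight storage shrinks by a factor of about 16/3, which is what moves mid-size models into consumer VRAM budgets.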

Quick Start & Requirements

  • Install: pip install -r requirements.txt followed by python setup.py install. A conda environment is recommended.
  • Prerequisites: Python 3.8+, PyTorch (tested with 1.13.1+cu116), NVIDIA GPU (Pascal architecture or newer), CUDA toolkit.
  • Setup: Requires compiling a custom CUDA kernel.
  • Models: Download weights using minillm download --model <model_name> --weights <weights_path>.
  • Docs: https://github.com/kuleshov/minillm

Highlighted Details

  • Supports LLaMA, BLOOM, and OPT models up to 170B parameters.
  • Achieves significant memory reduction via 3-bit GPTQ compression.
  • Demonstrates chain-of-thought reasoning capabilities on consumer GPUs.
  • Offers both command-line and programmatic interfaces.

Maintenance & Community

This is a research project from Cornell Tech and Cornell University. Feedback can be sent to Volodymyr Kuleshov.

Licensing & Compatibility

The repository is licensed under the MIT License, permitting commercial use and integration with closed-source projects.

Limitations & Caveats

Currently, only NVIDIA GPUs are supported. The project is experimental and a work in progress, with plans to add support for more LLMs, automated quantization, and fine-tuning capabilities. Some generated outputs may require manually selecting the best of several samples.

Health Check

Last commit: 2 years ago
Responsiveness: 1 day
Pull Requests (30d): 0
Issues (30d): 0
Star History: 14 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (author of AI Engineering and Designing Machine Learning Systems), Omar Sanseviero (DevRel at Google DeepMind), and 5 more.

TensorRT-LLM by NVIDIA

Top 0.6%, 11k stars
LLM inference optimization SDK for NVIDIA GPUs
created 1 year ago, updated 18 hours ago