quip-sharp by Cornell-RelaxML

LLM quantization for extreme compression

created 1 year ago
549 stars

Top 59.0% on sourcepulse

Project Summary

QuIP# is a weight-only post-training quantization method designed for extreme compression of Large Language Models (LLMs) down to 4 bits per weight or less. It targets researchers and practitioners seeking to deploy LLMs with significantly reduced memory footprints and improved inference speeds, offering state-of-the-art performance in highly compressed regimes.

How It Works

QuIP# combines a randomized Hadamard transform (RHT) for efficient incoherence processing, codebooks built on the $E_8$ lattice for fast vector quantization, and a fine-tuning scheme that captures inter-layer dependencies. Together these give high quantization quality at very low bitrates; notably, its 3-bit models scale better than theoretically lossless 4-bit quantization.
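
To give a feel for the incoherence-processing step, here is a minimal NumPy sketch (an illustration only, not the repository's implementation, which uses fused CUDA kernels and handles dimensions that are not powers of two). It conjugates a weight matrix by sign-randomized Hadamard matrices, spreading outliers so the entries look roughly Gaussian and are easier to quantize:

```python
# Minimal sketch of RHT-style incoherence processing (assumes power-of-two dims).
import numpy as np
from scipy.linalg import hadamard

def rht_conjugate(W, seed=0):
    """Conjugate W by sign-randomized, normalized Hadamard matrices."""
    rng = np.random.default_rng(seed)
    d_out, d_in = W.shape
    # Normalized Hadamards are orthogonal; random +-1 column signs keep them orthogonal,
    # so the transform is exactly invertible.
    U = hadamard(d_out) / np.sqrt(d_out) * rng.choice([-1.0, 1.0], size=d_out)
    V = hadamard(d_in) / np.sqrt(d_in) * rng.choice([-1.0, 1.0], size=d_in)
    return U @ W @ V.T, U, V  # quantize U @ W @ V.T; recover W as U.T @ (.) @ V

# Heavy-tailed "weights": outliers make plain scalar quantization hard.
W = np.random.default_rng(1).laplace(size=(512, 512))
W_inc, U, V = rht_conjugate(W)
print("max |entry| before:", np.abs(W).max(), "after:", np.abs(W_inc).max())
print("round-trip error:", np.abs(U.T @ W_inc @ V - W).max())  # near machine precision
```

The incoherent matrix has smaller extreme entries and an approximately Gaussian distribution, which is what makes a fixed $E_8$-lattice codebook effective at very low bits per weight.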

Quick Start & Requirements

  • Install via pip install -r requirements.txt and build CUDA inference kernels (cd quiptools && python setup.py install).
  • Requires CUDA-enabled GPU.
  • Pre-quantized models and Hessians are available on Hugging Face (see the loading sketch after this list).
  • Official documentation and examples for Llama models are provided.
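
As a loading sketch only: the checkpoint id and the `trust_remote_code` path below are assumptions, and the repo's own evaluation/generation scripts are the documented entry point, so consult the README and the model card for the supported workflow.

```python
# Hypothetical loading sketch; model id and loading path are assumptions, not the
# repo's documented API. Requires a CUDA-capable GPU (see the requirements above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "relaxml/Llama-2-7b-E8P-2Bit"  # example id; check the Hugging Face hub
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # only needed if the checkpoint ships custom QuIP# modules
)
prompt = tok("Quantization is", return_tensors="pt").to(model.device)
out = model.generate(**prompt, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```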

Highlighted Details

  • Achieves state-of-the-art performance at $\le 4$ bits per weight (a rough footprint calculation follows this list).
  • Demonstrates superior scaling for 3-bit models compared to 4-bit models.
  • Offers CUDA kernels for fast inference; 3-bit inference is not yet fully optimized (see Limitations & Caveats).
  • Supports fine-tuning during quantization to capture inter-layer interactions.
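
To make the bits-per-weight numbers concrete, a back-of-the-envelope calculation for an assumed 7B-parameter model (ignoring codebook/scale overhead, unquantized layers, activations, and the KV cache, so real footprints are somewhat larger):

```python
# Rough weight-memory arithmetic for the bits-per-weight claims above.
params = 7e9  # assumed Llama-2-7B-scale model
for bits in (16, 4, 3, 2):
    print(f"{bits:>2} bits/weight -> ~{params * bits / 8 / 2**30:.1f} GiB of weights")
# 16 bits -> ~13.0 GiB, 4 -> ~3.3 GiB, 3 -> ~2.4 GiB, 2 -> ~1.6 GiB
```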

Maintenance & Community

This codebase is no longer under active development; QTIP is its successor method. Users are encouraged to open GitHub issues for questions.

Licensing & Compatibility

The code is licensed under GNU GPL v3. Use of underlying LLM models (Llama, Mistral) is governed by their respective licenses. The GPLv3 license may impose copyleft restrictions on derivative works.

Limitations & Caveats

The project is no longer under active development. Optimized CUDA kernels for 1-bit matrix-vector multiplication are missing, which slows 3-bit inference. The method can be adapted to non-Llama architectures, but doing so requires manually modifying the scripts.

Health Check

  • Last commit: 9 months ago
  • Responsiveness: 1 day
  • Pull requests (30d): 0
  • Issues (30d): 1
  • Star history: 18 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), Jeremy Howard (Cofounder of fast.ai), and 4 more.

llm-awq by mit-han-lab

0.4% · 3k stars
Weight quantization research paper for LLM compression/acceleration
created 2 years ago
updated 2 weeks ago
Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), Georgios Konstantopoulos (CTO, General Partner at Paradigm), and 2 more.

GPTQ-for-LLaMa by qwopqwop200

0.0% · 3k stars
4-bit quantization for LLaMA models using GPTQ
created 2 years ago
updated 1 year ago
Starred by Tobi Lutke (Cofounder of Shopify), Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), and 10 more.

qlora by artidoro

0.2% · 11k stars
Finetuning tool for quantized LLMs
created 2 years ago
updated 1 year ago