alpaca_lora_4bit by johnsmith0031

Fine-tuning and inference tool for quantized LLaMA models

created 2 years ago
535 stars

Top 60.1% on sourcepulse

View on GitHub
1 Expert Loves This Project
Project Summary

This repository provides a method for LoRA fine-tuning of large language models quantized to 4-bit precision, enabling efficient training on consumer hardware. It targets researchers and developers working with LLMs who need to adapt models to specific tasks with limited VRAM.

How It Works

The project modifies existing libraries like PEFT and GPTQ-for-LLaMA to enable LoRA fine-tuning on models already quantized to 4-bit. It reconstructs FP16 matrices from 4-bit data and utilizes torch.matmul for significantly faster inference. The approach supports various bit quantizations (2, 3, 4, 8 bits) and includes optimizations like gradient checkpointing, Flash Attention, and Triton backends for enhanced performance and reduced memory usage.
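The reconstruct-then-matmul idea can be sketched in plain PyTorch. This is an illustrative assumption, not the repository's actual kernels: the function names, nibble packing layout, and group size here are invented for the example.

```python
import torch

def dequantize_4bit(packed, scales, zeros, group_size=128, dtype=torch.float16):
    """Unpack 4-bit weights (two nibbles per uint8) and rescale to FP16.

    Hypothetical layout: `packed` is (out_features, in_features // 2) uint8,
    while `scales` and `zeros` hold one value per `group_size` input columns.
    """
    low = packed & 0x0F            # lower nibble of each byte
    high = (packed >> 4) & 0x0F    # upper nibble of each byte
    q = torch.stack([low, high], dim=-1).flatten(-2).to(dtype)
    # Expand per-group scale/zero-point to per-column, then dequantize.
    scales = scales.repeat_interleave(group_size, dim=-1).to(dtype)
    zeros = zeros.repeat_interleave(group_size, dim=-1).to(dtype)
    return (q - zeros) * scales

def quant_linear(x, packed, scales, zeros):
    # Reconstruct the full-precision matrix on the fly, then fall back to a
    # plain torch.matmul instead of a slower custom quantized kernel.
    w = dequantize_4bit(packed, scales, zeros, dtype=x.dtype)
    return torch.matmul(x, w.t())
```

The trade-off this illustrates: reconstruction costs a little extra memory bandwidth per forward pass, but the matmul itself runs through PyTorch's highly optimized dense kernels.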

Quick Start & Requirements

  • Install: pip install . (after cloning and checking out the winglian-setup_pip branch).
  • Prerequisites: Python, PyTorch. GPU with CUDA is highly recommended for performance.
  • Docker: Available, but noted as not currently working.
  • Docs: Installation manual available.

Highlighted Details

  • Enables 4-bit LoRA fine-tuning on models like Llama and Llama 2.
  • Achieves faster inference (e.g., 20 tokens/sec on a 7B model with optimizations).
  • Supports gradient checkpointing for fine-tuning 30B models on 24GB VRAM.
  • Integrates with text-generation-webui via monkey patching for improved inference performance.
  • Offers Flash Attention 2 and Triton backend support.
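Gradient checkpointing, the optimization that makes 30B fine-tuning fit in 24GB, trades recomputation for activation memory. A generic PyTorch sketch, independent of the repository's actual integration:

```python
import torch
from torch.utils.checkpoint import checkpoint

class Block(torch.nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.ff = torch.nn.Sequential(
            torch.nn.Linear(dim, 4 * dim),
            torch.nn.GELU(),
            torch.nn.Linear(4 * dim, dim),
        )

    def forward(self, x):
        # Activations inside `checkpoint` are not stored; they are
        # recomputed during backward, cutting peak memory at the cost
        # of extra compute.
        return x + checkpoint(self.ff, x, use_reentrant=False)

x = torch.randn(2, 64, requires_grad=True)
out = Block()(x)
out.sum().backward()
```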

Maintenance & Community

The project has seen contributions from multiple users, indicating active development and community interest. Specific community links (Discord/Slack) are not provided in the README.

Licensing & Compatibility

The README does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

The Docker build is noted as not currently working. The monkey patch for text-generation-webui may break certain web UI features, such as model selection, LoRA selection, and training. The quantized-attention and fused-MLP patches require PyTorch 2.0+ and only support simple LoRA injections (q_proj, v_proj).
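The "simple LoRA injections (q_proj, v_proj)" pattern can be sketched in plain PyTorch. This is an illustrative stand-in for the patched PEFT machinery, not the repository's code; `LoRALinear` and `inject_lora` are hypothetical names.

```python
import torch

class LoRALinear(torch.nn.Module):
    """Frozen base projection plus a trainable low-rank update:
    y = W x + scale * B (A x), with only A and B trained."""

    def __init__(self, base: torch.nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base stays frozen (quantized, in the repo's case)
        self.lora_A = torch.nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = torch.nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.t() @ self.lora_B.t()) * self.scale

def inject_lora(model, targets=("q_proj", "v_proj")):
    # Wrap only the named attention projections, mirroring the simple
    # q_proj/v_proj injection the patches support.
    for name, module in model.named_children():
        if name in targets and isinstance(module, torch.nn.Linear):
            setattr(model, name, LoRALinear(module))
        else:
            inject_lora(module, targets)
    return model

class TinyAttn(torch.nn.Module):
    def __init__(self, dim=8):
        super().__init__()
        self.q_proj = torch.nn.Linear(dim, dim)
        self.k_proj = torch.nn.Linear(dim, dim)
        self.v_proj = torch.nn.Linear(dim, dim)

model = inject_lora(TinyAttn())
```

Because `lora_B` starts at zero, the wrapped layers initially compute exactly what the frozen base layers did, so injection does not perturb the model before training.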

Health Check

  • Last commit: 1 year ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
Star History
1 star in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), Jaret Burkett (Founder of Ostris), and 1 more.

nunchaku by nunchaku-tech

Top 2.1%
3k stars
High-performance 4-bit diffusion model inference engine
created 8 months ago
updated 14 hours ago
Starred by Tobi Lutke (Cofounder of Shopify), Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), and 10 more.

qlora by artidoro

Top 0.2%
11k stars
Fine-tuning tool for quantized LLMs
created 2 years ago
updated 1 year ago