peft by huggingface

Parameter-efficient fine-tuning (PEFT) library

Created 2 years ago
19,604 stars

Top 2.2% on SourcePulse

View on GitHub
Project Summary

PEFT (Parameter-Efficient Fine-Tuning) is a library designed to significantly reduce the computational and storage costs associated with fine-tuning large pre-trained models. It enables users to adapt massive models to specific downstream tasks by training only a small subset of parameters, achieving performance comparable to full fine-tuning. This library is ideal for researchers and developers working with large language models (LLMs) and diffusion models who need to optimize resource usage.

How It Works

PEFT implements various state-of-the-art parameter-efficient fine-tuning techniques, such as LoRA (Low-Rank Adaptation), soft prompts, and IA³ (Infused Adapter by Inhibiting and Amplifying Inner Activations). These methods introduce a small number of trainable parameters, often as low-rank matrices or adapter layers, into the pre-trained model architecture while the original weights stay frozen. This drastically reduces memory requirements and checkpoint sizes, making it feasible to fine-tune very large models on consumer hardware.
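As a concrete illustration, the snippet below wraps a Transformers model in a LoRA adapter via the library's LoraConfig and get_peft_model API (following the pattern in the PEFT quickstart); the base checkpoint and the rank/alpha/dropout values are illustrative choices, not recommendations.

    from transformers import AutoModelForSeq2SeqLM
    from peft import LoraConfig, TaskType, get_peft_model

    # Load a full pre-trained model (illustrative checkpoint).
    model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")

    # Describe the LoRA adapter; r, lora_alpha and lora_dropout are example values.
    peft_config = LoraConfig(
        task_type=TaskType.SEQ_2_SEQ_LM,
        r=8,              # rank of the low-rank update matrices
        lora_alpha=32,    # scaling factor applied to the update
        lora_dropout=0.1,
    )

    # Wrap the model; only the small LoRA matrices remain trainable.
    model = get_peft_model(model, peft_config)
    model.print_trainable_parameters()  # e.g. ~0.19% trainable for mt0-large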

Quick Start & Requirements

  • Install via pip: pip install peft
  • Requires Python and the standard Hugging Face libraries (Transformers, Diffusers, Accelerate).
  • GPU is highly recommended for practical fine-tuning.
  • See the Quickstart for code examples; a minimal save/load sketch follows below.
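A minimal save/load sketch, assuming the LoRA-wrapped model from the example above; the adapter directory name is a placeholder:

    from transformers import AutoModelForSeq2SeqLM
    from peft import PeftModel

    # After training, save only the adapter weights (a few MB, not the full model).
    model.save_pretrained("my-lora-adapter")

    # Later, for inference: reload the frozen base model and attach the saved adapter.
    base = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
    model = PeftModel.from_pretrained(base, "my-lora-adapter")
    model.eval()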

Highlighted Details

  • Enables fine-tuning of 12B-parameter models on a single 80GB GPU, which would run out of memory with full fine-tuning.
  • Achieves comparable performance to full fine-tuning with significantly reduced trainable parameters (e.g., 0.19% for mt0-large).
  • Final PEFT adapter checkpoints are drastically smaller (e.g., 19MB vs. 11GB for a full model).
  • Integrates seamlessly with Hugging Face Transformers, Diffusers, and Accelerate for distributed training and inference.

Maintenance & Community

  • Actively developed by Hugging Face contributors.
  • Extensive documentation and examples available.
  • Contribution guide provided for community involvement.
  • BibTeX citation available for academic use.

Licensing & Compatibility

  • Licensed under Apache 2.0.
  • Permissive license allows for commercial use and integration into closed-source projects.

Limitations & Caveats

The library focuses on parameter-efficient methods; users who need full-model fine-tuning will require alternative tooling. While PEFT methods aim for performance comparable to full fine-tuning, results on specific tasks may vary and can require hyperparameter tuning.

Health Check

  • Last commit: 2 days ago
  • Responsiveness: 1 day
  • Pull requests (30d): 32
  • Issues (30d): 32
  • Star history: 278 stars in the last 30 days

Explore Similar Projects

Starred by Tobi Lutke (cofounder of Shopify), Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), and 6 more.

xTuring by stochasticai

3k stars · SDK for fine-tuning and customizing open-source LLMs
Created 2 years ago · Updated 1 day ago