PEFT technique for efficient LLM fine-tuning
PiSSA is a parameter-efficient fine-tuning (PEFT) method for large language models that optimizes principal singular values and vectors, aiming for faster convergence and improved performance over methods like LoRA. It targets researchers and practitioners seeking efficient LLM adaptation.
How It Works
PiSSA adapts LLMs by focusing on the most significant components of weight matrices, identified via Singular Value Decomposition (SVD). Unlike LoRA, which updates a low-rank "noise" matrix added to the original weights, PiSSA directly optimizes the principal singular values and vectors, effectively freezing the less impactful parts of the model. This approach is claimed to lead to faster convergence and superior performance, particularly in quantized settings.
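The core idea can be sketched in plain NumPy: split a weight matrix via SVD into a low-rank principal part (trained) and a residual (frozen). This is an illustrative sketch, not the project's code; `pissa_init` is a hypothetical helper name.

```python
import numpy as np

def pissa_init(W, r):
    """Split W into a rank-r principal adapter (trainable in PiSSA)
    and a frozen residual built from the remaining singular triplets."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Principal adapter factors from the top-r singular values/vectors
    A = U[:, :r] * np.sqrt(S[:r])          # shape (m, r)
    B = np.sqrt(S[:r])[:, None] * Vt[:r]   # shape (r, n)
    # Residual from the remaining components, kept frozen during fine-tuning
    W_res = U[:, r:] @ np.diag(S[r:]) @ Vt[r:]
    return A, B, W_res

W = np.random.randn(8, 6)
A, B, W_res = pissa_init(W, r=2)
# The decomposition is exact at initialization: W = A @ B + W_res
assert np.allclose(A @ B + W_res, W)
```

Because `A @ B` carries the largest singular values, gradient updates concentrate on the most significant directions of the original weights, which is the intuition behind the claimed faster convergence.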
Quick Start & Requirements
After cloning the repository and setting HF_ENDPOINT, install the dependencies with pip install -r requirements.txt. The flash-attn package is also required.
Highlighted Details
- Integrated into the peft library as an optional LoRA initialization method.
- Supports Conv2d and Embedding layers, with examples for SDXL.
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The README does not specify any limitations or known issues. The project appears to be actively developed, with recent updates adding support for new layer types and integrations.