DoRA by NVlabs

PyTorch code for weight-decomposed low-rank adaptation (DoRA)

created 1 year ago
818 stars

Top 44.3% on sourcepulse

Project Summary

DoRA (Weight-Decomposed Low-Rank Adaptation) is a PyTorch implementation for efficient fine-tuning of large language and vision-language models. It targets researchers and practitioners who want to improve LoRA's fine-tuning performance without increasing inference cost: the decomposition gives the adapter greater learning capacity and more stable training.

How It Works

DoRA decomposes each pre-trained weight matrix into a magnitude component and a directional component, then applies LoRA only to the direction. Decoupling magnitude from direction lets the two be updated independently during fine-tuning, which improves on standard LoRA in both accuracy and training stability, especially at low ranks.
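As a rough illustration of the decomposition, here is a minimal PyTorch sketch with hypothetical shapes; the variable names (`W0`, `m`, `A`, `B`) are illustrative and not the repository's code:

```python
import torch

# Hypothetical shapes: a d_out x d_in linear weight and LoRA rank r.
d_out, d_in, r = 64, 32, 4
W0 = torch.randn(d_out, d_in)                  # frozen pre-trained weight

# Trainable magnitude vector, initialized to W0's per-column L2 norms.
m = W0.norm(p=2, dim=0, keepdim=True).clone()  # shape (1, d_in)

# Trainable LoRA factors, applied only to the directional component.
# B starts at zero so the adapted weight equals W0 before any training.
B = torch.zeros(d_out, r)
A = torch.randn(r, d_in) * 0.01

# Adapted weight: magnitude times the column-normalized direction.
V = W0 + B @ A                                 # direction with low-rank update
W_adapted = m * (V / V.norm(p=2, dim=0, keepdim=True))
```

At initialization `W_adapted` equals `W0`; during fine-tuning only `m`, `A`, and `B` would receive gradients, and the merged weight has the same shape as the original, so inference cost is unchanged.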

Quick Start & Requirements
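
One low-friction way to try DoRA-style fine-tuning is through Hugging Face PEFT, which exposes DoRA via the `use_dora=True` flag on `LoraConfig`. This is a sketch assuming a recent PEFT release; the model name and target modules are illustrative placeholders, not requirements of this repository:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical base model; substitute any causal LM you have access to.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# A standard LoRA config with DoRA enabled on top via use_dora=True.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # module names vary by architecture
    use_dora=True,
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```

After training, the adapter can be folded back into the base weights (e.g. with PEFT's `merge_and_unload()`), so the deployed model incurs no extra inference cost.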

Health Check

Last commit: 10 months ago
Responsiveness: 1 week
Pull Requests (30d): 0
Issues (30d): 0

Star History

48 stars in the last 90 days

Explore Similar Projects

Starred by Lewis Tunstall (Researcher at Hugging Face), Patrick von Platen (Core Contributor to Hugging Face Transformers and Diffusers), and 5 more.

torchtune by pytorch

Top 0.2% · 5k stars
PyTorch library for LLM post-training and experimentation
created 1 year ago · updated 1 day ago