DoRA by NVlabs

PyTorch code for weight-decomposed low-rank adaptation (DoRA)

Created 1 year ago
873 stars

Top 41.1% on SourcePulse

Project Summary

DoRA (Weight-Decomposed Low-Rank Adaptation) is a PyTorch implementation for efficient fine-tuning of large language and vision-language models. It targets researchers and practitioners who want to improve on LoRA's accuracy without increasing inference cost, offering enhanced learning capacity and greater training stability.

How It Works

DoRA decomposes pre-trained weights into magnitude and direction components. It then applies LoRA specifically to the directional component. This approach aims to improve upon standard LoRA by decoupling the magnitude and direction of weight updates, leading to better fine-tuning results and stability, especially at lower ranks.
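
To illustrate the decomposition, here is a minimal PyTorch sketch of a DoRA-style linear layer. It is not the repository's API: the class name, the rank/alpha defaults, and the axis convention (a per-output-channel magnitude over PyTorch's (out_features, in_features) weight layout) are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRALinear(nn.Module):
    """Illustrative DoRA-style layer: the frozen pre-trained weight is split into
    magnitude and direction, and LoRA is applied only to the directional part."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: int = 16):
        super().__init__()
        out_features, in_features = base.weight.shape
        self.scaling = alpha / rank

        # Frozen pre-trained weight W0 (and bias, if any)
        self.weight = nn.Parameter(base.weight.detach().clone(), requires_grad=False)
        self.bias = base.bias

        # Trainable magnitude, initialized to the norm of W0 so the layer
        # reproduces the pre-trained output before any fine-tuning
        self.magnitude = nn.Parameter(
            self.weight.norm(p=2, dim=1, keepdim=True)  # shape: (out_features, 1)
        )

        # Low-rank LoRA factors that update only the direction
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Direction V = W0 + BA (low-rank update), normalized and rescaled by the magnitude
        v = self.weight + self.scaling * (self.lora_B @ self.lora_A)
        direction = v / v.norm(p=2, dim=1, keepdim=True)
        return F.linear(x, self.magnitude * direction, self.bias)
```

In a sketch like this, only the magnitude and the two LoRA factors are trained, so the trainable parameter count stays small; after fine-tuning, the product of magnitude and direction can be merged back into a single weight matrix, which is why the approach adds no inference cost.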

Quick Start & Requirements

Health Check

Last Commit: 1 year ago
Responsiveness: Inactive
Pull Requests (30d): 0
Issues (30d): 0
Star History: 14 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Patrick von Platen (Author of Hugging Face Diffusers; Research Engineer at Mistral), and 15 more.

LoRA by microsoft

0.3% · 13k stars
PyTorch library for low-rank adaptation (LoRA) of LLMs
Created 4 years ago
Updated 10 months ago
Starred by Patrick von Platen (Author of Hugging Face Diffusers; Research Engineer at Mistral), Alex Chen (Cofounder of Nexa AI), and 28 more.

LLaMA-Factory by hiyouga

1.3% · 62k stars
Unified fine-tuning tool for 100+ LLMs & VLMs (ACL 2024)
Created 2 years ago
Updated 10 hours ago