DoRA by NVlabs

PyTorch code for weight-decomposed low-rank adaptation (DoRA)

Created 1 year ago
854 stars

Top 41.9% on SourcePulse

Project Summary

DoRA (Weight-Decomposed Low-Rank Adaptation) is a PyTorch implementation for efficient fine-tuning of large language and vision-language models. It targets researchers and practitioners who want to improve on LoRA's learning capacity and training stability without adding inference cost.

How It Works

DoRA decomposes each pre-trained weight matrix into a magnitude component and a directional component, then applies LoRA only to the direction. Decoupling magnitude from direction lets the two be updated independently, which improves fine-tuning quality and training stability over standard LoRA, especially at low ranks. Because the decomposed weight can be merged back after training, inference cost matches the original model.
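To make the decomposition concrete, below is a minimal PyTorch sketch of a DoRA-style linear layer. It follows the paper's formulation W' = m * (W0 + BA) / ||W0 + BA||_c with column-wise norms; the class name, default rank, and alpha are illustrative assumptions, not the official NVlabs API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRALinear(nn.Module):
    """Illustrative DoRA-adapted linear layer (not the official NVlabs code).

    The frozen pre-trained weight W0 is decomposed into a trainable
    magnitude m and a direction; a LoRA update B @ A is applied to the
    direction only: W' = m * (W0 + B @ A) / ||W0 + B @ A||_c.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        out_f, in_f = base.weight.shape
        self.register_buffer("W0", base.weight.detach().clone())  # frozen W0
        self.bias = base.bias  # shared with the base layer
        # Magnitude (1, in_f): initialized to the column-wise norms of W0,
        # so the layer starts out exactly equal to the pre-trained one.
        self.m = nn.Parameter(self.W0.norm(p=2, dim=0, keepdim=True))
        # LoRA factors for the directional update; B starts at zero so the
        # initial directional update is zero, as in standard LoRA.
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Direction: frozen weight plus the low-rank update.
        v = self.W0 + self.scaling * (self.B @ self.A)
        # Normalize each column, then rescale by the learned magnitude.
        w = self.m * v / v.norm(p=2, dim=0, keepdim=True)
        return F.linear(x, w, self.bias)
```

A wrapped layer (e.g. `DoRALinear(some_linear, rank=8)`, where `some_linear` is any nn.Linear in the model) trains only m, A, and B. After training, w can be computed once and copied back into a plain nn.Linear, which is why DoRA adds no inference overhead relative to the base model.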

Quick Start & Requirements

Health Check

Last Commit: 11 months ago
Responsiveness: Inactive
Pull Requests (30d): 0
Issues (30d): 2
Star History: 24 stars in the last 30 days

Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Patrick von Platen (Author of Hugging Face Diffusers; Research Engineer at Mistral), and 15 more.

Explore Similar Projects

LoRA by microsoft

0.3%
13k stars
PyTorch library for low-rank adaptation (LoRA) of LLMs
Created 4 years ago
Updated 9 months ago
Starred by Tobi Lutke (Cofounder of Shopify), Yineng Zhang (Inference Lead at SGLang; Research Scientist at Together AI), and 26 more.

axolotl by axolotl-ai-cloud

0.5%
10k stars
CLI tool for streamlined post-training of AI models
Created 2 years ago
Updated 13 hours ago
Starred by Tony Lee (Author of HELM; Research Engineer at Meta), Lysandre Debut (Chief Open-Source Officer at Hugging Face), and 24 more.

LLaMA-Factory by hiyouga

1.1%
58k stars
Unified fine-tuning tool for 100+ LLMs & VLMs (ACL 2024)
Created 2 years ago
Updated 2 days ago