MoRA by kongds

Parameter-efficient fine-tuning via high-rank updating (MoRA)

created 1 year ago
358 stars

Top 79.2% on sourcepulse

View on GitHub
1 Expert Loves This Project
Project Summary

MoRA is a parameter-efficient fine-tuning (PEFT) technique that adapts large language models via high-rank weight updates while keeping the trainable-parameter budget comparable to LoRA. It targets researchers and practitioners working with LLMs who need to fine-tune models with limited parameters and compute, offering a more expressive and potentially superior alternative to standard LoRA.

How It Works

Instead of LoRA's pair of low-rank matrices, MoRA trains a single square matrix and wraps it with non-parameterized compression and decompression operators that shrink the layer input to the square matrix's dimension and expand its output back, so the weight update can be high-rank while using the same number of trainable parameters as LoRA. The repository exposes two variants of these operators: type 1, which shares rows and columns and is intended for large ranks (e.g., 256), and type 6, which uses a RoPE-based rotation and is intended for small ranks (e.g., 8). This gives a controllable trade-off between parameter efficiency and expressiveness.
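
To make the parameter accounting concrete: with a hidden width of 4096 and LoRA rank 8, LoRA trains two matrices totaling 2 × 4096 × 8 = 65,536 parameters, while the same budget buys a single 256 × 256 square matrix (256² = 65,536) whose update is not constrained to rank 8. The sketch below is a simplified, hypothetical illustration of that structure; the chunk-sum compression and tiling decompression are stand-in operators for illustration only, not the repository's type 1/6 operators.

    import torch

    d, r_hat = 4096, 256   # hidden width and square-matrix size (illustrative values)

    # The only trainable tensor: a small square matrix, zero-initialized so the
    # update starts as a no-op, analogous to LoRA's zero-initialized B matrix.
    M = torch.zeros(r_hat, r_hat, requires_grad=True)

    def compress(x):
        # Non-parameterized compression: fold the d-dim input into r_hat dims by
        # summing consecutive chunks (a simple stand-in for MoRA's operators).
        return x.view(-1, d // r_hat, r_hat).sum(dim=1)

    def decompress(y):
        # Non-parameterized decompression: tile the r_hat-dim output back to d dims.
        return y.repeat(1, d // r_hat)

    def mora_delta(x):
        # Update added to the frozen layer's output: h = x @ W0.T + mora_delta(x)
        return decompress(compress(x) @ M)

    x = torch.randn(2, d)
    print(mora_delta(x).shape)   # torch.Size([2, 4096])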

Quick Start & Requirements

  • Install the modified peft package via pip install -e ./peft-mora.
  • Requires the Hugging Face peft library (the repository ships a modified copy); see the usage sketch after this list.
  • Examples provided for fine-tuning and pretraining with DeepSpeed.
  • Supports bf16 (16-bit) precision.
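
A minimal configuration sketch, assuming the peft-mora fork adds use_mora and mora_type options to LoraConfig; the model name is only an example, and argument names should be verified against the installed copy.

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Load any Hugging Face causal LM (example model name).
    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

    config = LoraConfig(
        use_mora=True,    # assumed fork-specific flag enabling MoRA
        mora_type=6,      # assumed fork-specific flag: 1 for large ranks, 6 (RoPE-based) for small ranks
        r=8,              # rank budget, as in LoRA
        target_modules=["q_proj", "v_proj"],
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # only the MoRA parameters are trainable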

Highlighted Details

  • Implemented as an extension to Hugging Face's peft library.
  • Supports two distinct update types: type 1 for large ranks and type 6 (RoPE-based) for small ranks.
  • Trained adapters can be merged into the base model using merge_and_unload() (see the sketch after this list).
  • Examples demonstrate integration with deepspeed for distributed training.
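
After training, merging back into the base model follows the usual peft pattern; a short sketch (the output path is a placeholder):

    # Fold the learned MoRA update into the frozen base weights so inference
    # needs no adapter code, then save a plain checkpoint.
    merged = model.merge_and_unload()
    merged.save_pretrained("./mora-merged")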

Maintenance & Community

The project is based on popular libraries like peft, alpaca-lora, and ReLoRA. Further community engagement details (Discord, Slack, roadmap) are not explicitly provided in the README.

Licensing & Compatibility

The README does not explicitly state the license. Given its reliance on Hugging Face peft and alpaca-lora, it is likely compatible with common open-source licenses, but explicit verification is recommended for commercial use.

Limitations & Caveats

The project ships as a modified copy of peft (peft-mora) rather than as a feature upstreamed into the main peft library, suggesting an experimental, research-oriented extension. Its performance benefits and stability relative to standard LoRA or other PEFT methods would require independent evaluation.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 2 stars in the last 90 days

Explore Similar Projects

Starred by Stas Bekman (Author of Machine Learning Engineering Open Book; Research Engineer at Snowflake).

HALOs by ContextualAI

0.2%
873 stars
Library for aligning LLMs using human-aware loss functions
created 1 year ago
updated 2 weeks ago
Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), Ying Sheng (Author of SGLang), and 9 more.

alpaca-lora by tloen

0.0%
19k stars
LoRA fine-tuning for LLaMA
created 2 years ago
updated 1 year ago