Awesome-LoRAs by ZJU-LLMs

LLM adaptation via Low-Rank Adaptation (LoRA)

Created 1 year ago
256 stars

Top 98.5% on SourcePulse

Project Summary

A curated list of papers and resources on Low-Rank Adaptation (LoRA) for Large Language Models (LLMs). The repository addresses the need for a structured overview of parameter-efficient fine-tuning techniques, targeting researchers and practitioners by organizing advancements in LoRA and highlighting its benefits for model adaptation, cross-task generalization, and privacy preservation in LLMs.

How It Works

The project organizes LoRA research into distinct categories: downstream adaptation for improved task performance, cross-task generalization via module composition, efficiency enhancements, and privacy-preserving applications in federated learning. It systematically reviews advancements and applications across language, vision, and multimodal domains, providing a structured overview of LoRA's evolution.
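For readers new to the technique the repository surveys, the core LoRA idea can be sketched in a few lines. This is a hypothetical, dependency-free illustration (not code from this repository or any linked paper): a frozen weight matrix W is augmented with a learned low-rank update B @ A of rank r, so only r * (d_out + d_in) parameters are trained instead of d_out * d_in.

```python
def matmul(X, Y):
    """Naive matrix multiply over lists of lists (for the sketch only)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(x, W, A, B, alpha=1.0):
    """Compute (W + alpha * B @ A) @ x without materializing the full update.

    W: frozen pretrained weight (d_out x d_in)
    A: trainable down-projection (r x d_in)
    B: trainable up-projection (d_out x r)
    """
    base = matmul(W, x)               # frozen pretrained path
    low = matmul(B, matmul(A, x))     # low-rank adapter path
    return [[b + alpha * l for b, l in zip(brow, lrow)]
            for brow, lrow in zip(base, low)]

# Tiny example: d_out = d_in = 2, rank r = 1.
W = [[1.0, 0.0], [0.0, 1.0]]          # frozen identity weight
A = [[1.0, 1.0]]                      # r x d_in
B = [[0.5], [0.5]]                    # d_out x r
x = [[2.0], [3.0]]                    # column-vector input

print(lora_forward(x, W, A, B))       # [[4.5], [5.5]]
```

Because the B @ A update stays low-rank, adapters for different tasks can be stored and swapped cheaply, which is what enables the module-composition and cross-task-generalization lines of work the list covers.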

Quick Start & Requirements

This repository is a curated list of research papers and resources, not a runnable software project. It does not provide installation or execution commands.

Highlighted Details

  • Features a structured survey of LoRA research, covering downstream adaptation, cross-task generalization, efficiency, and privacy.
  • Provides direct links to papers (PDFs) and associated code for a wide array of LoRA techniques.
  • Encompasses applications across diverse fields: traditional NLP, code tasks, vision (image generation, segmentation), and multimodal tasks.
  • The repository is continuously updated, reflecting the dynamic nature of LoRA research.

Maintenance & Community

The repository is actively maintained and continuously updated with new research. Contributions are welcomed through GitHub issues and pull requests. No specific community channels like Discord or Slack are mentioned.

Licensing & Compatibility

The README does not specify a license for the repository's content.

Limitations & Caveats

This repository is a literature survey and does not provide a direct implementation or framework for using LoRA. Users must refer to the linked papers and their respective codebases for practical implementation details. The scope is limited to research papers and resources related to LoRA, not a general survey of all parameter-efficient fine-tuning methods.

Health Check

  • Last Commit: 1 year ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 5 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Yaowei Zheng (Author of LLaMA-Factory), and 1 more.

DoRA by NVlabs

Top 0.3% on SourcePulse · 936 stars
PyTorch code for weight-decomposed low-rank adaptation (DoRA)
Created 1 year ago · Updated 1 year ago
Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems") and Jeff Hammerbacher (Cofounder of Cloudera).

self-adaptive-llms by SakanaAI

Top 0.1% on SourcePulse · 1k stars
Self-adaptation framework for real-time LLM adaptation
Created 1 year ago · Updated 1 year ago