ZJU-LLMs: LLM adaptation via Low-Rank Adaptation (LoRA)
A curated list of papers and resources on Low-Rank Adaptation (LoRA) for Large Language Models (LLMs). The repository addresses the need for a structured overview of parameter-efficient fine-tuning techniques, targeting researchers and practitioners by organizing advances in LoRA and highlighting its benefits for model adaptation, cross-task generalization, and privacy preservation in LLMs.
How It Works
The project organizes LoRA research into distinct categories: downstream adaptation for improved task performance, cross-task generalization via module composition, efficiency enhancements, and privacy-preserving applications in federated learning. It systematically reviews advancements and applications across language, vision, and multimodal domains, providing a structured overview of LoRA's evolution.
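The core LoRA idea underlying all of the surveyed work can be illustrated with a minimal sketch (an assumption for illustration, not code from this repository): instead of updating a full weight matrix W, training updates only two small low-rank factors B and A, and the adapted weight is W + (alpha/r)·BA. All names and dimensions below are hypothetical.

```python
import numpy as np

# Hypothetical minimal sketch of LoRA: the pretrained weight W (d_out x d_in)
# stays frozen; only the low-rank factors B (d_out x r) and A (r x d_in)
# are trained, so the adapted weight is W + (alpha / r) * B @ A.
rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 8, 16, 2, 4

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, rank))                   # trainable, zero init

def lora_forward(x):
    # Base path plus low-rank update path, scaled by alpha / rank.
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted model matches the base model exactly,
# so fine-tuning starts from the pretrained behavior.
assert np.allclose(lora_forward(x), W @ x)

# Parameter savings: r * (d_in + d_out) trainable values instead of d_in * d_out.
print(rank * (d_in + d_out), "LoRA params vs", d_in * d_out, "full params")
```

The zero initialization of B is what makes the adapter a no-op at the start of training, and the r·(d_in + d_out) parameter count is the efficiency gain the surveyed papers build on.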
Quick Start & Requirements
This repository is a curated list of research papers and resources, not a runnable software project. It does not provide installation or execution commands.
Highlighted Details
Maintenance & Community
The repository is actively maintained and continuously updated with new research. Contributions are welcomed through GitHub issues and pull requests. No specific community channels like Discord or Slack are mentioned.
Licensing & Compatibility
The README does not specify a license for the repository's content.
Limitations & Caveats
This repository is a literature survey and does not provide a direct implementation or framework for using LoRA. Users must refer to the linked papers and their respective codebases for practical implementation details. The scope is limited to research papers and resources related to LoRA, not a general survey of all parameter-efficient fine-tuning methods.