Framework for efficient cross-task generalization via dynamic LoRA composition
LoraHub is a framework designed to enhance the efficiency of fine-tuning large language models (LLMs) for new tasks by dynamically composing pre-trained Low-Rank Adaptation (LoRA) modules. It targets researchers and developers seeking to achieve strong few-shot performance on unseen tasks without requiring additional training or parameters, effectively acting as a "marketplace" for reusable LoRA modules.
How It Works
LoraHub employs a two-stage process: Compose and Adapt. In the Compose stage, multiple LoRA modules trained on different tasks are integrated into a single, unified module using learned coefficients. The Adapt stage then refines these coefficients on a few examples from the target task using a gradient-free optimization algorithm. This approach aims to match the performance of in-context learning (ICL) while maintaining the inference throughput of zero-shot learning.
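The two stages above can be sketched with plain NumPy. This is a minimal illustration, not LoraHub's actual implementation: the LoRA matrices and the few-shot objective are synthetic stand-ins, and the gradient-free optimizer here is simple random search (the framework itself uses a more sophisticated gradient-free algorithm).

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, k = 16, 4, 3  # hidden size, LoRA rank, number of pre-trained modules

# Stand-ins for k pre-trained LoRA modules: each is a pair (A, B)
# whose weight update is delta_W = B @ A.
loras = [(rng.normal(size=(r, d)) * 0.1, rng.normal(size=(d, r)) * 0.1)
         for _ in range(k)]

def compose(weights):
    """Compose stage: merge all modules into one delta_W via coefficients."""
    return sum(w * B @ A for w, (A, B) in zip(weights, loras))

# Toy few-shot objective (an assumption for this demo): distance between the
# merged update and a "target" update the unseen task would ideally need.
target = compose([0.7, -0.2, 0.5])

def few_shot_loss(weights):
    return float(np.linalg.norm(compose(weights) - target))

# Adapt stage: gradient-free search over the coefficients.
best_w = np.zeros(k)
best_loss = few_shot_loss(best_w)
for _ in range(500):
    cand = best_w + rng.normal(scale=0.1, size=k)
    loss = few_shot_loss(cand)
    if loss < best_loss:
        best_w, best_loss = cand, loss

print(best_w, best_loss)
```

Because only the k composition coefficients are optimized (not the LoRA weights themselves), the adaptation step is cheap and needs no backpropagation through the base model.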
Quick Start & Requirements
Install from PyPI:

pip install lorahub

Pre-trained LoRA modules are available on the Hugging Face Hub: https://huggingface.co/models?search=lorahub

See example.py in the repository for a usage example.
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats