LoRA-ViT by JamesQFreeman

LoRA SDK for Vision Transformer models

Created 2 years ago
420 stars

Top 70.0% on SourcePulse

View on GitHub
Project Summary

MeLo provides a low-rank adaptation (LoRA) implementation specifically for Vision Transformers (ViT), offering a more parameter-efficient fine-tuning alternative to full model fine-tuning for tasks like medical image diagnosis. It targets researchers and practitioners working with ViTs who need to adapt models to new datasets or tasks with reduced computational cost and memory footprint.

How It Works

MeLo injects trainable low-rank matrices into the attention layers of ViT models. Instead of updating the full weight matrices, the weight updates are decomposed into pairs of much smaller matrices, sharply reducing the number of parameters that must be trained during fine-tuning. The method maintains performance comparable to full fine-tuning while drastically cutting memory usage and training time.
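The decomposition described above can be sketched in a few lines of PyTorch. This is a generic LoRA layer, not MeLo's actual implementation; the class name, rank `r`, and scaling factor `alpha` are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Generic LoRA sketch: y = W x + (alpha/r) * B A x, with W frozen.

    A is (r, in_features), B is (out_features, r), so only
    r * (in + out) parameters train instead of in * out.
    """
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pretrained weight
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init:
        self.scale = alpha / r             # update starts as a no-op

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

Because `B` starts at zero, a freshly wrapped layer behaves exactly like the original model; training then learns a low-rank correction on top of the frozen weights.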

Quick Start & Requirements

  • Install by cloning the repo (or via pip, if the lora package is published; the README does not make this explicit).
  • Requires PyTorch 1.10.0 or newer.
  • The safetensors library is required for saving and loading LoRA weights.
  • Official quick-start examples are available in examples.ipynb.
  • Homepage
  • arXiv
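The safetensors requirement above only needs to cover the small set of trainable tensors, which is one of LoRA's practical payoffs. A hedged sketch of exporting just the LoRA parameters (generic, not the repo's actual helper; the filename is hypothetical):

```python
import torch
import torch.nn as nn
# from safetensors.torch import save_file, load_file  # for the actual I/O

def lora_state_dict(model: nn.Module) -> dict:
    """Collect only the trainable tensors (the LoRA A/B matrices when the
    backbone is frozen), so the checkpoint stays a few MB instead of GBs."""
    return {name: p.detach().cpu()
            for name, p in model.named_parameters() if p.requires_grad}

# Usage sketch (names assumed, not from the repo):
# save_file(lora_state_dict(model), "melo_lora.safetensors")
# model.load_state_dict(load_file("melo_lora.safetensors"), strict=False)
```

`strict=False` on load matters here: the file intentionally omits the frozen backbone weights, which are restored from the pretrained checkpoint instead.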

Highlighted Details

  • Supports integration with timm library for various ViT architectures.
  • Enables adaptation for segmentation tasks using DeepLab wrappers.
  • Implements multi-LoRA functionality for complex adaptations.
  • Claims 1.8x-1.9x speedup on M1 Pro compared to full fine-tuning.
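The multi-LoRA feature listed above can be illustrated generically: several low-rank adapters share one frozen backbone layer, and switching tasks means switching which adapter's update is added. This sketch shows the general technique, not MeLo's actual API; all names (`MultiLoRALinear`, `active`, `ranks`) are hypothetical.

```python
import torch
import torch.nn as nn

class MultiLoRALinear(nn.Module):
    """Sketch of multi-LoRA: named low-rank adapters over one frozen base."""
    def __init__(self, base: nn.Linear, ranks: dict, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in base.parameters():
            p.requires_grad = False
        self.adapters = nn.ModuleDict()
        # each adapter is a bias-free bottleneck: x -> A (down) -> B (up)
        for name, r in ranks.items():
            A = nn.Linear(base.in_features, r, bias=False)
            B = nn.Linear(r, base.out_features, bias=False)
            nn.init.zeros_(B.weight)       # every adapter starts as a no-op
            self.adapters[name] = nn.Sequential(A, B)
        self.alpha = alpha
        self.ranks = ranks
        self.active = next(iter(ranks))    # task selected for this forward pass

    def forward(self, x):
        scale = self.alpha / self.ranks[self.active]
        return self.base(x) + scale * self.adapters[self.active](x)
```

Only the active adapter contributes to the output, so one backbone can serve several fine-tuned tasks by flipping `active` rather than reloading weights.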

Maintenance & Community

  • The project is associated with the paper "MeLo: Low-rank Adaptation is Better than Fine-tuning for Medical Image Diagnosis".
  • Credit is given to lukemelas/PyTorch-Pretrained-ViT for ViT code and weights.
  • No explicit community links (Discord/Slack) or roadmap are provided in the README.

Licensing & Compatibility

  • The README does not state a license. Hosting on GitHub does not by itself grant an open-source license; absent an explicit license, default copyright applies, so reuse terms are unclear.
  • Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

The project is marked with a "[ ] Repo clean up" task, suggesting it may be under active development or not yet fully polished. The README also notes that compatibility with PyTorch versions newer than 1.10.0 is an assumption ("should also work, I guess").

Health Check
Last Commit

1 year ago

Responsiveness

Inactive

Pull Requests (30d)
0
Issues (30d)
0
Star History
3 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Yaowei Zheng (author of LLaMA-Factory), and 1 more.

DoRA by NVlabs

0.3%
854
PyTorch code for weight-decomposed low-rank adaptation (DoRA)
Created 1 year ago
Updated 11 months ago
Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Jeff Hammerbacher (cofounder of Cloudera), and 5 more.

ai-toolkit by ostris

0.9%
6k
Training toolkit for finetuning diffusion models
Created 2 years ago
Updated 16 hours ago