LoRA-ViT by JamesQFreeman

LoRA SDK for Vision Transformer models

created 2 years ago
418 stars

Top 71.2% on sourcepulse

View on GitHub
Project Summary

MeLo provides a low-rank adaptation (LoRA) implementation for Vision Transformers (ViT), offering a parameter-efficient alternative to full fine-tuning for tasks such as medical image diagnosis. It targets researchers and practitioners who need to adapt ViTs to new datasets or tasks with a reduced computational cost and memory footprint.

How It Works

MeLo injects trainable low-rank matrices into the attention layers of ViT models. Each weight update is decomposed into a pair of small matrices, so only a fraction of the parameters are updated during fine-tuning. According to the paper, this maintains performance comparable to full fine-tuning while drastically cutting memory usage and training time.
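
To make the mechanism concrete, here is a minimal sketch of a LoRA-wrapped linear layer in PyTorch. The class name LoRALinear and the rank/alpha parameters are illustrative assumptions, not this repo's actual API:

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Freeze a pretrained nn.Linear and learn a low-rank additive update.

    Hypothetical sketch of the LoRA idea; not the repo's API.
    """
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay fixed
        # Effective weight: W + (alpha / rank) * B @ A,
        # with A: d_in -> rank and B: rank -> d_out.
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # the update starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))
```

Only lora_a and lora_b receive gradients; for rank 4 on a 768-dimensional projection that is 2 × 4 × 768 = 6,144 trainable weights per layer instead of 768 × 768 = 589,824, roughly 1%.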

Quick Start & Requirements

  • Install by cloning the repo or via pip (the summary assumes a lora package is importable).
  • Requires PyTorch 1.10.0 or newer.
  • The safetensors library is needed for saving and loading LoRA weights.
  • Official quick-start examples are available in examples.ipynb; a hedged usage sketch also follows this list.
  • Homepage and arXiv links are provided on the project page.
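
The sketch below reuses the LoRALinear class from the earlier sketch: it wraps the qkv projections of a timm ViT and saves only the adapter weights with safetensors. The model name, wrapping strategy, and the "lora_" key filter are assumptions for illustration, not the repo's documented interface:

```python
import timm
from safetensors.torch import save_file

# Assumes the LoRALinear class from the sketch above is in scope.
vit = timm.create_model("vit_base_patch16_224", pretrained=True)
for p in vit.parameters():
    p.requires_grad = False  # freeze the whole backbone first

# Wrap each attention block's qkv projection with a LoRA adapter.
for block in vit.blocks:
    block.attn.qkv = LoRALinear(block.attn.qkv, rank=4)

trainable = sum(p.numel() for p in vit.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")

# Persist only the small LoRA matrices, not the full backbone.
lora_state = {k: v for k, v in vit.state_dict().items() if "lora_" in k}
save_file(lora_state, "vit_lora.safetensors")
```

Saving only the adapter keys is what keeps checkpoints small: the frozen backbone can always be re-downloaded, so only the low-rank matrices need to be shipped per task.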

Highlighted Details

  • Integrates with the timm library for a range of ViT architectures.
  • Enables adaptation for segmentation tasks via DeepLab wrappers.
  • Implements multi-LoRA functionality for multi-task adaptation (see the sketch after this list).
  • Claims a 1.8x-1.9x training speedup on an M1 Pro compared to full fine-tuning.
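
The multi-LoRA feature presumably keeps several adapters over one frozen backbone and switches between them per task. The sketch below illustrates that pattern under assumed names (MultiLoRALinear, active); it is not the repo's actual multi-LoRA API:

```python
import torch.nn as nn

class MultiLoRALinear(nn.Module):
    """One frozen base layer with several selectable LoRA adapters (illustrative)."""
    def __init__(self, base: nn.Linear, num_adapters: int = 2, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # shared, frozen backbone weights
        self.down = nn.ModuleList(
            nn.Linear(base.in_features, rank, bias=False) for _ in range(num_adapters)
        )
        self.up = nn.ModuleList(
            nn.Linear(rank, base.out_features, bias=False) for _ in range(num_adapters)
        )
        for up in self.up:
            nn.init.zeros_(up.weight)  # each adapter starts as a no-op
        self.active = 0  # index of the adapter used in the forward pass

    def forward(self, x):
        i = self.active
        return self.base(x) + self.up[i](self.down[i](x))
```

Switching tasks then amounts to setting active to a different index, with no extra copy of the backbone in memory.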

Maintenance & Community

  • The project is associated with the paper "MeLo: Low-rank Adaptation is Better than Fine-tuning for Medical Image Diagnosis".
  • Credit is given to lukemelas/PyTorch-Pretrained-ViT for ViT code and weights.
  • No explicit community links (Discord/Slack) or roadmap are provided in the README.

Licensing & Compatibility

  • The README does not state a license. Hosting on GitHub does not imply an open-source license; without an explicit grant, default copyright applies and reuse terms are unclear.
  • Suitability for commercial use or closed-source linking is therefore not specified.

Limitations & Caveats

The README includes an open "[ ] Repo clean up" task, suggesting the codebase is not yet fully polished. It also notes that compatibility with PyTorch versions newer than 1.10.0 is untested ("should also work, I guess").

Health Check

  • Last commit: 1 year ago
  • Responsiveness: 1 week
  • Pull requests (30d): 0
  • Issues (30d): 0
  • Star history: 18 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (author of AI Engineering and Designing Machine Learning Systems), Patrick von Platen (core contributor to Hugging Face Transformers and Diffusers), and 6 more.

LoRA by microsoft

  • Top 0.3% on sourcepulse, 12k stars
  • PyTorch library for low-rank adaptation (LoRA) of LLMs
  • Created 4 years ago, updated 7 months ago