ziplora-pytorch by mkshing

PyTorch implementation for LoRA merging

created 1 year ago
543 stars


Project Summary

This repository provides a PyTorch implementation of ZipLoRA, a method for merging multiple LoRA (Low-Rank Adaptation) models to achieve flexible subject and style control in text-to-image generation. It targets users familiar with Stable Diffusion XL and LoRA training, enabling them to combine specific subjects with diverse artistic styles efficiently.

How It Works

ZipLoRA merges two independently trained LoRA adapters (one for a subject, one for a style) by learning merger coefficients over the columns of their weight updates, optimizing them so the merged adapter preserves each original LoRA's behavior while minimizing interference between the two. This lets users specify both a subject and a style simultaneously. The implementation leverages the diffusers library for SDXL model handling and LoRA training, with specific scripts for training the individual LoRAs and then merging them using the ZipLoRA technique.
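The merging idea above can be sketched in a few lines of PyTorch. This is a simplified illustration, not the repository's actual code: the function names, the operation on raw weight deltas (the paper optimizes against layer outputs), and the simple `m_subject * m_style` interference penalty are assumptions made for clarity.

```python
import torch

def ziplora_merge(dw_subject, dw_style, m_subject, m_style):
    # Scale each column of the two LoRA weight updates by its learned
    # coefficient and sum them. Shapes: dw_* is (out, in), m_* is (in,),
    # so broadcasting applies the coefficients column-wise.
    return m_subject * dw_subject + m_style * dw_style

def ziplora_loss(dw_subject, dw_style, m_subject, m_style, lam=0.01):
    # Simplified objective (an assumption, not the paper's exact loss):
    # keep the merged update close to each original adapter's update...
    merged = ziplora_merge(dw_subject, dw_style, m_subject, m_style)
    recon = ((merged - dw_subject) ** 2).mean() + ((merged - dw_style) ** 2).mean()
    # ...while penalizing columns where both adapters carry large
    # coefficients, i.e. where they would interfere.
    interference = (m_subject * m_style).abs().mean()
    return recon + lam * interference
```

In practice the coefficient vectors `m_subject` and `m_style` would be created with `requires_grad=True` and optimized by gradient descent while both LoRA weight updates stay frozen.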

Quick Start & Requirements

  • Installation: git clone git@github.com:mkshing/ziplora-pytorch.git && cd ziplora-pytorch && pip install -r requirements.txt
  • Prerequisites: Python, PyTorch, diffusers, accelerate, transformers, xformers, bitsandbytes, wandb. Requires significant VRAM for SDXL training (fp16 recommended).
  • Usage: Scripts are provided for training individual LoRAs (using train_dreambooth_lora_sdxl.py) and then merging them with ZipLoRA (train_dreambooth_ziplora_sdxl.py). Inference is demonstrated via a Python script and a Gradio interface.
  • Links: Paper Summary
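The first training step uses diffusers' standard DreamBooth LoRA script, so its flags are well documented; a hedged sketch of launching it (the dataset path, prompt, and output directory are placeholders, and the exact flags for the subsequent ZipLoRA merge script should be taken from the repository README):

```shell
# Illustrative invocation of the subject-LoRA training step.
# Paths and the instance prompt below are placeholders.
accelerate launch train_dreambooth_lora_sdxl.py \
  --pretrained_model_name_or_path="stabilityai/stable-diffusion-xl-base-1.0" \
  --instance_data_dir="data/my_subject" \
  --instance_prompt="a photo of sks dog" \
  --output_dir="lora-subject" \
  --mixed_precision="fp16" \
  --enable_xformers_memory_efficient_attention \
  --use_8bit_adam
```

The last two flags correspond to the memory optimizations mentioned in the README; a style LoRA is trained the same way before both are passed to the ZipLoRA merge script.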

Highlighted Details

  • Implements the ZipLoRA merging technique for SDXL.
  • Supports training of individual subject and style LoRAs.
  • Provides inference scripts and a Gradio demo for easy interaction.
  • Mentions enable_xformers_memory_efficient_attention and use_8bit_adam for memory optimization during training.

Maintenance & Community

The repository is maintained by mkshing. No specific community channels (Discord/Slack) or roadmap are explicitly linked in the README.

Licensing & Compatibility

The repository's license is not explicitly stated in the provided README. Users should verify licensing for commercial use or integration into closed-source projects.

Limitations & Caveats

The README lists "Pre-optimization lora weights" as a pending TODO item, suggesting room for further performance improvements. The implementation targets SDXL; compatibility with other diffusion models is not documented.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 6 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (author of AI Engineering and Designing Machine Learning Systems), Patrick von Platen (core contributor to Hugging Face Transformers and Diffusers), and 6 more.

LoRA by microsoft: PyTorch library for low-rank adaptation (LoRA) of LLMs. 12k stars, created 4 years ago, updated 7 months ago.