PyTorch implementation for LoRA merging
This repository provides a PyTorch implementation of ZipLoRA, a method for merging multiple LoRA (Low-Rank Adaptation) models to achieve flexible subject and style control in text-to-image generation. It targets users familiar with Stable Diffusion XL and LoRA training, enabling them to combine specific subjects with diverse artistic styles efficiently.
How It Works
ZipLoRA merges two independently trained LoRA adapters, typically one capturing a subject and one capturing a style, by learning per-column merger coefficients that combine the LoRA weight deltas while penalizing interference between them. This lets users specify both a subject and a style simultaneously. The implementation leverages the `diffusers` library for SDXL model handling and LoRA training, with dedicated scripts for training individual LoRAs and then merging them using the ZipLoRA technique.
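Conceptually, the per-layer merge looks roughly like the sketch below. This is a simplification based on the ZipLoRA paper, not the repository's actual code; `ziplora_layer`, `m1`, `m2`, and `lam` are illustrative names, and the full objective also includes fidelity terms that keep the merged model close to each original LoRA on its own prompts.

```python
import torch
import torch.nn.functional as F

def ziplora_layer(dW_subject, dW_style, m1, m2, lam=0.01):
    """Merge two LoRA weight deltas for a single linear layer.

    dW_subject, dW_style: LoRA deltas (B @ A), each of shape (out_dim, in_dim).
    m1, m2: learnable per-column merger coefficients, each of shape (in_dim,).
    Returns the merged delta and the interference penalty.
    """
    scaled_subject = dW_subject * m1   # broadcast: scale each input column
    scaled_style = dW_style * m2
    merged = scaled_subject + scaled_style
    # Column-wise cosine similarity between the scaled deltas; driving it
    # toward zero keeps the subject and style directions disentangled.
    penalty = lam * F.cosine_similarity(scaled_subject, scaled_style, dim=0).abs().sum()
    return merged, penalty
```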
Quick Start & Requirements
```bash
git clone git@github.com:mkshing/ziplora-pytorch.git
cd ziplora-pytorch
pip install -r requirements.txt
```
Key dependencies include `diffusers`, `accelerate`, `transformers`, `xformers`, `bitsandbytes`, and `wandb`. SDXL training requires significant VRAM (fp16 recommended).

The workflow trains individual subject and style LoRAs (`train_dreambooth_lora_sdxl.py`) and then merges them with ZipLoRA (`train_dreambooth_ziplora_sdxl.py`). Inference is demonstrated via a Python script and a Gradio interface; a minimal sketch follows below.
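The following is a minimal `diffusers`-based inference sketch, assuming the merged ZipLoRA weights are saved in diffusers' standard LoRA format; the output path and prompt are placeholders, and the repository's own script and Gradio app may load weights differently.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Assumes the merged ZipLoRA weights were exported in diffusers' LoRA format.
pipe.load_lora_weights("path/to/ziplora-output")  # placeholder path
image = pipe("a sks dog in watercolor style").images[0]
image.save("merged.png")
```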
Highlighted Details
Supports `enable_xformers_memory_efficient_attention` and `use_8bit_adam` for memory optimization during training, as sketched below.
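In diffusers-style training scripts, these options typically map to calls like the following. This is a sketch of the general pattern, not the repository's exact code.

```python
import bitsandbytes as bnb
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)
# Memory-efficient attention kernels from xformers cut activation memory.
unet.enable_xformers_memory_efficient_attention()
# 8-bit Adam stores optimizer state in 8 bits, roughly quartering its memory.
optimizer = bnb.optim.AdamW8bit(unet.parameters(), lr=1e-4)
```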
Maintenance & Community
The repository is maintained by mkshing. No specific community channels (Discord/Slack) or roadmap are explicitly linked in the README.
Licensing & Compatibility
The repository's license is not explicitly stated in the provided README. Users should verify licensing for commercial use or integration into closed-source projects.
Limitations & Caveats
The README lists "Pre-optimization lora weights" as a pending TODO item, suggesting room for further performance improvements. The implementation focuses on SDXL; compatibility with other diffusion models is not documented.