LECO by p1atdev

LoRA training for concept manipulation in diffusion models

created 2 years ago
323 stars

Top 85.3% on sourcepulse

View on GitHub
Project Summary

LECO offers a method for fine-tuning diffusion models to erase, emphasize, or swap concepts using low-rank adaptation (LoRA). It targets users needing to control specific stylistic or object elements within generated images, providing a more nuanced approach than simple prompt engineering.

How It Works

LECO employs low-rank adaptation (LoRA) to modify specific layers of diffusion models. By training small, low-rank matrices, it efficiently adjusts the model's behavior to either remove ("erase") or enhance ("emphasize") user-defined concepts. This approach is advantageous as it requires significantly less VRAM and training time compared to full model fine-tuning, while still achieving targeted concept manipulation.
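To make this concrete, below is a minimal PyTorch sketch of a LoRA adapter wrapped around a linear layer. It is a hedged illustration of the general technique; the class and parameter names are assumptions for this example, not identifiers from the LECO codebase.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Illustrative LoRA wrapper: y = W x + (alpha / r) * B(A(x)).

        Only the small matrices A and B are trained; the frozen base
        weight W is untouched, which is why LoRA needs far less VRAM
        and time than full fine-tuning.
        """

        def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)  # freeze the original weights
            self.down = nn.Linear(base.in_features, rank, bias=False)   # A: d -> r
            self.up = nn.Linear(rank, base.out_features, bias=False)    # B: r -> d'
            nn.init.normal_(self.down.weight, std=1.0 / rank)
            nn.init.zeros_(self.up.weight)  # adapter starts as a no-op
            self.scale = alpha / rank

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + self.scale * self.up(self.down(x))

    # Usage: until B is trained, the output is identical to the frozen layer.
    layer = LoRALinear(nn.Linear(768, 768), rank=4)
    y = layer(torch.randn(1, 768))

In LECO-style training, adapters of this shape would be injected into the diffusion model's attention projections, and only the low-rank matrices receive gradients.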

Quick Start & Requirements

  • Install: conda create -n leco python=3.10; pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118; pip install xformers; pip install -r requirements.txt
  • Prerequisites: Python 3.10, CUDA 11.8 (for PyTorch), xformers.
  • Training: minimum 8GB VRAM; bfloat16 is the recommended precision (see the loading sketch after this list).
  • Docs: https://erasing.baulab.info/ (project page for the upstream concept-erasure method)
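As an illustration of the bfloat16 recommendation above, here is a hedged sketch of loading a Stable Diffusion pipeline in that precision with the diffusers library. The model ID is an assumption chosen for illustration, not one the README prescribes.

    import torch
    from diffusers import StableDiffusionPipeline

    # Load the base model in bfloat16, the precision the README recommends;
    # float16 is noted as unstable for training.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed model ID, for illustration only
        torch_dtype=torch.bfloat16,
    )
    pipe.to("cuda")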

Highlighted Details

  • Supports concept erasure, emphasis, and swapping through prompt configuration and trained LoRA weights (see the loss sketch after this list).
  • Demonstrates effectiveness on Stable Diffusion v1.5 and v2.1, and Waifu Diffusion 1.5.
  • Pre-trained LoRA weights are available on Hugging Face for immediate use.
  • Configuration is managed through YAML files for prompts and training parameters.
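To sketch how erasure training can work (following the upstream erasing method that LECO builds on, not the repository's exact code), the LoRA-adapted model is trained toward a target that pushes the frozen model's neutral prediction away from the concept direction. The function and argument names below are hypothetical.

    import torch
    import torch.nn.functional as F

    def erasure_loss(eps_lora_target: torch.Tensor,
                     eps_frozen_target: torch.Tensor,
                     eps_frozen_neutral: torch.Tensor,
                     guidance: float = 1.0) -> torch.Tensor:
        # Target: the frozen model's neutral noise prediction, pushed
        # further away from the concept direction (negative-guidance style).
        target = eps_frozen_neutral - guidance * (eps_frozen_target - eps_frozen_neutral)
        # Train the LoRA-adapted model to match that target for the concept prompt.
        return F.mse_loss(eps_lora_target, target.detach())

Flipping the sign of the guidance term would instead pull predictions toward the concept, which is the intuition behind emphasis.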

Maintenance & Community

The project is inspired by and builds upon several other open-source repositories, including erasing, lora, sd-scripts, and conceptmod. No specific community channels or active maintainer information are provided in the README.

Licensing & Compatibility

The repository does not explicitly state a license. However, its dependencies include projects with various licenses (e.g., MIT for lora). Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

The README notes that using float16 precision for training is unstable and not recommended. The project appears to be research-oriented, and its long-term maintenance status is unclear.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star history: 1 star in the last 90 days

Explore Similar Projects

  • HALOs by ContextualAI: library for aligning LLMs using human-aware loss functions. 873 stars; top 0.2% on sourcepulse; created 1 year ago, updated 2 weeks ago. Starred by Stas Bekman (author of Machine Learning Engineering Open Book; research engineer at Snowflake).
  • LoRA by microsoft: PyTorch library for low-rank adaptation (LoRA) of LLMs. 12k stars; top 0.3% on sourcepulse; created 4 years ago, updated 7 months ago. Starred by Chip Huyen (author of AI Engineering, Designing Machine Learning Systems), Patrick von Platen (core contributor to Hugging Face Transformers and Diffusers), and 6 more.