LyCORIS by KohakuBlueleaf

Parameter-efficient fine-tuning algorithms for Stable Diffusion

created 2 years ago
2,378 stars

Top 19.7% on sourcepulse

Project Summary

LyCORIS is a Python library implementing various parameter-efficient fine-tuning (PEFT) algorithms for Stable Diffusion models, offering enhanced flexibility and control over model customization beyond standard LoRA. It targets researchers and users of Stable Diffusion who need advanced fine-tuning capabilities for tasks like character or style adaptation.

How It Works

LyCORIS implements multiple PEFT methods, including LoRA (LoCon), LoHa (Hadamard-product low-rank adaptation), LoKr (Kronecker-product adaptation), (IA)^3, and DyLoRA, alongside native fine-tuning (Dreambooth). Instead of updating every weight, these methods inject small sets of trainable parameters, such as low-rank matrices, Hadamard or Kronecker factorizations, or per-layer scaling vectors, into selected layers, so fine-tuning needs far fewer trainable parameters than full fine-tuning. The library provides a unified interface for applying and training these techniques, letting users choose a method based on trade-offs in fidelity, flexibility, diversity, and model size.
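
As a rough sketch of that unified interface, the snippet below wraps an arbitrary PyTorch module with a LyCORIS adapter via the high-level create_lycoris factory described in the project README; the toy model, the ".*" target preset, and the chosen dims are illustrative, and exact argument names and defaults may differ between library versions.

    import torch.nn as nn
    from lycoris import create_lycoris, LycorisNetwork

    # Toy stand-in for a diffusion model; any nn.Module with Linear/Conv layers works.
    model = nn.Sequential(nn.Linear(768, 768), nn.GELU(), nn.Linear(768, 768))

    # Choose which submodules to wrap (regexes on module names); ".*" matches everything here.
    LycorisNetwork.apply_preset({"target_name": [".*"]})

    # Build the adapter network. `algo` selects the PEFT method ("lora", "loha", "lokr", ...);
    # linear_dim / linear_alpha control adapter capacity.
    lycoris_net = create_lycoris(model, 1.0, linear_dim=16, linear_alpha=2.0, algo="lokr")
    lycoris_net.apply_to()  # patch the wrapped layers so the adapter takes effect in forward passes

    # Pass only the adapter parameters to the optimizer; the base weights are left untouched.
    print(sum(p.numel() for p in lycoris_net.parameters()), "trainable adapter parameters")

For trainer integrations, the network arguments documented in docs/Network-Args.md map onto the same algorithm and dimension options.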

Quick Start & Requirements

  • Install via pip: pip install lycoris-lora
  • Or from source: git clone https://github.com/KohakuBlueleaf/LyCORIS && cd LyCORIS && pip install .
  • Requires Python and PyTorch. Specific dependencies for training or inference may vary.
  • Official documentation: docs/Network-Args.md

Highlighted Details

  • Supports multiple PEFT algorithms: LoRA, LoHa, LoKr, (IA)^3, DyLoRA, Native fine-tuning.
  • Integrates with popular Stable Diffusion UIs like A1111/sd-webui (v1.5.0+) and ComfyUI.
  • Offers utilities for extracting LoCon from models and merging LyCORIS weights back into checkpoints (a related workflow is sketched after this list).
  • Includes conversion scripts for compatibility between different ecosystem formats (HCP-Diffusion, sd-webui).
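
The extraction and merge utilities ship as standalone scripts in the repository, so their command lines are not reproduced here. As a loose, hypothetical illustration of the surrounding workflow, the sketch below saves a trained adapter's weights and re-applies them to a fresh copy of the base model, assuming the create_lycoris interface shown earlier and that the returned network behaves as a standard torch.nn.Module; plain torch serialization is used for simplicity.

    import torch
    import torch.nn as nn
    from lycoris import create_lycoris, LycorisNetwork

    LycorisNetwork.apply_preset({"target_name": [".*"]})
    model = nn.Sequential(nn.Linear(768, 768), nn.GELU(), nn.Linear(768, 768))  # toy base model
    lycoris_net = create_lycoris(model, 1.0, linear_dim=16, linear_alpha=2.0, algo="lokr")
    lycoris_net.apply_to()
    # ... adapter training would happen here ...

    # Persist only the adapter parameters (a few MB rather than a full checkpoint).
    torch.save(lycoris_net.state_dict(), "style_adapter.pt")  # hypothetical filename

    # Later: rebuild an identically shaped adapter around an untouched copy of the base model
    # and load the saved weights; the base checkpoint itself is never rewritten.
    fresh_model = nn.Sequential(nn.Linear(768, 768), nn.GELU(), nn.Linear(768, 768))
    fresh_net = create_lycoris(fresh_model, 1.0, linear_dim=16, linear_alpha=2.0, algo="lokr")
    fresh_net.load_state_dict(torch.load("style_adapter.pt"))
    fresh_net.apply_to()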

Maintenance & Community

  • Active development; the recent v3.2.0 release added features such as on-the-fly merging and learning-rate (LR) scaling.
  • Discord server available for discussions.
  • Paper published at ICLR'24.

Licensing & Compatibility

  • No license is stated in the README text summarized here. Check the repository itself (e.g. a LICENSE file) before assuming commercial-use compatibility.

Limitations & Caveats

  • HCP-Diffusion support was dropped in v3.0.0, requiring conversion scripts for compatibility.
  • Newer model types may not be immediately supported by third-party interfaces; users may need to request support from those projects' developers.
  • The README notes potential breaking changes in v3.2.0 regarding default wd_on_output behavior.
Health Check

  • Last commit: 1 month ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 2
  • Star History: 52 stars in the last 90 days
