stable-diffusion-xl-demo by TonyLianLong

Gradio WebUI demo for Stable Diffusion XL 1.0

created 2 years ago
280 stars

Top 93.9% on sourcepulse

View on GitHub
Project Summary

This repository provides a Gradio web UI demo for Stable Diffusion XL 1.0, targeting users who want to experiment with advanced image generation capabilities. It offers a user-friendly interface for generating high-quality images with features like a refiner model and multi-GPU support, simplifying the process of leveraging SDXL's power.

How It Works

The demo utilizes the Gradio SDK to create an interactive web interface. It loads both the base and refiner models for Stable Diffusion XL 1.0, allowing for a two-stage generation process that enhances image detail. The architecture supports data parallelism for multi-GPU acceleration and integrates optional features like Latent Consistency Models (LCM) LoRA and the faster SSD-1B model for improved performance.
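
Below is a minimal sketch of that two-stage flow using Hugging Face diffusers. The model IDs, the 0.8 denoising split, and the prompt are the standard SDXL example values, not settings confirmed from this repo's app.py:

    import torch
    from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

    # Base model: handles the first part of the denoising schedule and outputs latents.
    base = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
    ).to("cuda")

    # Refiner model: reuses the base's second text encoder and VAE, then adds fine detail.
    refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-refiner-1.0",
        text_encoder_2=base.text_encoder_2, vae=base.vae,
        torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
    ).to("cuda")

    prompt = "an astronaut riding a horse on the moon, highly detailed"

    # Stage 1: the base covers roughly 80% of the denoising steps and returns latents.
    latents = base(prompt=prompt, denoising_end=0.8, output_type="latent").images

    # Stage 2: the refiner picks up at the same point and decodes the final image.
    image = refiner(prompt=prompt, denoising_start=0.8, image=latents).images[0]
    image.save("sdxl_demo.png")

Handing the refiner latents rather than a decoded image lets the two models share a single denoising trajectory, which is where the extra detail in the second stage comes from.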

Quick Start & Requirements

  • Install dependencies: pip install accelerate transformers invisible-watermark numpy opencv-python safetensors gradio==3.11.0 git+https://github.com/huggingface/diffusers.git
  • Launch: PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 python app.py
  • Requirements: Python with PyTorch 2.0.1+ and a CUDA-capable GPU (a quick environment check is sketched after this list).
  • Resources: the SDXL base and refiner models need significant VRAM; the offloading options described below reduce the footprint at the cost of speed.
  • Links: Hugging Face Diffusers
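
As a quick sanity check before launching (a generic snippet, not taken from this repo), you can confirm the PyTorch version and the GPUs visible to CUDA:

    import torch

    # The demo targets PyTorch 2.0.1+ and needs a CUDA-capable GPU.
    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.1f} GiB VRAM")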

Highlighted Details

  • Supports Latent Consistency Models (LCM) LoRA by default for faster generation.
  • Option to use SSD-1B (Segmind/SSD-1B) for even faster inference.
  • Multi-GPU support via data parallelism (set MULTI_GPU=True).
  • torch.compile support for potential inference speedups.
  • Model offloading options (OFFLOAD_BASE, OFFLOAD_REFINER) to reduce memory usage; how these options could map to diffusers calls is sketched below.
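
A rough sketch of how these options could be wired with standard diffusers calls follows. The repository IDs (segmind/SSD-1B, latent-consistency/lcm-lora-ssd-1b) are the usual Hugging Face locations, and the flag-to-call mapping is an assumption rather than code taken from app.py:

    import torch
    from diffusers import StableDiffusionXLPipeline, LCMScheduler

    # SSD-1B is a distilled, faster SDXL variant that can stand in for the base model.
    pipe = StableDiffusionXLPipeline.from_pretrained(
        "segmind/SSD-1B", torch_dtype=torch.float16, use_safetensors=True,
    ).to("cuda")

    # An LCM LoRA enables few-step sampling when paired with the LCM scheduler.
    pipe.load_lora_weights("latent-consistency/lcm-lora-ssd-1b")
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

    # Either trade speed for VRAM (roughly what OFFLOAD_BASE/OFFLOAD_REFINER toggle):
    # pipe.enable_model_cpu_offload()   # call this instead of .to("cuda") above
    # ...or pay a one-time compilation cost for faster repeated inference:
    # pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

    image = pipe("a cinematic photo of a red fox in snow",
                 num_inference_steps=4, guidance_scale=1.0).images[0]
    image.save("ssd1b_lcm.png")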

Maintenance & Community

  • Forked from the Stable Diffusion v2.1 demo WebUI.
  • Updated over time with new features and model support (LCM LoRA, SSD-1B), though recent commit activity has slowed (see Health Check).
  • Encourages community stars and contributions.

Licensing & Compatibility

  • License: MIT.
  • Compatible with commercial use and closed-source linking.

Limitations & Caveats

The demo's compute and memory requirements are substantial, so powerful GPU hardware is needed. torch.compile is supported but adds initial compilation overhead, and model offloading saves VRAM at the cost of slower generation.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 5 stars in the last 90 days
