stable-diffusion-nvidia-docker by NickLucche

GPU-enabled Dockerfile for Stable Diffusion v2 with web UI

created 2 years ago · 369 stars · Top 77.7% on sourcepulse

Project Summary

This repository provides a Dockerized environment for running Stability AI's Stable Diffusion v2 model, aimed at artists and designers with limited coding experience. It simplifies the setup and execution of image generation tasks, including text-to-image, image-to-image, and inpainting, through an integrated web UI.

How It Works

The project combines Docker and the NVIDIA Container Toolkit to provide a self-contained, GPU-accelerated environment and uses Hugging Face's diffusers library to load and run Stable Diffusion models. Multi-GPU inference follows a "Data Parallel" approach: the full model is replicated on each available GPU and work is split across the replicas to increase throughput. Users can choose FP16 (half precision) for reduced memory usage or FP32 (single precision) for potentially higher accuracy.
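As a rough sketch of this data-parallel scheme (illustrative only, not the repository's actual code; the model ID and prompts below are placeholder assumptions), one diffusers pipeline replica is loaded per GPU and a batch of prompts is split across the replicas:

import torch
from diffusers import StableDiffusionPipeline

# Illustrative "Data Parallel" inference: one full pipeline replica per GPU.
MODEL_ID = "stabilityai/stable-diffusion-2"  # placeholder; the container exposes this choice via MODEL_ID

replicas = []
for gpu in range(torch.cuda.device_count()):
    pipe = StableDiffusionPipeline.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # FP16 roughly halves VRAM use; use torch.float32 for FP32
    )
    replicas.append(pipe.to(f"cuda:{gpu}"))

prompts = ["a watercolor painting of a fox", "a futuristic city at dusk"]
# Each replica handles its share of the batch independently, so throughput scales with GPU count.
images = [replicas[i % len(replicas)](p).images[0] for i, p in enumerate(prompts)]

The trade-off is extra VRAM spent on duplicate weights in exchange for inference that needs no cross-GPU communication.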

Quick Start & Requirements

  • Install/Run: docker run --name stable-diffusion --pull=always --gpus all -it -p 7860:7860 nicklucche/stable-diffusion
  • Prerequisites: Ubuntu (20.04+) or Windows (WSL with Ubuntu recommended), NVIDIA GPU with at least 6 GB of VRAM, Docker, NVIDIA Container Toolkit (a quick GPU check follows this list), and a Hugging Face account (for some models).
  • Setup: The first run downloads the model weights (roughly 2.8 GB or more, depending on the model).
  • Docs: https://github.com/NickLucche/stable-diffusion-nvidia-docker
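Before pulling the image, it can be worth confirming that Docker can see the GPU at all. A minimal check, assuming Docker 19.03+ with the NVIDIA Container Toolkit already configured:

docker run --rm --gpus all ubuntu nvidia-smi

If this prints the familiar nvidia-smi table, the container runtime has GPU access and the stable-diffusion image should be able to use it.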

Highlighted Details

  • Supports Stable Diffusion v2.0 with text-to-image, img2img, and inpainting.
  • Multi-GPU inference via Data Parallelism (-e DEVICES=0,1 or -e DEVICES=all).
  • FP16 mode for GPUs with <10GB VRAM.
  • Customizable model loading via MODEL_ID environment variable.
  • Persistent model weights via Docker volumes (-v /path/to/cache:/root/.cache/huggingface/diffusers); see the combined command after this list.
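A combined invocation, assuming two GPUs and using stabilityai/stable-diffusion-2 as an example MODEL_ID (that value and the host cache path are placeholders; substitute your own):

docker run --name stable-diffusion --pull=always --gpus all -it -p 7860:7860 \
  -e DEVICES=0,1 \
  -e MODEL_ID=stabilityai/stable-diffusion-2 \
  -v /path/to/cache:/root/.cache/huggingface/diffusers \
  nicklucche/stable-diffusion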

Maintenance & Community

The project appears to be a personal effort with a TODO list indicating planned features. No specific community channels or active contributor information are listed in the README.

Licensing & Compatibility

The README does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

Model Parallelism is currently disabled. The project is primarily tested on Ubuntu 20.04 and Windows 10 21H2. Performance may vary based on GPU memory distribution in multi-GPU setups.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
Star History

  • 1 star in the last 90 days

Explore Similar Projects

Starred by Patrick von Platen (Core Contributor to Hugging Face Transformers and Diffusers), Julien Chaumond (Cofounder of Hugging Face), and 1 more.

parallelformers by tunib-ai
Toolkit for easy model parallelization
790 stars · created 4 years ago · updated 2 years ago

Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems) and Ying Sheng (Author of SGLang).

fastllm by ztxz16
High-performance C++ LLM inference library
4k stars · created 2 years ago · updated 2 weeks ago