stable-diffusion-nvidia-docker by NickLucche

GPU-enabled Dockerfile for Stable Diffusion v2 with web UI

Created 3 years ago · 369 stars · Top 76.5% on SourcePulse

Project Summary

This repository provides a Dockerized environment for running Stability AI's Stable Diffusion v2 model, aimed at artists and designers with limited coding experience. It simplifies the setup and execution of image-generation tasks, including text-to-image, image-to-image, and inpainting, through an integrated web UI.

How It Works

The project leverages Docker and the NVIDIA Container Toolkit to create a self-contained, GPU-accelerated environment, and uses the Hugging Face diffusers library to load and run Stable Diffusion models. Multi-GPU inference follows a data-parallel approach: the model is replicated across the available GPUs to increase aggregate throughput. Users can configure FP16 (half precision) for reduced memory usage or FP32 (single precision) for potentially higher accuracy.
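
As a rough illustration of how these options surface at run time, a data-parallel, half-precision launch might look like the following sketch; the DEVICES variable is documented under Highlighted Details, while the FP16 variable name is an assumption made here for illustration only:

    # Sketch: replicate the model on GPUs 0 and 1 (data parallel) and load
    # weights in half precision. DEVICES appears elsewhere in this summary;
    # the FP16 toggle name is an assumption, not confirmed by the project.
    docker run --name stable-diffusion --gpus all -it -p 7860:7860 \
      -e DEVICES=0,1 \
      -e FP16=1 \
      nicklucche/stable-diffusion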

Quick Start & Requirements

  • Install/Run: docker run --name stable-diffusion --pull=always --gpus all -it -p 7860:7860 nicklucche/stable-diffusion
  • Prerequisites: Ubuntu (20.04+) or Windows (WSL/Ubuntu recommended), NVIDIA GPU (>= 6GB VRAM), Docker, NVIDIA Container Toolkit, Hugging Face account (for some models).
  • Setup: the first run downloads model weights (~2.8 GB or more); see the cache-mount example below to persist them across runs.
  • Docs: https://github.com/NickLucche/stable-diffusion-nvidia-docker
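
Because the weights are fetched on first run, mounting a host directory as the cache (the same volume flag listed under Highlighted Details) avoids re-downloading them when the container is recreated. A minimal sketch, with /path/to/cache as a placeholder host directory:

    # Persist downloaded weights on the host so recreating the container
    # does not trigger a fresh multi-gigabyte download.
    docker run --name stable-diffusion --pull=always --gpus all -it -p 7860:7860 \
      -v /path/to/cache:/root/.cache/huggingface/diffusers \
      nicklucche/stable-diffusion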

Highlighted Details

  • Supports Stable Diffusion v2.0 with text-to-image, img2img, and inpainting.
  • Multi-GPU inference via Data Parallelism (-e DEVICES=0,1 or -e DEVICES=all).
  • FP16 mode for GPUs with <10GB VRAM.
  • Customizable model loading via the MODEL_ID environment variable (combined example after this list).
  • Persistent model weights via Docker volumes (-v /path/to/cache:/root/.cache/huggingface/diffusers).
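
Combining the options above, a multi-GPU run with a custom model and a persistent cache might look like the following; the MODEL_ID value shown is only an example of a Hugging Face model identifier, not a documented default:

    # Illustrative combination: all GPUs (data parallel), a custom model id,
    # and a persistent weight cache. The model id shown is an example only.
    docker run --name stable-diffusion --gpus all -it -p 7860:7860 \
      -e DEVICES=all \
      -e MODEL_ID=stabilityai/stable-diffusion-2-1 \
      -v /path/to/cache:/root/.cache/huggingface/diffusers \
      nicklucche/stable-diffusion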

Maintenance & Community

The project appears to be a personal effort with a TODO list indicating planned features. No specific community channels or active contributor information are listed in the README.

Licensing & Compatibility

The README does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

Model parallelism (splitting a single model across GPUs, as opposed to the data-parallel replication described above) is currently disabled. The project is primarily tested on Ubuntu 20.04 and Windows 10 21H2. Performance in multi-GPU setups may vary with how model replicas are distributed across GPU memory.

Health Check

  • Last Commit: 1 year ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 0 stars in the last 30 days

Explore Similar Projects

Starred by Chaoyu Yang (Founder of Bento), Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), and 3 more.

nunchaku by nunchaku-tech

High-performance 4-bit diffusion model inference engine
3k stars · 1.9% · Created 10 months ago · Updated 2 days ago
Starred by Alex Yu (Research Scientist at OpenAI; Former Cofounder of Luma AI) and Cody Yu (Coauthor of vLLM; MTS at OpenAI).

xDiT by xdit-project

Inference engine for parallel Diffusion Transformer (DiT) deployment
2k stars · 0.7% · Created 1 year ago · Updated 1 day ago
Starred by Jeff Hammerbacher (Cofounder of Cloudera), Stas Bekman (Author of "Machine Learning Engineering Open Book"; Research Engineer at Snowflake), and 2 more.

gpustack by gpustack

GPU cluster manager for AI model deployment
4k stars · 1.3% · Created 1 year ago · Updated 1 day ago
Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems") and Ying Sheng (Coauthor of SGLang).

fastllm by ztxz16

High-performance C++ LLM inference library
4k stars · 0.4% · Created 2 years ago · Updated 1 week ago