ControlNet 1.1 is a research project for neural network control of diffusion models
This repository provides the nightly release of ControlNet 1.1, a powerful extension for Stable Diffusion that enables fine-grained control over image generation using various conditioning inputs. It's primarily aimed at researchers and users who want to leverage advanced image-to-image translation and manipulation capabilities within the Stable Diffusion ecosystem.
How It Works
ControlNet 1.1 maintains the same architecture as its predecessor, focusing on improved robustness and quality across its suite of 14 models (11 production-ready, 3 experimental). These models condition Stable Diffusion generation on inputs like depth maps, Canny edges, segmentation maps, OpenPose skeletons, and more. The key advantage lies in its ability to inject spatial conditioning into the diffusion process, allowing for precise control over composition, structure, and style.
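As an illustration of how a conditioning input is prepared, the sketch below builds a Canny edge map of the kind the canny model consumes. This is a minimal sketch assuming OpenCV and NumPy are installed; the helper name canny_control_map is hypothetical, though the repository's own canny annotator similarly wraps cv2.Canny.

import cv2
import numpy as np

def canny_control_map(image_path, low_threshold=100, high_threshold=200):
    # Hypothetical helper: read the source image and extract its edges.
    # ControlNet's canny model conditions generation on such an edge map.
    img = cv2.imread(image_path)
    edges = cv2.Canny(img, low_threshold, high_threshold)
    # Replicate the single edge channel to three channels so the map
    # has the same HxWx3 layout as the control images in the demos.
    return np.stack([edges] * 3, axis=-1)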
Quick Start & Requirements
Set up the environment with:
conda env create -f environment.yaml
conda activate control-v11
The Stable Diffusion 1.5 base checkpoint (v1-5-pruned.ckpt) is also needed; place it in the models folder. On GPUs with limited VRAM, set save_memory = True in config.py. Each model ships with its own demo, launched via the corresponding python gradio_*.py script. For Automatic1111 Stable Diffusion WebUI integration, users should refer to the sd-webui-controlnet repository.
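For reference, the demo scripts follow roughly the loading pattern sketched below: build the model from a YAML config, load the SD 1.5 base weights, then load the ControlNet weights on top. This is a hedged sketch based on the cldm.model helpers in the repository; the canny model name is one example of the 14 models, and exact file names and flags vary per script.

from cldm.model import create_model, load_state_dict

# Sketch of the pattern used by the gradio_*.py demos.
model_name = 'control_v11p_sd15_canny'
model = create_model(f'./models/{model_name}.yaml').cpu()
# Base Stable Diffusion 1.5 weights first, then the ControlNet weights.
model.load_state_dict(load_state_dict('./models/v1-5-pruned.ckpt', location='cuda'), strict=False)
model.load_state_dict(load_state_dict(f'./models/{model_name}.pth', location='cuda'), strict=False)
model = model.cuda()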
Maintenance & Community
This repository is actively updated. For integration with Automatic1111's WebUI, users should follow the sd-webui-controlnet repository.
Licensing & Compatibility
The repository does not explicitly state a license in the provided README. Users should verify licensing for commercial use.
Limitations & Caveats
This repository is for research use and academic experiments; it is not an A1111 extension and should not be directly installed into A1111. Official support for Multi-ControlNet is A1111-only. The "Instruct Pix2Pix" model is marked as experimental and may require cherry-picking. The "Tile" model's official support for tiled upscaling is A1111-only.