Image inpainting model using decomposed dual-branch diffusion
Top 26.2% on sourcepulse
BrushNet is a plug-and-play diffusion model for image inpainting, designed to integrate seamlessly with existing pre-trained diffusion models such as Stable Diffusion v1.5 and SDXL. It decomposes the inpainting task across separate branches, which lets it generalize to varied inpainting scenarios with improved fidelity and control. The target audience includes researchers and developers working on image generation and editing tasks.
How It Works
BrushNet employs a dual-branch diffusion architecture that separates masked image features from noisy latent representations. This decomposition reduces the model's learning burden and enhances its ability to handle image inpainting tasks. By leveraging dense, per-pixel control throughout the pre-trained diffusion model, BrushNet achieves greater suitability for precise image manipulation.
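The dual-branch idea can be sketched in a few lines of PyTorch. This is an illustrative toy, not the official implementation: a frozen block stands in for the pre-trained UNet, while a parallel branch consumes the noisy latent concatenated with the masked-image latent and the downsampled mask, and injects its features through a zero-initialized convolution (so the frozen model's behavior is unchanged at the start of training). All class and variable names here are hypothetical.

```python
import torch
import torch.nn as nn

class BaseBlock(nn.Module):
    """Stand-in for one block of the frozen, pre-trained diffusion UNet."""
    def __init__(self, ch=64):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x, injected=None):
        h = self.conv(x)
        # Per-layer additive feature injection from the inpainting branch.
        return h if injected is None else h + injected

class BrushBranch(nn.Module):
    """Toy inpainting branch: sees noisy latent + masked-image latent + mask."""
    def __init__(self, ch=64):
        super().__init__()
        # Input channels: noisy latent (ch) + masked-image latent (ch) + mask (1).
        self.conv = nn.Conv2d(2 * ch + 1, ch, 3, padding=1)
        # Zero-initialized 1x1 conv: the branch contributes nothing at step 0,
        # preserving the frozen model's outputs before any training.
        self.zero_conv = nn.Conv2d(ch, ch, 1)
        nn.init.zeros_(self.zero_conv.weight)
        nn.init.zeros_(self.zero_conv.bias)

    def forward(self, noisy, masked_latent, mask):
        h = self.conv(torch.cat([noisy, masked_latent, mask], dim=1))
        return self.zero_conv(h)

ch, H, W = 64, 32, 32
base, branch = BaseBlock(ch), BrushBranch(ch)
for p in base.parameters():
    p.requires_grad_(False)  # the pre-trained model stays frozen

noisy = torch.randn(1, ch, H, W)
masked_latent = torch.randn(1, ch, H, W)
mask = torch.zeros(1, 1, H, W)

out = base(noisy, injected=branch(noisy, masked_latent, mask))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```

Separating the masked-image pathway this way means the branch only has to learn inpainting-specific features, while dense per-layer injection gives it pixel-level influence over the frozen model's output.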
Quick Start & Requirements
Install the package and the example dependencies, then launch the demo app:

pip install -e .
pip install -r examples/brushnet/requirements.txt
python examples/brushnet/app_brushnet.py
Maintenance & Community
The project is from TencentARC, with contributions from researchers at The Chinese University of Hong Kong. Updates include the release of BrushEdit and stronger BrushNetX models. Community interaction points are not explicitly listed, but the project is associated with ECCV 2024.
Licensing & Compatibility
The repository is released under an unspecified license, and dataset use is governed by a separate data download agreement. Compatibility with commercial use or closed-source linking is not detailed.
Limitations & Caveats
The provided SDXL checkpoint is an early version trained with a small batch size and may not perform optimally. Users are advised to train on custom data for specific industrial applications. The evaluation script requires disabling an NSFW detector for accurate results, and image generation may vary across different hardware setups.