Image editing research paper using exemplar guidance and diffusion
This repository provides code for "Paint by Example," an exemplar-based image editing technique that leverages diffusion models for precise control. It enables users to edit images by providing a reference image (exemplar) and a mask, allowing for high-fidelity modifications guided by the exemplar's style and content.
How It Works
The method builds on a diffusion model, specifically a modified Stable Diffusion v1-4, to disentangle and reorganize information from the source image and the exemplar. To prevent trivial copy-and-paste of exemplar content, it introduces an information bottleneck and strong augmentations, which suppress fusing artifacts. An arbitrarily shaped mask for the exemplar and classifier-free guidance further improve controllability and similarity to the reference image, and the edit is produced in a single forward pass of the diffusion model.
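Classifier-free guidance, mentioned above, combines an unconditional and an exemplar-conditioned noise prediction at each sampling step. The sketch below shows only that combination rule in isolation (the function name and toy inputs are illustrative, not from this repository):

```python
import numpy as np

def classifier_free_guidance(eps_uncond: np.ndarray,
                             eps_cond: np.ndarray,
                             scale: float) -> np.ndarray:
    """Blend unconditional and conditioned noise predictions.

    scale == 1.0 reproduces the conditioned prediction; larger values
    push the sample further toward the exemplar condition.
    """
    return eps_uncond + scale * (eps_cond - eps_uncond)

# Toy example with 4-element "noise predictions":
eps_uncond = np.zeros(4)
eps_cond = np.ones(4)
print(classifier_free_guidance(eps_uncond, eps_cond, 5.0))  # -> [5. 5. 5. 5.]
```

In practice both predictions come from the same UNet forward passes (conditioned on the exemplar embedding vs. a null embedding), and the blended result drives the denoising update.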
Quick Start & Requirements
Create and activate the conda environment:

conda env create -f environment.yaml
conda activate Paint-by-Example

Place the pretrained model under pretrained_models/, then run scripts/modify_checkpoints.py to adapt the Stable Diffusion checkpoint for this model.

Highlighted Details
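Adapting a text-to-image Stable Diffusion checkpoint for mask-conditioned editing typically means widening the UNet's first convolution so it can take extra mask and masked-image channels, keeping the pretrained weights and zero-initializing the new ones. The sketch below illustrates that general idea only; the exact channel counts and the actual transformation performed by scripts/modify_checkpoints.py are assumptions here, and the script itself is the authoritative reference:

```python
import numpy as np

def expand_conv_in(weight: np.ndarray, new_in_channels: int) -> np.ndarray:
    """Zero-pad a conv weight of shape (out, in, kH, kW) along the
    input-channel axis, preserving the pretrained channels.

    Hypothetical illustration: e.g. growing SD's 4 latent channels to 9
    (latent + masked-image latent + mask) for inpainting-style editing.
    """
    out_ch, in_ch, kh, kw = weight.shape
    assert new_in_channels >= in_ch
    expanded = np.zeros((out_ch, new_in_channels, kh, kw), dtype=weight.dtype)
    expanded[:, :in_ch] = weight  # pretrained weights kept; new channels start at zero
    return expanded

w = np.random.randn(320, 4, 3, 3).astype(np.float32)
w9 = expand_conv_in(w, 9)
print(w9.shape)  # -> (320, 9, 3, 3)
```

Zero-initializing the new channels means the adapted model initially behaves like the original on the shared channels, which gives fine-tuning a stable starting point.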
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The project points to a more recent work, Asymmetric VQGAN, which improves detail preservation in non-masked regions; this suggests the current implementation may lose fine detail outside the edited area.