Line drawing generation via unpaired image translation (CVPR 2022)
Top 74.9% on sourcepulse
This project provides a PyTorch implementation for generating line drawings from unpaired image data, focusing on conveying geometry and semantics. It is targeted at researchers and practitioners in computer vision and graphics interested in controllable image synthesis and style transfer. The key benefit is the ability to create informative line drawings that capture essential structural and semantic information from input images.
How It Works
The approach leverages a Generative Adversarial Network (GAN) architecture adapted from pix2pixHD and CycleGAN. A geometry objective ties the generated line drawings to depth maps derived from the input photographs, encouraging the drawings to preserve scene structure. Training is unpaired: the model does not require aligned photo–drawing pairs, which makes it more flexible and easier to scale.
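The combined objective can be sketched, in simplified form, as an adversarial term plus a geometry (depth) reconstruction term. The least-squares GAN formulation, the function names, and the weighting term lambda_geom below are illustrative assumptions, not the repository's exact losses:

```python
import numpy as np

def lsgan_loss_discriminator(d_real, d_fake):
    # Least-squares GAN (illustrative choice): push D(real) -> 1, D(fake) -> 0.
    return np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2)

def lsgan_loss_generator(d_fake):
    # The generator tries to make the discriminator score fakes as real.
    return np.mean((d_fake - 1.0) ** 2)

def geometry_loss(pred_depth, pseudo_depth):
    # L1 distance between depth predicted for the drawing and a
    # pseudo-ground-truth depth map of the input photo (hypothetical form).
    return np.mean(np.abs(pred_depth - pseudo_depth))

def total_generator_loss(d_fake, pred_depth, pseudo_depth, lambda_geom=10.0):
    # Adversarial term plus weighted geometry term; lambda_geom is an
    # assumed hyperparameter, not a value taken from the repository.
    return lsgan_loss_generator(d_fake) + lambda_geom * geometry_loss(pred_depth, pseudo_depth)
```

In a real training loop these scalars would be computed from discriminator outputs and a depth-prediction network rather than raw arrays; the sketch only shows how the terms combine.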
Quick Start & Requirements
1. Create the Conda environment and activate it:
   conda env create -f environment.yml
   conda activate drawings
2. Install CLIP:
   pip install git+https://github.com/openai/CLIP.git
3. Place the pretrained model weights in the checkpoints directory.
4. Run a test generation:
   python test.py --name anime_style --dataroot examples/test
Maintenance & Community
The project is associated with CVPR 2022 and cites academic work, indicating a research-oriented origin. No specific community channels or active maintenance indicators are present in the README.
Licensing & Compatibility
The README does not explicitly state a license. The code is adapted from pix2pixHD and pytorch-CycleGAN-and-pix2pix, which are typically released under permissive licenses like MIT. However, users should verify the exact licensing terms.
Limitations & Caveats
The project pins a specific PyTorch version (1.7.1) and may be incompatible with newer releases. Generating high-quality depth maps is a prerequisite for training, which adds an extra step to the workflow.
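Because the environment pins PyTorch 1.7.1, a small guard at script start can surface version mismatches early. This helper is a suggestion, not part of the repository:

```python
import importlib.util

REQUIRED_TORCH = "1.7.1"  # version pinned by the project's environment

def check_torch_version(required: str = REQUIRED_TORCH):
    """Return (ok, found_version); found_version is None if torch is absent."""
    if importlib.util.find_spec("torch") is None:
        return False, None
    import torch
    found = torch.__version__.split("+")[0]  # drop build tags like '+cu110'
    return found == required, found
```

Calling check_torch_version() before loading models lets a script fail with a clear message instead of an opaque checkpoint or API error.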
Status: last updated 11 months ago; the repository appears inactive.