PyTorch code for v-objective diffusion model inference
This repository provides PyTorch implementations of v-objective diffusion models, enabling users to generate images from noise using various sampling techniques. It is designed for researchers and practitioners interested in state-of-the-art generative models, offering flexibility in sampling methods and model conditioning.
How It Works
The project implements denoising diffusion probabilistic models that are trained to reverse a noising process. It utilizes the 'v' objective from Progressive Distillation for faster sampling and supports CLIP-guided diffusion, allowing generation conditioned on text embeddings. The code includes multiple sampling methods like DDPM, DDIM, PRK, and PLMS, offering trade-offs between speed and sample quality.
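The 'v' objective reparameterizes what the network predicts: instead of the noise ε or the clean image x0, it predicts v = α·ε − σ·x0, from which both can be recovered in closed form. A minimal sketch of that arithmetic under a cosine noise schedule (function names are illustrative, not this repository's API):

```python
import torch

def alphas_sigmas(t):
    # Cosine schedule: alpha = cos(t*pi/2), sigma = sin(t*pi/2),
    # so alpha^2 + sigma^2 = 1 for all t in [0, 1].
    return torch.cos(t * torch.pi / 2), torch.sin(t * torch.pi / 2)

def v_target(x0, eps, t):
    # Training target for the 'v' objective: v = alpha * eps - sigma * x0.
    alpha, sigma = alphas_sigmas(t)
    return alpha * eps - sigma * x0

def pred_from_v(x_t, v, t):
    # Given x_t = alpha * x0 + sigma * eps and a v prediction,
    # recover the implied clean image and noise estimates.
    alpha, sigma = alphas_sigmas(t)
    pred_x0 = alpha * x_t - sigma * v
    pred_eps = sigma * x_t + alpha * v
    return pred_x0, pred_eps
```

Because the recovery is exact, any of the listed samplers (DDPM, DDIM, PRK, PLMS) can consume a v-predicting model by converting v back to x0/ε estimates at each step.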
Quick Start & Requirements
pip install v-diffusion-pytorch
or clone the repository and run pip install -e .
Highlighted Details
Supports both classifier-free guidance (cfg_sample.py) and CLIP-guided (clip_sample.py) sampling.
Maintenance & Community
Developed by Katherine Crowson and Chainbreakers AI. Compute resources for training were provided by stability.ai.
Licensing & Compatibility
The repository does not explicitly state a license in the README. Users should verify licensing for commercial or closed-source use.
Limitations & Caveats
The README does not specify a license, which may impact commercial adoption. Some sampling methods might require significantly more steps for optimal results.