PyTorch implementation of the Vector Quantized VAE research paper
This repository provides a PyTorch implementation of Vector Quantized Variational Autoencoders (VQ-VAEs), a generative model that learns discrete representations of data. It is aimed at researchers and practitioners interested in neural representation learning and generative modeling, and offers a clear path to reproducing the results of a course project.
How It Works
The implementation centers on the VQ-VAE architecture, which uses a vector quantization layer to learn a discrete latent space. Mapping encoder outputs to a finite codebook lets the model capture complex data distributions and generate high-quality samples, as demonstrated on image and video datasets. The project also includes a PixelCNN prior trained on the discrete latents for class-conditional generation.
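The quantization step works by snapping each encoder output to its nearest codebook vector and passing gradients through with the straight-through estimator. A minimal PyTorch sketch of such a layer (class name, hyperparameters, and the small demo at the end are illustrative, not taken from this repository):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Minimal VQ layer sketch: nearest-neighbor codebook lookup plus
    codebook/commitment losses (van den Oord et al., 2017)."""

    def __init__(self, num_embeddings=512, embedding_dim=64, beta=0.25):
        super().__init__()
        self.beta = beta  # commitment cost weight
        self.codebook = nn.Embedding(num_embeddings, embedding_dim)
        # small uniform init, as is conventional for VQ codebooks
        self.codebook.weight.data.uniform_(-1.0 / num_embeddings,
                                           1.0 / num_embeddings)

    def forward(self, z_e):
        # z_e: (batch, embedding_dim) encoder outputs
        # L2 distance from each encoder output to every codebook vector
        dist = torch.cdist(z_e, self.codebook.weight)   # (batch, num_embeddings)
        indices = dist.argmin(dim=1)                    # discrete latent codes
        z_q = self.codebook(indices)                    # quantized vectors
        # codebook loss pulls embeddings toward encoder outputs;
        # commitment loss keeps the encoder close to its chosen codes
        loss = (F.mse_loss(z_q, z_e.detach())
                + self.beta * F.mse_loss(z_e, z_q.detach()))
        # straight-through estimator: gradients bypass the argmin
        z_q = z_e + (z_q - z_e).detach()
        return z_q, indices, loss

# quick demonstration with a toy codebook
vq = VectorQuantizer(num_embeddings=8, embedding_dim=4)
z_e = torch.randn(16, 4)
z_q, idx, loss = vq(z_e)
```

In the full model, `z_e` would be the flattened spatial outputs of a convolutional encoder, and the returned `indices` form the discrete latent map on which a PixelCNN prior can be trained.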
Quick Start & Requirements
Train a VQ-VAE on miniImageNet (requires PyTorch and the dataset at the given path):
python vqvae.py --data-folder /tmp/miniimagenet --output-folder models/vqvae
Maintenance & Community
Authors include Rithesh Kumar, Tristan Deleu, and Evan Racah. No community links or roadmap information are provided in the README.
Licensing & Compatibility
The README does not specify a license. Compatibility for commercial use or closed-source linking is not addressed.
Limitations & Caveats
The project appears to be a course project reproduction, and its current maintenance status or long-term support is unclear. No explicit limitations or known issues are mentioned.