SEINE: a video diffusion model for generative transition and prediction (ICLR 2024)
SEINE is a diffusion model for generating and predicting video content, designed to create smooth transitions between short clips and to extend them into longer sequences. It targets researchers and developers in video generation who need temporal consistency and content extension.
How It Works
SEINE builds on Stable Diffusion v1.4 and frames short-to-long video generation as a masked-frame completion problem: known frames (for example, the endpoints of a transition between two scenes, or the leading frames of a clip to be extended) are kept as conditions, while the remaining frames are denoised from noise under text guidance. Training with randomly chosen frame masks lets a single model both generate smooth transitions between segments and predict subsequent frames to extend existing video content.
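Below is a minimal PyTorch sketch of this masked-frame conditioning idea; the function name, tensor shapes, and channel layout are illustrative assumptions, not the repository's actual code.

```python
import torch

def build_condition(video, keep_idx):
    """Zero out unknown frames and return (condition, mask).

    video:    (B, C, F, H, W) latent video clip
    keep_idx: indices of frames supplied as conditions, e.g. [0] for
              image-to-video prediction or [0, F - 1] for a transition
    """
    B, _, F, H, W = video.shape
    mask = torch.zeros(B, 1, F, H, W, device=video.device)
    mask[:, :, keep_idx] = 1.0          # 1 = visible conditioning frame
    cond = video * mask                 # unknown frames are zeroed
    return cond, mask

# Toy latent clip: batch 1, 4 latent channels, 16 frames, 32x32 spatial.
video = torch.randn(1, 4, 16, 32, 32)
cond, mask = build_condition(video, keep_idx=[0, 15])   # transition setup
noise = torch.randn_like(video)                         # start from noise
# One common conditioning scheme: concatenate noisy latents, masked
# frames, and the mask along the channel axis before the denoiser.
model_input = torch.cat([noise, cond, mask], dim=1)     # (1, 9, 16, 32, 32)
```

Training with random keep_idx subsets is what lets one denoiser serve both tasks: prediction keeps a prefix of frames, while transition keeps the two endpoints.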
Quick Start & Requirements
conda create -n seine python==3.9.16
conda activate seine
pip install -r requirement.txt
Required weights: Stable Diffusion v1.4 (downloaded to ./pretrained/stable-diffusion-v1-4) and the SEINE model checkpoint (downloaded to ./pretrained).
Image-to-video sampling:
python sample_scripts/with_mask_sample.py --config configs/sample_i2v.yaml
Transition sampling:
python sample_scripts/with_mask_sample.py --config configs/sample_transition.yaml
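To adjust prompts or other settings without editing the shipped configs, one option is to rewrite the YAML programmatically before launching the script. A minimal Python sketch, assuming the configs are plain YAML; the keys shown ("text_prompt", "num_frames") are hypothetical and should be checked against configs/sample_i2v.yaml:

```python
import subprocess
import yaml  # pip install pyyaml

# Load a shipped sampling config and override a few fields.
with open("configs/sample_i2v.yaml") as f:
    cfg = yaml.safe_load(f)

cfg["text_prompt"] = ["a red sports car on a coastal road"]  # hypothetical key
cfg["num_frames"] = 16                                       # hypothetical key

# Write the modified config and launch the sampling script with it.
with open("configs/my_i2v.yaml", "w") as f:
    yaml.safe_dump(cfg, f)

subprocess.run(
    ["python", "sample_scripts/with_mask_sample.py",
     "--config", "configs/my_i2v.yaml"],
    check=True,
)
```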
Highlighted Details
Maintenance & Community
Last commit: 8 months ago; the repository is currently marked inactive.
Licensing & Compatibility
Limitations & Caveats
The model is not trained to produce realistic representations of people or events. Using it to generate pornographic, violent, or otherwise harmful content is prohibited; the authors disclaim liability for misuse, and users are solely responsible for their actions.