setarehc: Diffusion models for flexible motion in-betweening and synthesis
Top 99.9% on SourcePulse
This project provides the official PyTorch implementation for "Flexible Motion In-betweening with Diffusion Models," presented at SIGGRAPH 2024. It enables researchers and developers to generate realistic 3D human motion sequences, offering flexible control through text prompts or specified keyframes. The core benefit lies in its diffusion-based approach for high-quality motion synthesis and editing.
How It Works
The system uses diffusion models to perform motion in-betweening. It supports unconditional motion generation as well as conditional generation, where motion sequences are guided by user-provided text descriptions, specific spatial keyframes, or both. This dual conditioning allows precise control over motion synthesis, enabling tasks such as interpolating between given poses or generating novel motions from semantic input.
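The keyframe-conditioning idea can be illustrated with a minimal, imputation-style sketch: at each denoising step, frames constrained by a keyframe are overwritten with the known content so the sample stays consistent with the user-specified poses. This is a simplified toy (the names and the stand-in denoiser are assumptions, not the repo's API; the actual method may condition the model during training rather than only at sampling time):

```python
import numpy as np

def inpaint_step(x_t, x0_known, mask, denoise_fn, t):
    """One denoising step with keyframe imputation (simplified sketch).

    x_t:        current noisy motion sequence, shape (frames, features)
    x0_known:   keyframe poses (zeros on unconstrained frames)
    mask:       1.0 where a keyframe constrains the pose, 0.0 elsewhere
    denoise_fn: toy stand-in for the learned denoiser (assumption)
    """
    x_prev = denoise_fn(x_t, t)  # model's proposal for the next step
    # Overwrite constrained frames with the keyframe content so the
    # trajectory passes through the user-specified poses.
    return mask * x0_known + (1.0 - mask) * x_prev

# Toy usage: constrain the first and last of 8 frames.
frames, feats = 8, 4
rng = np.random.default_rng(0)
x_t = rng.normal(size=(frames, feats))
x0_known = np.zeros((frames, feats))
x0_known[0], x0_known[-1] = 1.0, -1.0
mask = np.zeros((frames, feats))
mask[0], mask[-1] = 1.0, 1.0

out = inpaint_step(x_t, x0_known, mask, lambda x, t: 0.9 * x, t=10)
# out[0] and out[-1] now match the keyframes exactly
```

Text conditioning works analogously but steers the denoiser itself (e.g. via text embeddings) rather than overwriting frames.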
Quick Start & Requirements
Requirements include ffmpeg and spaCy (with the en_core_web_sm model). CLIP is installed via `pip install git+https://github.com/openai/CLIP.git`. Downloaded models are placed in the `./save/` directory.
Highlighted Details
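As a small convenience (not part of the repo), a preflight check that the listed dependencies are actually available might look like the following; the module names checked here are assumptions based on the requirements above:

```python
import importlib.util
import shutil

def check_deps():
    """Return a list of missing Quick Start dependencies (assumed names)."""
    missing = []
    # ffmpeg is a command-line tool, so look for it on PATH.
    if shutil.which("ffmpeg") is None:
        missing.append("ffmpeg")
    # CLIP installs as the `clip` module; torch and spacy by their own names.
    for mod in ("spacy", "clip", "torch"):
        if importlib.util.find_spec(mod) is None:
            missing.append(mod)
    return missing

missing = check_deps()
if missing:
    print("Missing dependencies:", ", ".join(missing))
```

Running this before the generation scripts gives a clearer error than a mid-run import failure.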
Maintenance & Community
No specific details regarding community channels (Discord, Slack), active contributors, or roadmap are provided in the README.
Licensing & Compatibility
The code is distributed under an MIT License. However, users must also adhere to the licenses of its dependencies, including CLIP, SMPL, SMPL-X, PyTorch3D, and the HumanML3D dataset. Commercial use compatibility is subject to these underlying licenses.
Limitations & Caveats
The interactive flag for selecting keyframes during generation is noted as being "In development."
Last updated: 1 year ago. Status: Inactive. Contributor: GuyTevet.