Human motion generation research paper
Top 82.6% on sourcepulse
OmniControl addresses the challenge of fine-grained control over human motion generation, enabling users to specify desired joint positions and orientations at any point in time. This project is targeted at researchers and developers in computer graphics, animation, and robotics who require precise control over synthesized human movements, offering a flexible framework for creating realistic and controllable character animations.
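To give an intuition for what "control at any point in time" means in practice, the sketch below represents a sparse spatial constraint as a dense target tensor plus a binary mask over (frame, joint) entries. The tensor shapes, joint indices, and packing are illustrative assumptions, not the repository's actual input format.

```python
import torch

# Illustrative sizes: 196 frames and the 22 SMPL body joints used by HumanML3D.
N_FRAMES, N_JOINTS = 196, 22
PELVIS, RIGHT_WRIST = 0, 21   # assumed joint indices

hint = torch.zeros(N_FRAMES, N_JOINTS, 3)   # target xyz positions (meters)
mask = torch.zeros(N_FRAMES, N_JOINTS, 1)   # 1 = this (frame, joint) entry is constrained

# Pin the pelvis at the first and last frame, e.g. the start and end of a walk.
hint[0, PELVIS] = torch.tensor([0.0, 0.9, 0.0])
hint[-1, PELVIS] = torch.tensor([2.0, 0.9, 0.0])
mask[0, PELVIS] = 1.0
mask[-1, PELVIS] = 1.0

# Pin the right wrist at frame 90, e.g. reaching toward a target.
hint[90, RIGHT_WRIST] = torch.tensor([0.6, 1.4, 0.3])
mask[90, RIGHT_WRIST] = 1.0

control = torch.cat([hint, mask], dim=-1)    # packed (frames, joints, 4) control signal
```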
How It Works
OmniControl leverages a diffusion model conditioned on text and spatial control signals. It employs a novel approach that allows for arbitrary joint control by integrating spatial constraints directly into the diffusion process. This method enables precise manipulation of specific body parts without sacrificing the overall naturalness and coherence of the generated motion, offering a significant advantage over methods that rely on global or limited control mechanisms.
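To make the conditioning idea concrete, here is a heavily simplified, hypothetical sampling loop in which the model's clean-motion estimate is nudged toward the constrained joints at every denoising step. The denoiser interface, guidance rule, and update schedule are stand-ins for illustration only, not OmniControl's actual algorithm.

```python
import torch

def guided_sample(denoiser, text_emb, hint, mask, n_steps=50, guidance_scale=0.2):
    """Toy spatially guided sampler.

    `denoiser(x_t, t, text_emb)` is assumed to return an estimate of the clean
    motion with the same (frames, joints, 3) shape as `hint`. The guidance rule
    below simply pulls constrained joints toward their targets at every step;
    it illustrates the general idea of injecting per-joint constraints into
    diffusion sampling, not OmniControl's actual update.
    """
    x = torch.randn_like(hint)                       # start from Gaussian noise
    for t in reversed(range(n_steps)):
        x0_hat = denoiser(x, t, text_emb)            # model's clean-motion estimate

        # Spatial guidance: move constrained joints toward their targets,
        # leaving every unconstrained (frame, joint) entry untouched.
        x0_hat = x0_hat + guidance_scale * mask * (hint - x0_hat)

        # Crude stand-in for the usual DDPM/DDIM posterior update.
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        blend = t / n_steps
        x = (1.0 - blend) * x0_hat + blend * noise
    return x
```

The point of the sketch is only that constraints can act on individual joints at individual frames while the rest of the motion is left to the text-conditioned model.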
Quick Start & Requirements
Create the conda environment (conda env create -f environment.yml), activate it (conda activate omnicontrol), download the spaCy English model (python -m spacy download en_core_web_sm), and install CLIP (pip install git+https://github.com/openai/CLIP.git).
Highlighted Details
Generated results are saved as .mp4 animations and can optionally be exported as .obj meshes via SMPLify.
Maintenance & Community
The project accompanies an ICLR 2024 paper. The code builds on MDM, MLD, and TEMOS. The README mentions no specific community channels, and the repository's last update was about a year ago, suggesting it is not actively maintained.
Licensing & Compatibility
MIT License. However, the project depends on libraries like CLIP, SMPL, SMPL-X, and PyTorch3D, which have their own licenses that must also be followed. Commercial use may be restricted by these underlying dependencies.
Limitations & Caveats
Evaluation on HumanML3D takes approximately 45 hours on a single GPU. The motion rendering script requires a GPU and is borrowed from MDM. The exported .obj files do not preserve vertex order, necessitating the use of saved SMPL parameters for animation.
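Because the exported meshes do not keep a consistent vertex order, a reasonable workaround is to rebuild per-frame meshes directly from the saved SMPL parameters. The sketch below uses the smplx package; the parameter file path and key names are placeholders that need to be matched to the repo's actual SMPLify output.

```python
import numpy as np
import torch
import smplx

# Placeholder path and keys -- match these to the actual SMPLify output of the repo.
params = np.load("results/sample00_smpl_params.npy", allow_pickle=True).item()

body_pose = torch.as_tensor(params["thetas"], dtype=torch.float32)           # (frames, 69) axis-angle
global_orient = torch.as_tensor(params["root_orient"], dtype=torch.float32)  # (frames, 3)
betas = torch.as_tensor(params["betas"], dtype=torch.float32).reshape(1, -1)
n_frames = body_pose.shape[0]

# Requires the SMPL model file (e.g. body_models/smpl/SMPL_NEUTRAL.pkl) downloaded separately.
model = smplx.create("body_models", model_type="smpl", gender="neutral",
                     batch_size=n_frames)

output = model(body_pose=body_pose,
               global_orient=global_orient,
               betas=betas.expand(n_frames, -1))
vertices = output.vertices.detach().numpy()   # (frames, 6890, 3), consistent vertex order
faces = model.faces                            # fixed triangle topology shared by all frames
```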