AnimateDiff-MotionDirector by ExponentialML

MotionDirector training for AnimateDiff

Created 1 year ago · 302 stars · Top 89.3% on sourcepulse

Project Summary

This repository provides code for training MotionDirector, a method that extracts motion from reference videos and applies it to animations generated by AnimateDiff. It is aimed at researchers and developers who want to customize AnimateDiff's animation capabilities by training motion LoRAs for specific movements.

How It Works

MotionDirector trains motion-specific LoRA (Low-Rank Adaptation) modules on input videos. It separates motion (temporal) information from appearance (spatial) information, so a learned motion can be applied to different subjects or styles. The code is modularized and stripped down from the original repositories, which makes it straightforward to integrate with existing AnimateDiff workflows.
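
The split between appearance and motion LoRAs is easier to see in code. Below is a minimal, generic sketch of a low-rank adapter in PyTorch; it is not the repository's implementation, and the class name, default rank, and attachment points are assumptions for illustration.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """A frozen linear layer plus a trainable low-rank residual (illustrative sketch)."""

        def __init__(self, base: nn.Linear, rank: int = 8, scale: float = 1.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)  # base weights stay frozen; only the LoRA factors train
            # Low-rank factors: down-projection (d_in -> r) and up-projection (r -> d_out)
            self.down = nn.Linear(base.in_features, rank, bias=False)
            self.up = nn.Linear(rank, base.out_features, bias=False)
            nn.init.zeros_(self.up.weight)  # zero-init so training starts from the base model
            self.scale = scale

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + self.scale * self.up(self.down(x))

    # Conceptually, adapters like this attach to spatial attention projections
    # (appearance) and, as a separate set, to the motion module's temporal
    # attention projections (motion), so the temporal set can be exported alone.
    layer = LoRALinear(nn.Linear(320, 320), rank=8)
    out = layer(torch.randn(1, 16, 320))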

Quick Start & Requirements

  • Install: Clone the repo, create a conda environment (conda env create -f environment.yaml), activate it (conda activate animatediff-motiondirector), and install requirements (pip install -r requirements.txt).
  • Prerequisites: the Stable Diffusion V1.5 model (runwayml/stable-diffusion-v1-5) and the V3 motion module (guoyww/animatediff/blob/main/v3_sd15_mm.ckpt). Downloading the models requires git lfs.
  • Training: Configure configs/training/motion_director/my_video.yaml, then run python train.py --config ./configs/training/motion_director/my_video.yaml (a config-inspection sketch follows this list).
  • Resource Estimate: Convergence takes roughly 10-15 minutes and about 14 GB of VRAM at the "preferred" quality settings.
  • Docs: Training instructions are documented inside my_video.yaml itself. Inference is intended to run through the ComfyUI extension ComfyUI-AnimateDiff-Evolved.
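
Because the training instructions live inside the config file itself, a quick way to orient yourself is to load it and list its top-level sections. A minimal sketch, assuming PyYAML is installed; it makes no assumptions about the config's actual keys.

    # Sketch: list the top-level sections of the training config so you can
    # see which fields my_video.yaml expects you to edit. The path follows
    # the repo's documented layout.
    import yaml  # pip install pyyaml

    with open("configs/training/motion_director/my_video.yaml") as f:
        config = yaml.safe_load(f)

    for key, value in config.items():
        print(f"{key}: {type(value).__name__}")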

Highlighted Details

  • Supports training on a single video for a specific motion, or on 3-5 similar videos for more robust motion capture.
  • Offers separate spatial LoRAs for appearance and temporal LoRAs for motion, compatible with the MotionLoRA standard.
  • Trained LoRAs are saved in CompVis format for compatibility with community repositories (see the checkpoint-inspection sketch after this list).
  • Code is modularized and stripped down from original repositories for easier use and integration.
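
To check which of the two LoRA sets ended up in a saved file, you can inspect the checkpoint's keys. This is a hedged sketch: the filename and the substring used to classify tensors are assumptions, since the exact key names depend on the CompVis export.

    # Sketch: bucket a trained checkpoint's tensors into spatial (appearance)
    # and temporal (motion) groups. The "temporal" substring is an assumed
    # naming convention, not a guaranteed one.
    import torch

    state_dict = torch.load("my_motion_lora.ckpt", map_location="cpu")

    spatial, temporal = {}, {}
    for name, tensor in state_dict.items():
        (temporal if "temporal" in name else spatial)[name] = tensor

    print(f"spatial tensors:  {len(spatial)}")
    print(f"temporal tensors: {len(temporal)}")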

Maintenance & Community

The project is built upon AnimateDiff and Tune-a-Video. Further community integration and workflow examples are planned.

Licensing & Compatibility

Released for academic and creative usage. No explicit license is stated in the README; the disclaimer urges responsible use and disclaims liability for user-generated content. Compatibility is primarily with AnimateDiff and MotionLoRA-compatible systems.

Limitations & Caveats

Currently tested only with Stable Diffusion V1.5; SDXL support is in its early stages. Long training runs with large, customized datasets are not thoroughly tested, and training on multiple videos with different motions or subjects may yield inconsistent results.

Health Check

  • Last commit: 11 months ago
  • Responsiveness: 1 day
  • Pull requests (30d): 0
  • Issues (30d): 0
  • Star history: 3 stars in the last 90 days
