ComfyUI extension for inference speedup
This repository provides ComfyUI nodes for TeaCache, a training-free caching method that accelerates diffusion-model inference by exploiting timestep-specific differences in model outputs. It targets ComfyUI users working with image, video, and audio diffusion models, offering significant speedups with minimal quality degradation.
How It Works
TeaCache estimates how much the model's output changes between diffusion timesteps and reuses cached results when the change is small. This avoids recomputing redundant information, leading to faster inference. Integration into ComfyUI is seamless: it requires only node connections within existing workflows.
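As an illustration only (not the repository's exact code), the core skip decision can be sketched as follows: accumulate the relative L1 change of the model's input across timesteps, and reuse the cached output while the accumulated change stays under a threshold. The function names, state dictionary, and default values below are hypothetical.

```python
def rel_l1_change(curr, prev):
    """Relative L1 distance between two flattened tensors (plain lists here)."""
    num = sum(abs(c - p) for c, p in zip(curr, prev))
    den = sum(abs(p) for p in prev)
    return num / den

def should_reuse_cache(change, state, rel_l1_thresh=0.25, max_skip_steps=3):
    """Accumulate the per-step change; reuse the cached output while the total
    stays below rel_l1_thresh, for at most max_skip_steps consecutive steps."""
    state["accum"] += change
    if state["accum"] < rel_l1_thresh and state["skipped"] < max_skip_steps:
        state["skipped"] += 1
        return True   # skip the full forward pass, reuse the cached output
    state["accum"] = 0.0  # accumulated change grew too large: recompute and reset
    state["skipped"] = 0
    return False

state = {"accum": 0.0, "skipped": 0}
# Small per-step changes are skipped until the accumulated change crosses the threshold.
decisions = [should_reuse_cache(0.1, state) for _ in range(3)]  # → [True, True, False]
```

The actual node exposes rel_l1_thresh and max_skip_steps as parameters; good values depend on the model, which is why per-model recommendations are provided.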
Quick Start & Requirements
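Installation typically looks like the following, assuming a standard ComfyUI checkout (the install path and clone URL are placeholders, not taken from this README):

```shell
# Assumes ComfyUI is installed at ~/ComfyUI; adjust the path as needed.
cd ~/ComfyUI/custom_nodes
git clone <this-repository-url> ComfyUI-TeaCache
cd ComfyUI-TeaCache
pip install -r requirements.txt
# Restart ComfyUI so the new nodes are picked up.
```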
Clone the repository into ComfyUI's custom_nodes directory and run pip install -r requirements.txt.
Add the TeaCache node after the model loading nodes in your workflow. Recommended rel_l1_thresh and max_skip_steps values are provided for various models.
Example workflows are available in the examples folder.
Highlighted Details
Includes a Compile Model node leveraging torch.compile for further inference acceleration.
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
Initial compilation with the Compile Model node can be time-consuming.
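The cost comes from torch.compile itself: the first call captures and compiles the model's graph, and later calls with the same input shapes reuse the compiled artifact. A minimal illustration with a toy module (backend="eager" keeps this demo light; the actual node presumably uses torch.compile's default backend):

```python
import torch

class Toy(torch.nn.Module):
    def forward(self, x):
        return torch.sin(x) + x

model = Toy()
# First call triggers compilation (the slow part); subsequent calls reuse it.
compiled = torch.compile(model, backend="eager")
x = torch.ones(4)
out = compiled(x)  # slow on the first call only
assert torch.allclose(out, model(x))  # results match the uncompiled model
```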