ComfyUI-MultiGPU (by pollockjj): a ComfyUI extension for advanced multi-GPU memory management
This custom node for ComfyUI addresses VRAM limitations by implementing "Virtual VRAM" and multi-GPU distribution for model components. It targets ComfyUI users who want to work with larger latent spaces, run larger models, or leverage multiple GPUs. It does so by intelligently offloading model layers (UNet, CLIP, VAE) to system RAM or secondary GPUs, freeing primary GPU VRAM for computation.
How It Works
The core of the project utilizes DisTorch (distributed torch) to manage model component placement across available devices. Instead of loading entire models onto a single GPU, DisTorch allows static parts of models to be offloaded to slower memory like CPU DRAM or other GPUs. This enhances memory management, not parallel processing, as workflow steps remain sequential. Users can select donor devices and specify offload amounts via a simple slider ("Normal Mode") or precise allocation strings ("Expert Mode"). Expert modes include 'bytes' for exact GB/MB allocation per device, 'ratio' for percentage-based splitting, and 'fraction' for device VRAM utilization percentages, enabling fine-grained control over model distribution.
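The allocation modes above can be illustrated with a small sketch. Note this is a conceptual illustration only: the actual DisTorch allocation-string syntax, device names, and the `device=weight` format used here are assumptions, not the extension's real API.

```python
# Conceptual sketch of "ratio"-style allocation (Expert Mode).
# The "device=weight" string format below is hypothetical, chosen only
# to illustrate how a model's bytes could be split across donor devices.

def split_by_ratio(total_bytes: int, alloc: str) -> dict:
    """Split a model's size across devices using a ratio string,
    e.g. "cuda:0=3,cuda:1=1" assigns cuda:0 three quarters of the bytes."""
    weights = {}
    for part in alloc.split(","):
        device, weight = part.split("=")
        weights[device] = float(weight)
    total_weight = sum(weights.values())
    return {
        device: int(total_bytes * w / total_weight)
        for device, w in weights.items()
    }

# Example: distribute an 8 GB model 3:1 between two GPUs.
plan = split_by_ratio(8_000_000_000, "cuda:0=3,cuda:1=1")
```

A 'bytes' mode would replace the weights with absolute GB/MB amounts per device, and a 'fraction' mode would express each device's share as a percentage of that device's own VRAM.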
Quick Start & Requirements
Installation is preferably done via the ComfyUI-Manager by searching for ComfyUI-MultiGPU. Manual installation involves cloning the repository into the ComfyUI/custom_nodes/ directory. The extension automatically creates multi-GPU versions of existing ComfyUI loader nodes and integrates with specific libraries like WanVideoWrapper, ComfyUI-GGUF, and others for expanded functionality. Tested setups include multi-GPU configurations (e.g., 2x 3090 + 1060ti, 4070, 3090/1070ti).
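The manual installation described above amounts to a single clone into the custom nodes directory. The repository URL below is assumed from the maintainer and extension names given in this listing; verify it before use.

```shell
# Manual install: clone the extension into ComfyUI's custom_nodes directory.
# (Repository URL assumed from the project/maintainer names; confirm first.)
cd ComfyUI/custom_nodes
git clone https://github.com/pollockjj/ComfyUI-MultiGPU.git
# Restart ComfyUI so the multi-GPU loader nodes are registered.
```

After a restart, the extension auto-generates multi-GPU variants of the existing loader nodes, so no further configuration files need editing.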
Highlighted Details
Supports .safetensors and GGUF model formats.
Maintenance & Community
The project is currently maintained by pollockjj and was originally created by Alexander Dzhoganov. No specific community channels (like Discord/Slack) or roadmap links are provided in the README.
Licensing & Compatibility
The provided README does not specify a software license. This lack of explicit licensing information may pose a barrier to adoption for users requiring clear commercial or open-source usage terms.
Limitations & Caveats
The extension focuses on memory management and component offloading, not parallelizing workflow execution steps, which remain sequential. Compatibility with specific model loaders depends on their availability and integration within the ComfyUI ecosystem. The absence of a stated license is a significant caveat for assessing adoption suitability.