ComfyUI nodes for GGUF model support
Top 17.7% on SourcePulse
This project provides custom nodes for ComfyUI to enable the use of GGUF-quantized models, specifically targeting transformer/DiT architectures. It aims to allow users to run diffusion models with significantly reduced VRAM requirements on lower-end GPUs, making advanced AI image generation more accessible.
How It Works
The nodes use the GGUF format, popularized by llama.cpp, to load and run quantized UNET models. This approach works well because transformer/DiT models are less sensitive to quantization than traditional convolution-heavy UNETs, enabling substantial VRAM savings through variable-bitrate quants. A node for loading quantized T5 text encoders is also included for further memory savings.
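To make the variable-bitrate idea concrete, the sketch below uses the `gguf` Python package (the same dependency installed in Quick Start) to list the quantization type of each tensor in a GGUF file. The filename is a hypothetical placeholder, and the sketch only inspects the file; it does not reproduce the on-the-fly dequantization ComfyUI-GGUF performs at inference time.

```python
# Inspection sketch using the `gguf` package; the filename is hypothetical.
from gguf import GGUFReader

reader = GGUFReader("flux1-dev-Q4_K_S.gguf")  # hypothetical GGUF UNET file
for tensor in reader.tensors:
    # tensor_type is the GGML quantization type (e.g. Q4_K, Q8_0, F16);
    # mixing types across tensors is what makes a quant "variable bitrate".
    print(tensor.name, tensor.tensor_type.name, list(tensor.shape))
```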
Quick Start & Requirements
1. Clone the repository into your ComfyUI custom_nodes folder:
   git clone https://github.com/city96/ComfyUI-GGUF
2. Install the dependency inside your ComfyUI Python environment:
   pip install --upgrade gguf
   For the standalone portable package, install the requirements instead:
   .\python_embeded\python.exe -s -m pip install -r .\ComfyUI\custom_nodes\ComfyUI-GGUF\requirements.txt
3. Place the .gguf model files in your ComfyUI/models/unet folder.
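After restarting ComfyUI, the GGUF loader nodes should be registered alongside the built-in ones. As a quick smoke test, the hedged sketch below asks a running ComfyUI server for the loader node's schema via the built-in /object_info HTTP endpoint. It assumes a local server on the default port 8188 and that the node class is named UnetLoaderGGUF; the class name may differ across versions.

```python
# Smoke-test sketch: check that the GGUF loader node is registered.
# Assumptions: local ComfyUI server on the default port, and the node
# class name "UnetLoaderGGUF" (may vary across versions).
import json
import urllib.request

url = "http://127.0.0.1:8188/object_info/UnetLoaderGGUF"
with urllib.request.urlopen(url) as resp:
    info = json.load(resp)

# The response maps the class name to its input schema; the model-name
# dropdown should list any GGUF files found in ComfyUI/models/unet.
print(json.dumps(info, indent=2))
```

If the node is missing from the response, the install step above likely failed or ComfyUI was not restarted.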
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The project is at an early stage of development ("very much WIP"). Compatibility issues with certain PyTorch versions on macOS have been noted, and LoRA loading is experimental.