ComfyUI nodes for InstantID image generation
This repository provides an unofficial implementation of InstantID for ComfyUI, enabling users to control image generation with face and pose references. It's designed for artists, researchers, and power users of Stable Diffusion who want to leverage advanced facial and pose conditioning within the ComfyUI node-based workflow. The primary benefit is enhanced control over character identity and pose in generated images.
How It Works
The implementation integrates InstantID's core functionality into ComfyUI nodes. It supports loading base SDXL models from Hugging Face Hub or locally, along with InsightFace models and the dedicated ID ControlNet and IPAdapter models. A key feature is the ID Prompt_Styler node, which lets users apply various artistic styles (e.g., Watercolor, Film Noir, Neon) to the generated output alongside positive and negative prompts. The InstantID Generation node takes face and optional pose images, together with model and style configurations, to produce the final output.
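The wiring below is a minimal sketch of how such a graph might look in ComfyUI's API (JSON prompt) format, written as a Python dict. The node class names and input keys are illustrative assumptions, not the exact identifiers used by this repository; consult the node definitions for the real ones.

```python
# Hypothetical ComfyUI API-format graph: model loader -> prompt styler -> generation.
# All class_type values and input keys below are assumed for illustration only.
workflow = {
    "1": {
        "class_type": "IDBaseModelLoader",          # assumed: loads the SDXL base model
        "inputs": {"ckpt_name": "stabilityai/stable-diffusion-xl-base-1.0"},
    },
    "2": {
        "class_type": "ID Prompt_Styler",           # style plus positive/negative prompts
        "inputs": {
            "prompt": "cinematic portrait of a sailor",
            "negative_prompt": "lowres, blurry",
            "style_name": "Watercolor",             # e.g. Watercolor, Film Noir, Neon
        },
    },
    "3": {
        "class_type": "InstantID Generation",       # face + optional pose conditioning
        "inputs": {
            "face_image": "face_ref.png",           # identity reference
            "pose_image": "pose_ref.png",           # optional pose reference
            "model": ["1", 0],                      # link: node 1, output 0
            "prompt": ["2", 0],
            "negative_prompt": ["2", 1],
        },
    },
}
```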
Quick Start & Requirements
Installation involves cloning the repository into ComfyUI's custom_nodes directory and running pip install -r requirements.txt; installation via ComfyUI Manager is planned. Dependencies include onnxruntime-gpu==1.16.0; for CUDA 12, onnxruntime-gpu==1.17.0 must be installed manually. SDXL base models and the specific InstantID ControlNet and IPAdapter models must be downloaded and placed in their designated directories.
Highlighted Details
Style presets such as Watercolor, Film Noir, and Neon are applied through the dedicated ID Prompt_Styler node.
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The pose reference functionality is noted to affect only the facial region, differing from standard OpenPose implementations. CUDA 12 compatibility requires manually installing onnxruntime-gpu==1.17.0. The lack of an explicit license may pose restrictions for commercial adoption.
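As a quick sanity check for the onnxruntime/CUDA pairing described above, a snippet like the following (not part of the repository) can confirm which onnxruntime-gpu build is installed and whether the CUDA execution provider is actually available:

```python
# Sanity check (not part of this repo): report the installed onnxruntime-gpu build
# and confirm that the CUDA execution provider is available.
import onnxruntime as ort
import torch

print("onnxruntime version:", ort.__version__)        # expect 1.16.0 (CUDA 11) or 1.17.0 (CUDA 12)
print("torch-reported CUDA:", torch.version.cuda)     # e.g. "11.8" or "12.1"
print("available providers:", ort.get_available_providers())

assert "CUDAExecutionProvider" in ort.get_available_providers(), \
    "onnxruntime-gpu cannot see CUDA; check that its version matches your CUDA toolkit"
```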