RunPod worker for ComfyUI as a serverless API
This repository provides a serverless API for ComfyUI, enabling users to run complex Stable Diffusion workflows on demand via RunPod's infrastructure. It targets developers and power users of AI image generation who need scalable, API-driven access to ComfyUI without managing dedicated hardware. The primary benefit is the ability to integrate ComfyUI's advanced features into applications and pipelines through a simple API.
How It Works
The project packages ComfyUI into Docker images, pre-loaded with specific Stable Diffusion models (SDXL, SD3, FLUX.1) or as a base image for custom deployments. These Docker images are deployed as serverless endpoints on RunPod. Users interact with the endpoint via a REST API, submitting ComfyUI workflows as JSON and optionally providing input images. The API handles job queuing, GPU allocation, workflow execution, and returns generated images as base64 strings or uploads them to AWS S3.
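To illustrate the request/response flow, the sketch below submits a workflow to a deployed endpoint via RunPod's /runsync route. The payload field names ("workflow", "images", "name", "image") and the shape of the returned output are assumptions based on the worker's interface and may differ between versions; the endpoint ID and file paths are placeholders.

```python
import base64
import json
import os

import requests

ENDPOINT_ID = "your-endpoint-id"        # placeholder: your RunPod endpoint ID
API_KEY = os.environ["RUNPOD_API_KEY"]  # placeholder: your RunPod API key

# Workflow exported from ComfyUI in API (JSON) format.
with open("workflow_api.json") as f:
    workflow = json.load(f)

payload = {
    "input": {
        "workflow": workflow,
        # Optional input images, base64-encoded and named so that nodes in the
        # workflow can reference them (field names assumed).
        "images": [
            {
                "name": "input.png",
                "image": base64.b64encode(open("input.png", "rb").read()).decode(),
            }
        ],
    }
}

# /runsync blocks until the job completes; /run queues the job and returns an ID.
resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=600,
)
resp.raise_for_status()
result = resp.json()

# Without S3 configured, generated images come back as base64 strings.
# The exact output structure is assumed; inspect `result` for your version.
for i, img in enumerate(result.get("output", {}).get("images", [])):
    with open(f"output_{i}.png", "wb") as out:
        out.write(base64.b64decode(img["data"]))
```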
Quick Start & Requirements
Deploy one of the pre-built Docker images as a RunPod serverless endpoint (e.g., runpod/worker-comfyui:3.6.0-sdxl), or build on the base image for custom deployments.
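Once an endpoint is deployed, jobs can also be submitted asynchronously. The sketch below uses RunPod's generic /run and /status routes to queue a job and poll for its result; the endpoint ID is a placeholder, and the "workflow" field name is an assumption carried over from the example above.

```python
import os
import time

import requests

ENDPOINT_ID = "your-endpoint-id"        # placeholder
API_KEY = os.environ["RUNPOD_API_KEY"]
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
BASE = f"https://api.runpod.ai/v2/{ENDPOINT_ID}"

# Queue the job; /run returns immediately with a job ID instead of blocking.
job = requests.post(BASE + "/run", json={"input": {"workflow": {}}}, headers=HEADERS).json()
job_id = job["id"]

# Poll /status until the job finishes (or fails / is cancelled).
while True:
    status = requests.get(f"{BASE}/status/{job_id}", headers=HEADERS).json()
    if status["status"] in ("COMPLETED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print(status["status"])
# On success, status["output"] holds the worker's result (base64 images or S3 URLs).
```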
Highlighted Details
Maintenance & Community
The project is a fork of an earlier RunPod worker and acknowledges contributions from the ComfyUI creator and other RunPod worker projects. GitHub Actions are configured for automatic Docker image deployment.
Licensing & Compatibility
The repository does not explicitly state a license in the README. Compatibility for commercial use or closed-source linking would depend on the underlying ComfyUI license and any specific terms imposed by RunPod for serverless deployments.
Limitations & Caveats
The README mentions that local API access via WSL on Windows might not work, recommending direct Docker Desktop usage instead. Input image size is limited by RunPod's API request body size limits (10 MB for /run, 20 MB for /runsync).
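Because input images are base64-encoded into the JSON request body, an image grows by roughly a third before it counts against these limits. A sketch like the following (with threshold constants matching the limits above; the payload shape is the same assumption as in the earlier example) can catch oversized requests before submission.

```python
import base64
import json

# RunPod request-body limits noted above (approximate, in bytes).
RUN_LIMIT = 10 * 1024 * 1024      # /run
RUNSYNC_LIMIT = 20 * 1024 * 1024  # /runsync

def payload_size(workflow: dict, image_path: str) -> int:
    """Return the serialized request size, accounting for base64 inflation (~4/3)."""
    encoded = base64.b64encode(open(image_path, "rb").read()).decode()
    body = {"input": {"workflow": workflow,
                      "images": [{"name": image_path, "image": encoded}]}}
    return len(json.dumps(body).encode())

size = payload_size({}, "input.png")
if size > RUNSYNC_LIMIT:
    print(f"{size} bytes exceeds both limits; shrink the image or host it elsewhere")
elif size > RUN_LIMIT:
    print(f"{size} bytes: too large for /run, but fits /runsync")
else:
    print(f"{size} bytes: fits either endpoint")
```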