ComfyUI node suite for LLM workflow construction
Top 24.3% on sourcepulse
This project provides a comprehensive framework for building Large Language Model (LLM) workflows within ComfyUI, targeting users who want to integrate advanced AI capabilities into their existing image generation pipelines. It offers a wide array of nodes for multi-agent interactions, RAG (Retrieval-Augmented Generation), and social app integration, enabling the creation of sophisticated AI assistants and specialized workflows.
How It Works
The framework extends ComfyUI with custom nodes that abstract complex LLM interactions. It supports various LLM backends, including OpenAI-compatible APIs, Ollama for local models, and direct loading of Hugging Face or GGUF formats. Key features include agent-to-agent communication patterns (radial, ring), integration with TTS (Text-to-Speech) and OCR, and support for multimodal models (VLMs). This approach allows for flexible and modular construction of LLM-powered applications directly within a visual programming environment.
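For a sense of what a chat node hides, here is a minimal sketch of the kind of OpenAI-compatible request these backends accept. The endpoint, model name, and key below are placeholders (a local Ollama server is assumed), not the suite's actual internals.

```python
# Illustrative only: a bare OpenAI-compatible chat request of the kind the
# LLM nodes abstract. Endpoint, model, and key are placeholder assumptions.
import requests

def chat(prompt: str,
         base_url: str = "http://localhost:11434/v1",  # e.g. a local Ollama server
         model: str = "llama3",
         api_key: str = "sk-placeholder") -> str:
    resp = requests.post(
        f"{base_url}/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    # Standard OpenAI-style response shape: choices[0].message.content
    return resp.json()["choices"][0]["message"]["content"]

print(chat("Write a prompt for a cyberpunk street scene."))
```

In the node graph, the same pattern is composed visually: a loader node supplies the backend and model, and downstream nodes provide the messages and consume the response.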
Quick Start & Requirements
Clone the repository into ComfyUI's custom_nodes folder, then run pip install -r requirements.txt within the project directory. Specific models may require updated libraries (e.g., pip install -U transformers). Configuration, such as API keys and model settings, can be handled via config.ini or directly within ComfyUI nodes.
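As a rough illustration of that configuration step, the following sketch reads hypothetical settings with Python's configparser; the section and key names are assumptions, not the project's documented config.ini schema.

```python
# Hypothetical sketch of reading backend settings from config.ini.
# The [API] section and its keys are assumed names for illustration.
import configparser

config = configparser.ConfigParser()
config.read("config.ini")

base_url = config.get("API", "base_url", fallback="https://api.openai.com/v1")
api_key = config.get("API", "api_key", fallback="")
model_name = config.get("API", "model_name", fallback="gpt-4o-mini")

print(base_url, model_name)
```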
Highlighted Details
Provides a reasoning_content output for separating model reasoning from final responses, as sketched below.
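As an illustration (not the node's actual implementation), reasoning-capable OpenAI-compatible backends typically return the two fields side by side, and splitting them might look like this:

```python
# Hedged sketch: separating chain-of-thought from the user-facing reply.
# The exact payload shape depends on the backend; treat this as illustrative.
def split_reasoning(response: dict) -> tuple[str, str]:
    message = response["choices"][0]["message"]
    reasoning = message.get("reasoning_content", "")  # model reasoning, if provided
    answer = message.get("content", "")               # the final response
    return reasoning, answer

example = {
    "choices": [{
        "message": {
            "reasoning_content": "The user wants a short haiku about rain...",
            "content": "Soft rain on the roof / quiet streets begin to shine / night folds into sound",
        }
    }]
}

reasoning, answer = split_reasoning(example)
print("REASONING:", reasoning)
print("ANSWER:", answer)
```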
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats