ComfyUI nodes for interacting with Ollama
This repository provides custom nodes for ComfyUI, enabling seamless integration with Ollama for Large Language Model (LLM) inference and experimentation. It targets ComfyUI users looking to incorporate LLM capabilities into their visual workflows, offering a user-friendly way to leverage powerful AI models.
How It Works
The nodes interact with a running Ollama server via the ollama Python client. Key nodes include OllamaGenerateV2 for text generation with system prompts and context management, OllamaConnectivityV2 for server connection, and OllamaOptionsV2 for fine-grained control over API parameters. An OllamaVision node is also available for querying images with vision-capable models. The V2 nodes offer a more modular approach to chained LLM interactions.
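The flow above can be sketched directly with the ollama Python client. This is a minimal illustration of what the nodes do under the hood, not the nodes' actual implementation; the host address, model names, and helper names here are assumptions for the example.

```python
try:
    from ollama import Client  # pip install ollama
except ImportError:  # keep the sketch importable when the client is absent
    Client = None

OLLAMA_HOST = "http://127.0.0.1:11434"  # assumed default Ollama address


def build_options(temperature=0.7, num_predict=256, seed=None):
    """Assemble an options dict of API parameters, roughly what
    OllamaOptionsV2 exposes. Only set values are sent, so server
    defaults apply for the rest."""
    opts = {"temperature": temperature, "num_predict": num_predict, "seed": seed}
    return {k: v for k, v in opts.items() if v is not None}


def chained_generate(model="llama3.2"):
    """Two OllamaGenerateV2-style calls, threading the returned
    `context` from the first call into the second (context chaining)."""
    client = Client(host=OLLAMA_HOST)  # what OllamaConnectivityV2 configures
    first = client.generate(
        model=model,
        system="You are a concise assistant.",  # system prompt
        prompt="Name one use of LLMs in image workflows.",
        options=build_options(seed=42),
    )
    second = client.generate(
        model=model,
        prompt="Give a second example.",
        context=first["context"],  # carry conversation state forward
        options=build_options(seed=42),
    )
    return second["response"]

# chained_generate() requires a running Ollama server with the model pulled;
# an OllamaVision-style image query additionally passes images=[raw_bytes]
# to client.generate() with a vision-capable model such as llava.
```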
Quick Start & Requirements
1. git clone this repository into custom_nodes/comfyui-ollama.
2. Run pip install -r requirements.txt in the cloned directory.
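As shell commands, the two steps look like the following. The ComfyUI path and the repository URL are assumptions (the URL is inferred from the directory name and maintainer); adjust both to your setup.

```shell
# Assumes ComfyUI is checked out at ~/ComfyUI
cd ~/ComfyUI/custom_nodes
git clone https://github.com/stavsap/comfyui-ollama
cd comfyui-ollama
pip install -r requirements.txt
```

Restart ComfyUI afterwards so the new nodes are picked up.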
Highlighted Details
- OllamaGenerateAdvance and OllamaGenerateV2 support context for chaining.
- OllamaGenerateV2 supports image inputs for multimodal workflows.
- OllamaOptionsV2 provides full control over Ollama API parameters.

Maintenance & Community
stavsap is the primary contributor.

Licensing & Compatibility
Limitations & Caveats
The README does not specify the license, which may impact commercial use. It also lacks explicit details on tested ComfyUI versions or potential compatibility issues with other custom nodes.