Inference engine for agentic workflows using chained LLM steps
COMandA is a Go-based inference engine designed for orchestrating complex agentic workflows by chaining together Large Language Model (LLM) operations. It empowers users to build sophisticated, multi-step processes using simple YAML configurations, enabling the combination of strengths from various LLM providers and custom actions.
How It Works
COMandA processes user-defined YAML "recipes" that specify sequences of operations. Each step in a recipe can involve inputs (files, URLs, screenshots, database queries), LLM calls (supporting multiple providers like OpenAI, Anthropic, Google, Ollama, X.AI), custom actions, and output handling. The engine intelligently chains these steps, passing outputs from one to the inputs of the next, and supports parallel execution of independent steps for performance gains. It also includes advanced features like web scraping, image analysis, and database integration.
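A chained recipe of the kind described above can be sketched as follows; the step and field names here are illustrative assumptions, not a definitive reproduction of COMandA's schema (see the project's examples for the real format):

```yaml
# Hypothetical two-step recipe. Field names (input, model, action,
# output) and model identifiers are assumed for illustration.
summarize:
  input: report.txt            # local file fed to the first step
  model: gpt-4o                # any model from a configured provider
  action: "Summarize the key findings in five bullet points."
  output: STDOUT               # this output is chained into the next step

translate:
  input: STDIN                 # consumes the previous step's output
  model: claude-sonnet-4       # steps can mix providers
  action: "Translate the summary into French."
  output: summary_fr.txt
```

A recipe like this would typically be run with a command along the lines of comanda process recipe.yaml (the exact subcommand is an assumption); the engine passes each step's output into the next, and independent steps can run in parallel.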
Quick Start & Requirements
Install with:

go install github.com/kris-hansen/comanda@latest

Configure provider API keys via a .env file or the comanda configure command. Example recipes are available in the examples/ directory.

Highlighted Details
Maintenance & Community
The project is actively maintained by Kris Hansen. Contributions are welcomed via pull requests.
Licensing & Compatibility
Licensed under the MIT License, permitting commercial use and integration with closed-source projects.
Limitations & Caveats
Currently, only PostgreSQL is supported for database operations. The roadmap indicates planned support for branching logic, routing, and additional providers, suggesting some advanced orchestration features are still under development.