Agentic AI workflows via Docker containers
This project provides an agentic workflow engine that enables complex AI-driven tasks through Markdown-defined prompts and Dockerized tools. It targets developers who want to integrate LLMs into their workflows, offering a flexible, bring-your-own-LLM (BYO-LLM) approach that relies on Docker for sandboxing and tool execution.
How It Works
The core of the system is a "conversation loop" in which Markdown prompts, LLM responses, and tool outputs are processed iteratively. Docker containers serve as the tools, allowing LLMs to interact with a wide range of environments and perform complex actions. Prompts are version-controlled artifacts, and the system supports multi-model agents, so different LLMs can be assigned to specific tasks within a single workflow.
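To make "prompts as version-controlled artifacts" concrete, here is a minimal sketch that writes a prompt file into a directory you could commit to Git; the front-matter keys (model, tools) and the "# prompt user" heading are illustrative assumptions, not the project's documented schema.

# Hypothetical example: field names are assumptions for illustration only.
mkdir -p prompts/summarize_headers
cat > prompts/summarize_headers/prompt.md <<'EOF'
---
model: gpt-4        # which LLM backs this step (BYO-LLM)
tools:
  - name: curl      # a Dockerized tool the LLM is allowed to invoke
---

# prompt user

Fetch https://example.com and summarize the response headers.
EOF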
Quick Start & Requirements
docker run --rm --pull=always -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --mount type=volume,source=docker-prompts,target=/prompts \
  --mount type=bind,source=$HOME/.openai-api-key,target=/root/.openai-api-key \
  vonwig/prompts:latest \
  run --host-dir $PWD --user $USER --platform "$(uname -o)" \
  --prompts "github:docker/labs-githooks?ref=main&path=prompts/git_hooks"
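The command above bind-mounts an OpenAI API key from $HOME/.openai-api-key into the container; assuming OpenAI is your chosen LLM, a minimal setup looks like:

# Write the API key to the path the quick-start command bind-mounts,
# then restrict read access to the current user.
printf '%s' "$OPENAI_API_KEY" > "$HOME/.openai-api-key"
chmod 600 "$HOME/.openai-api-key"

The --prompts argument points at a Git ref (here, the git_hooks prompts in docker/labs-githooks), so the same command can be aimed at any version-controlled prompt set. Docker must be installed, and mounting /var/run/docker.sock gives the engine access to the host's Docker daemon so it can launch tool containers.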
Highlighted Details
Maintenance & Community
The repository was last updated about 1 week ago, but overall activity is listed as inactive.
Licensing & Compatibility
Limitations & Caveats
The project is described as a source for experiments, which suggests it is still at an early or evolving stage. Its reliance on specific Docker configurations (such as mounting the host's Docker socket) and the absence of a clear license could pose adoption challenges.