Aden HQ Hive is an outcome-driven AI agent development framework that enables the creation of self-improving AI agents without hardcoding workflows. It targets developers and power users who need adaptable, production-ready agents that can evolve automatically upon failure, significantly reducing manual workflow design and reactive error handling.
How It Works
The framework utilizes a "coding agent" to translate natural language goals into executable agent graphs and connection code. Worker agents, composed of SDK-wrapped nodes, execute these graphs. A control plane monitors execution, enforces policies, and manages costs. Crucially, upon detecting failures, the system captures data, employs the coding agent to evolve the graph, and redeploys, facilitating continuous self-improvement without manual intervention.
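The loop described above (goal → generated graph → monitored execution → failure-driven evolution → redeploy) can be sketched as follows. This is an illustrative sketch only: every name here (`CodingAgent`, `Graph`, `run_graph`, `control_plane_loop`) is a hypothetical stand-in, not Hive's actual SDK API.

```python
# Illustrative sketch of the goal -> graph -> execute -> evolve loop.
# All names are hypothetical; they do not mirror Hive's real API.
from dataclasses import dataclass, field


@dataclass
class Graph:
    version: int = 1
    nodes: list = field(default_factory=list)


class CodingAgent:
    def generate(self, goal: str) -> Graph:
        # Translate a natural-language goal into an executable agent graph.
        return Graph(nodes=[f"node for: {goal}"])

    def evolve(self, graph: Graph, failure: dict) -> Graph:
        # Use captured failure data to produce a revised graph.
        return Graph(version=graph.version + 1,
                     nodes=graph.nodes + [f"fix: {failure['error']}"])


def run_graph(graph: Graph) -> dict:
    # Stand-in for worker-agent execution; fails until the graph evolves once.
    if graph.version < 2:
        return {"ok": False, "error": "tool timeout"}
    return {"ok": True}


def control_plane_loop(goal: str, max_attempts: int = 3) -> Graph:
    agent = CodingAgent()
    graph = agent.generate(goal)
    for _ in range(max_attempts):
        result = run_graph(graph)
        if result["ok"]:
            return graph  # deployed graph now meets the goal
        # Failure detected: capture data, evolve the graph, redeploy.
        graph = agent.evolve(graph, result)
    raise RuntimeError("goal not met within attempt budget")


final = control_plane_loop("summarize daily support tickets")
```

The key property the sketch shows is that no workflow is hardcoded: the control plane only loops on outcomes, and graph changes come from the coding agent reacting to captured failures.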
Quick Start & Requirements
- Primary install/run command: clone the repository, copy `config.yaml.example` to `config.yaml`, run `npm run setup`, then `docker compose up`.
- Non-default prerequisites: Docker (v20.10+), Docker Compose (v2.0+).
- Access: Dashboard at http://localhost:3000, API at http://localhost:4000.
- Links: documentation at adenhq.com, plus a Self-Hosting Guide, Changelog, and issue reporting.
Highlighted Details
- Goal-Driven Development: Define objectives in natural language; a coding agent generates the agent graph and connection code.
- Self-Adapting Agents: The framework captures failures, updates objectives, and evolves the agent graph automatically.
- Dynamic Node Connections: Connection code is generated by LLMs based on goals, not predefined.
- Human-in-the-Loop: Intervention nodes allow human input with configurable timeouts and escalation policies.
- Real-time Observability: WebSocket streaming provides live monitoring of agent execution, decisions, and node-to-node communication.
- Cost & Budget Control: Features spending limits, throttles, and automatic model degradation policies.
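As a sketch of the cost-control idea above (a spending limit with automatic model degradation), assuming hypothetical model names and per-call prices — this is not Hive's actual policy engine:

```python
# Hypothetical budget guard: tracks spend against a limit and degrades to
# cheaper models as budget is consumed. Names and prices are illustrative;
# costs are in integer cents to avoid float drift.
MODEL_TIERS = [
    ("premium-model", 3),   # cost per call, in cents (illustrative)
    ("standard-model", 1),
    ("local-model", 0),     # e.g. an Ollama-hosted free fallback
]


class BudgetGuard:
    def __init__(self, limit_cents: int):
        self.limit = limit_cents
        self.spent = 0

    def pick_model(self) -> str:
        # Degrade: pick the best tier whose cost fits the remaining budget.
        remaining = self.limit - self.spent
        for name, cost in MODEL_TIERS:
            if cost <= remaining:
                return name
        return MODEL_TIERS[-1][0]  # free/local fallback

    def record(self, model: str) -> None:
        self.spent += dict(MODEL_TIERS)[model]


guard = BudgetGuard(limit_cents=5)
calls = []
for _ in range(4):
    model = guard.pick_model()
    calls.append(model)
    guard.record(model)
```

With a 5-cent limit, successive calls step down from the premium tier to the standard tier and finally to the local fallback as the budget runs out, which is the degradation behavior the feature list describes.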
Maintenance & Community
- Community discussions and support are primarily handled via Discord.
- Active presence on Twitter/X (@adenhq) and LinkedIn.
- Contribution guidelines are available in CONTRIBUTING.md.
- The project advertises open positions for engineering, research, and go-to-market roles.
Licensing & Compatibility
- License: Apache License 2.0.
- Compatibility: Designed for production and self-hosting. Supports Python and JavaScript/TypeScript SDKs. Integrates with 100+ LLM providers via LiteLLM, including local models (e.g., Ollama). Explicitly states no dependencies on LangChain, CrewAI, or similar frameworks.
Limitations & Caveats
- Cloud deployment and Kubernetes-ready configurations are noted as being on the roadmap.
- While telemetry data is collected for monitoring, content capture (prompts and responses) is configurable and remains within the user's infrastructure when self-hosted.
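A self-hosted configuration might express that content-capture toggle roughly as follows; the keys shown are hypothetical, not the framework's documented schema — consult the shipped `config.yaml.example` for the real options.

```yaml
# Hypothetical shape only; real keys live in config.yaml.example.
telemetry:
  enabled: true            # operational metrics for monitoring
  capture_content: false   # keep prompts/responses out of telemetry
```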