a-agmon: High-performance Rust SDK for interactive AI agent workflows
Top 97.1% on SourcePulse
A high-performance, type-safe framework for building interactive multi-agent workflow systems in Rust. It combines a core graph execution engine (graph-flow) with Rust-native LLM integration (Rig) to enable complex, stateful AI agent orchestration for production environments, with an emphasis on performance, flexibility, and robust state management.
How It Works
The framework is built around two core Rust crates: graph-flow for orchestrating stateful workflows via a graph execution engine, and Rig for seamless LLM agent capabilities. It adopts LangGraph's philosophy of combining graph execution with LLM integration but is implemented from scratch in Rust for enhanced performance and type safety. Workflows are defined as directed graphs where nodes are tasks, and edges represent transitions. The system supports flexible execution models, including step-by-step and continuous execution, with built-in features for session management, context sharing, conditional routing, and human-in-the-loop interactions.
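The execution model described above can be sketched in a short, self-contained example. This is an illustrative model using only the standard library, not graph-flow's actual API: the Graph and Task types here are simplified stand-ins, and while the real crate does expose NextAction::WaitForInput, its tasks are async and its session handling is richer than shown.

```rust
use std::collections::HashMap;

// Simplified model of a task's outcome (the real crate's enum has more variants).
#[derive(Debug, PartialEq)]
enum NextAction {
    Continue,     // proceed along the outgoing edge
    WaitForInput, // pause the session until a human responds
    End,          // workflow finished
}

// A task is a node: it reads/writes shared context and picks a next action.
type Task = fn(&mut HashMap<String, String>) -> NextAction;

struct Graph {
    tasks: HashMap<&'static str, Task>,
    edges: HashMap<&'static str, &'static str>, // linear transitions
}

impl Graph {
    // Run tasks from `start` until one waits or ends, returning the node
    // where execution stopped so the session can later resume from it.
    fn run(&self, start: &'static str, ctx: &mut HashMap<String, String>) -> &'static str {
        let mut current = start;
        loop {
            match (self.tasks[current])(ctx) {
                NextAction::Continue => match self.edges.get(current) {
                    Some(&next) => current = next,
                    None => return current,
                },
                NextAction::WaitForInput | NextAction::End => return current,
            }
        }
    }
}

fn classify(ctx: &mut HashMap<String, String>) -> NextAction {
    ctx.insert("intent".into(), "claim".into());
    NextAction::Continue
}

fn ask_user(ctx: &mut HashMap<String, String>) -> NextAction {
    // Conditional routing: pause only if the user has not answered yet.
    if ctx.contains_key("user_reply") { NextAction::Continue } else { NextAction::WaitForInput }
}

fn finish(_ctx: &mut HashMap<String, String>) -> NextAction {
    NextAction::End
}

fn main() {
    let graph = Graph {
        tasks: HashMap::from([
            ("classify", classify as Task),
            ("ask_user", ask_user as Task),
            ("finish", finish as Task),
        ]),
        edges: HashMap::from([("classify", "ask_user"), ("ask_user", "finish")]),
    };

    // First run pauses at the human-in-the-loop node.
    let mut ctx = HashMap::new();
    let stopped_at = graph.run("classify", &mut ctx);
    println!("paused at: {stopped_at}"); // paused at: ask_user

    // After the user replies, resuming from the same node completes the flow.
    ctx.insert("user_reply".into(), "yes".into());
    let stopped_at = graph.run(stopped_at, &mut ctx);
    println!("finished at: {stopped_at}"); // finished at: finish
}
```

The key idea this sketch captures is that pausing is just a return value: the engine hands control back to the caller, and resuming is a fresh `run` from the stored node with the persisted context.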
Quick Start & Requirements
- Add graph-flow to your Cargo.toml with the rig feature: graph-flow = { version = "0.2.3", features = ["rig"] }.
- Set an LLM API key (the OPENROUTER_API_KEY environment variable) and, optionally, a database URL (DATABASE_URL) for PostgreSQL persistence.
- Start from the bundled examples (examples/simple_example.rs, examples/complex_example.rs, examples/recommendation_flow.rs) and production-ready services such as insurance-claims-service/ and recommendation-service/.
Highlighted Details
- LLM integration through the Rig crate for natural language processing and generation within workflows.
- Parallel execution of child tasks via FanOutTask.
- Human-in-the-loop support (NextAction::WaitForInput) to pause a workflow and await human intervention or approval.
Maintenance & Community
The repository does not explicitly detail community channels (like Discord/Slack), notable contributors, sponsorships, or a public roadmap within the provided README.
Licensing & Compatibility
Limitations & Caveats
The FanOutTask for parallel execution has several limitations: child tasks cannot control flow (e.g., they cannot return WaitForInput); all children share the same context, which requires careful coordination; and execution defaults to linear continuation after the fan-out. Deeper customization requires rewriting tasks and modifying the graph structure in the core framework.
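The shared-context caveat can be illustrated with a small standard-library sketch (again, not graph-flow's actual API): parallel children writing into one context avoid clobbering each other by namespacing their keys, and the parent continues linearly only after every child has joined.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Fan out child tasks over one shared context. Each child writes only to its
// own namespaced key ("child:<id>"), so concurrent writes never collide, and
// the children cannot pause or redirect the flow; they just produce results.
fn fan_out(ctx: Arc<Mutex<HashMap<String, String>>>, children: Vec<&'static str>) {
    let handles: Vec<_> = children
        .into_iter()
        .map(|id| {
            let ctx = Arc::clone(&ctx);
            thread::spawn(move || {
                let result = format!("result from {id}");
                ctx.lock().unwrap().insert(format!("child:{id}"), result);
            })
        })
        .collect();
    // Linear continuation: the parent resumes only after all children finish.
    for h in handles {
        h.join().unwrap();
    }
}

fn main() {
    let ctx = Arc::new(Mutex::new(HashMap::new()));
    fan_out(Arc::clone(&ctx), vec!["score", "summarize", "tag"]);
    let ctx = ctx.lock().unwrap();
    println!("{}", ctx.len()); // 3
}
```

The per-child key convention here is one simple coordination discipline; without it, two children writing the same key would race, which is exactly the kind of hazard the caveat above warns about.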
Last updated: 3 months ago · Inactive