liquidos-ai: Build and deploy autonomous AI systems with a Rust multi-agent framework
Top 93.5% on SourcePulse
AutoAgents is a modern multi-agent framework written in Rust that enables developers to build, deploy, and coordinate intelligent agents powered by LLMs. It targets engineers and researchers creating complex AI systems, offering a performant, safe, and scalable foundation for cloud-native, edge, and browser-based applications. The framework's modular design and WASM compilation support make it well suited to flexible deployment.
How It Works
AutoAgents leverages Rust's performance and safety guarantees to provide a robust multi-agent system. Its core architecture is modular, allowing for swappable components like memory backends (currently supporting sliding window, with persistent storage planned) and executors (ReAct, basic, with streaming). It features provider-agnostic LLM integration, supporting numerous cloud and local models, and offers native WASM compilation for direct deployment in web browsers, enabling sandboxed tool execution via a WASM runtime.
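To make the "swappable memory backend" idea concrete, here is a minimal Rust sketch of a sliding-window memory behind a pluggable trait. The trait and type names (`MemoryBackend`, `SlidingWindowMemory`, `Message`) are illustrative assumptions, not AutoAgents' actual API; consult the framework's documentation for the real interfaces.

```rust
// Hypothetical sketch of a swappable memory backend in the spirit of
// AutoAgents' modular design. All names here are illustrative, not the
// framework's actual API.

use std::collections::VecDeque;

/// A conversation message kept in agent memory.
#[derive(Clone, Debug, PartialEq)]
struct Message {
    role: String,
    content: String,
}

/// Pluggable memory backend: agents depend only on this trait, so
/// backends (sliding window now, persistent storage later) are swappable.
trait MemoryBackend {
    fn store(&mut self, msg: Message);
    fn recall(&self) -> Vec<Message>;
}

/// Sliding-window memory: keeps only the `capacity` most recent messages.
struct SlidingWindowMemory {
    capacity: usize,
    window: VecDeque<Message>,
}

impl SlidingWindowMemory {
    fn new(capacity: usize) -> Self {
        Self { capacity, window: VecDeque::with_capacity(capacity) }
    }
}

impl MemoryBackend for SlidingWindowMemory {
    fn store(&mut self, msg: Message) {
        if self.window.len() == self.capacity {
            self.window.pop_front(); // evict the oldest message
        }
        self.window.push_back(msg);
    }

    fn recall(&self) -> Vec<Message> {
        self.window.iter().cloned().collect()
    }
}

fn main() {
    let mut mem = SlidingWindowMemory::new(2);
    for (role, content) in [("user", "hi"), ("assistant", "hello"), ("user", "bye")] {
        mem.store(Message { role: role.into(), content: content.into() });
    }
    // Only the two most recent messages survive the window.
    for m in mem.recall() {
        println!("{}: {}", m.role, m.content);
    }
}
```

An agent generic over `MemoryBackend` could later switch to a persistent store without changing its own logic, which is the point of the modular design described above.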
Quick Start & Requirements
Install the project's git hooks (lefthook install) and build with cargo build --release. The autoagents-cli crate provides a command-line interface: use autoagents run --workflow <workflow.yaml> to execute workflows, and autoagents serve --workflow <workflow.yaml> to expose them via a REST API.
Highlighted Details
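As a rough illustration, a workflow file passed to autoagents run might look like the following. The schema here is an assumption for illustration only (every field name is hypothetical); check the project's bundled examples for the real format.

```yaml
# Hypothetical workflow.yaml — field names are illustrative,
# not the framework's actual schema.
name: summarize-and-review
agents:
  - id: summarizer
    provider: openai        # provider-agnostic LLM integration
    model: gpt-4o-mini
    executor: react         # ReAct-style executor
    memory: sliding_window  # currently supported memory backend
```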
The autoagents CLI runs and serves workflows defined in YAML, facilitating easy deployment and management.
Maintenance & Community
The project is actively developed by the Liquidos AI team and community contributors. Community engagement is encouraged via GitHub Issues, Discussions, and a Discord server.
Licensing & Compatibility
AutoAgents is dual-licensed under the MIT License and Apache License 2.0, offering flexibility for commercial use and integration into closed-source projects.
Limitations & Caveats
Persistent memory storage is listed as "Coming Soon." Some LLM backends, such as mistral.rs, Burn, and ONNX, are marked as experimental or under development, indicating potential instability or incomplete features.