C++ framework for reinforcement learning and planning problems
This C++ framework provides a comprehensive solution for representing and solving Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs), targeting AI researchers and practitioners. It offers Python bindings for ease of use and extensibility, enabling efficient implementation and experimentation with various reinforcement learning and planning algorithms.
How It Works
The AI-Toolbox is built in C++ for performance and offers a clean, extensible interface. It supports defining models directly in code or by parsing Cassandra's POMDP format. A key advantage is its ability to integrate with native Python generative models, allowing seamless use of environments like OpenAI Gym. The framework includes a wide array of utilities for combinatorics, polytopes, linear programming, sampling, belief updating, and more, facilitating complex AI problem-solving.
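To make the "models defined directly in code" workflow concrete, below is a minimal C++ sketch of a toy two-state MDP solved with value iteration. It mirrors the MDP interface described in the library's documentation (AIToolbox::MDP::Model, ValueIteration, Policy), but the toy dynamics, table shapes, and constructor arguments are illustrative assumptions; verify the exact signatures against the headers of the version you install.

#include <cstddef>
#include <iostream>
#include <tuple>
#include <vector>

#include <AIToolbox/MDP/Model.hpp>
#include <AIToolbox/MDP/Algorithms/ValueIteration.hpp>
#include <AIToolbox/MDP/Policies/Policy.hpp>

int main() {
    constexpr size_t S = 2, A = 2;   // toy problem: two states, two actions

    // Transition and reward tables indexed as [s][a][s'].
    // The 3D shape is an assumption; check the setter documentation
    // of your installed version.
    using Table3D = std::vector<std::vector<std::vector<double>>>;
    Table3D transitions(S, std::vector<std::vector<double>>(A, std::vector<double>(S, 0.0)));
    Table3D rewards    (S, std::vector<std::vector<double>>(A, std::vector<double>(S, 0.0)));

    // Action 0 stays in the current state, action 1 switches state;
    // landing in state 1 yields a reward of 1.
    for (size_t s = 0; s < S; ++s) {
        transitions[s][0][s]     = 1.0;
        transitions[s][1][1 - s] = 1.0;
    }
    rewards[0][1][1] = 1.0;
    rewards[1][0][1] = 1.0;

    AIToolbox::MDP::Model model(S, A, 0.9);   // 0.9 = discount factor
    model.setTransitionFunction(transitions);
    model.setRewardFunction(rewards);

    // Plan over a finite horizon of 100 steps.
    AIToolbox::MDP::ValueIteration solver(100);
    auto solution = solver(model);

    // Extract a greedy policy from the computed value function.
    AIToolbox::MDP::Policy policy(model.getS(), model.getA(), std::get<1>(solution));
    for (size_t s = 0; s < S; ++s)
        std::cout << "state " << s << " -> action " << policy.sampleAction(s) << '\n';
}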
Quick Start & Requirements
Build the library by running cmake .. and make within a build directory. Python bindings can be enabled by passing -DMAKE_PYTHON=1 to cmake.
Highlighted Details
Maintenance & Community
An accompanying paper was published in the Journal of Machine Learning Research (JMLR) in 2020 by Bargiacchi, Roijers, and Nowé. Further community engagement details are not explicitly provided in the README.
Licensing & Compatibility
The project is licensed under the GNU General Public License v3.0 (GPL-3.0-or-later). This is a copyleft license: derivative works must be distributed under the same terms, which can complicate linking the library with, or distributing it as part of, closed-source software.
Limitations & Caveats
Factored/Joint Multi-Agent Bandits and MDPs are not yet available in the Python bindings. The GPL-3.0 license restricts integration into proprietary, closed-source software. Building from source requires a C++ toolchain, CMake, and third-party libraries such as Boost and Eigen.