RL framework for the Webots robot simulator, offering an OpenAI Gym-compatible interface
Deepbots provides a Python framework for integrating Reinforcement Learning (RL) algorithms with the Webots robot simulator. It acts as middleware between the simulator and the RL agent, exposing a familiar OpenAI Gym-like interface so that researchers and developers in robotics and AI can train robots in simulation without writing Webots-specific boilerplate.
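To make the Gym-like surface concrete, the following is a minimal sketch of a training loop against a deepbots-based environment. CartPoleEnv is a hypothetical user-defined environment class (a sketch of one appears under How It Works), the random policy stands in for a real RL agent, and the script is meant to run as a Webots controller rather than a standalone program.

```python
# Conceptual sketch of the agent side of the Gym-like interface.
# CartPoleEnv is a hypothetical user-defined deepbots environment (see the
# sketch under "How It Works"); in practice this code runs as a Webots
# controller, not as a standalone script.
env = CartPoleEnv()

for episode in range(10):
    observation = env.reset()               # returns the environment's default observation
    done = False
    episode_return = 0.0
    while not done:
        action = env.action_space.sample()  # random policy standing in for an RL agent
        observation, reward, done, info = env.step(action)  # Gym-style 4-tuple
        episode_return += reward
    print(f"episode {episode}: return {episode_return}")
```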
How It Works
Deepbots implements the standard RL agent-environment loop: the agent selects an action, and the environment returns an observation, a reward, and a done flag. It does so by abstracting the communication between Webots' Supervisor (which has global world knowledge and control) and the Robot Controller (which handles robot-specific sensors and actuators). Two communication schemes are offered: an emitter-receiver scheme for flexible multi-robot setups, and a combined Robot-Supervisor controller for high-dimensional observations (such as camera images) where the emitter-receiver overhead becomes prohibitive.
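The sketch below illustrates the combined Robot-Supervisor scheme, loosely following the pattern used in deepbots' CartPole tutorial. The import path, base-class name (RobotSupervisorEnv), required overrides, and Webots device names are assumptions that may differ across deepbots releases; treat this as a sketch of the pattern rather than a definitive API reference.

```python
# Sketch of an environment using the combined Robot-Supervisor scheme.
# Assumed import path; older releases exposed a RobotSupervisor base class
# under deepbots.supervisor.controllers.robot_supervisor instead.
import numpy as np
from gym.spaces import Box, Discrete
from deepbots.supervisor import RobotSupervisorEnv


class CartPoleEnv(RobotSupervisorEnv):
    def __init__(self):
        super().__init__()
        # Observation: cart position/velocity, pole angle/velocity (illustrative bounds)
        self.observation_space = Box(low=-np.inf, high=np.inf, shape=(4,), dtype=np.float64)
        self.action_space = Discrete(2)  # push cart left or right

        # Device handles come from Webots' Robot API, which the base class wraps.
        # "wheel1"/"wheel2" are hypothetical device names from the world file.
        self.wheels = [self.getDevice(name) for name in ("wheel1", "wheel2")]
        for wheel in self.wheels:
            wheel.setPosition(float("inf"))  # velocity control mode
            wheel.setVelocity(0.0)

    def get_observations(self):
        # Read sensors / Supervisor fields and return the observation vector
        return [0.0, 0.0, 0.0, 0.0]  # placeholder

    def get_default_observation(self):
        # Returned by reset() before the first step of an episode
        return [0.0 for _ in range(self.observation_space.shape[0])]

    def get_reward(self, action):
        return 1.0  # e.g. +1 per timestep the pole stays upright

    def is_done(self):
        return False  # e.g. True when the pole falls or the cart leaves the track

    def apply_action(self, action):
        # Translate the discrete action into motor commands
        velocity = 5.0 if action == 0 else -5.0
        for wheel in self.wheels:
            wheel.setVelocity(velocity)

    def get_info(self):
        # Some versions also expect this Gym-style hook
        return {}
```

In this scheme the class is typically assigned as the controller of the robot node itself, with the node's supervisor field enabled, so a single controller both drives the robot and resets or queries the simulated world.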
Quick Start & Requirements
pip install deepbots
A separate Webots installation and Python 3 are also required; deepbots controllers run inside Webots as standard robot and supervisor controllers.
Highlighted Details
Maintenance & Community
The project acknowledges contributions from Manos Kirtas, Kostas Tsampazis, and others. It has received funding from the European Union's Horizon 2020 program (grant agreement No 871449, OpenDR). Contributions are welcome and are acknowledged following the all-contributors specification.
Licensing & Compatibility
The repository does not explicitly state a license in the README. This requires clarification for commercial use or integration into closed-source projects.
Limitations & Caveats
The emitter-receiver communication scheme can introduce overhead for high-dimensional observations like camera images. The combined Robot-Supervisor scheme, while mitigating this, is less flexible and limited to one robot and one supervisor. The license is not specified, posing a potential adoption blocker.