Multi-agent RL benchmark for continuous robotic control
This repository provides a benchmark for continuous multi-agent robotic control, extending OpenAI's Mujoco Gym environments. It is designed for researchers and practitioners in multi-agent reinforcement learning (MARL), offering a standardized platform for evaluating decentralized cooperative control algorithms. The primary benefit is a collection of pre-configured multi-agent scenarios based on popular robotic simulations.
How It Works
The library implements multi-agent configurations by partitioning single-agent Mujoco environments among agents. Observations are customizable: agent_obsk controls how far each agent can observe based on proximity, while k_categories and global_categories select which properties are included in local observations and in the global state. This approach allows for flexible and scalable MARL experiments, enabling agents to perceive their local or global environment state as needed for cooperative tasks.
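For illustration, the sketch below constructs a two-agent HalfCheetah environment with these observation settings. It is a minimal sketch: the MujocoMulti entry point and the env_args keys shown (scenario, agent_conf, agent_obsk, k_categories, global_categories) are assumptions based on this fork's layout, so check the source under ./src for the exact names in your checkout.

```python
# A sketch of configuring agent observations; MujocoMulti and the
# env_args keys below are assumptions based on this fork's layout.
from multiagent_mujoco.mujoco_multi import MujocoMulti

env_args = {
    "scenario": "HalfCheetah-v2",  # single-agent Mujoco task to partition
    "agent_conf": "2x3",           # 2 agents controlling 3 joints each
    "agent_obsk": 1,               # observe joints up to distance k=1
    # Observable properties per distance, "|"-separated:
    # distance 0 sees qpos, qvel, cfrc_ext; distance 1 sees qpos only.
    "k_categories": "qpos,qvel,cfrc_ext|qpos",
    # Properties included in the global state (e.g. for a centralized critic).
    "global_categories": "qpos,qvel",
    "episode_limit": 1000,
}
env = MujocoMulti(env_args=env_args)
```

Here agent_obsk=1 lets each agent also observe joints one hop away in the robot's joint graph, with k_categories restricting which properties are visible at each distance.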
Quick Start & Requirements
Add ./src to your PYTHONPATH, point LD_LIBRARY_PATH to ~/.mujoco/mujoco210/bin, and set LD_PRELOAD to /usr/lib/x86_64-linux-gnu/libGLEW.so for rendering.
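Once the paths above are set, a random-action rollout might look like the following. This is a sketch assuming the SMAC-style interface (get_env_info, get_obs, get_state, step) exposed by this fork; treat the method names as assumptions if your version differs.

```python
# A minimal random-action rollout; the SMAC-style methods used here
# (get_env_info, get_obs, get_state, step) are assumptions about this fork.
import numpy as np
from multiagent_mujoco.mujoco_multi import MujocoMulti

env = MujocoMulti(env_args={"scenario": "HalfCheetah-v2",
                            "agent_conf": "2x3",
                            "agent_obsk": 1,
                            "episode_limit": 1000})
info = env.get_env_info()
n_agents, n_actions = info["n_agents"], info["n_actions"]

env.reset()
terminated, episode_reward = False, 0.0
while not terminated:
    obs = env.get_obs()      # list of per-agent local observations
    state = env.get_state()  # global state, useful for centralized training
    # Each agent samples a continuous action in [-1, 1] per controlled joint.
    actions = [np.random.uniform(-1.0, 1.0, n_actions) for _ in range(n_agents)]
    reward, terminated, _ = env.step(actions)
    episode_reward += reward
env.close()
print("episode reward:", episode_reward)
```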
Highlighted Details
Maintenance & Community
This repository is a fork of OpenAI's original Mujoco Gym environments. The README notes that a maintained version with fixes and broader support is available in Gymnasium Robotics (https://github.com/Farama-Foundation/Gymnasium-Robotics).
Licensing & Compatibility
The README does not explicitly state a license. However, the project is based on OpenAI Gym, which is released under the MIT license. Use in commercial or closed-source projects would still require explicit license confirmation for this fork.
Limitations & Caveats
The project requires specific, older versions of OpenAI Gym (0.10.8) and Mujoco (2.1), which may pose installation challenges. The README points to Gymnasium Robotics as a more actively maintained and compatible alternative.