This repository provides implementations of multi-agent deep reinforcement learning (MADRL) environments, specifically Pursuit Evasion, Waterworld, Multi-Agent Walker, and Multi-Ant. It is targeted at researchers and practitioners in multi-agent systems and reinforcement learning, offering a foundation for developing and testing cooperative and competitive multi-agent control strategies.
How It Works
MADRL builds on a custom fork of rllab, a reinforcement learning library, to implement its multi-agent environments. This lets deep neural-network policies be represented and trained within a shared multi-agent simulation framework, and rllab's training infrastructure supports experimentation with different training protocols, such as decentralized control.
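The environments follow a multi-agent variant of the familiar Gym-style interaction loop, with one observation, action, and reward per agent. The sketch below illustrates that pattern; the import path, class name, and attribute names are assumptions based on the repository layout (madrl_environments/walker/multi_walker.py) and may need adjustment.

import numpy as np
from madrl_environments.walker.multi_walker import MultiWalkerEnv  # assumed module path

env = MultiWalkerEnv(n_walkers=2)   # same setting as --n_walkers 2 in the runner below
obs = env.reset()                   # one observation per agent
for _ in range(100):
    # Under decentralized control, each agent's policy maps its own observation
    # to its own action; random actions stand in for learned policies here.
    actions = [agent.action_space.sample() for agent in env.agents]
    obs, rewards, done, info = env.step(np.asarray(actions))
    if done:
        obs = env.reset()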
Quick Start & Requirements
Clone the repository with its submodules:
git clone --recursive git@github.com:sisl/MADRL.git
Install the dependencies specified in rllab/environment.yml.
Add the repository and its submodules to PYTHONPATH:
export PYTHONPATH=$(pwd):$(pwd)/rltools:$(pwd)/rllab:$PYTHONPATH
Run an example experiment (decentralized multi-walker training):
python3 runners/run_multiwalker.py rllab --control decentralized --policy_hidden 100,50,25 --n_iter 200 --n_walkers 2 --batch_size 24000 --curriculum lessons/multiwalker/env.yaml
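As a quick sanity check that the PYTHONPATH line took effect, the following sketch attempts to import the bundled packages; the module names are assumptions based on the repository layout.

import importlib

for module in ("madrl_environments", "rltools", "rllab"):
    try:
        importlib.import_module(module)
        print(module, "importable")
    except ImportError as exc:
        print(module, "not found:", exc)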
Highlighted Details
TensorFlow policy implementations from the bundled rllab fork are located under rllab/sandbox/rocky/tf/policies.
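A minimal import sketch, assuming the bundled fork keeps upstream rllab's TF sandbox layout (paths are illustrative, not verified against this fork):

from sandbox.rocky.tf.policies.gaussian_mlp_policy import GaussianMLPPolicy        # continuous actions
from sandbox.rocky.tf.policies.categorical_mlp_policy import CategoricalMLPPolicy  # discrete actions

# The runner flag --policy_hidden 100,50,25 corresponds to a three-layer MLP,
# e.g. hidden_sizes=(100, 50, 25) when constructing one of these policies.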
Maintenance & Community
The README notes that maintained versions of the first three environments (Pursuit Evasion, Waterworld, and Multi-Agent Walker) are included with PettingZoo. No other community or maintenance information is provided.
Licensing & Compatibility
The repository does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.
Limitations & Caveats
The project relies on a specific, forked version of rllab, which may pose challenges for integration with current reinforcement learning ecosystems. The README also points to PettingZoo for more maintained versions of some environments, suggesting potential deprecation or reduced maintenance of the original MADRL implementations.