MARLlib by Replicable-MARL

MARL library for developing, training, and testing multi-agent RL algorithms

Created 3 years ago · 1,130 stars · Top 34.6% on sourcepulse

View on GitHub
Project Summary

MARLlib is a comprehensive library for multi-agent reinforcement learning (MARL), designed to simplify the development, training, and testing of MARL algorithms. It targets researchers and practitioners in MARL, offering a unified platform built on Ray RLlib to handle diverse tasks and environments with a focus on scalability and ease of use.

How It Works

MARLlib leverages Ray RLlib for distributed execution and provides a unified API for various MARL algorithms and environments. It supports flexible parameter-sharing strategies (share, group, separate, customizable) and diverse model architectures (MLP, CNN, GRU, LSTM). This approach allows researchers to easily switch between environments and algorithms, experiment with different agent interactions, and customize model components without deep knowledge of underlying MARL complexities.
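
The unified API keeps an experiment down to three choices: an environment, an algorithm, and a model configuration. The sketch below follows the quick-start pattern from the MARLlib README; the specific environment, map, and model options are illustrative and may vary between versions.

```python
from marllib import marl

# Choose a task: environment name plus a map/scenario within it.
env = marl.make_env(environment_name="mpe", map_name="simple_spread")

# Choose an algorithm and a source of tuned hyperparameters.
mappo = marl.algos.mappo(hyperparam_source="mpe")

# Choose the agent model architecture (e.g. an MLP or a recurrent core).
model = marl.build_model(env, mappo, {"core_arch": "mlp", "encode_layer": "128-128"})

# Swapping the strings above (e.g. "mpe" -> "smac", "mlp" -> "gru") changes the
# experiment without touching the rest of the code. Training is launched with
# mappo.fit(...), sketched after the Quick Start list below.
```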

Quick Start & Requirements

  • Installation: Requires a Linux OS. Clone the repository and install dependencies with pip install -r requirements.txt, then apply the RLlib patches (python marllib/patch/add_patch.py -y PyPI).
  • Dependencies: Python 3.8 or 3.9 is recommended; gym around version 0.20.0 is suggested.
  • Resources: A Docker image and devcontainer setup are provided. GPU acceleration is enabled via the num_gpus argument of fit() (see the sketch after this list).
  • Documentation: MARLlib Documentation
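
Continuing the sketch from the previous section, training is launched through fit(), where Ray RLlib resources and the parameter-sharing strategy are configured. Argument names such as num_gpus, num_workers, and share_policy follow the quick-start pattern in the README, though exact signatures may differ between versions; the values here are illustrative.

```python
# Launch distributed training via Ray RLlib (continuing the sketch above).
mappo.fit(
    env,
    model,
    stop={"timesteps_total": 1_000_000},  # stopping criterion for the run
    num_gpus=1,            # GPU acceleration, as noted in the list above
    num_workers=4,         # parallel rollout workers
    share_policy="group",  # parameter-sharing strategy: share / group / separate
    checkpoint_freq=100,   # save a checkpoint every 100 training iterations
)
```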

Highlighted Details

  • Supports 17 environments and 18 MARL algorithms, covering cooperative, collaborative, competitive, and mixed task modes.
  • Offers flexible parameter-sharing strategies and customizable agent model architectures.
  • Provides a Gym-like interface for easier environment integration and algorithm development (illustrated after this list).
  • Accepted for publication in JMLR, indicating academic rigor and community acceptance.
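
To make the Gym-like interface concrete, the toy environment below illustrates the dict-keyed multi-agent convention (one observation, action, and reward per agent id, plus an "__all__" done flag) that MARLlib builds on via RLlib's MultiAgentEnv. This is a generic illustration with hypothetical class and agent names, not MARLlib's exact classes or method signatures.

```python
import numpy as np


class TwoAgentToyEnv:
    """Toy cooperative environment with two agents and a fixed episode length."""

    def __init__(self, episode_len=25):
        self.agents = ["agent_0", "agent_1"]
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        self.t = 0
        # One observation per agent, keyed by agent id.
        return {aid: np.zeros(4, dtype=np.float32) for aid in self.agents}

    def step(self, action_dict):
        self.t += 1
        obs = {aid: np.random.rand(4).astype(np.float32) for aid in self.agents}
        # Cooperative tasks often share a single team reward.
        team_reward = float(sum(action_dict.values()))
        rewards = {aid: team_reward for aid in self.agents}
        dones = {"__all__": self.t >= self.episode_len}
        infos = {aid: {} for aid in self.agents}
        return obs, rewards, dones, infos


# Random-policy rollout over the dict-based interface.
env = TwoAgentToyEnv()
obs = env.reset()
done = False
while not done:
    actions = {aid: np.random.randint(0, 2) for aid in obs}
    obs, rewards, dones, infos = env.step(actions)
    done = dones["__all__"]
```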

Maintenance & Community

  • Updates from March through November 2023 added new environment support and compatibility fixes.
  • Published in JMLR; a development roadmap is available in ROADMAP.md.
  • Community support via GitHub Issues.

Licensing & Compatibility

  • The README does not explicitly state a license; verify licensing terms directly in the repository before commercial use or closed-source linking.

Limitations & Caveats

  • Currently only compatible with Linux operating systems.
  • The README notes that results produced by older versions may be inconsistent with current results, suggesting ongoing API evolution and potential breaking changes.
Health Check

  • Last commit: 8 months ago
  • Responsiveness: 1 week
  • Pull Requests (30d): 0
  • Issues (30d): 2
  • Star History: 75 stars in the last 90 days

Explore Similar Projects

Gymnasium by Farama-Foundation
  • Python API standard for single-agent reinforcement learning environments
  • 10k stars · Top 0.5% on sourcepulse · created 2 years ago · updated 1 week ago
  • Starred by Thomas Wolf (Cofounder of Hugging Face), Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), and 2 more.

stable-baselines3 by DLR-RM
  • PyTorch library for reinforcement learning algorithm implementations
  • 11k stars · Top 0.5% on sourcepulse · created 5 years ago · updated 1 week ago
  • Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), Thomas Wolf (Cofounder of Hugging Face), and 1 more.