Deep RL framework for StarCraft II tasks (Gym, Atari, MuJoCo also supported)
Reaver is a modular deep reinforcement learning framework designed for StarCraft II tasks, offering a flexible and performant solution for researchers and hobbyists. It aims to replicate DeepMind's state-of-the-art results in complex game environments, while also supporting popular benchmarks like Atari and MuJoCo.
How It Works
Reaver employs a modular architecture, decoupling agents, models, and environments for easy swapping and extension. It leverages shared memory and lock-free multiprocessing for significant performance gains (up to 1.5x in StarCraft II) on single-machine setups, a key advantage over IPC-based multiprocessing approaches. Configuration is managed via gin-config, allowing for easy hyperparameter tuning and experiment sharing.
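As a rough illustration of the gin-config pattern (the agent class and parameter names below are hypothetical, not Reaver's actual configurables), hyperparameters live in a text file and are injected into constructors at runtime:

import gin

# Hypothetical agent; gin injects constructor arguments from the config file.
@gin.configurable
class A2CAgent:
    def __init__(self, learning_rate=7e-4, entropy_coef=0.01, n_envs=4):
        self.learning_rate = learning_rate
        self.entropy_coef = entropy_coef
        self.n_envs = n_envs

# experiment.gin might contain lines such as:
#   A2CAgent.learning_rate = 3e-4
#   A2CAgent.n_envs = 16
gin.parse_config_file('experiment.gin')

agent = A2CAgent()  # picks up the values bound in experiment.gin

Because the config file fully specifies an experiment, sharing or reproducing a run amounts to sharing a single .gin file.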
Quick Start & Requirements
pip install reaver[gym,atari,mujoco]
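A minimal training sketch follows; the module paths and constructor signatures (reaver.envs.SC2Env, reaver.agents.A2C, obs_spec/act_spec accessors, and the run arguments) are assumptions for illustration and may not match the actual API, so consult the repository README for the exact entry points:

# Minimal training sketch (assumed API; check the Reaver README for exact names).
import reaver as rvr

# Environments, agents, and models are separate modules that can be swapped independently.
env = rvr.envs.SC2Env(map_name='MoveToBeacon')   # assumed environment constructor
agent = rvr.agents.A2C(
    env.obs_spec(), env.act_spec(),              # assumed spec accessors
    rvr.models.build_fully_conv,                 # assumed model builder
    rvr.models.SC2MultiPolicy,                   # assumed policy wrapper
    n_envs=4,                                    # parallel envs via shared-memory multiprocessing
)
agent.run(env, 100)                              # assumed: train for 100 update steps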
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The project is no longer maintained, meaning no future updates or bug fixes are expected. While it supports multiple environments, the primary focus and most extensive testing appear to be on StarCraft II. The lack of a clear license may pose compatibility issues for commercial or closed-source projects.