RL agent collection using Stable Baselines (note: unmaintained, use RL-Baselines3 Zoo)
This repository provides a collection of over 100 pre-trained Reinforcement Learning (RL) agents, complete with tuned hyperparameters and training scripts, built using the Stable Baselines library. It is designed for researchers and practitioners to easily benchmark RL algorithms, run pre-trained agents ("enjoy" mode), and reuse existing hyperparameter configurations across various environments, including Atari, Classic Control, Box2D, PyBullet, and MiniGrid.
How It Works
The project leverages the Stable Baselines library for implementing various RL algorithms. It organizes training and hyperparameter configurations in YAML files, allowing for straightforward execution of training, enjoyment (inference), evaluation, and hyperparameter optimization using Optuna. The architecture supports environment wrappers and command-line argument overrides for flexibility.
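The defaults-plus-overrides pattern described above can be sketched in a few lines. This is an illustrative stand-in, not the repository's actual code: the `yaml_defaults` dict stands in for hyperparameters loaded from a per-algorithm YAML file, and `merge_overrides` is a hypothetical helper showing how command-line flags take precedence over file values.

```python
# Sketch of the zoo's config pattern (names are illustrative, not the
# repo's actual code): per-algorithm defaults live in a YAML file and
# command-line flags override individual entries.
import argparse

# Stand-in for hyperparameters loaded from e.g. a ppo2 YAML config
yaml_defaults = {
    "n_timesteps": 100_000,
    "policy": "MlpPolicy",
    "learning_rate": 2.5e-4,
}

def merge_overrides(defaults, overrides):
    """Return defaults with any non-None CLI override applied on top."""
    merged = dict(defaults)
    merged.update({k: v for k, v in overrides.items() if v is not None})
    return merged

parser = argparse.ArgumentParser()
parser.add_argument("--n-timesteps", type=int, default=None)
parser.add_argument("--learning-rate", type=float, default=None)
args = parser.parse_args(["--n-timesteps", "50000"])  # simulated CLI input

hyperparams = merge_overrides(
    yaml_defaults,
    {"n_timesteps": args.n_timesteps, "learning_rate": args.learning_rate},
)
print(hyperparams["n_timesteps"])    # overridden from the CLI
print(hyperparams["learning_rate"])  # kept from the YAML defaults
```

The same merge step is what makes a single YAML file reusable across experiments: only the flags you pass change, everything else stays at its tuned value.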
Quick Start & Requirements
Install dependencies with pip install -r requirements.txt (requires stable-baselines[mpi] >= 2.10.0). System packages: swig, cmake, libopenmpi-dev, zlib1g-dev, ffmpeg. GPU support requires CUDA.
Docker images: CPU: docker pull stablebaselines/rl-baselines-zoo-cpu; GPU: docker pull stablebaselines/rl-baselines-zoo.
Highlighted Details
Maintenance & Community
This repository is no longer maintained. Users are directed to the RL-Baselines3 Zoo for an up-to-date version.
Licensing & Compatibility
The project is licensed under the MIT License, permitting commercial use and integration with closed-source projects.
Limitations & Caveats
The project is explicitly marked as unmaintained. Hyperparameter search is not implemented for ACER and DQN. MiniGrid environments require specific wrappers for observation spaces not natively supported by Stable Baselines.
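The MiniGrid caveat comes down to observation shape: MiniGrid returns dict observations, while Stable Baselines algorithms expect flat arrays, so a wrapper must sit in between. The sketch below illustrates the idea only; `FakeMiniGridEnv` and `ImageOnlyWrapper` are hypothetical stand-ins, not the zoo's actual wrapper classes.

```python
# Why MiniGrid needs a wrapper: raw observations are dicts, but
# Stable Baselines expects array-like observations. Both classes
# below are illustrative stand-ins, not the zoo's real code.

class FakeMiniGridEnv:
    """Stand-in env returning a MiniGrid-style dict observation."""
    def reset(self):
        return {"image": [[0, 1], [2, 3]], "mission": "reach the goal"}

class ImageOnlyWrapper:
    """Expose only the 'image' entry, as a zoo-style wrapper would."""
    def __init__(self, env):
        self.env = env

    def reset(self):
        return self.env.reset()["image"]

obs = ImageOnlyWrapper(FakeMiniGridEnv()).reset()
print(obs)  # a plain array the RL algorithms can consume
```

In practice the zoo configures such wrappers per-environment in its YAML files, so training scripts need no MiniGrid-specific code.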