Deep RL library integrating HER, PER, and D2SR for off-policy algos
Top 58.7% on sourcepulse
DRLib is a concise deep reinforcement learning library designed for off-policy algorithms, integrating Hindsight Experience Replay (HER) and Prioritized Experience Replay (PER). It targets researchers and practitioners in robotics and RL who need a streamlined, debug-friendly framework. Its main benefit is simplifying the implementation of, and experimentation with, these advanced RL techniques.
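To make the PER idea concrete, here is a minimal sketch of proportional prioritized replay: transitions are sampled with probability proportional to priority^alpha, and the resulting sampling bias is corrected with importance-sampling weights. This is an illustrative toy buffer, not DRLib's actual implementation; the class name `SimplePER` and all parameters are assumptions for the example.

```python
import numpy as np

# Illustrative proportional PER sketch (not DRLib's API): sample transition i
# with probability p_i^alpha / sum_j p_j^alpha; correct bias with IS weights.
class SimplePER:
    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.priorities = [], []

    def add(self, transition):
        # New transitions get the current max priority so they are
        # guaranteed to be sampled at least once.
        p = max(self.priorities, default=1.0)
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append(p)

    def sample(self, batch_size, beta=0.4):
        probs = np.array(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights, normalized by the max for stability.
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], weights

    def update_priorities(self, idx, td_errors):
        # After a training step, priorities are refreshed from new TD errors.
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(err) + self.eps
```

A production implementation would typically back this with a sum-tree for O(log n) sampling; the list-based version above only illustrates the math.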
How It Works
DRLib is built upon OpenAI's Spinning Up, but with key features like multi-processing and experimental grid wrappers removed for ease of use and debugging. It provides implementations for DDPG, TD3, and SAC algorithms in both TensorFlow 1 and PyTorch, with PyTorch versions supporting GPU acceleration. The integration of HER and PER is a core advantage, making it particularly suitable for robotics tasks with sparse rewards.
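The reason HER helps with sparse-reward robotics tasks is that failed episodes are relabeled as successes for the goal actually reached. Below is a minimal sketch of the "final" relabeling strategy; the function names `her_relabel` and `sparse_reward` and the dict-based transition format are assumptions for illustration, not DRLib's internal representation.

```python
import numpy as np

# Illustrative HER "final" strategy (not DRLib's API): replace each
# transition's desired goal with the goal achieved at the end of the
# episode, then recompute the sparse reward under that substituted goal.
def her_relabel(episode, reward_fn):
    final_goal = episode[-1]["achieved_goal"]
    relabeled = []
    for t in episode:
        new_t = dict(t)
        new_t["desired_goal"] = final_goal
        new_t["reward"] = reward_fn(t["achieved_goal"], final_goal)
        relabeled.append(new_t)
    return relabeled

def sparse_reward(achieved, desired, tol=0.05):
    # Typical sparse goal reward: 0 on success, -1 otherwise.
    return 0.0 if np.linalg.norm(achieved - desired) < tol else -1.0
```

Under this scheme the last transition of every episode becomes a success, so the replay buffer always contains useful learning signal even when the original goal is never reached.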
Quick Start & Requirements
Create a conda environment (conda create -n DRLib_env python=3.6.9), activate it, and install the requirements (pip install -r pip_requirement.txt). Install gym[all]. The mpi4py installation may require conda install mpi4py.
Highlighted Details
Maintenance & Community
The project is actively developed by the author, with community engagement encouraged via a QQ group (799378128). The author also maintains active blogs on CSDN and Zhihu.
Licensing & Compatibility
The README does not state a license, which poses a risk for commercial or closed-source use.
Limitations & Caveats
The PyTorch multi-processing implementation is noted as not fully tested and may contain errors. The project focuses on off-policy algorithms, with plans for PPO and DQN encapsulation seemingly deprioritized. The TensorFlow 1 dependency might be outdated for current deep learning practices.