RL/SRL research toolbox for robotics, evaluating state representation learning via RL
This repository provides a toolbox for evaluating State Representation Learning (SRL) methods within Reinforcement Learning (RL) for robotics applications. It targets researchers and engineers working on robotics simulation and real-world robot control, offering a unified framework to compare various RL algorithms and SRL techniques.
How It Works
The toolbox integrates multiple RL algorithms (e.g., PPO, SAC, DQN) and SRL methods (e.g., autoencoders, inverse dynamics models) with a focus on efficient evaluation. It supports end-to-end learning from raw pixels and utilizes custom OpenAI Gym environments for simulation (PyBullet) and real robots (Baxter, Robobo via ROS), enabling rapid iteration and comparison of learning approaches.
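As an illustration, experiments are typically launched from the command line by picking an RL algorithm, an environment, and an SRL model. The sketch below follows that pattern; the module path, flag names, and environment ID are assumptions and should be checked against the repository's documentation.

```bash
# Illustrative sketch: train PPO on a simulated robot environment from raw pixels.
# Module path, flags, and environment ID are assumptions; consult the repo docs.
python -m rl_baselines.train --algo ppo2 \
    --env KukaButtonGymEnv-v0 \
    --srl-model raw_pixels \
    --log-dir logs/

# Swap the state representation (e.g., to an autoencoder) to compare SRL methods
# under the same RL algorithm and environment.
python -m rl_baselines.train --algo ppo2 \
    --env KukaButtonGymEnv-v0 \
    --srl-model autoencoder \
    --log-dir logs/
```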
Quick Start & Requirements
Clone the repository with the --recursive flag (so the required submodules are fetched) and create the environment with conda env create --file environment.yml. System dependencies include swig, libopenmpi-dev, openmpi-bin, and openmpi-doc. A GPU with CUDA is recommended for performance.
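A minimal setup sketch assembling the steps above; the repository URL, directory name, and conda environment name are placeholders or assumptions to be replaced with the actual values.

```bash
# Clone with --recursive so git submodules are fetched as well.
# <repo-url> is a placeholder for the actual repository URL.
git clone <repo-url> --recursive
cd robotics-rl-srl  # assumed directory name

# Install the system dependencies listed in the requirements.
sudo apt-get install swig libopenmpi-dev openmpi-bin openmpi-doc

# Create the conda environment from the provided file and activate it.
conda env create --file environment.yml
conda activate <env-name>  # environment name is defined in environment.yml
```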
Maintenance & Community
This repository is no longer maintained. The authors recommend Stable-Baselines3 for RL algorithm implementations and RL Baselines3 Zoo as a training framework.
Licensing & Compatibility
The repository's license is not explicitly stated in the README, but it references OpenAI Baselines, which is released under the MIT license. Before using the toolbox in commercial or closed-source projects, verify the repository's actual license terms.
Limitations & Caveats
The project is explicitly marked as unmaintained. Users should be aware that dependencies may be outdated, and there is no active community support or ongoing development. A known issue exists with the inverse kinematics function for certain arm configurations.