Robot learning benchmark for vision-guided manipulation research
Top 28.6% on sourcepulse
RLBench is a large-scale benchmark and learning environment for robot manipulation research, centered on vision-guided manipulation tasks and supporting research in reinforcement learning, imitation learning, and few-shot learning. It provides a flexible framework for researchers to develop and evaluate algorithms in a simulated robotics setting.
How It Works
RLBench leverages CoppeliaSim for physics simulation and PyRep as a Python API to interact with the simulator. It offers a wide variety of pre-defined manipulation tasks and supports custom task creation. The environment is designed to facilitate research in areas such as few-shot learning through curated task sets and sim-to-real transfer via domain randomization capabilities.
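To make the domain-randomization idea concrete, here is a minimal, self-contained sketch of sampling per-episode visual parameters. The parameter names (`table_rgb`, `light_intensity`, `camera_jitter_deg`) and their ranges are illustrative assumptions, not RLBench's actual API; in practice RLBench applies randomization inside the simulator.

```python
import random

def sample_visual_randomization(rng):
    """Sample hypothetical per-episode visual parameters, in the spirit
    of domain randomization for sim-to-real transfer."""
    return {
        # random table colour (RGB in [0, 1])
        "table_rgb": tuple(rng.random() for _ in range(3)),
        # scale scene lighting up or down
        "light_intensity": rng.uniform(0.5, 1.5),
        # small random camera perturbation, in degrees
        "camera_jitter_deg": rng.uniform(-2.0, 2.0),
    }

# Re-sample at the start of every episode so a policy trained on the
# randomized distribution is less sensitive to any single appearance.
rng = random.Random(0)
episodes = [sample_visual_randomization(rng) for _ in range(100)]
```

Seeding the generator makes the randomization reproducible across training runs, which matters when comparing algorithms on the benchmark.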
Quick Start & Requirements
pip install git+https://github.com/stepjam/RLBench.git
RLBench requires CoppeliaSim; after installing it, point the COPPELIASIM_ROOT environment variable at its install directory. On headless machines, configure the X server for the GPU (e.g., with nvidia-xconfig) and start a virtual display (e.g., X :99 &) before launching RLBench.
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The README advises caution when using low-dimensional task observations, as they may not capture critical state changes (e.g., object slipping) that image-based observations would. It also notes that changing default image observation sizes may require re-collecting demonstrations.
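The aliasing problem behind this caveat can be shown with a toy, hypothetical example: two physically different world states (object grasped vs. slipped) that produce the identical low-dimensional observation when the feature vector omits contact information. The dictionaries and the `low_dim_obs` encoder below are illustrative, not RLBench's observation format.

```python
# Two distinct world states: the object is grasped in one and has
# slipped out of the gripper in the other (hypothetical encoding).
state_grasped = {"gripper_xyz": (0.3, 0.0, 0.5), "object_in_gripper": True}
state_slipped = {"gripper_xyz": (0.3, 0.0, 0.5), "object_in_gripper": False}

def low_dim_obs(state):
    # A low-dimensional observation that encodes gripper pose only;
    # it discards the contact flag, so the two states become aliased.
    return state["gripper_xyz"]

# The policy cannot tell these states apart from this observation alone,
# whereas a camera image would show the object falling.
aliased = low_dim_obs(state_grasped) == low_dim_obs(state_slipped)
```

An image-based observation avoids this particular failure at the cost of higher-dimensional inputs, which is why the README recommends caution when relying on low-dimensional state alone.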
Last commit: 6 months ago. Repository activity: inactive.