Safe exploration research tools (archive)
This repository provides Safety Gym, a suite of tools and environments for accelerating research in safe exploration for reinforcement learning. It is designed for researchers and practitioners in RL who need to evaluate and develop algorithms that adhere to safety constraints during learning and execution.
How It Works
Safety Gym leverages the MuJoCo physics simulator to create a variety of robotic navigation and interaction tasks, such as reaching goals, pressing buttons, and pushing boxes. It introduces "constraints," quantifiable safety measures that agents must avoid violating. The core innovation is the flexible `Engine` class, which lets users construct custom environments by specifying the robot type, task, sensor configuration (including lidar and vision), object placement, and explicit safety constraints. This modular design facilitates systematic benchmarking of safe RL algorithms.
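As a minimal sketch of how the `Engine` class is typically used, a custom environment can be assembled from a plain Python configuration dictionary. The keys below follow the examples in the upstream README, but exact names may vary between versions, so treat them as illustrative:

```python
from safety_gym.envs.engine import Engine

# Illustrative configuration; key names follow the upstream README and
# may differ across package versions.
config = {
    'robot_base': 'xmls/car.xml',   # robot to use (Point, Car, Doggo, ...)
    'task': 'push',                 # task type: 'goal', 'button', or 'push'
    'observe_goal_lidar': True,     # lidar observation of the goal
    'observe_hazards': True,        # lidar observation of hazard regions
    'observe_vases': True,          # lidar observation of fragile vases
    'constrain_hazards': True,      # entering hazards incurs a safety cost
    'lidar_num_bins': 16,
    'hazards_num': 4,
    'vases_num': 4,
}

env = Engine(config)

# Standard Gym loop; the info dict carries the safety cost signal.
obs = env.reset()
done = False
total_cost = 0.0
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
    total_cost += info.get('cost', 0.0)
print('episode cost:', total_cost)
```

The constraint signal is returned separately from the reward, so a safe RL algorithm can trade off task performance against accumulated cost.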
Quick Start & Requirements
Install from the repository root with `pip install -e .` after installing `mujoco_py` (which in turn requires a working MuJoCo installation). Tested on macOS Mojave and Ubuntu 16.04 LTS.
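Once installed, the pre-registered benchmark environments can be created through the usual Gym API. The environment ID below follows the `Safexp-{Robot}{Task}{Difficulty}-v0` naming scheme from the README; this is a sketch assuming a `gym` version compatible with the archived code:

```python
import gym
import safety_gym  # noqa: F401 -- importing registers the Safexp-* environments

env = gym.make('Safexp-PointGoal1-v0')
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
print(reward, info.get('cost'))  # reward and safety cost are reported separately
```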
Maintenance & Community
The project is marked as "Archive" and no further updates are expected. It was developed by OpenAI.
Licensing & Compatibility
The repository does not explicitly state a license in the README. However, OpenAI's typical practice for research code is to release under a permissive license like MIT. Users should verify the license for commercial use.
Limitations & Caveats
The project is archived, meaning no further development or bug fixes are anticipated. Vision support is minimally implemented and considered low-priority. Random layout generation can occasionally fail to produce a valid scene configuration.
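For long training runs, a small guard around `reset()` is one workaround for the occasional layout failure. The helper below is illustrative and not part of Safety Gym itself; it catches a broad `Exception` rather than a specific class, since the exact exception raised on a failed resample may vary by version:

```python
def safe_reset(env, max_retries=5):
    """Retry env.reset() a few times if random layout generation fails.

    Illustrative helper, not part of Safety Gym.
    """
    for attempt in range(max_retries):
        try:
            return env.reset()
        except Exception as err:  # layout resampling can occasionally fail
            print(f'reset failed (attempt {attempt + 1}): {err}')
    raise RuntimeError(f'env.reset() failed after {max_retries} attempts')
```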