Inverse RL implementations for imitation learning algorithms
This repository provides implementations of Inverse Reinforcement Learning (IRL) and imitation learning algorithms, specifically GAIL and Guided Cost Learning (GCL), for researchers and practitioners in reinforcement learning. It aims to enable learning cost functions from expert demonstrations to reproduce desired behaviors.
How It Works
The library implements Generative Adversarial Imitation Learning (GAIL) and Guided Cost Learning (GCL), which learn policies from expert trajectories using deep neural networks. GAIL trains a discriminator to distinguish expert trajectories from those generated by the current policy and uses its output as a reward signal, while GCL learns a cost function that explains the expert's behavior.
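As a rough illustration of the GAIL objective (not the repository's TensorFlow/rllab code), the sketch below trains a logistic discriminator on toy expert and policy features and turns its output into a surrogate reward; all names, dimensions, and data here are invented for the example.

```python
# Minimal NumPy sketch of the GAIL discriminator idea (illustration only).
# A logistic discriminator D is trained to separate expert (state, action)
# features from policy samples; the policy is then rewarded with
# -log(1 - D(s, a)), so it improves by fooling D.
import numpy as np

rng = np.random.default_rng(0)
dim = 4                               # toy (state, action) feature dimension
w, b = np.zeros(dim), 0.0             # discriminator parameters

def discriminator(x):
    """Probability that x comes from the expert."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def update_discriminator(expert_batch, policy_batch, lr=0.1):
    """One gradient step on the binary cross-entropy loss."""
    global w, b
    d_exp, d_pol = discriminator(expert_batch), discriminator(policy_batch)
    grad_logits_exp = d_exp - 1.0     # push expert scores toward 1
    grad_logits_pol = d_pol           # push policy scores toward 0
    grad_w = expert_batch.T @ grad_logits_exp + policy_batch.T @ grad_logits_pol
    grad_b = grad_logits_exp.sum() + grad_logits_pol.sum()
    n = len(expert_batch) + len(policy_batch)
    w -= lr * grad_w / n
    b -= lr * grad_b / n

def surrogate_reward(policy_batch):
    """Reward signal handed to the policy optimizer."""
    return -np.log(1.0 - discriminator(policy_batch) + 1e-8)

# Toy data: expert features are shifted relative to the policy's.
expert = rng.normal(loc=1.0, size=(64, dim))
policy = rng.normal(loc=0.0, size=(64, dim))
for _ in range(200):
    update_discriminator(expert, policy)
print("mean surrogate reward on policy samples:", surrogate_reward(policy).mean())
```

In the full algorithm the policy is then updated against this surrogate reward (e.g. with TRPO), and the discriminator and policy steps alternate.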
Quick Start & Requirements
Requires rllab (https://github.com/openai/rllab) and tensorflow. Run scripts/pendulum_data_collect.py to collect expert data for Pendulum-v0, then scripts/pendulum_gcl.py to run GCL. The expected average return on Pendulum-v0 is around -100 to -150.
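A typical run might look like the following, assuming rllab and tensorflow are already installed and the scripts take no required arguments (inferred from the script names above, not verified against the repository):

```bash
# Hypothetical invocation; exact arguments and environment setup (e.g. an
# rllab virtualenv) may differ.
python scripts/pendulum_data_collect.py   # collect expert trajectories for Pendulum-v0
python scripts/pendulum_gcl.py            # run Guided Cost Learning on the collected data
```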
Highlighted Details
Maintenance & Community
No information on maintenance or community channels is provided in the README.
Licensing & Compatibility
The README does not specify a license. Compatibility with commercial or closed-source projects is unknown.
Limitations & Caveats
The project depends on rllab, an older framework that may have compatibility issues with current deep learning libraries and Python versions. The README does not specify version requirements for its dependencies.
The repository was last updated about 7 years ago and is inactive.