PyTorch implementations of inverse reinforcement learning algorithms
This repository provides PyTorch implementations of key Inverse Reinforcement Learning (IRL) algorithms, including APP, MaxEnt, GAIL, and VAIL. It's designed for researchers and practitioners in reinforcement learning who need to understand and apply these methods for learning from expert demonstrations, particularly in robotics and control tasks. The project offers practical examples and training scripts for common environments.
How It Works
The project implements IRL by learning a reward function from expert demonstrations, which is then used to train an agent with standard RL techniques. For the MountainCar environment, Q-learning is the underlying RL algorithm for APP and MaxEnt; for MuJoCo environments such as Hopper, Proximal Policy Optimization (PPO) serves as the RL backbone for GAIL and VAIL. This split allows experimentation with RL algorithms suited to discrete and continuous action spaces, respectively.
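As a rough illustration of the adversarial half of this pipeline (GAIL/VAIL), the sketch below shows how a discriminator trained to separate expert (state, action) pairs from policy rollouts can supply a surrogate reward to PPO. This is a minimal sketch, not the repository's exact code; the names Discriminator, update_discriminator, and surrogate_reward, and the network sizes, are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Classifies (state, action) pairs as expert-like (1) or policy-generated (0)."""
    def __init__(self, state_dim, action_dim, hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return torch.sigmoid(self.net(torch.cat([state, action], dim=-1)))

def update_discriminator(disc, optimizer, expert_s, expert_a, policy_s, policy_a):
    """One adversarial step: push expert pairs toward 1, policy pairs toward 0."""
    bce = nn.BCELoss()
    expert_pred = disc(expert_s, expert_a)
    policy_pred = disc(policy_s, policy_a)
    loss = bce(expert_pred, torch.ones_like(expert_pred)) + \
           bce(policy_pred, torch.zeros_like(policy_pred))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def surrogate_reward(disc, state, action):
    """Learned reward handed to PPO in place of the environment reward."""
    with torch.no_grad():
        d = disc(state, action)
    return -torch.log(1.0 - d + 1e-8)
```

VAIL follows the same adversarial pattern but adds a variational information bottleneck on the discriminator's internal representation, while for MountainCar the learned reward is consumed by Q-learning rather than PPO.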
Quick Start & Requirements
The code is written in Python and targets PyTorch v0.4.1, with training scripts organized per environment and algorithm in their own directories (e.g., lets-do-irl/mountaincar/app).
Highlighted Details
Maintenance & Community
The project lists four core team members with GitHub and Facebook links. There are no explicit mentions of ongoing maintenance, community channels (like Discord/Slack), or a public roadmap.
Licensing & Compatibility
The repository does not state a license. The code is written in Python using PyTorch and is generally compatible with other Python ML libraries, but without an explicit license the default is all rights reserved, which effectively restricts commercial use and redistribution.
Limitations & Caveats
The project uses an older version of PyTorch (v0.4.1), which may present compatibility issues with newer libraries and hardware. The README is primarily in Korean, and some implementation details or explanations might be less accessible to non-Korean speakers. There is no explicit mention of testing on platforms other than those implied by the environment setups.
The most recent activity was about a year ago, and the repository is marked as inactive.