rllab by rll

Framework for reinforcement learning algorithm development and evaluation

created 9 years ago
2,990 stars

Top 16.3% on sourcepulse

Project Summary

rllab is a framework for developing and evaluating reinforcement learning algorithms, primarily for continuous control tasks. It was designed for researchers and practitioners in RL, offering a structured approach to algorithm implementation and experimentation, with tools for distributed execution and visualization.

How It Works

rllab provides a modular structure for RL algorithms, abstracting common components like policies, value functions, and environments. It leverages Theano as its primary backend for automatic differentiation and computation, with experimental TensorFlow support available in a separate module. This design facilitates the implementation and comparison of various RL algorithms.
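The modular split described above can be illustrated with a toy, dependency-free sketch. The `Env` and `Policy` classes below are hypothetical simplifications for illustration only; the real interfaces live under `rllab.envs`, `rllab.policies`, and `rllab.algos`, and use Theano for the actual gradient computation.

```python
# Hypothetical, simplified sketch of the env/policy/rollout split that
# rllab's real (Theano-backed) classes formalize. Not the rllab API.
import math
import random

class Env:
    """A toy 1-D environment: move left or right; reaching +3 pays reward 1."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):            # action is -1 or +1
        self.state += action
        done = abs(self.state) >= 3
        reward = 1.0 if self.state >= 3 else 0.0
        return self.state, reward, done

class Policy:
    """A stochastic policy with a single preference parameter for moving right."""
    def __init__(self, pref=0.0):
        self.pref = pref

    def act(self):
        p_right = 1.0 / (1.0 + math.exp(-self.pref))
        return 1 if random.random() < p_right else -1

def rollout(env, policy, horizon=20):
    """Collect one trajectory: the sampling loop every algorithm shares."""
    env.reset()
    total = 0.0
    for _ in range(horizon):
        _, reward, done = env.step(policy.act())
        total += reward
        if done:
            break
    return total

random.seed(0)
env, policy = Env(), Policy()
returns = [rollout(env, policy) for _ in range(100)]
print(sum(returns) / len(returns))   # average return over 100 rollouts
```

Because environments, policies, and the rollout loop are decoupled like this, an algorithm implementation (e.g. TRPO vs. REINFORCE) only has to specify how it updates the policy from collected trajectories.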

Quick Start & Requirements
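A typical setup follows the upstream repository's conda-based workflow. This is a hedged sketch: the environment name (`rllab3`) and the example script path are taken from the historical rllab README and may differ across versions.

```shell
# Clone the repository (Anaconda/conda assumed to be installed)
git clone https://github.com/rll/rllab.git
cd rllab

# Create the conda environment from the repo's environment.yml
# (historically named "rllab3")
conda env create -f environment.yml
source activate rllab3

# Run a bundled example: TRPO on the cartpole task
python examples/trpo_cartpole.py
```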

Highlighted Details

  • Implements algorithms: REINFORCE, Truncated Natural Policy Gradient, Reward-Weighted Regression, Relative Entropy Policy Search, Trust Region Policy Optimization, Cross Entropy Method, Covariance Matrix Adaptation Evolution Strategy, and Deep Deterministic Policy Gradient.
  • Fully compatible with OpenAI Gym.
  • Includes support for running experiments on EC2 clusters and visualization tools.

Maintenance & Community

rllab is no longer under active development. Its successor, garage, is actively maintained by an alliance of university researchers.

Licensing & Compatibility

The README does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

The project is no longer actively developed; the maintainers recommend migrating to its successor, garage, for new projects and continued updates. The primary backend, Theano, is likewise no longer under active development.

Health Check

  • Last commit: 2 years ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 34 stars in the last 90 days
