rl by pytorch

PyTorch library for reinforcement learning research

created 3 years ago
2,972 stars

Top 16.5% on sourcepulse

Project Summary

TorchRL is a modular, Python-first library for PyTorch designed to simplify and accelerate Reinforcement Learning research and applications. It offers a flexible, extensible architecture with minimal dependencies, targeting researchers and engineers who need a robust and efficient RL framework.

How It Works

TorchRL is built around the TensorDict data structure, which streamlines RL codebases by providing a unified way to handle observations, actions, rewards, and other metadata. This primitive-first approach allows for easy swapping and customization of components like environments, collectors, replay buffers, and loss functions, promoting code reusability across diverse RL settings (online/offline, state/pixel-based).
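
As a rough illustration of the TensorDict-centric workflow, the sketch below builds a batch of transitions and manipulates it as a single object; the key names ("observation", "action", "reward") and shapes are illustrative, not a fixed schema.

    import torch
    from tensordict import TensorDict

    # A TensorDict groups tensors that share leading batch dimensions.
    batch = TensorDict(
        {
            "observation": torch.randn(32, 4),
            "action": torch.randn(32, 2),
            "reward": torch.randn(32, 1),
        },
        batch_size=[32],
    )

    # Indexing, reshaping, and device moves apply to every entry at once,
    # so components only need to agree on keys, not on call signatures.
    minibatch = batch[:8].to("cpu")
    print(minibatch["observation"].shape)  # torch.Size([8, 4])

Because collectors, replay buffers, and loss modules all read and write TensorDicts, swapping one component rarely requires touching the others.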

Quick Start & Requirements

  • Install via pip: pip3 install torchrl (a minimal first-run sketch follows this list)
  • Requires PyTorch (version >= 2.1 recommended, >= 2.7.0 for some replay buffer features).
  • Optional dependencies for environments (Gym, Atari, DeepMind Control), logging (TensorBoard, WandB), and more can be installed via pip install "torchrl[atari,dm_control,gym_continuous,rendering,tests,utils,marl,open_spiel,checkpointing]".
  • Official documentation: TorchRL Documentation
  • Getting Started tutorials: Examples, tutorials and demos
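
A minimal first-run sketch, assuming the gym extra is installed and a standard environment id such as "Pendulum-v1" is available locally:

    from torchrl.envs import GymEnv

    # Wrap a Gym/Gymnasium environment behind TorchRL's common interface.
    env = GymEnv("Pendulum-v1")

    # Roll out a short trajectory with a random policy; the result is a
    # TensorDict holding observations, actions, rewards, and done flags.
    rollout = env.rollout(max_steps=10)
    print(rollout)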

Highlighted Details

  • TensorDict: A core data structure enabling simplified, reusable RL code, compatible with functorch and torch.compile.
  • Modular Components: Provides reusable functionals for cost functions, returns, and data processing, along with flexible collectors, replay buffers, and loss modules (see the replay-buffer sketch after this list).
  • Environment Abstraction: Offers a common interface for various environments (Gym, DeepMind Control) with support for parallel execution and state-less environments.
  • Performance: Features vectorized, on-device transforms and efficient data collectors, with benchmarks showing significant speed-ups over eager mode.

Maintenance & Community

  • Developed by Meta AI.
  • Active development with regular releases.
  • Community support via PyTorch forums for general RL questions.
  • Contribution guide available for developers.

Licensing & Compatibility

  • MIT License.
  • Compatible with commercial use and closed-source linking.

Limitations & Caveats

  • The library is released as a PyTorch beta feature, with potential for breaking changes.
  • C++ binaries for features like prioritized replay buffers require PyTorch 2.7.0+.
  • Local installation via pip install -e . is not currently supported.

Health Check

  • Last commit: 1 day ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 60
  • Issues (30d): 23

Star History

  • 272 stars in the last 90 days

Explore Similar Projects

Starred by John Yang (Author of SWE-bench, SWE-agent), Lysandre Debut (Chief Open-Source Officer at Hugging Face), and 3 more.

cleanrl by vwxyzjn

  • Top 0.5% · 8k stars
  • RL algorithms implementation with research-friendly features
  • Created 6 years ago, updated 3 weeks ago

Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), Thomas Wolf (Cofounder of Hugging Face), and 1 more.

stable-baselines3 by DLR-RM

  • Top 0.5% · 11k stars
  • PyTorch library for reinforcement learning algorithm implementations
  • Created 5 years ago, updated 1 week ago