Offline RL library with single-file implementations of SOTA algorithms
CORL is a Python library providing single-file, research-friendly implementations of state-of-the-art Offline and Offline-to-Online Reinforcement Learning algorithms. It aims to simplify experimentation and reproducibility for researchers and practitioners in offline RL, offering a clean codebase inspired by the popular cleanrl library.
How It Works
CORL implements algorithms as self-contained Python files, promoting clarity and ease of modification. Each implementation is designed for reproducibility and includes integration with Weights and Biases for experiment tracking. The library covers a wide range of algorithms, including conservative Q-learning (CQL), implicit Q-learning (IQL), decision transformers (DT), and more, with benchmarks provided on standard datasets like D4RL.
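The single-file pattern described above can be illustrated with a minimal sketch. All names here (`TrainConfig`, `train`) are hypothetical and not CORL's actual API; the point is only the shape of a self-contained script, where the config, the training loop, and the experiment-tracking hooks live in one file:

```python
# Minimal sketch of a single-file offline RL script (hypothetical names,
# not CORL's actual API): config, training loop, and logging in one place.
import random
from dataclasses import dataclass, asdict


@dataclass
class TrainConfig:
    # Everything needed to reproduce a run lives in one dataclass.
    env: str = "halfcheetah-medium-v2"  # a D4RL dataset name
    seed: int = 0
    num_updates: int = 5
    project: str = "CORL"  # experiment-tracker project name


def train(config: TrainConfig) -> list:
    # A real implementation would load the D4RL dataset and perform
    # gradient updates; here we log placeholder "losses" to show the flow.
    random.seed(config.seed)
    losses = []
    for step in range(config.num_updates):
        loss = random.random()  # stand-in for a TD or behavior-cloning loss
        losses.append(loss)
        # a call like wandb.log({"loss": loss}, step=step) would go here
    return losses


if __name__ == "__main__":
    cfg = TrainConfig()
    print(asdict(cfg))  # the full config is logged for reproducibility
    print(train(cfg))
```

Because the whole experiment is one file, modifying an algorithm means editing that file directly rather than tracing through layers of shared abstractions.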
Quick Start & Requirements
Install the dependencies with:
pip install -r requirements/requirements_dev.txt
or use the provided Docker setup.
Maintenance & Community
The project is maintained by Tinkoff AI. Further community engagement details are not explicitly provided in the README.
Licensing & Compatibility
The library is released under the MIT License, permitting commercial use and integration with closed-source projects.
Limitations & Caveats
The README notes that benchmark results can vary significantly between papers and implementations, so users should verify reported numbers against their own runs. For discrete control tasks, the project points to a separate library, Katakomba.