Curated list of Diffusion Model in RL resources
This repository is a curated, continuously updated list of research papers and resources on the application of Diffusion Models in Reinforcement Learning (RL). It serves as a reference for researchers and practitioners tracking the frontier of this emerging intersection.
How It Works
Diffusion Models in RL are applied primarily in two ways: as a method for trajectory optimization and planning, and as an expressive policy class for offline RL. The former casts trajectory optimization as sampling from a diffusion probabilistic model that iteratively refines whole trajectories, bypassing bootstrapping and avoiding short-sighted behaviors. The latter frames the policy as a conditional diffusion model that denoises actions conditioned on the current state, leveraging the scalability and multi-modal expressiveness of diffusion models for policy optimization.
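The second usage (a conditional diffusion policy) can be illustrated with a minimal sketch: reverse DDPM sampling that starts from Gaussian noise and iteratively denoises it into an action, conditioning each step on the state. All names here (`predict_noise`, `sample_action`) are illustrative, and the noise predictor is a hand-coded stand-in for the learned network a real method would train.

```python
import numpy as np

T = 10                                  # number of diffusion steps
betas = np.linspace(1e-4, 0.2, T)       # variance (noise) schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(a_t, state, t):
    # Stand-in for eps_theta(a_t, s, t). A real policy uses a neural net;
    # here we pretend the "clean" action is simply a0 = tanh(state), so the
    # implied noise can be computed in closed form from the forward process.
    a0 = np.tanh(state)
    return (a_t - np.sqrt(alpha_bars[t]) * a0) / np.sqrt(1.0 - alpha_bars[t])

def sample_action(state, rng):
    # Reverse diffusion: start from pure noise and denoise step by step,
    # conditioning every step on the state (standard DDPM posterior mean).
    a = rng.standard_normal(state.shape)
    for t in reversed(range(T)):
        eps = predict_noise(a, state, t)
        mean = (a - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(state.shape) if t > 0 else 0.0
        a = mean + np.sqrt(betas[t]) * noise
    return a

rng = np.random.default_rng(0)
state = np.array([0.5, -1.0])
action = sample_action(state, rng)
print(action)  # ≈ tanh(state), since the stand-in noise predictor is exact
```

Because the toy predictor is exact, the final denoising step collapses onto `tanh(state)`; a trained network would instead produce samples from a learned, possibly multi-modal action distribution.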
Quick Start & Requirements
This repository is a collection of papers, so there is nothing to install or execute. Links to official codebases and experiment environments are provided alongside individual papers where available.
Maintenance & Community
The repository is marked as "continually updated" and welcomes contributions. No specific maintainers or contributors are highlighted beyond the project's open, community-driven nature.
Licensing & Compatibility
Awesome Diffusion Model in RL is released under the Apache 2.0 license. This license is permissive and generally compatible with commercial use and closed-source linking.
Limitations & Caveats
As a curated list, the repository itself does not provide implementations or benchmarks. The practical utility of the listed papers depends on the quality and availability of their associated codebases and experimental setups.