Awesome-Robotics-Diffusion by showlab

Diffusion models revolutionize robotics

Created 9 months ago
261 stars

Top 97.5% on SourcePulse

Project Summary

This repository serves as a curated bibliography of recent research papers applying diffusion models to robotics. It targets researchers and practitioners in robot learning, offering a structured overview of advancements in areas like manipulation, navigation, and planning, thereby accelerating the adoption and understanding of diffusion-based robotics techniques.

How It Works

The project organizes papers by how diffusion models are used: as direct policies for robot control, as synthesizers for generating data or plans, or as tools for specific task objectives. This categorization highlights diverse applications, from learning visuomotor policies and generating dexterous grasps to enabling language-conditioned manipulation and constrained motion planning, and it showcases the versatility of diffusion models in solving complex robotic challenges.
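To make the "diffusion as policy" category concrete, here is a toy sketch of the general idea: an action is sampled by starting from Gaussian noise and iteratively denoising it, conditioned on an observation. This is not code from this repository or any cited paper; the `toy_denoiser` is a hypothetical stand-in for a learned noise-prediction network.

```python
import numpy as np

def toy_denoiser(action, obs, t):
    # Hypothetical stand-in for a learned noise-prediction network.
    # A real diffusion policy would run a trained model here; this toy
    # version just predicts "noise" that points away from the observation.
    return action - obs

def sample_action(obs, steps=50, dim=2, seed=0):
    """Reverse-diffusion sketch: start from Gaussian noise and
    iteratively denoise into an action conditioned on obs."""
    rng = np.random.default_rng(seed)
    action = rng.standard_normal(dim)  # pure noise at t = steps
    for t in range(steps, 0, -1):
        eps = toy_denoiser(action, obs, t)
        action = action - (1.0 / steps) * eps  # one small denoising step
    return action

obs = np.array([0.5, -0.3])   # stand-in for an encoded observation
a = sample_action(obs)        # denoised 2-D action, drawn toward obs
```

Real diffusion policies (e.g., the visuomotor policies surveyed in this list) follow the same loop structure but use a trained network, a proper noise schedule, and high-dimensional action trajectories.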

Quick Start & Requirements

This repository is a curated list of research papers and does not provide a direct installation or execution command. Requirements for individual projects would vary significantly and are detailed within each cited paper. A comprehensive survey paper, "Diffusion Models in Robotics: A Survey," is linked for deeper review.

Highlighted Details

  • Broad Scope: Covers a wide array of robotics tasks including manipulation (dexterous, deformable, object-centric), navigation, planning, human-robot interaction, and mobile manipulation.
  • Architectural Innovations: Features papers exploring Diffusion Transformers (DiT), SE(3)-equivariant architectures, hierarchical designs, and integration with models like Mamba.
  • Multi-modal Integration: Demonstrates the use of diffusion models with vision, language, tactile/force data, and point clouds for enhanced robot perception and control.
  • Advanced Applications: Includes cutting-edge work on language-guided manipulation, constrained motion planning, and synthesizing actions from videos.

Maintenance & Community

Maintained by Show Lab at the National University of Singapore. The primary resource provided is a link to their survey paper. No community channels (e.g., Discord, Slack) or direct contributor information beyond the survey authors are listed.

Licensing & Compatibility

As this is a curated list of research papers, no specific software license is provided. Compatibility would depend on the licenses of the individual projects cited.

Limitations & Caveats

This is a bibliographic resource, not a runnable codebase. The field of diffusion models in robotics is evolving rapidly, so the list requires continuous updates to stay current. It provides no code, setup instructions, or benchmarks for the listed papers; each cited work must be investigated individually.

Health Check
Last Commit

4 months ago

Responsiveness

Inactive

Pull Requests (30d)
0
Issues (30d)
0
Star History
15 stars in the last 30 days
