Awesome-VLA-Robotics by Jiaaqiliu

Curated VLA robotics resources

Created 3 months ago · 365 stars

Top 78.2% on sourcepulse

Project Summary

This repository serves as a curated, comprehensive list of resources for Vision-Language-Action (VLA) models in robotics. It targets researchers and engineers in Embodied AI and robotics, providing a structured overview of papers, models, datasets, and technical approaches to enable rapid understanding and adoption of VLA technologies for robot control.

How It Works

The repository categorizes VLA research by application area (manipulation, navigation, HRI) and technical approach (architectures, action representation, learning paradigms). It highlights key models like RT-2, OpenVLA, and Octo, detailing their base architectures (Transformers, Diffusion), action generation methods, and contributions. The core idea is to leverage large pre-trained models (LLMs, VLMs) and ground their capabilities in the physical world for robot action generation, moving beyond simple VLM adaptation to specialized VLA architectures.
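To make the action-representation idea concrete: models like RT-2 discretize each continuous action dimension into a fixed number of bins and emit one token per dimension, so executing an action means mapping tokens back to continuous values. The sketch below is illustrative only (it is not code from any repository linked in the list, and the function name, bin count, and action ranges are assumptions):

```python
import numpy as np

def detokenize_action(tokens, num_bins=256, low=-1.0, high=1.0):
    """Map discrete action tokens back to continuous values.

    Assumes an RT-2-style scheme: each action dimension was uniformly
    binned into `num_bins` intervals over [low, high]; we return the
    center of each token's bin.
    """
    tokens = np.asarray(tokens, dtype=np.float64)
    bin_width = (high - low) / num_bins
    return low + (tokens + 0.5) * bin_width

# Hypothetical 7-DoF action: x, y, z, roll, pitch, yaw, gripper
action = detokenize_action([0, 128, 255, 64, 192, 127, 255])
```

Real systems differ in bin counts, per-dimension ranges, and whether actions are tokenized at all (diffusion-based heads such as Octo's regress continuous actions directly), so treat this as a sketch of one paradigm among those the list compares.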

Quick Start & Requirements

This is a curated list, not a runnable project. Resources linked within may have their own installation and execution requirements.

Highlighted Details

  • Comprehensive categorization of VLA models and research papers by application and technical approach.
  • Detailed table comparing key VLA models (RT-1, RT-2, OpenVLA, Octo, etc.) on features, base models, and action generation methods.
  • Extensive lists of datasets (OpenX, DROID, CALVIN) and simulators (Isaac Sim, Habitat) crucial for VLA development and evaluation.
  • Discussion of current challenges and future directions, including data efficiency, inference speed, robustness, generalization, and safety.

Maintenance & Community

Contributions are welcome. The list is actively updated with recent research, as indicated by the many 2025 entries. Links to related "Awesome" lists are provided for broader context.

Licensing & Compatibility

This repository is a list of links and does not have its own license. Individual papers and code repositories linked within will have their own licenses, which may include restrictions on commercial use or closed-source linking.

Limitations & Caveats

As a curated list, it does not provide runnable code or direct access to models. The rapid pace of VLA research means some information, particularly regarding "future" (2025) papers, may be preliminary or subject to change.

Health Check

  • Last commit: 6 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 2
  • Star History: 274 stars in the last 90 days
