Curated list for embodied AI/robotics research using VLMs & LLMs
This repository is a curated list of research papers and projects on Embodied AI and robotics that integrate Large Language Models (LLMs) and Vision-Language Models (VLMs). It gives researchers and practitioners in embodied AI, robotics, and artificial intelligence a centralized hub for key publications and recent advances in the field.
How It Works
The repository organizes a large collection of research papers by sub-field, such as Vision-Language-Action Models, Self-Evolving Agents, Planning and Manipulation, and Multi-Agent Learning. Each entry typically includes a link to the paper (often on arXiv), any associated code repository (GitHub), a project page, and sometimes video demonstrations. This structure makes it easy to discover and explore relevant work in the rapidly evolving domain of LLM-powered embodied agents.
Quick Start & Requirements
This is a curated list, not a software package. No installation or execution is required. The primary requirement is an interest in the field of Embodied AI and LLMs. Links to official project pages and code repositories are provided for individual research papers.
Maintenance & Community
The repository is maintained by "haonan" and encourages community contributions via pull requests for new papers. Updates are posted regularly, indicating active curation.
Licensing & Compatibility
As a curated list of research papers, the repository itself does not specify a license. Each linked project carries its own license, which users must consult before reuse.
Limitations & Caveats
This is a reference resource; it provides no executable code or frameworks of its own. Users must locate, install, and run the code for each research project individually. Given the breadth of topics, some areas may be covered more comprehensively than others.