Survey paper for 3D occupancy perception in autonomous driving
This repository provides a comprehensive survey of 3D Occupancy Perception for Autonomous Driving, focusing on information fusion techniques. It targets researchers and engineers in autonomous driving and computer vision, offering a structured overview of the field, including methodologies, datasets, and applications.
How It Works
The survey systematically categorizes 3D occupancy perception methods into LiDAR-centric, Vision-centric, Radar-centric, and Multi-Modal approaches. It details network pipelines, fusion techniques, and training strategies, providing in-depth analyses and performance comparisons. The repository also curates relevant datasets and discusses various occupancy-based applications like segmentation, detection, tracking, and scene generation.
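To make the LiDAR-centric pipeline concrete, its typical first step is voxelizing a point cloud into a binary 3D occupancy grid, which downstream networks then refine into semantic occupancy. The sketch below is illustrative only, not the method of any specific surveyed paper; the function name and parameters are hypothetical:

```python
import numpy as np

def voxelize(points, grid_min, voxel_size, grid_shape):
    """Map an (N, 3) point cloud to a boolean occupancy grid.

    points     : (N, 3) array of LiDAR points in the ego frame (example data)
    grid_min   : (3,) lower corner of the grid in metres
    voxel_size : edge length of a cubic voxel in metres
    grid_shape : (X, Y, Z) number of voxels along each axis
    """
    # Convert metric coordinates to integer voxel indices.
    idx = np.floor((points - grid_min) / voxel_size).astype(int)
    # Discard points that fall outside the grid bounds.
    valid = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    # Mark each voxel containing at least one point as occupied.
    occ = np.zeros(grid_shape, dtype=bool)
    occ[tuple(idx[valid].T)] = True
    return occ
```

Real systems add semantic labels per voxel and fuse multiple sweeps or modalities, but the grid representation above is the common substrate the surveyed methods build on.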
Quick Start & Requirements
This repository is a survey and does not contain executable code for a specific model. It links to numerous research papers, many of which provide code repositories for their respective implementations. Requirements vary per linked project.
Highlighted Details
Maintenance & Community
This is an active repository, regularly updated with new research. Contributions and suggestions are welcomed via pull requests or direct contact. The primary contact is Professor Lap-Pui Chau.
Licensing & Compatibility
The repository itself does not include a license, as it is a collection of survey information and links. Individual linked projects carry their own licenses, which users should check before reuse.
Limitations & Caveats
As a survey, this repository does not offer a single, unified implementation. Users must refer to individual linked papers for specific code, dependencies, and usage instructions. The rapid pace of research means the survey is a snapshot in time.