3D perception benchmark for autonomous driving robustness
Robo3D is an evaluation suite and benchmark for assessing the robustness of 3D perception models in autonomous driving against real-world corruptions. It targets researchers and engineers working on 3D perception, providing a standardized method to measure model reliability under adverse conditions like fog, snow, motion blur, and sensor failures.
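To make the idea of a sensor-failure corruption concrete, here is a minimal sketch of a "beam missing" style perturbation: dropping a random subset of LiDAR beams at increasing severity. This is an illustrative approximation only; the names, severity fractions, and the elevation-binning heuristic are assumptions, not the benchmark's actual implementation, which works from the sensor's ring channel and dataset-specific parameters.

```python
import numpy as np

def beam_missing(points: np.ndarray, severity: int, num_beams: int = 64,
                 rng=None) -> np.ndarray:
    """Drop a random subset of LiDAR beams to mimic partial sensor failure.

    `points` is an (N, 4) array of x, y, z, intensity. The beam index is
    approximated here by binning each point's elevation angle; severity 1-5
    maps to an assumed fraction of beams removed (illustrative values).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    drop_fraction = [0.1, 0.25, 0.5, 0.75, 0.9][severity - 1]
    # Approximate each point's beam by its elevation-angle bin.
    elevation = np.arctan2(points[:, 2], np.linalg.norm(points[:, :2], axis=1))
    bins = np.linspace(elevation.min(), elevation.max() + 1e-6, num_beams + 1)
    beam_id = np.digitize(elevation, bins) - 1
    # Pick beams to drop, keep points belonging to the surviving beams.
    dropped = rng.choice(num_beams, size=int(num_beams * drop_fraction),
                         replace=False)
    return points[~np.isin(beam_id, dropped)]
```

A severity-3 call on a 64-beam cloud removes roughly half of the beams, so a model is evaluated on a markedly sparser scan than it was trained on.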
How It Works
Robo3D introduces a comprehensive benchmark by applying various corruption types (e.g., fog, wet ground, snow, motion blur, beam missing, crosstalk, incomplete echo, cross-sensor) at multiple severity levels to established 3D perception datasets (KITTI, SemanticKITTI, nuScenes, Waymo Open Dataset). It evaluates both 3D object detection and LiDAR semantic segmentation models, using metrics such as mean corruption error (mCE) and mean resilience rate (mRR) to quantify how much performance degrades under corruption and how much of the clean-set performance a model retains.
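The two aggregate metrics can be sketched as follows. This is a hedged reconstruction of the definitions in the Robo3D paper: corruption error compares a model's error to a reference baseline's error, summed over severity levels, while resilience rate compares corrupted scores to the model's own clean score. The function name and argument layout are illustrative, not the repo's API.

```python
import numpy as np

def mce_mrr(scores, baseline_scores, clean_score):
    """Aggregate robustness metrics (sketch of the paper's definitions).

    `scores[c][l]` is the task score (e.g. mIoU, in [0, 1]) of the evaluated
    model on corruption c at severity level l; `baseline_scores` is the same
    grid for the reference baseline; `clean_score` is the evaluated model's
    score on the uncorrupted set.
    """
    scores = np.asarray(scores, dtype=float)
    base = np.asarray(baseline_scores, dtype=float)
    # Corruption Error per corruption: model error relative to baseline error.
    ce = (1.0 - scores).sum(axis=1) / (1.0 - base).sum(axis=1)
    # Resilience Rate per corruption: corrupted score relative to clean score.
    rr = scores.sum(axis=1) / (scores.shape[1] * clean_score)
    # mCE / mRR average over all corruption types.
    return ce.mean(), rr.mean()
```

A model matching the baseline on every corruption therefore scores mCE = 1.0 (lower is better), while mRR approaches 1.0 as corrupted performance approaches clean performance (higher is better).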
Quick Start & Requirements
See INSTALL.md for installation instructions and DATA_PREPARE.md for dataset preparation and corruption-set generation details.
Maintenance & Community
The project is associated with ICCV 2023 and has hosted the RoboDrive Challenge. Updates are regularly posted, including competition results and technical reports. The project is built upon the MMDetection3D codebase.
Licensing & Compatibility
Licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, which prohibits commercial use of the benchmark itself. Components built on third-party codebases (e.g., MMDetection3D) carry their own licenses, which must be checked separately before any commercial use.
Limitations & Caveats
The project is primarily an evaluation benchmark and dataset. While it includes code for creating corruption sets, it does not directly provide pre-trained models for all listed architectures, with some models marked as "TODO" for code release.