Benchmark for BEV perception robustness in autonomous driving
This repository provides RoboBEV, the first benchmark for evaluating the robustness of camera-based Bird's Eye View (BEV) perception systems in autonomous driving against natural data corruptions and domain shifts. It targets researchers and engineers in autonomous driving who need to assess and improve the reliability of BEV perception models in real-world, unpredictable conditions.
How It Works
RoboBEV systematically evaluates existing BEV perception models across eight common corruption types (e.g., sensor failure, motion blur, fog, snow) and three domain shift scenarios (city-to-city, day-to-night, dry-to-rain). It introduces two key metrics: mCE (mean Corruption Error) and mRR (mean Resilience Rate) to quantify model degradation and recovery capabilities under these adverse conditions.
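As a rough illustration of how such metrics are typically computed, the Python sketch below derives mCE and mRR from per-corruption NDS scores (assumed here to be already averaged over severity levels). The function names and the example scores are hypothetical, and the normalization of mCE against a fixed baseline model follows the common convention of corruption benchmarks; see the paper and the repository's documentation for the authoritative definitions and official evaluation scripts.

```python
# Minimal sketch (not the official evaluation code) of mCE and mRR,
# assuming per-corruption NDS scores already averaged over severity levels.
from typing import Dict


def mean_corruption_error(
    model_nds: Dict[str, float],     # avg NDS per corruption type, model under test
    baseline_nds: Dict[str, float],  # avg NDS per corruption type, baseline model
) -> float:
    """mCE: per-corruption error normalized by a baseline model, averaged (lower is better)."""
    ces = [(1.0 - model_nds[c]) / (1.0 - baseline_nds[c]) for c in model_nds]
    return 100.0 * sum(ces) / len(ces)


def mean_resilience_rate(
    model_nds: Dict[str, float],  # avg NDS per corruption type, model under test
    clean_nds: float,             # NDS of the same model on clean data
) -> float:
    """mRR: ratio of corrupted NDS to clean NDS, averaged over corruptions (higher is better)."""
    rrs = [score / clean_nds for score in model_nds.values()]
    return 100.0 * sum(rrs) / len(rrs)


if __name__ == "__main__":
    # Illustrative, made-up scores for the eight corruption types.
    corruptions = ["CameraCrash", "FrameLost", "ColorQuant", "MotionBlur",
                   "Brightness", "LowLight", "Fog", "Snow"]
    model = {c: 0.30 for c in corruptions}
    baseline = {c: 0.25 for c in corruptions}
    print(f"mCE: {mean_corruption_error(model, baseline):.1f}")
    print(f"mRR: {mean_resilience_rate(model, clean_nds=0.45):.1f}")
```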
Quick Start & Requirements
See INSTALL.md for installation, DATA_PREPARE.md for dataset preparation, and GET_STARTED.md for usage instructions.
Highlighted Details
Maintenance & Community
Licensing & Compatibility
See LICENSE.md for details.
Limitations & Caveats
The project is built upon MMDetection3D and inherits its dependencies and setup complexity. While the benchmark is extensive, it focuses on camera-based perception; LiDAR-camera fusion models appear in the model zoo but are not explicitly benchmarked under the corruption framework in the provided tables.