RoboBEV by Daniel-xsy

Benchmark for BEV perception robustness in autonomous driving

created 2 years ago
375 stars

Top 77.0% on sourcepulse

Project Summary

This repository provides RoboBEV, the first benchmark for evaluating the robustness of camera-based Bird's Eye View (BEV) perception systems in autonomous driving against natural data corruptions and domain shifts. It targets researchers and engineers in autonomous driving who need to assess and improve the reliability of BEV perception models in real-world, unpredictable conditions.

How It Works

RoboBEV systematically evaluates existing BEV perception models across eight common corruption types (e.g., sensor failure, motion blur, fog, snow) and three domain shift scenarios (city-to-city, day-to-night, dry-to-rain). It introduces two key metrics, mCE (mean Corruption Error) and mRR (mean Resilience Rate): mCE measures a model's error under corruption relative to a baseline model, while mRR measures the fraction of clean-data performance a model retains when its input is corrupted.
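
To make the two metrics concrete, here is a minimal sketch of how they are typically computed from per-corruption scores (e.g., nuScenes NDS). The function names and all numbers are illustrative, not taken from the RoboBEV codebase:

```python
# Hedged sketch: computing mCE and mRR from per-corruption detection scores
# (e.g., NDS, higher is better). Scores below are made-up examples.

def corruption_error(score_model, score_baseline):
    """CE for one corruption: the model's error normalized by a baseline's error."""
    return (1.0 - score_model) / (1.0 - score_baseline)

def resilience_rate(score_corrupt, score_clean):
    """RR for one corruption: fraction of clean-data performance retained."""
    return score_corrupt / score_clean

# Illustrative scores for a model under three corruptions.
clean = 0.50
model_scores = [0.40, 0.30, 0.20]
baseline_scores = [0.35, 0.25, 0.15]  # a reference model under the same corruptions

ces = [corruption_error(m, b) for m, b in zip(model_scores, baseline_scores)]
rrs = [resilience_rate(m, clean) for m in model_scores]

mCE = sum(ces) / len(ces)  # < 1.0 means more robust than the baseline
mRR = sum(rrs) / len(rrs)  # closer to 1.0 means less degradation

print(f"mCE = {mCE:.3f}, mRR = {mRR:.3f}")
```

Averaging error ratios (mCE) rewards robustness relative to a reference model, while averaging retained-performance ratios (mRR) is baseline-free and penalizes absolute degradation.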

Quick Start & Requirements

  • Installation: Refer to INSTALL.md.
  • Data Preparation: Datasets (nuScenes and nuScenes-C) are hosted on OpenDataLab. Refer to DATA_PREPARE.md.
  • Getting Started: Refer to GET_STARTED.md.
  • Prerequisites: PyTorch, MMDetection3D.

Highlighted Details

  • Benchmarks 11 BEV detection algorithms and 1 monocular 3D detection algorithm.
  • Includes leaderboards for BEV map segmentation, multi-camera depth estimation, and semantic occupancy prediction.
  • Provides detailed performance tables comparing various models across corruption types using mCE and mRR metrics.
  • Offers scripts to create custom corruption sets.
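
As a rough illustration of what a corruption-generation script does, the sketch below applies a simple low-light ("dark") corruption at five severity levels. The scaling factors and the `darken` helper are hypothetical; the actual RoboBEV scripts use their own corruption implementations:

```python
# Hedged sketch of severity-graded image corruption (illustrative, not the
# repository's implementation).
import numpy as np

def darken(image, severity):
    """Reduce brightness; severity runs from 1 (mild) to 5 (severe)."""
    factors = [0.6, 0.5, 0.4, 0.3, 0.2]  # made-up per-severity scales
    out = image.astype(np.float32) * factors[severity - 1]
    return np.clip(out, 0, 255).astype(np.uint8)

# Corrupt a dummy camera frame at all severity levels.
frame = np.full((4, 4, 3), 200, dtype=np.uint8)
corrupted = {s: darken(frame, s) for s in range(1, 6)}
```

Benchmarks of this kind typically generate each corruption at several severities and report metrics averaged over them.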

Maintenance & Community

  • The project is associated with the RoboDrive Challenge, with recent updates including workshop slides and video recordings.
  • Leaderboards are available on Papers with Code.
  • Links to OpenDataLab for datasets are provided.

Licensing & Compatibility

  • Licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
  • Commercial use is restricted by the NonCommercial clause; refer to LICENSE.md for details.

Limitations & Caveats

The project is built upon MMDetection3D, inheriting its dependencies and potential complexities. While extensive, the benchmark focuses on camera-based perception; LiDAR-camera fusion models are included in the model zoo but not explicitly benchmarked within the corruption framework in the provided tables.

Health Check

  • Last commit: 5 months ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 16 stars in the last 90 days
