Autonomous driving benchmark dataset for depth estimation research
The DDAD (Dense Depth for Automated Driving) dataset provides a benchmark for long-range, dense depth estimation in autonomous driving scenarios, targeting researchers and engineers in computer vision and robotics. It offers synchronized multi-camera video and high-density LiDAR data, enabling the development and evaluation of self-supervised and semi-supervised monocular depth estimation models in diverse urban environments across the US and Japan.
How It Works
DDAD leverages data from six synchronized, high-resolution cameras (2.4MP, global shutter) and Luminar-H2 LiDAR sensors (250m range, sub-1cm precision) mounted on self-driving vehicles. The LiDAR data is aggregated into a 360-degree point cloud, time-synchronized with the camera frames. The dataset is accessed via the TRI Dataset Governance Policy (DGP) codebase, which facilitates loading synchronized camera and LiDAR frames, projecting point clouds, and accessing intrinsic/extrinsic calibration data.
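The projection step is standard pinhole geometry: transform LiDAR points into the camera frame with the extrinsic calibration, project them through the intrinsic matrix, and scatter the depths into the image plane. A minimal numpy sketch of that idea follows; the function name and the LiDAR-to-camera transform convention are illustrative, not the DGP API.

```python
import numpy as np

def project_lidar_to_camera(points_lidar, extrinsics, intrinsics, image_size):
    """Project (N, 3) LiDAR points into a camera to form a sparse depth map.

    extrinsics: (4, 4) rigid transform taking LiDAR-frame points into the
                camera frame (illustrative convention; check the calibration
                data shipped with the dataset for the actual one).
    intrinsics: (3, 3) pinhole camera matrix K.
    image_size: (H, W) of the target camera image.
    """
    H, W = image_size
    # Move points into the camera frame using homogeneous coordinates.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (extrinsics @ pts_h.T).T[:, :3]
    # Discard points behind the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    # Pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy.
    uv = (intrinsics @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    z = pts_cam[:, 2]
    # Keep only projections that land inside the image.
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    # Scatter depths, keeping the nearest point per pixel; 0 marks "no return".
    depth = np.full((H, W), np.inf)
    np.minimum.at(depth, (v[valid], u[valid]), z[valid])
    depth[np.isinf(depth)] = 0.0
    return depth
```

In practice the DGP loader performs this projection for you (see the quick-start sketch below); the sketch is only meant to show what "generating depth from the LiDAR datum" amounts to.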
Quick Start & Requirements
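A minimal loading sketch, assuming the `SynchronizedSceneDataset` loader from TRI's DGP codebase is installed and the DDAD train/val split has been downloaded and extracted; the dataset path and the chosen datum names below are illustrative.

```python
from dgp.datasets import SynchronizedSceneDataset

# Load synchronized LiDAR + front-camera frames from the train split.
# 'generate_depth_from_datum' projects the LiDAR point cloud into each
# requested camera to produce a sparse ground-truth depth map per image.
dataset = SynchronizedSceneDataset(
    '/data/ddad_train_val/ddad.json',     # illustrative path to the scene JSON
    datum_names=('lidar', 'CAMERA_01'),   # datum names as defined in the scene JSON
    generate_depth_from_datum='lidar',
    split='train',
)

# Each sample is a list of the requested datums.
for sample in dataset:
    lidar, camera_01 = sample[0:2]
    point_cloud = lidar['point_cloud']   # (N, 3) numpy array in the LiDAR frame
    image = camera_01['rgb']             # PIL.Image
    depth = camera_01['depth']           # (H, W) numpy array, generated from 'lidar'
    break
```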
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The dataset is licensed for non-commercial use only, restricting its application in commercial autonomous driving systems. Ground-truth depth for the test split is not publicly released; evaluation on that split requires submitting predictions to the official challenge.