DDAD by TRI-ML

Autonomous driving benchmark dataset for depth estimation research

Created 5 years ago
532 stars

Top 59.7% on SourcePulse

View on GitHub
Project Summary

The DDAD dataset provides a benchmark for long-range, dense depth estimation in autonomous driving scenarios, targeting researchers and engineers in computer vision and robotics. It offers a comprehensive collection of synchronized monocular video and high-density LiDAR data, enabling the development and evaluation of self-supervised and semi-supervised depth estimation models in diverse urban environments across the US and Japan.

How It Works

DDAD leverages data from six synchronized, high-resolution cameras (2.4MP, global shutter) and Luminar-H2 LiDAR sensors (250m range, sub-1cm precision) mounted on self-driving vehicles. The LiDAR data is aggregated into a 360-degree point cloud, time-synchronized with the camera frames. The dataset is accessed via the TRI Dataset Governance Policy (DGP) codebase, which facilitates loading synchronized camera and LiDAR frames, projecting point clouds, and accessing intrinsic/extrinsic calibration data.
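
Loading DDAD through DGP's SynchronizedSceneDataset looks roughly as follows. This is a minimal sketch based on the loader's documented usage; the dataset path is a placeholder and the exact datum names are assumptions.

```python
from dgp.datasets import SynchronizedSceneDataset

# Pair the aggregated LiDAR point cloud with one camera; the
# generate_depth_from_datum argument projects the LiDAR returns into the
# camera frame, yielding a sparse ground-truth depth map per image.
dataset = SynchronizedSceneDataset(
    '<path_to_dataset>/ddad.json',        # scene index shipped with DDAD
    datum_names=('lidar', 'CAMERA_01'),   # assumed datum names
    generate_depth_from_datum='lidar',
    split='train',
)

for sample in dataset:
    lidar, camera_01 = sample[0:2]
    point_cloud = lidar['point_cloud']    # (N, 3) LiDAR points
    rgb = camera_01['rgb']                # PIL.Image from CAMERA_01
    depth = camera_01['depth']            # (H, W) projected LiDAR depth map
```

Per the summary above, the loader also exposes the intrinsic/extrinsic calibration of each datum, so point clouds can be reprojected into any of the six camera views.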

Quick Start & Requirements

  • Install/Run: use the TRI Dataset Governance Policy (DGP) codebase; a setup smoke test is sketched after this list.
  • Prerequisites: Python and the DGP codebase.
  • Data Download: train+val (257 GB); test (size not specified).
  • Resources: plan for at least 257 GB of free disk space for the train+val split alone.
  • Links: DDAD depth challenge, IPython notebook, PackNet-SfM codebase.
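
Before the first full load, a quick sanity check can save a long wait. The snippet below is a hypothetical smoke test, not part of the DGP codebase; the dataset path is a placeholder.

```python
# Hypothetical smoke test: verify DGP is installed and the extracted
# DDAD archive sits where the loader will look for it.
import importlib.util
from pathlib import Path

assert importlib.util.find_spec('dgp'), 'Install the TRI DGP codebase first'

ddad_root = Path('<path_to_dataset>')  # root of the extracted train+val archive
assert (ddad_root / 'ddad.json').exists(), \
    'Download and extract the DDAD train+val split (257 GB)'
print('DGP install and DDAD root look OK')
```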

Highlighted Details

  • Supports self-supervised and semi-supervised monocular depth estimation tracks.
  • Evaluates methods against ground-truth LiDAR depth and reports metrics per semantic class (see the metrics sketch after this list).
  • Features long-range (up to 250m) and dense depth estimation in challenging urban conditions.
  • Includes data from cross-continental settings (US and Japan) with 360-degree camera coverage.
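
The evaluation reports the standard monocular depth metrics, broken down per semantic class in the challenge. The sketch below is a generic implementation of those metrics, not the challenge's official scoring code; the 250m cap mirrors the sensor range noted above.

```python
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray, max_depth: float = 250.0) -> dict:
    """Standard monocular depth metrics over valid ground-truth pixels."""
    # LiDAR ground truth is sparse: score only pixels with a return,
    # capped at the sensor's 250m range.
    valid = (gt > 0) & (gt <= max_depth)
    pred, gt = pred[valid], gt[valid]

    abs_rel = float(np.mean(np.abs(pred - gt) / gt))   # absolute relative error
    sq_rel = float(np.mean((pred - gt) ** 2 / gt))     # squared relative error
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))   # root mean squared error
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = float(np.mean(ratio < 1.25))              # accuracy within 25% of GT
    return {'abs_rel': abs_rel, 'sq_rel': sq_rel, 'rmse': rmse, 'delta1': delta1}
```

A per-class breakdown amounts to intersecting `valid` with a semantic-segmentation mask before computing the same numbers.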

Maintenance & Community

  • Developed by Toyota Research Institute (TRI).
  • Associated with the CVPR 2021 Workshop “Frontiers of Monocular 3D Perception”.
  • Citation provided for the associated “3D Packing for Self-Supervised Monocular Depth Estimation” paper.

Licensing & Compatibility

  • License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
  • Restrictions: Non-commercial use only.

Limitations & Caveats

The dataset is licensed for non-commercial use only, restricting its application in commercial autonomous driving systems. Ground-truth depth for the test split is not publicly released, requiring submission to the challenge for evaluation.

Health Check

  • Last Commit: 4 years ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 4 stars in the last 30 days
