This repository contains the evaluation scripts for the stereo challenge of the ApolloScapes dataset. A test dataset for each new scene will be withheld for benchmarking. (Note: we will not provide point clouds for the very large scenes due to the size of the dataset.)
Details and download links for data from different roads are available.
We extend the dataset by combining 3D scanner data with human-labelled 3D car instances for depth generation.
The dataset has the following structure
`{split}/{data_type}/{image_name}`
data_type:
- fg_mask: foreground mask
- bg_mask: background mask
- disparity: the ground truth disparity
Training data
stereo_train_1 stereo_train_2 stereo_train_3
Testing data
Code for test evaluation:

```bash
#!/bin/bash
python eval_stereo.py
```
For each image, given the predicted disparity and the ground truth disparity, the metric for evaluation is the D1 error: the fraction of pixels in a region whose disparity error is larger than 3 pixels and larger than 5% of the ground-truth disparity, averaged over all evaluated images.
Here the region can be either foreground (fg), background (bg), or the whole region (merge of fg and bg). N is the number of images.
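A minimal sketch of the per-image D1 error for one region, assuming the KITTI-style definition (error > 3 px and > 5% of ground truth) suggested by the D1_fg/D1_bg/D1_all columns below; the function name and the `mask` argument are illustrative, not the evaluation script's actual API.

```python
import numpy as np

def d1_error(pred, gt, mask=None):
    """Fraction of valid pixels whose disparity error exceeds both
    3 px and 5% of the ground truth (KITTI-style D1; assumed here).
    Pixels with gt <= 0 are treated as invalid; `mask` restricts the
    region, e.g. to foreground or background."""
    valid = gt > 0
    if mask is not None:
        valid &= mask.astype(bool)
    if not valid.any():
        return 0.0
    err = np.abs(pred[valid] - gt[valid])
    bad = (err > 3.0) & (err > 0.05 * gt[valid])
    return float(bad.mean())
```

The benchmark score for a region would then be the mean of this value over the N test images.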
The benchmark results table will have the form:
Method | D1_fg | D1_bg | D1_all |
---|---|---|---|
Deepxxx | xx | xx | xx |
Submissions should follow the same structure:

`{split}/{data_type}/{image_name}`

data_type:
- disparity: the estimated disparity
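Disparity maps in this family of benchmarks are commonly stored as 16-bit PNGs holding the disparity multiplied by a fixed scale factor. A round-trip sketch is shown below; the scale factor of 200 is an assumption (check the dataset documentation for the actual encoding), and the function names are illustrative.

```python
import numpy as np

SCALE = 200.0  # assumed encoding factor; verify against the dataset docs

def encode_disparity(disp):
    """Convert float disparity to the uint16 values stored in a PNG."""
    return np.clip(disp * SCALE, 0, 65535).astype(np.uint16)

def decode_disparity(raw):
    """Convert stored uint16 values back to float disparity."""
    return raw.astype(np.float32) / SCALE
```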
Please cite our paper in your publications.
DVI: Depth Guided Video Inpainting for Autonomous Driving. Miao Liao, Feixiang Lu, Dingfu Zhou, Sibo Zhang, Wei Li, Ruigang Yang. ECCV 2020.
@article{liao2020dvi,
title={DVI: Depth Guided Video Inpainting for Autonomous Driving},
author={Liao, Miao and Lu, Feixiang and Zhou, Dingfu and Zhang, Sibo and Li, Wei and Yang, Ruigang},
journal={arXiv preprint arXiv:2007.08854},
year={2020}
}