Overview

Project page | Paper | Code

This repo contains the datasets used in DepthCues, packaged as .zip files. There is one zip file for the subset corresponding to each of the six depth cues: elevation, light-shadow, occlusion, perspective, size, and texture-grad. Five of the zip files contain all the images and annotations needed for benchmarking; the perspective zip contains only the DepthCues annotations, and its images must be downloaded from the original official source.

We provide the code for evaluating models on DepthCues here.

Download DepthCues

First download the six zip files in this repository, one for each of the six depth-cue subsets, then unzip them.
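If you prefer scripting this step, the snippet below is a minimal sketch using the huggingface_hub client and Python's zipfile module. The repository ID is a placeholder and the local directory name is an assumption; substitute the actual values for this dataset.

```python
# Sketch: download the subset zips and extract them.
# NOTE: the repo_id is a placeholder; replace it with this dataset's actual ID.
import zipfile
from pathlib import Path

from huggingface_hub import snapshot_download

data_dir = Path("depthcues_data")  # assumed local destination
repo_dir = snapshot_download(
    repo_id="<org>/<depthcues-dataset>",  # placeholder
    repo_type="dataset",
    allow_patterns=["*.zip"],
    local_dir=data_dir,
)

# Unzip every subset archive into the data directory.
for zip_path in Path(repo_dir).glob("*.zip"):
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(data_dir)
    print(f"Extracted {zip_path.name}")
```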

For the perspective subset, please download the images from the project page of the original paper, then move the two folders ava/ and flickr/ into perspective_v1/images/:

mv path/to/source/dataset/ava path/to/source/dataset/flickr path/to/perspective_v1/images/
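If you are not on a system with mv, a rough Python equivalent is sketched below; the paths mirror the placeholders in the command above and should point at your actual download locations.

```python
# Sketch: move the ava/ and flickr/ image folders into perspective_v1/images/.
import shutil
from pathlib import Path

src_root = Path("path/to/source/dataset")        # where ava/ and flickr/ were downloaded
dst_root = Path("path/to/perspective_v1/images")  # destination inside the perspective subset
dst_root.mkdir(parents=True, exist_ok=True)

for folder in ("ava", "flickr"):
    shutil.move(str(src_root / folder), str(dst_root / folder))
```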

After downloading and unzipping everything, the dataset directories should look like:

<your data dir>/
├── elevation_v1/
│   ├── images/
│   ├── train_data.pkl
│   ├── val_data.pkl
│   └── test_data.pkl
├── lightshadow_v1/
│   ├── images/
│   ├── train_annotations.pkl
│   ├── val_annotations.pkl
│   └── test_annotations.pkl
├── occlusion_v4/
│   ├── images_BSDS/
│   ├── images_COCO/
│   ├── train_data.pkl
│   ├── val_data.pkl
│   └── test_data.pkl
├── perspective_v1/
│   ├── images/
│   └── train_val_test_split.json
├── size_v2/
│   ├── images_indoor/
│   ├── images_outdoor/
│   ├── train_data_indoor.pkl
│   ├── train_data_outdoor.pkl
│   ├── val_data_indoor.pkl
│   ├── val_data_outdoor.pkl
│   ├── test_data_indoor.pkl
│   └── test_data_outdoor.pkl
└── texturegrad_v1/
    ├── images/
    ├── train_data.pkl
    ├── val_data.pkl
    └── test_data.pkl
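To sanity-check the layout and peek at one of the annotation files, something like the following should work. This is only a sketch: the data root is a placeholder, and since the internal structure of the .pkl files is not documented here, the script prints a generic summary rather than assuming specific keys.

```python
# Sketch: confirm the six subset directories exist and inspect one annotation file.
import pickle
from pathlib import Path

data_dir = Path("<your data dir>")  # placeholder: your dataset root
subsets = [
    "elevation_v1", "lightshadow_v1", "occlusion_v4",
    "perspective_v1", "size_v2", "texturegrad_v1",
]

for name in subsets:
    print(f"{name}: {'found' if (data_dir / name).is_dir() else 'MISSING'}")

# Load one annotation file and report its top-level type/keys.
with open(data_dir / "elevation_v1" / "test_data.pkl", "rb") as f:
    data = pickle.load(f)
print(type(data))
if isinstance(data, dict):
    print(list(data.keys())[:10])
elif isinstance(data, (list, tuple)):
    print(f"{len(data)} entries; first entry type: {type(data[0])}")
```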

Copyright and Disclaimer

This dataset is derived from multiple source datasets, each governed by its own copyright and licensing terms. All rights and credit remain with the original copyright holders. This derivative dataset is intended for non-commercial research and educational purposes unless otherwise explicitly permitted by the original licenses. Any redistribution or derivative use of any part of this dataset must comply with the respective license terms of the original sources. This dataset is provided "as is" without warranty of any kind. The creators of this dataset expressly disclaim any liability for damages arising from its use. By using this dataset, you agree to comply with the terms and conditions set forth by each original data source. In no event shall the creators of this dataset be liable for any misuse, infringement, or violation of the underlying copyrights.

Please review the specific terms for each component below.

| Dataset | Publication | Copyright |
| --- | --- | --- |
| Elevation | Workman, S., Zhai, M., & Jacobs, N. Horizon lines in the wild. BMVC 2016. | link |
| Light-shadow | Wang, T., Hu, X., Wang, Q., Heng, P. A., & Fu, C. W. Instance shadow detection. CVPR 2020. | link |
| Occlusion | Zhu, Y., Tian, Y., Metaxas, D., & Dollár, P. Semantic amodal segmentation. CVPR 2017.<br>Lin, T. Y., et al. Microsoft COCO: Common objects in context. ECCV 2014.<br>Arbelaez, P., et al. Contour detection and hierarchical image segmentation. T-PAMI 2010. | COCO-A link, COCO link, BSDS link |
| Perspective | Zhou, Z., Farhat, F., & Wang, J. Z. Detecting dominant vanishing points in natural scenes with application to composition-sensitive image retrieval. IEEE T-MM 2017. | AVA link, Flickr link |
| Size | Geiger, A., Lenz, P., & Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. CVPR 2012.<br>Song, S., Lichtenberg, S. P., & Xiao, J. SUN RGB-D: A RGB-D scene understanding benchmark suite. CVPR 2015. | KITTI link, SUN-RGBD link |
| Texture-grad | Cimpoi, M., Maji, S., Kokkinos, I., Mohamed, S., & Vedaldi, A. Describing textures in the wild. CVPR 2014. | link |