Dataset Card for 3DIEBench-T
This dataset was created and released alongside the paper EquiCaps: Predictor-Free Pose-Aware Pre-Trained Capsule Networks.
Dataset Summary
3DIEBench-T (3D Invariant Equivariant Benchmark - Translated) is a synthetic vision benchmark designed to evaluate invariant and equivariant representations under 3D transformations. Extending the original 3DIEBench rotation benchmark, 3DIEBench-T adds translations to the existing rotations, controlling the scene and object parameters so that 3D rotation and translation prediction (equivariance) remains well-posed while retaining sufficient complexity for classification (invariance). Moving from SO(3) transformations to the more general SE(3) group increases the difficulty of both the invariance and equivariance tasks, enabling the evaluation of representation methods under more realistic settings where multiple geometric transformations occur simultaneously.
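For reference, an element of SE(3) pairs a rotation with a translation and acts on a point as a rigid-body motion:

```latex
% g = (R, t) \in SE(3), with R \in SO(3) a rotation and t \in \mathbb{R}^3 a translation,
% acts on a point x \in \mathbb{R}^3 as
g \cdot x = R x + t
% SO(3) corresponds to the special case t = 0 (rotations only).
```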
Dataset Description
We adhere to the original 3DIEBench data-generation protocol, ensuring that any observed performance differences are attributable to the inclusion of translations rather than to broader dataset modifications. We use 52,472 3D object instances spanning 55 classes from ShapeNetCoreV2, originally sourced from 3D Warehouse. For each instance, we generate 50 uniformly sampled views within the specified parameter ranges via BlenderProc, yielding 2,623,600 images of size 256×256. Note that the 55 classes are not balanced.
Dataset Structure
The data are structured as indicated below.
```
├── SYNSET_1
│   ├── OBJ_ID_1_1
│   │   ├── image_0.jpg
│   │   ├── latent_0.npy
│   │   :
│   │   ├── image_49.jpg
│   │   └── latent_49.npy
│   :
│   └── OBJ_ID_1_N
│       ├── image_0.jpg
│       ├── latent_0.npy
│       :
│       ├── image_49.jpg
│       └── latent_49.npy
:
├── SYNSET_55
│   ├── OBJ_ID_55_1
│   │   ├── image_0.jpg
│   │   ├── latent_0.npy
│   │   :
│   │   ├── image_49.jpg
│   │   └── latent_49.npy
│   :
│   └── OBJ_ID_55_M
│       ├── image_0.jpg
│       ├── latent_0.npy
│       :
│       ├── image_49.jpg
│       └── latent_49.npy
└── LICENSE
```
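For illustration, a minimal Python sketch for iterating over every image/latent pair, assuming the dataset has been downloaded to a local directory (the `DATA_ROOT` path is a placeholder):

```python
import pathlib

import numpy as np
from PIL import Image

# Placeholder for wherever the dataset was downloaded.
DATA_ROOT = pathlib.Path("3DIEBench-T")

# Walk SYNSET -> OBJ_ID -> (image_i.jpg, latent_i.npy), per the layout above.
for synset_dir in sorted(p for p in DATA_ROOT.iterdir() if p.is_dir()):
    for obj_dir in sorted(p for p in synset_dir.iterdir() if p.is_dir()):
        for view in range(50):  # 50 views per object instance
            image = Image.open(obj_dir / f"image_{view}.jpg")  # 256x256 view
            latent = np.load(obj_dir / f"latent_{view}.npy")   # latent vector
            ...  # feed (image, latent) to your pipeline
```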
As shown above, we provide the latent information for each generated image to facilitate downstream tasks. Each latent_*.npy file stores its values in the order listed below. Tait-Bryan angles define extrinsic object rotations, and the light's position is specified in spherical coordinates.
- Rotation X
- Rotation Y
- Rotation Z
- Floor hue
- Light θ (polar angle)
- Light φ (azimuthal angle)
- Light hue
- Translation X
- Translation Y
- Translation Z
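A minimal sketch of unpacking one latent vector according to this order. The file path is illustrative, and the θ/φ naming follows the spherical-coordinates note above; units and axis conventions should be verified against the generation code:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Illustrative path following the directory structure above.
latent = np.load("3DIEBench-T/SYNSET_1/OBJ_ID_1_1/latent_0.npy")

rot_x, rot_y, rot_z = latent[0:3]         # Tait-Bryan angles, extrinsic rotation
floor_hue = latent[3]
light_theta, light_phi = latent[4:6]      # light position, spherical coordinates
light_hue = latent[6]
trans_x, trans_y, trans_z = latent[7:10]  # object translation

# Rotation matrix from the extrinsic Tait-Bryan angles
# (assumes radians and x-y-z order; confirm against the generation code).
R = Rotation.from_euler("xyz", [rot_x, rot_y, rot_z]).as_matrix()
```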
Data Splits
The 3DIEBench-T dataset has two splits, train and validation: 80% of the objects form the training set, and the remaining 20%, sampled from the same transformation distribution, form the validation set. Statistics are given below, and the split files are available in the data directory.
| Dataset Split | Number of Objects | Number of Images (objects × views) |
|---|---|---|
| Train | 41,920 | 2,096,000 |
| Validation | 10,552 | 527,600 |
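A hypothetical loading sketch, assuming each split file lists one object directory per line; the file name and format here are assumptions, and the repository's own scripts are authoritative:

```python
import pathlib

# Placeholder root and assumed split-file name/format; see the GitHub
# repository for the actual split files and loading scripts.
DATA_ROOT = pathlib.Path("3DIEBench-T")

with open("train_split.txt") as f:
    train_objects = [DATA_ROOT / line.strip() for line in f if line.strip()]

print(f"{len(train_objects)} training objects "
      f"-> {len(train_objects) * 50} images")
```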
Dataset Reproducibility & Usage
To reproduce the dataset from scratch, you can find the full instructions and code in our GitHub repository.
The same repository also provides scripts to load and work with the dataset.
License
The 3DIEBench-T dataset is released under the CC-BY-NC 4.0 License.
Contact
If you need help reproducing or using the dataset, please feel free to start a discussion or contact the paper's corresponding author directly at [email protected].
Acknowledgements
The dataset generation is adapted from SIE.
Citation
If you use this data or build on it, please cite the main paper:

```bibtex
@article{konstantinou2025equicaps,
  title={EquiCaps: Predictor-Free Pose-Aware Pre-Trained Capsule Networks},
  author={Konstantinou, Athinoulla and Leontidis, Georgios and Thota, Mamatha and Durrant, Aiden},
  journal={arXiv preprint arXiv:2506.09895},
  year={2025}
}
```