Dataset Card for 3DIEBench-T

This dataset was created and released alongside the paper EquiCaps: Predictor-Free Pose-Aware Pre-Trained Capsule Networks.

Sample images from 3DIEBench-T.

Dataset Summary

3DIEBench-T (3D Invariant Equivariant Benchmark–Translated) is a synthetic vision benchmark designed to evaluate invariant and equivariant representations under 3D transformations. Extending the original 3DIEBench rotation benchmark, 3DIEBench-T adds translations to the existing rotations, controlling the scene and object parameters so that 3D rotation and translation prediction (equivariance) is well defined while retaining sufficient complexity for classification (invariance). By moving from SO(3) transformations to the more general SE(3) group, 3DIEBench-T increases the difficulty of both the invariance and equivariance tasks, enabling the evaluation of representation methods under more realistic settings with simultaneous geometric transformations.

Dataset Description

We adhere to the original 3DIEBench data-generation protocol, ensuring that any observed performance differences are attributable to the inclusion of translations rather than to broader dataset modifications. We use 52,472 3D object instances spanning 55 classes from ShapeNetCoreV2, originally sourced from 3D Warehouse. For each instance, we generate 50 uniformly sampled views within specified ranges via BlenderProc, yielding 2,623,600 images of size 256Γ—256. Note that the 55 classes are not balanced.
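
For illustration only, the sketch below shows how one set of per-view parameters could be drawn uniformly from the ranges listed in the Dataset Structure section further down; it is not the actual BlenderProc generation code, which is available in the GitHub repository.

```python
import numpy as np

# Illustrative only: uniform per-view parameter sampling over the ranges
# documented in the Dataset Structure section (not the actual BlenderProc pipeline).
rng = np.random.default_rng(0)

def sample_view_parameters():
    return {
        "rotation_xyz": rng.uniform(-np.pi / 2, np.pi / 2, size=3),  # Tait-Bryan angles
        "floor_hue": rng.uniform(0.0, 1.0),
        "light_theta": rng.uniform(0.0, np.pi / 4),
        "light_phi": rng.uniform(0.0, 2 * np.pi),
        "light_hue": rng.uniform(0.0, 1.0),
        "translation_xyz": rng.uniform(-0.5, 0.5, size=3),
    }

# 50 views are rendered per object instance.
views = [sample_view_parameters() for _ in range(50)]
```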

Dataset Structure

The data are structured as indicated below.

β”œβ”€β”€ SYNSET_1
β”‚   β”œβ”€β”€ OBJ_ID_1_1
β”‚   β”‚   β”œβ”€β”€ image_0.jpg
β”‚   β”‚   β”œβ”€β”€ latent_0.npy
β”‚   β”‚   :
β”‚   β”‚   :
β”‚   β”‚   β”œβ”€β”€ image_49.jpg
β”‚   β”‚   └── latent_49.npy
β”‚   :
β”‚   :
β”‚   └── OBJ_ID_1_N
β”‚       β”œβ”€β”€ image_0.jpg
β”‚       β”œβ”€β”€ latent_0.npy
β”‚       :
β”‚       :
β”‚       β”œβ”€β”€ image_49.jpg
β”‚       └── latent_49.npy
:
:
β”œβ”€β”€ SYNSET_55
β”‚   β”œβ”€β”€ OBJ_ID_55_1
β”‚   β”‚   β”œβ”€β”€ image_0.jpg
β”‚   β”‚   β”œβ”€β”€ latent_0.npy
β”‚   β”‚   :
β”‚   β”‚   :
β”‚   β”‚   β”œβ”€β”€ image_49.jpg
β”‚   β”‚   └── latent_49.npy
β”‚   :
β”‚   :
β”‚   └── OBJ_ID_55_M
β”‚       β”œβ”€β”€ image_0.jpg
β”‚       β”œβ”€β”€ latent_0.npy
β”‚       :
β”‚       :
β”‚       β”œβ”€β”€ image_49.jpg
β”‚       └── latent_49.npy
└── LICENSE

As shown above, we provide the latent information for each generated image to facilitate downstream tasks; the values are stored in the order listed below. Tait–Bryan angles define the extrinsic object rotations, and the light's position is specified in spherical coordinates.

  • Rotation X ∈[βˆ’Ο€2,Ο€2] \in [-\tfrac{\pi}{2},\tfrac{\pi}{2}]
  • Rotation Y ∈[βˆ’Ο€2,Ο€2] \in [-\tfrac{\pi}{2},\tfrac{\pi}{2}]
  • Rotation Z ∈[βˆ’Ο€2,Ο€2] \in [-\tfrac{\pi}{2},\tfrac{\pi}{2}]
  • Floor hue ∈[0,1] \in [0,1]
  • Light θ∈[0,Ο€4] \theta \in [0,\tfrac{\pi}{4}]
  • Light Ο•βˆˆ[0,2Ο€] \phi \in [0,2\pi]
  • Light hue ∈[0,1] \in [0,1]
  • Translation X ∈[βˆ’0.5,0.5] \in [-0.5, 0.5]
  • Translation Y ∈[βˆ’0.5,0.5] \in [-0.5, 0.5]
  • Translation Z ∈[βˆ’0.5,0.5] \in [-0.5, 0.5]

Data Splits

The 3DIEBench-T dataset has two splits, train and validation: 80% of the objects form the training set, and the remaining 20%, sampled from the same transformation distribution, form the validation set. The statistics are given below, and the split files are available in the data directory.

Dataset Split   Number of Objects   Number of Images (objects Γ— 50 views)
Train           41,920              2,096,000
Validation      10,552              527,600
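
As a quick sanity check of these figures (each object instance contributes exactly 50 rendered views), the totals can be recomputed as follows:

```python
# Each object instance is rendered from 50 views, so image counts are objects * 50.
splits = {"Train": 41_920, "Validation": 10_552}

for name, n_objects in splits.items():
    print(f"{name}: {n_objects * 50:,} images")   # 2,096,000 and 527,600

assert sum(splits.values()) == 52_472                      # total object instances
assert sum(n * 50 for n in splits.values()) == 2_623_600   # total images
```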

Dataset Reproducibility & Usage

To reproduce the dataset from scratch, you can find the full instructions and code in our GitHub repository.

The same repository also provides scripts to load and work with the dataset.
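
For a rough idea of how the files can be consumed, here is a minimal PyTorch-style loader sketch that pairs each image with its latent vector by walking the directory layout shown above. It is an illustration under those assumptions (the class name, traversal, and float casting are ours), not the loader shipped in the repository.

```python
import glob
import os

import numpy as np
from PIL import Image
from torch.utils.data import Dataset


class ThreeDIEBenchT(Dataset):
    """Illustrative loader: pairs image_i.jpg with latent_i.npy per object directory."""

    def __init__(self, root, transform=None):
        self.transform = transform
        # Collect all rendered views across SYNSET_*/OBJ_ID_* directories.
        self.image_paths = sorted(glob.glob(os.path.join(root, "*", "*", "image_*.jpg")))

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image_path = self.image_paths[idx]
        latent_path = image_path.replace("image_", "latent_").replace(".jpg", ".npy")

        image = Image.open(image_path).convert("RGB")
        latent = np.load(latent_path).astype(np.float32)

        if self.transform is not None:
            image = self.transform(image)
        return image, latent
```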

License

The 3DIEBench-T dataset is released under the CC-BY-NC 4.0 License.

Contact

If you need help reproducing or using the dataset, please feel free to start a discussion or contact the paper's corresponding author directly at [email protected].

Acknowledgements

The dataset generation code is adapted from SIE.

Citation

If you use or build on this dataset, please cite the main paper:

@article{konstantinou2025equicaps,
  title={EquiCaps: Predictor-Free Pose-Aware Pre-Trained Capsule Networks},
  author={Konstantinou, Athinoulla and Leontidis, Georgios and Thota, Mamatha and Durrant, Aiden},
  journal={arXiv preprint arXiv:2506.09895},
  year={2025}
}