---
task_categories:
- robotics
language:
- en
---

PhysicalAI-Robotics-Manipulation-Objects is a dataset of automatically generated robot motions for operations such as picking and placing objects in a kitchen environment. The dataset was generated in Isaac Sim, leveraging reasoning algorithms and optimization-based motion planning to find solutions to the tasks automatically [1, 3]. The dataset features a bimanual manipulator built with Kinova Gen3 arms. The environments are kitchen scenes in which the furniture and appliances were procedurally generated [2].
This dataset is for research and development only.

## Dataset Contact(s):
Fabio Ramos ([email protected]) <br>
Anqi Li ([email protected])

## Dataset Creation Date:
03/18/2025

## License/Terms of Use: 
[NVIDIA OneWay Noncommercial License](./NVIDIA%20OneWay%20Noncommercial%20License.pdf)

## Intended Usage:
This dataset is provided in LeRobot format and is intended for training robot policies and foundation models.

## Dataset Characterization
* Data Collection Method <br>
  * Automated <br>
  * Automatic/Sensors <br>
  * Synthetic <br>

* Labeling Method<br>
  * Synthetic <br>

## Dataset Format
Within the collection, there are three datasets in LeRobot format: `pick`, `place_bench`, and `place_cabinet`.
* `pick`: The robot picks an object from the bench top.
* `place_bench`: The robot starts with the object in the gripper and places it on the kitchen's bench top.
* `place_cabinet`: The robot starts with the object in the gripper and places it inside an opened cabinet.

The videos below show three examples of the tasks: 

<div style="display: flex; justify-content: flex-start;">
<img src="./assets/episode_000028.gif" width="300" height="300" alt="pick" />
<img src="./assets/episode_000008.gif" width="300" height="300" alt="place_bench" />
<img src="./assets/episode_000048.gif" width="300" height="300" alt="place_cabinet" />
</div>


* action modality: 34D, comprising joint states for the two arms, the gripper joints, the pan and tilt joints, the torso joint, and the front and back wheels.
* observation modalities
  * observation.state: 13D, where the first 12D are the vectorized transform matrix of the "object of interest". The 13th entry is the joint value of the articulated object of interest (e.g., drawer, cabinet).
  * five camera streams, each providing 512x512 RGB, depth, and semantic segmentation renderings stored as mp4 videos:
    * observation.image.world__world_camera
    * observation.image.external_camera
    * observation.image.world__robot__right_arm_camera_color_frame__right_hand_camera
    * observation.image.world__robot__left_arm_camera_color_frame__left_hand_camera
    * observation.image.world__robot__camera_link__head_camera
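
The 12D vectorized transform in `observation.state` can be unpacked into a standard 4x4 homogeneous transform. The exact layout is not specified on this card, so the sketch below assumes a row-major 3x3 rotation followed by a 3D translation; verify this against the actual data before relying on it.

```python
import numpy as np

def unvectorize_transform(vec12):
    """Rebuild a 4x4 homogeneous transform from a 12D state vector.

    Layout assumption (not confirmed by the dataset card): entries 0-8
    are the row-major 3x3 rotation, entries 9-11 are the translation.
    """
    vec12 = np.asarray(vec12, dtype=np.float64)
    T = np.eye(4)
    T[:3, :3] = vec12[:9].reshape(3, 3)
    T[:3, 3] = vec12[9:12]
    return T

def vectorize_transform(T):
    """Inverse of the above: flatten a 4x4 transform into 12D."""
    return np.concatenate([T[:3, :3].reshape(-1), T[:3, 3]])

# Round-trip sanity check: 90-degree yaw plus a translation.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T = np.eye(4)
T[:3, :3] = Rz
T[:3, 3] = [0.5, -0.2, 0.8]
assert np.allclose(unvectorize_transform(vectorize_transform(T)), T)
```

If the data instead uses a column-major layout, swap `reshape(3, 3)` for `reshape(3, 3).T`; the round-trip structure stays the same.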

The videos below illustrate the different camera modalities for a single trajectory. 

<div style="display: flex; justify-content: flex-start;">
<img src="./assets/episode_000057.gif" width="300" height="300" alt="rgb" />
<img src="./assets/episode_000057_depth.gif" width="300" height="300" alt="depth" />
<img src="./assets/episode_000057_semantic.gif" width="300" height="300" alt="semantic" />
</div>


## Dataset Quantification
Record Count:
* `pick`
  * number of episodes: 272
  * number of frames: 69726
  * number of videos: 4080 (1360 RGB videos, 1360 depth videos, 1360 semantic segmentation videos)
* `place_bench`
  * number of episodes: 142
  * number of frames: 29728
  * number of videos: 2130 (710 RGB videos, 710 depth videos, 710 semantic segmentation videos)
* `place_cabinet`
  * number of episodes: 126
  * number of frames: 30322
  * number of videos: 1890 (630 RGB videos, 630 depth videos, 630 semantic segmentation videos)
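
The per-task counts above are internally consistent: each episode is recorded by five cameras, and each camera stream is exported in three modalities (RGB, depth, semantic segmentation), i.e. 15 videos per episode. A short sketch checking that arithmetic, with the counts copied from this card:

```python
# Episode / frame / video counts as listed on this card.
counts = {
    "pick":          {"episodes": 272, "frames": 69726, "videos": 4080},
    "place_bench":   {"episodes": 142, "frames": 29728, "videos": 2130},
    "place_cabinet": {"episodes": 126, "frames": 30322, "videos": 1890},
}

CAMERAS = 5     # world, external, right hand, left hand, head
MODALITIES = 3  # RGB, depth, semantic segmentation

for name, c in counts.items():
    # Each episode yields one video per camera per modality.
    assert c["videos"] == c["episodes"] * CAMERAS * MODALITIES, name

total_episodes = sum(c["episodes"] for c in counts.values())  # 540
total_frames = sum(c["frames"] for c in counts.values())      # 129776
print(total_episodes, total_frames)
```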

Total storage: 4.0 GB


## Reference(s):
```
[1] @inproceedings{garrett2020pddlstream,
  title={{PDDLStream}: Integrating symbolic planners and blackbox samplers via optimistic adaptive planning},
  author={Garrett, Caelan Reed and Lozano-P{\'e}rez, Tom{\'a}s and Kaelbling, Leslie Pack},
  booktitle={Proceedings of the International Conference on Automated Planning and Scheduling},
  volume={30},
  pages={440--448},
  year={2020}
}

[2] @article{Eppner2024,
   title = {scene_synthesizer: A Python Library for Procedural Scene Generation in Robot Manipulation},
   author = {Clemens Eppner and Adithyavairavan Murali and Caelan Garrett and Rowland O'Flaherty and Tucker Hermans and Wei Yang and Dieter Fox},
   journal = {Journal of Open Source Software},
   publisher = {The Open Journal},
   year = {2024},
   note = {\url{https://scene-synthesizer.github.io/}}
}

[3] @inproceedings{curobo_icra23,
    author={Sundaralingam, Balakumar and Hari, Siva Kumar Sastry and
        Fishman, Adam and Garrett, Caelan and Van Wyk, Karl and Blukis, Valts and
        Millane, Alexander and Oleynikova, Helen and Handa, Ankur and
        Ramos, Fabio and Ratliff, Nathan and Fox, Dieter},
    booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
    title={CuRobo: Parallelized Collision-Free Robot Motion Generation},
    year={2023},
    volume={},
    number={},
    pages={8112-8119},
    doi={10.1109/ICRA48891.2023.10160765}
}

```

## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal team to ensure this dataset meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).