Update README.md

Download with:
```
huggingface-cli download 1x-technologies/worldmodel --repo-type dataset --local-dir data
```
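
If you prefer Python, the same download can be done with the `huggingface_hub` library (this mirrors the CLI call above; `local_dir="data"` corresponds to `--local-dir data`):

```
# Python equivalent of the huggingface-cli command above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="1x-technologies/worldmodel",
    repo_type="dataset",
    local_dir="data",  # same destination as --local-dir data
)
```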

**Current version: v2.0**

Changes from v1.1:
- New train and val dataset of 100 hours, replacing the v1.1 datasets
- Blur applied to faces

Contents of train/val_v2.0:

The training dataset is sharded into 100 independent shards. The shapes and definitions of the arrays are as follows (N is the number of frames).

- **videos** - 8x8x8 image patches at 30 Hz, with a 17-frame temporal window, encoded using the [NVIDIA Cosmos Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer) "Cosmos-Tokenizer-DV8x8x8".
- **segment_indices** - For video `n` and frame `i`, `segment_idx_n[i]` uniquely identifies the segment that frame `i` came from. You may want to use this to separate non-contiguous frames from different videos (transitions); see the sketch after this list.
- **robot_states** - State arrays, as defined in the `Index-to-State Mapping`, stored in `np.float32` format. For video `n` and frame `i`, the corresponding state is given by `states_n[i]`.
- **metadata** - The `metadata.json` file provides high-level information about the entire dataset, while `metadata_[n].json` files contain specific details for each individual video `n`.
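
As a rough illustration of how these arrays fit together, here is a minimal sketch that loads one shard and uses the segment indices to split its frames into contiguous video segments. The shard file names, dtypes, and the state dimension below are illustrative assumptions, not the dataset's documented layout; take the real names and shapes from `metadata.json` in your local download:

```
# Minimal sketch: split one shard's frames into contiguous video segments.
# NOTE: the file names, dtypes, and STATE_DIM below are illustrative
# assumptions; consult metadata.json for the real layout.
import numpy as np

STATE_DIM = 25  # hypothetical; the real value comes from the Index-to-State Mapping

segment_idx = np.fromfile("data/train_v2.0/segment_indices_0.bin", dtype=np.int32)
states = np.fromfile("data/train_v2.0/states_0.bin", dtype=np.float32).reshape(-1, STATE_DIM)

# Frames are contiguous wherever the segment index does not change, so a
# segment boundary occurs at every position where consecutive indices differ.
boundaries = np.flatnonzero(np.diff(segment_idx)) + 1
segments = np.split(np.arange(len(segment_idx)), boundaries)

for seg in segments[:3]:
    # seg holds the frame indices of one contiguous segment;
    # states[seg] are the corresponding robot states.
    print(f"segment {segment_idx[seg[0]]}: frames {seg[0]}..{seg[-1]}, "
          f"states shape {states[seg].shape}")
```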