Update README.md
README.md
@@ -12,20 +12,18 @@ Download with:
 huggingface-cli download 1x-technologies/worldmodel --repo-type dataset --local-dir data
 ```
 
-**Current version: v2.0**
-
 Changes from v1.1:
 - New train and val dataset of 100 hours, replacing the v1.1 datasets
 - Blur applied to faces
 
 Contents of train/val_v2.0:
 
-The training dataset is shareded into 100 independent shards. The
+The training dataset is sharded into 100 independent shards. The definitions are as follows:
 
-- **
-- **
-- **
-- **metadata** - The `metadata.json` file provides high-level information about the entire dataset, while `metadata_
+- **video_{shard}.bin**: 8x8x8 image patches at 30 Hz, with a 17-frame temporal window, encoded using [NVIDIA Cosmos Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer) "Cosmos-Tokenizer-DV8x8x8".
+- **segment_idx_{shard}.bin** - Maps each frame `i` to its corresponding segment index. You may want to use this to separate non-contiguous frames from different videos (transitions).
+- **states_{shard}.bin** - State arrays (defined below in `Index-to-State Mapping`) stored in `np.float32` format. For frame `i`, the corresponding state is `states_{shard}[i]`.
+- **metadata** - The `metadata.json` file provides high-level information about the entire dataset, while `metadata_{shard}.json` files contain specific details for each shard.
 
 #### Index-to-State Mapping (NEW)
 ```