---
license: apache-2.0
pretty_name: 1X World Model Challenge Dataset
size_categories:
- 10M<n<100M
viewer: false
---
# 1X World Model Compression Challenge Dataset
This repository hosts the dataset for the [1X World Model Compression Challenge](https://huggingface.co/spaces/1x-technologies/1X_World_Model_Challenge_Compression).
```bash
huggingface-cli download 1x-technologies/worldmodel --repo-type dataset --local-dir data
```
## Updates Since v1.1
- **Train/Val v2.0 (~100 hours)**, replacing v1.1
- **Test v2.0 dataset for the Compression Challenge**
- **Faces blurred** for privacy
- **New raw video dataset** (CC-BY-NC-SA 4.0) at [worldmodel_raw_data](https://huggingface.co/datasets/1x-technologies/worldmodel_raw_data)
- **Example scripts** now split into:
- `cosmos_video_decoder.py` — for decoding Cosmos Tokenized bins
- `unpack_data_test.py` — for reading the new test set
- `unpack_data_train_val.py` — for reading the train/val sets
---
## Train & Val v2.0
### Format
Each split is sharded:
- `video_{shard}.bin` — [NVIDIA Cosmos Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer) discrete DV8×8×8 tokens at 30 Hz
- `segment_idx_{shard}.bin` — segment boundaries
- `states_{shard}.bin` — `np.float32` states (see below)
- `metadata.json` / `metadata_{shard}.json` — overall and per-shard metadata
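As an illustration of the layout described above, here is a minimal reader sketch. It assumes `states_{shard}.bin` is a flat `np.float32` array of 25-dimensional state vectors and `segment_idx_{shard}.bin` holds one `int32` segment id per frame; these assumptions are inferred from the file names, and `unpack_data_train_val.py` in this repo remains the authoritative reader.

```python
import numpy as np

STATE_DIM = 25  # 21 joints + 2 hand closures + linear/angular velocity (see index table below)

def read_shard(states_path, segment_idx_path):
    """Load one shard and group state vectors by segment id.

    Assumes states are flat float32 vectors of STATE_DIM and segment_idx
    is one int32 id per frame (an assumption, not the official spec).
    """
    states = np.fromfile(states_path, dtype=np.float32).reshape(-1, STATE_DIM)
    segment_idx = np.fromfile(segment_idx_path, dtype=np.int32)
    assert len(states) == len(segment_idx)
    # Split the flat state stream into one array per segment.
    return {seg: states[segment_idx == seg] for seg in np.unique(segment_idx)}
```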
---
## Test v2.0
We provide a 450-sample **test_v2.0** dataset for the [World Model Compression Challenge](https://huggingface.co/spaces/1x-technologies/1X_World_Model_Challenge_Compression) with a similar structure (`video_{shard}.bin`, `states_{shard}.bin`). Use:
- `unpack_data_test.py` to read the test set
- `unpack_data_train_val.py` to read train/val
---
### State Index Definition (New)
```
0: HIP_YAW
1: HIP_ROLL
2: HIP_PITCH
3: KNEE_PITCH
4: ANKLE_ROLL
5: ANKLE_PITCH
6: LEFT_SHOULDER_PITCH
7: LEFT_SHOULDER_ROLL
8: LEFT_SHOULDER_YAW
9: LEFT_ELBOW_PITCH
10: LEFT_ELBOW_YAW
11: LEFT_WRIST_PITCH
12: LEFT_WRIST_ROLL
13: RIGHT_SHOULDER_PITCH
14: RIGHT_SHOULDER_ROLL
15: RIGHT_SHOULDER_YAW
16: RIGHT_ELBOW_PITCH
17: RIGHT_ELBOW_YAW
18: RIGHT_WRIST_PITCH
19: RIGHT_WRIST_ROLL
20: NECK_PITCH
21: Left hand closure (0 = open, 1 = closed)
22: Right hand closure (0 = open, 1 = closed)
23: Linear Velocity
24: Angular Velocity
```
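For programmatic access, the table above can be mirrored as a constant list of names. The names and ordering below are copied verbatim from the table (closure and velocity entries renamed to identifier style); the helper function is illustrative, not part of the official scripts.

```python
import numpy as np

# Mirror of the state index table: indices 0-20 are joint angles,
# 21-22 are hand closures (0 = open, 1 = closed), 23-24 are base velocities.
STATE_NAMES = [
    "HIP_YAW", "HIP_ROLL", "HIP_PITCH", "KNEE_PITCH", "ANKLE_ROLL",
    "ANKLE_PITCH", "LEFT_SHOULDER_PITCH", "LEFT_SHOULDER_ROLL",
    "LEFT_SHOULDER_YAW", "LEFT_ELBOW_PITCH", "LEFT_ELBOW_YAW",
    "LEFT_WRIST_PITCH", "LEFT_WRIST_ROLL", "RIGHT_SHOULDER_PITCH",
    "RIGHT_SHOULDER_ROLL", "RIGHT_SHOULDER_YAW", "RIGHT_ELBOW_PITCH",
    "RIGHT_ELBOW_YAW", "RIGHT_WRIST_PITCH", "RIGHT_WRIST_ROLL",
    "NECK_PITCH", "LEFT_HAND_CLOSURE", "RIGHT_HAND_CLOSURE",
    "LINEAR_VELOCITY", "ANGULAR_VELOCITY",
]

def state_to_dict(state):
    """Map one 25-dim state vector to a name -> value dict."""
    assert len(state) == len(STATE_NAMES)
    return dict(zip(STATE_NAMES, state.tolist()))
```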
## Previous v1.1
- `video.bin` — quantized 16×16 patch tokens at 30 Hz
- `segment_ids.bin` — segment boundaries
- `actions/` — folder containing multiple `.bin` files (states, hand closures, etc.)
### v1.1 Joint Index
```
{
0: HIP_YAW
1: HIP_ROLL
2: HIP_PITCH
3: KNEE_PITCH
4: ANKLE_ROLL
5: ANKLE_PITCH
6: LEFT_SHOULDER_PITCH
7: LEFT_SHOULDER_ROLL
8: LEFT_SHOULDER_YAW
9: LEFT_ELBOW_PITCH
10: LEFT_ELBOW_YAW
11: LEFT_WRIST_PITCH
12: LEFT_WRIST_ROLL
13: RIGHT_SHOULDER_PITCH
14: RIGHT_SHOULDER_ROLL
15: RIGHT_SHOULDER_YAW
16: RIGHT_ELBOW_PITCH
17: RIGHT_ELBOW_YAW
18: RIGHT_WRIST_PITCH
19: RIGHT_WRIST_ROLL
20: NECK_PITCH
}
```
A separate `val_v1.1` set is available.
---
## Provided Checkpoints
- `magvit2.ckpt` from [MAGVIT2](https://github.com/TencentARC/Open-MAGVIT2) used in v1.1
- For v2.0, see [NVIDIA Cosmos Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer); we supply `cosmos_video_decoder.py`.
---
## Directory Structure Example
```
train_v1.1/
val_v1.1/
train_v2.0/
val_v2.0/
test_v2.0/
├── video_{shard}.bin
├── states_{shard}.bin
├── ...
├── metadata_{shard}.json
cosmos_video_decoder.py
unpack_data_test.py
unpack_data_train_val.py
```
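Given the directory layout above, the shards of a split can be enumerated by globbing the numbered `.bin` files. This helper is a hypothetical convenience sketch; the per-shard `metadata_{shard}.json` files and `metadata.json` are the authoritative record of what each split contains.

```python
from pathlib import Path

def list_shards(split_dir, prefix="states"):
    """Return shard paths like states_0.bin, states_1.bin, ... sorted by shard index.

    Sorts numerically so states_10.bin comes after states_2.bin.
    """
    paths = Path(split_dir).glob(f"{prefix}_*.bin")
    return sorted(paths, key=lambda p: int(p.stem.split("_")[-1]))
```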
**License**: [Apache-2.0](./LICENSE)
**Author**: 1X Technologies