Commit 41e7b95 · root committed · 2 Parent(s): 104ee6b d3f1386

Merge branch 'main' of https://huggingface.co/datasets/1x-technologies/worldmodel

Files changed (2)
  1. README.md +46 -2
  2. unpack_data.py +32 -0
README.md CHANGED
@@ -12,7 +12,51 @@ Download with:
 huggingface-cli download 1x-technologies/worldmodel --repo-type dataset --local-dir data
 ```
 
-Current version: v1.1
+Changes from v1.1:
+- New train and val dataset of 100 hours, replacing the v1.1 datasets
+- Blur applied to faces
+
+Contents of train/val_v2.0:
+
+The training dataset is sharded into 100 independent shards. The definitions are as follows:
+
+- **video_{shard}.bin**: 8x8x8 image patches at 30 Hz, with a 17-frame temporal window, encoded using the [NVIDIA Cosmos Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer) "Cosmos-Tokenizer-DV8x8x8".
+- **segment_idx_{shard}.bin** - Maps each frame `i` to its corresponding segment index. You may want to use this to separate non-contiguous frames from different videos (transitions).
+- **states_{shard}.bin** - State arrays (defined below in `Index-to-State Mapping`) stored in `np.float32` format. For frame `i`, the corresponding state is `states_{shard}[i]`.
+- **metadata** - The `metadata.json` file provides high-level information about the entire dataset, while `metadata_{shard}.json` files contain specific details for each shard.
+
+#### Index-to-State Mapping (NEW)
+```
+{
+    0: HIP_YAW
+    1: HIP_ROLL
+    2: HIP_PITCH
+    3: KNEE_PITCH
+    4: ANKLE_ROLL
+    5: ANKLE_PITCH
+    6: LEFT_SHOULDER_PITCH
+    7: LEFT_SHOULDER_ROLL
+    8: LEFT_SHOULDER_YAW
+    9: LEFT_ELBOW_PITCH
+    10: LEFT_ELBOW_YAW
+    11: LEFT_WRIST_PITCH
+    12: LEFT_WRIST_ROLL
+    13: RIGHT_SHOULDER_PITCH
+    14: RIGHT_SHOULDER_ROLL
+    15: RIGHT_SHOULDER_YAW
+    16: RIGHT_ELBOW_PITCH
+    17: RIGHT_ELBOW_YAW
+    18: RIGHT_WRIST_PITCH
+    19: RIGHT_WRIST_ROLL
+    20: NECK_PITCH
+    21: Left hand closure state (0 = open, 1 = closed)
+    22: Right hand closure state (0 = open, 1 = closed)
+    23: Linear Velocity
+    24: Angular Velocity
+}
+
+
+Previous version: v1.1
 
 - **magvit2.ckpt** - weights for [MAGVIT2](https://github.com/TencentARC/Open-MAGVIT2) image tokenizer we used. We provide the encoder (tokenizer) and decoder (de-tokenizer) weights.
 
@@ -25,7 +69,7 @@ Contents of train/val_v1.1:
 - **neck_desired** `(N, 1)`: Desired neck pitch.
 - **l_hand_closure** `(N, 1)`: Left hand closure state (0 = open, 1 = closed).
 - **r_hand_closure** `(N, 1)`: Right hand closure state (0 = open, 1 = closed).
-#### Index-to-Joint Mapping
+#### Index-to-Joint Mapping (OLD)
 ```
 {
     0: HIP_YAW
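
For readers skimming the diff, the sketch below shows one way the new per-shard layout could be consumed. It is not part of this commit; the directory name, shard index, and the named index constants are illustrative assumptions based on the `Index-to-State Mapping` above and the 25-dimensional `states_{shard}.bin` arrays.

```python
"""Sketch (not part of this commit): read one frame's state vector from a
v2.0 shard using the Index-to-State Mapping above."""

import json
import pathlib

import numpy as np

dir_path = pathlib.Path("val_v2.0")  # assumed local download location
rank = 0                             # assumed shard index

total_frames = json.load(open(dir_path / f"metadata_{rank}.json"))["shard_num_frames"]

# 25 float32 values per frame, laid out per the Index-to-State Mapping.
states = np.memmap(dir_path / f"states_{rank}.bin", dtype=np.float32,
                   mode="r", shape=(total_frames, 25))

# Hypothetical constants for a few entries of the mapping (indices 20-24).
NECK_PITCH, L_HAND_CLOSURE, R_HAND_CLOSURE, LIN_VEL, ANG_VEL = 20, 21, 22, 23, 24

frame = states[0]
print("neck pitch:", frame[NECK_PITCH])
print("left hand closed:", frame[L_HAND_CLOSURE] > 0.5)
```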
unpack_data.py ADDED
@@ -0,0 +1,32 @@
+"""Example script to unpack one shard of the 1xGPT v2.0 video dataset."""
+
+import json
+import pathlib
+import subprocess
+
+import numpy as np
+
+dir_path = pathlib.Path("val_v2.0")
+rank = 0
+
+# load metadata.json
+metadata = json.load(open(dir_path / "metadata.json"))
+metadata_shard = json.load(open(dir_path / f"metadata_{rank}.json"))
+
+total_frames = metadata_shard["shard_num_frames"]
+
+
+maps = [
+    ("segment_idx", np.int32, []),
+    ("states", np.float32, [25]),
+]
+
+video_path = dir_path / "video_0.mp4"
+
+for m, dtype, shape in maps:
+    filename = dir_path / f"{m}_{rank}.bin"
+    print("Reading", filename, [total_frames] + shape)
+    m_out = np.memmap(filename, dtype=dtype, mode="r", shape=tuple([total_frames] + shape))
+    assert m_out.shape[0] == total_frames
+    print(m, m_out[:100])
+
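
A possible follow-up to the script above (not part of this commit): the README notes that `segment_idx_{shard}.bin` can be used to separate non-contiguous frames from different videos. A minimal sketch, assuming the same paths and dtypes as `unpack_data.py`:

```python
"""Sketch (not part of this commit): split a shard's frames into contiguous
segments using segment_idx_{rank}.bin."""

import json
import pathlib

import numpy as np

dir_path = pathlib.Path("val_v2.0")
rank = 0
total_frames = json.load(open(dir_path / f"metadata_{rank}.json"))["shard_num_frames"]

segment_idx = np.memmap(dir_path / f"segment_idx_{rank}.bin",
                        dtype=np.int32, mode="r", shape=(total_frames,))

# A new segment starts wherever the segment index changes between frames.
boundaries = np.flatnonzero(np.diff(segment_idx)) + 1
segments = np.split(np.arange(total_frames), boundaries)
print(f"{len(segments)} contiguous segments; first has {len(segments[0])} frames")
```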