---
dataset_info:
- config_name: large_100
  features:
  - name: lrs
    sequence:
      array4_d:
        shape:
        - 3
        - 16
        - 16
        - 16
        dtype: float32
  - name: hr
    dtype:
      array4_d:
        shape:
        - 3
        - 64
        - 64
        - 64
        dtype: float32
  splits:
  - name: train
    num_bytes: 268237120
    num_examples: 80
  - name: validation
    num_bytes: 33529640
    num_examples: 10
  - name: test
    num_bytes: 33529640
    num_examples: 10
  download_size: 329464088
  dataset_size: 335296400
- config_name: large_50
  features:
  - name: lrs
    sequence:
      array4_d:
        shape:
        - 3
        - 16
        - 16
        - 16
        dtype: float32
  - name: hr
    dtype:
      array4_d:
        shape:
        - 3
        - 64
        - 64
        - 64
        dtype: float32
  splits:
  - name: train
    num_bytes: 134118560
    num_examples: 40
  - name: validation
    num_bytes: 16764820
    num_examples: 5
  - name: test
    num_bytes: 16764820
    num_examples: 5
  download_size: 164732070
  dataset_size: 167648200
- config_name: small_50
  features:
  - name: lrs
    sequence:
      array4_d:
        shape:
        - 3
        - 4
        - 4
        - 4
        dtype: float32
  - name: hr
    dtype:
      array4_d:
        shape:
        - 3
        - 16
        - 16
        - 16
        dtype: float32
  splits:
  - name: train
    num_bytes: 2220320
    num_examples: 40
  - name: validation
    num_bytes: 277540
    num_examples: 5
  - name: test
    num_bytes: 277540
    num_examples: 5
  download_size: 2645696
  dataset_size: 2775400
---


# Super-resolution of Velocity Fields in Three-dimensional Fluid Dynamics

This dataset loader aims to reproduce the data used in the 3D turbulence super-resolution experiments of Wang et al. (2024).

References:
- Wang et al. (2024): "Discovering Symmetry Breaking in Physical Systems with Relaxed Group Convolution"

## Usage

For a given configuration (e.g. `large_50`):

```py
>>> ds = datasets.load_dataset("dl2-g32/jhtdb", name="large_50")
>>> ds
DatasetDict({
    train: Dataset({
        features: ['lrs', 'hr'],
        num_rows: 40
    })
    validation: Dataset({
        features: ['lrs', 'hr'],
        num_rows: 5
    })
    test: Dataset({
        features: ['lrs', 'hr'],
        num_rows: 5
    })
})
```

Each split contains the input `lrs`, a sequence of low-resolution samples from times `t - ws/2, ..., t, ..., t + ws/2` (where `ws` is the window size), and the target `hr`, the high-resolution sample at time `t`. The parameters used for each data point are listed in the corresponding `metadata_*.csv`.

Specifically, in the default configuration each data point has `3` low-resolution samples and `1` high-resolution sample. Each of the former has shape `(3, 16, 16, 16)` and the latter has shape `(3, 64, 64, 64)`.
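
Continuing the example above, the raw feature values can be converted to NumPy arrays to check these shapes (a minimal sketch; the index `0` is just an arbitrary training example, and the leading axis of `lrs` is the window of low-resolution frames):

```py
>>> import numpy as np
>>> sample = ds["train"][0]
>>> lrs = np.asarray(sample["lrs"], dtype=np.float32)  # window of low-resolution frames
>>> hr = np.asarray(sample["hr"], dtype=np.float32)    # high-resolution frame at time t
>>> lrs.shape  # expected (3, 3, 16, 16, 16): 3 frames of shape (3, 16, 16, 16)
>>> hr.shape   # expected (3, 64, 64, 64)
```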

## Replication

This dataset is generated entirely by `scripts/generate.py`, and each configuration is fully specified in its corresponding `scripts/*.yaml` file.
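
The contents of a configuration file can be inspected before generating anything (a minimal sketch assuming PyYAML is installed; the keys themselves are defined by `scripts/generate.py`, so none are assumed here):

```py
import yaml  # PyYAML

# Load one of the provided configuration files, e.g. the small variant.
with open("scripts/small_50.yaml") as f:
    config = yaml.safe_load(f)

# The resulting dictionary holds the generation parameters used by scripts/generate.py.
print(config)
```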

### Usage

```sh
python -m scripts.generate --config scripts/small_50.yaml --token edu.jhu.pha.turbulence.testing-201311
``` 

This will create two folders in `datasets/jhtdb`:
1. A `tmp` folder that stores all samples across runs and serves as a cache.
2. The corresponding subset, e.g. `small_50`. This folder contains a `metadata_*.csv` and a data `*.zip` archive for each split (see the inspection sketch at the end of this section).

Note:
- For the small variants the default testing token is enough, but for the large variants a token has to be requested; more details [here](https://turbulence.pha.jhu.edu/authtoken.aspx).
- For reference, the `large_100` configuration takes ~15 minutes to generate and totals ~300 MB.
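
After generation, a quick sanity check is to read the per-split metadata files mentioned above (a minimal sketch assuming pandas is installed and the output path `datasets/jhtdb/small_50`; the column names are whatever `scripts/generate.py` writes, so none are assumed here):

```py
from pathlib import Path

import pandas as pd

# Generated subset folder, e.g. datasets/jhtdb/small_50
subset_dir = Path("datasets/jhtdb/small_50")

for csv_path in sorted(subset_dir.glob("metadata_*.csv")):
    df = pd.read_csv(csv_path)
    print(csv_path.name, df.shape)  # one metadata row per generated sample
    print(df.columns.tolist())      # parameters recorded for each data point
```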