## Data Structure in HDF5 Format

The data is stored in HDF5 format with the following structure. Each dataset's HDF5 file contains multiple groups, each representing a different split of the data.

### Group Structure
Each group represents a specific data split and contains several datasets. The groups are organized as follows:

- `all_keys`: Contains all data that will be used as keys during evaluation.
- `val_seen`: Contains seen query data for validation.
- `test_seen`: Contains seen query data for testing.
- `seen_keys`: Contains seen data that will be used as keys during evaluation. Note that for BIOSCAN-5M, these data are also used for training.
- `test_unseen`: Contains unseen query data for testing.
- `val_unseen`: Contains unseen query data for validation.
- `unseen_keys`: Contains unseen data that will be used as keys during evaluation.
- `no_split_and_seen_train`: Contains all data used for contrastive pretraining.

Note that there are slight differences between the group structures of the BIOSCAN-1M and BIOSCAN-5M files, but they are fundamentally consistent.
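
As a quick sanity check, the sketch below lists the splits actually present in a file (the filename matches the listing further down; `sampleid` is one of the per-sample datasets described in the next section):
```python
import h5py

# List every top-level group (data split) and its number of samples.
with h5py.File("BioScan_data_in_splits.hdf5", "r") as f:
    for split_name in f.keys():
        print(split_name, len(f[split_name]["sampleid"]), "samples")
```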

### Dataset Structure

Each group contains several datasets:

- `image`: Stores the image data as byte arrays.
- `image_mask`: Stores the length of each image byte array.
- `barcode`: Stores DNA barcode sequences.
- `family`: Stores the family classification of each sample.
- `genus`: Stores the genus classification of each sample.
- `order`: Stores the order classification of each sample.
- `sampleid`: Stores the sample IDs.
- `species`: Stores the species classification of each sample.
- `processid`: Stores the process IDs for each sample.
- `language_tokens_attention_mask`: Stores the attention masks for language tokens.
- `language_tokens_input_ids`: Stores the input IDs for language tokens.
- `language_tokens_token_type_ids`: Stores the token type IDs for language tokens.
- `image_file`: Stores the filenames of the images.

### Content of each split

Here is a view of the contents of the BIOSCAN-1M `all_keys` split.
```shell
h5ls -r BioScan_data_in_splits.hdf5

/ Group
/all_keys Group
/all_keys/barcode Dataset {21118}
/all_keys/dna_features Dataset {21118, 768}
/all_keys/family Dataset {21118}
/all_keys/genus Dataset {21118}
/all_keys/image Dataset {21118, 24027}
/all_keys/image_features Dataset {21118, 512}
/all_keys/image_file Dataset {21118}
/all_keys/image_mask Dataset {21118}
/all_keys/language_tokens_attention_mask Dataset {21118, 20}
/all_keys/language_tokens_input_ids Dataset {21118, 20}
/all_keys/language_tokens_token_type_ids Dataset {21118, 20}
/all_keys/order Dataset {21118}
/all_keys/sampleid Dataset {21118}
/all_keys/species Dataset {21118}
...
```
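
The same listing can be produced programmatically with `h5py` (a small sketch; it prints every dataset path and its shape):
```python
import h5py

# Walk the file and print each dataset's path and shape, mirroring h5ls -r.
with h5py.File("BioScan_data_in_splits.hdf5", "r") as f:
    f.visititems(
        lambda name, obj: print(name, obj.shape) if isinstance(obj, h5py.Dataset) else None
    )
```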

Most of the datasets store lists of encoded strings. To read them, you can use:
```python
import h5py

# hdf5_inputs_path: path to the dataset HDF5 file
hdf5_split_group = h5py.File(hdf5_inputs_path, "r", libver="latest")["all_keys"]
list_of_barcode = [item.decode("utf-8") for item in hdf5_split_group["barcode"][:]]
```
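
The same pattern applies to the other string-valued datasets; for instance (an illustrative sketch), mapping each `sampleid` to its `species` label:
```python
# Decode two string-valued datasets and build a sampleid -> species lookup.
sample_ids = [item.decode("utf-8") for item in hdf5_split_group["sampleid"][:]]
species = [item.decode("utf-8") for item in hdf5_split_group["species"][:]]
sampleid_to_species = dict(zip(sample_ids, species))
```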

The `image_features` and `dna_features` stored in the dataset HDF5 files were pre-extracted by models without contrastive learning; we used them to obtain baseline results. You shouldn't need them, but to read them, you can:
```python
import numpy as np

image_features = hdf5_split_group["image_features"][:].astype(np.float32)
dna_features = hdf5_split_group["dna_features"][:].astype(np.float32)
```
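
For example, a minimal retrieval sketch (illustrative only, and assuming the query splits also carry `image_features`, which the listing above only shows for `all_keys`) that matches `val_seen` queries against the `all_keys` keys by cosine similarity over image features:
```python
import h5py
import numpy as np

# Hypothetical image-to-image retrieval with the pre-extracted features:
# match the first 1000 val_seen queries against all all_keys keys.
with h5py.File(hdf5_inputs_path, "r", libver="latest") as f:
    query_feats = f["val_seen"]["image_features"][:1000].astype(np.float32)
    key_feats = f["all_keys"]["image_features"][:].astype(np.float32)

query_feats /= np.linalg.norm(query_feats, axis=1, keepdims=True)
key_feats /= np.linalg.norm(key_feats, axis=1, keepdims=True)
nearest_key_idx = (query_feats @ key_feats.T).argmax(axis=1)  # best-matching key per query
```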

The `language_tokens_attention_mask`, `language_tokens_input_ids` and `language_tokens_token_type_ids` datasets hold the tokenization of a string formed by concatenating the `order`, `family`, `genus` and `species` of each sample; we used them as the language input when performing contrastive training with BERT-small. They can be read with:
```python
language_input_ids = hdf5_split_group["language_tokens_input_ids"][:]
language_token_type_ids = hdf5_split_group["language_tokens_token_type_ids"][:]
language_attention_mask = hdf5_split_group["language_tokens_attention_mask"][:]
```
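
As an illustration only, these arrays can be passed directly to a Hugging Face BERT model; the checkpoint name below is an assumption, not necessarily the one used for training:
```python
import torch
from transformers import AutoModel

# Encode the first 8 samples with a small BERT checkpoint (assumed name).
model = AutoModel.from_pretrained("prajjwal1/bert-small")
outputs = model(
    input_ids=torch.as_tensor(language_input_ids[:8], dtype=torch.long),
    token_type_ids=torch.as_tensor(language_token_type_ids[:8], dtype=torch.long),
    attention_mask=torch.as_tensor(language_attention_mask[:8], dtype=torch.long),
)
text_embeddings = outputs.last_hidden_state[:, 0]  # [CLS] embedding per sample
```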

We store images by converting each image into byte-encoded data, padding it to a fixed length, and recording the length of each encoding; this allows the images to be stored in an HDF5 file. The encoded images are stored in the `image` dataset, while their lengths are recorded in `image_mask`. To read a single image, you can reference:

```python
import io

import numpy as np
from PIL import Image

idx = 0  # index of the sample to read
image_enc_padded = hdf5_split_group["image"][idx].astype(np.uint8)
enc_length = hdf5_split_group["image_mask"][idx]
image_enc = image_enc_padded[:enc_length]
curr_image = Image.open(io.BytesIO(image_enc))
```
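
A hypothetical helper (an illustration, not part of the released code) that decodes every image in a split follows the same pattern:
```python
import io

import numpy as np
from PIL import Image


def decode_all_images(hdf5_split_group):
    """Decode every byte-encoded image in a split into a PIL image."""
    images = []
    for idx in range(len(hdf5_split_group["image_mask"])):
        enc_padded = hdf5_split_group["image"][idx].astype(np.uint8)
        enc_length = int(hdf5_split_group["image_mask"][idx])
        images.append(Image.open(io.BytesIO(enc_padded[:enc_length])))
    return images
```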