Modalities: Text
Languages: English
Libraries: Datasets
License: cc0-1.0
Commit f657301 by gabrielaltay · 1 parent: c95c280

upload hubscripts/spl_adr_200db_hub.py to hub from bigbio repo

Files changed (1)
  1. spl_adr_200db.py +405 -0
spl_adr_200db.py ADDED
@@ -0,0 +1,405 @@
+ # coding=utf-8
+ # Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """
+ Dataset containing standardised information about known adverse reactions for 200
+ FDA-approved drugs using information from the respective Structured Product Labels (SPLs).
+ This data resulted from a partnership between the United States Food and Drug Administration
+ (FDA) and the National Library of Medicine.
+
+ Structured Product Labels (SPLs) are the documents FDA uses to exchange information
+ about drugs and other products. For this dataset, SPLs were manually annotated for
+ adverse reactions at the mention level to facilitate development and evaluation of
+ text mining tools for extraction of ADRs from all SPLs. The ADRs were then normalised
+ to the Unified Medical Language System (UMLS) and to the Medical Dictionary for
+ Regulatory Activities (MedDRA).
+
+ These data were used for the adverse event challenge at TAC 2017 (Text Analysis Conference)
+ in four different tasks:
+ * Task 1: Extract AdverseReactions and related mentions (Severity, Factor, DrugClass,
+   Negation, Animal). This is similar to many NLP Named Entity Recognition (NER) evaluations.
+ * Task 2: Identify the relations between AdverseReactions and related mentions (i.e.,
+   Negated, Hypothetical, and Effect). This is similar to many NLP relation
+   identification evaluations.
+ * Task 3: Identify the positive AdverseReaction mention names in the labels.
+   For the purposes of this task, positive will be defined as the caseless strings
+   of all the AdverseReactions that have not been negated and are not related by
+   a Hypothetical relation to a DrugClass or Animal. Note that this means Factors
+   related via a Hypothetical relation are considered positive (e.g., "[unknown risk]
+   Factor of [stroke]AdverseReaction") for the purposes of this task. The result of
+   this task will be a list of unique strings corresponding to the positive ADRs
+   as they were written in the label.
+ * Task 4: Provide MedDRA PT(s) and LLT(s) for each positive AdverseReaction (occasionally,
+   two or more PTs are necessary to fully describe the reaction). For participants
+   approaching the tasks sequentially, this can be viewed as normalization of the terms
+   extracted in Task 3 to MedDRA LLTs/PTs. Because MedDRA is not publicly available,
+   and contains several versions, a standard version of MedDRA v18.1 will be provided
+   to the participants. Other resources such as the UMLS Terminology Services may be
+   used to aid with the normalization process.
+
+ For more information regarding the challenge at TAC 2017, please visit:
+ https://bionlp.nlm.nih.gov/tac2017adversereactions/
+
+ """
+
+ import xml.etree.ElementTree as ET
+ from collections import defaultdict
+ from itertools import accumulate
+ from pathlib import Path
+ from typing import Dict, List, Tuple
+
+ import datasets
+
+ from .bigbiohub import kb_features
+ from .bigbiohub import BigBioConfig
+ from .bigbiohub import Tasks
+
+ _LANGUAGES = ['English']
+ _PUBMED = False
+ _LOCAL = False
+ _CITATION = """\
+ @article{demner2018dataset,
+   author = {Demner-Fushman, Dina and Shooshan, Sonya and Rodriguez, Laritza and Aronson,
+             Alan and Lang, Francois and Rogers, Willie and Roberts, Kirk and Tonning, Joseph},
+   title = {A dataset of 200 structured product labels annotated for adverse drug reactions},
+   journal = {Scientific Data},
+   volume = {5},
+   year = {2018},
+   month = {01},
+   pages = {180001},
+   url = {
+     https://www.researchgate.net/publication/322810855_A_dataset_of_200_structured_product_labels_annotated_for_adverse_drug_reactions
+   },
+   doi = {10.1038/sdata.2018.1}
+ }
+ """
+
+ _DATASETNAME = "spl_adr_200db"
+ _DISPLAYNAME = "SPL ADR"
+
+ _DESCRIPTION = """\
+ The United States Food and Drug Administration (FDA) partnered with the National Library
+ of Medicine to create a pilot dataset containing standardised information about known
+ adverse reactions for 200 FDA-approved drugs. The Structured Product Labels (SPLs),
+ the documents FDA uses to exchange information about drugs and other products, were
+ manually annotated for adverse reactions at the mention level to facilitate development
+ and evaluation of text mining tools for extraction of ADRs from all SPLs. The ADRs were
+ then normalised to the Unified Medical Language System (UMLS) and to the Medical
+ Dictionary for Regulatory Activities (MedDRA).
+ """
+
+ _HOMEPAGE = "https://bionlp.nlm.nih.gov/tac2017adversereactions/"
+
+ # NOTE: Source: https://osf.io/6h9q4/
+ _LICENSE = 'Creative Commons Zero v1.0 Universal'
+ _URLS = {
+     _DATASETNAME: {
+         "train": "https://bionlp.nlm.nih.gov/tac2017adversereactions/train_xml.tar.gz",
+         "unannotated": "https://bionlp.nlm.nih.gov/tac2017adversereactions/unannotated_xml.tar.gz",
+     }
+ }
+
+ _SUPPORTED_TASKS = [
+     Tasks.NAMED_ENTITY_RECOGNITION,
+     Tasks.NAMED_ENTITY_DISAMBIGUATION,
+     Tasks.RELATION_EXTRACTION,
+ ]
+
+ _SOURCE_VERSION = "1.0.0"
+ _BIGBIO_VERSION = "1.0.0"
+
+
+ class SplAdr200DBDataset(datasets.GeneratorBasedBuilder):
+     """
+     The United States Food and Drug Administration (FDA) partnered with the National Library
+     of Medicine to create a pilot dataset containing standardised information about known
+     adverse reactions for 200 FDA-approved drugs.
+
+     These data were used in the adverse event challenge at TAC 2017 (Text Analysis Conference).
+     For more information on the tasks, see: https://bionlp.nlm.nih.gov/tac2017adversereactions/
+     """
+
+     SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)
+     BIGBIO_VERSION = datasets.Version(_BIGBIO_VERSION)
+
+     BUILDER_CONFIGS = []
+
+     for subset_name in _URLS[_DATASETNAME]:
+         BUILDER_CONFIGS.extend(
+             [
+                 BigBioConfig(
+                     name=f"spl_adr_200db_{subset_name}_source",
+                     version=SOURCE_VERSION,
+                     description=f"SPL ADR 200db source {subset_name} schema",
+                     schema="source",
+                     subset_id=f"spl_adr_200db_{subset_name}",
+                 ),
+                 BigBioConfig(
+                     name=f"spl_adr_200db_{subset_name}_bigbio_kb",
+                     version=BIGBIO_VERSION,
+                     description=f"SPL ADR 200db BigBio {subset_name} schema",
+                     schema="bigbio_kb",
+                     subset_id=f"spl_adr_200db_{subset_name}",
+                 ),
+             ]
+         )
+
+     DEFAULT_CONFIG_NAME = "spl_adr_200db_source"
+
+     def _info(self) -> datasets.DatasetInfo:
+         if self.config.schema == "source":
+             unannotated_features = {
+                 "drug_name": datasets.Value("string"),
+                 "text": [datasets.Value("string")],
+                 "sections": [
+                     {
+                         "id": datasets.Value("string"),
+                         "name": datasets.Value("string"),
+                         "text": datasets.Value("string"),
+                     }
+                 ],
+             }
+             features = datasets.Features(
+                 {
+                     **unannotated_features,
+                     "mentions": [
+                         {
+                             "id": datasets.Value("string"),
+                             "section": datasets.Value("string"),
+                             "type": datasets.Value("string"),
+                             "start": datasets.Value("string"),
+                             "len": datasets.Value("string"),
+                             "str": datasets.Value("string"),
+                         }
+                     ],
+                     "relations": [
+                         {
+                             "id": datasets.Value("string"),
+                             "type": datasets.Value("string"),
+                             "arg1": datasets.Value("string"),
+                             "arg2": datasets.Value("string"),
+                         }
+                     ],
+                     "reactions": [
+                         {
+                             "id": datasets.Value("string"),
+                             "str": datasets.Value("string"),
+                             "normalizations": [
+                                 {
+                                     "id": datasets.Value("string"),
+                                     "meddra_pt": datasets.Value("string"),
+                                     "meddra_pt_id": datasets.Value("string"),
+                                     "meddra_llt": datasets.Value("string"),
+                                     "meddra_llt_id": datasets.Value("string"),
+                                     "flag": datasets.Value("string"),
+                                 }
+                             ],
+                         }
+                     ],
+                 }
+             )
+
+         elif self.config.schema == "bigbio_kb":
+             features = kb_features
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             homepage=_HOMEPAGE,
+             license=str(_LICENSE),
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager) -> List[datasets.SplitGenerator]:
+         """Returns SplitGenerators."""
+         *_, subset_name = self.config.subset_id.split("_")
+
+         urls = _URLS[_DATASETNAME][subset_name]
+
+         data_dir = dl_manager.download_and_extract(urls)
+
+         data_files = (Path(data_dir) / f"{subset_name}_xml").glob("*.xml")
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepaths": tuple(data_files),
+                 },
+             ),
+         ]
+
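+     # The parser below expects annotated label XML shaped roughly like this
+     # illustrative sketch. Element and attribute names are inferred from the
+     # XPath queries and attribute accesses in the methods below; the root tag
+     # name "Label" and all attribute values here are placeholders, not an
+     # excerpt from the actual data:
+     #
+     #   <Label drug="drugx">
+     #     <Text>
+     #       <Section id="S1" name="adverse reactions"> ...section text... </Section>
+     #     </Text>
+     #     <Mentions>
+     #       <Mention id="M1" section="S1" type="AdverseReaction" start="10" len="8" str="headache"/>
+     #       <Mention id="M2" section="S1" type="Negation" start="4" len="3" str="not"/>
+     #     </Mentions>
+     #     <Relations>
+     #       <Relation id="RL1" type="Negated" arg1="M1" arg2="M2"/>
+     #     </Relations>
+     #     <Reactions>
+     #       <Reaction id="AR1" str="headache">
+     #         <Normalization id="AR1.N1" meddra_pt="Headache" meddra_pt_id="0000001"/>
+     #       </Reaction>
+     #     </Reactions>
+     #   </Label>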
+     def _source_features_from_xml(self, element_tree):
+         root = element_tree.getroot()
+         drug_name = root.attrib["drug"]
+
+         sections = root.findall(".//Text/Section")
+         relations = root.findall(".//Relations/Relation")
+         reactions = [
+             {
+                 "id": reaction.attrib["id"],
+                 "str": reaction.attrib["str"],
+                 "normalizations": [
+                     {
+                         # NOTE: Default features to `None` as not all of them
+                         # will be present in all reactions.
+                         "meddra_pt": None,
+                         "meddra_pt_id": None,
+                         "meddra_llt": None,
+                         "meddra_llt_id": None,
+                         "flag": None,
+                         **normalization.attrib,
+                     }
+                     for normalization in reaction.findall("Normalization")
+                 ],
+             }
+             for reaction in root.findall(".//Reactions/Reaction")
+         ]
+
+         mentions = root.findall(".//Mentions/Mention")
+         return {
+             "drug_name": drug_name,
+             "text": [section.text for section in sections],
+             "mentions": [mention.attrib for mention in mentions],
+             "relations": [relation.attrib for relation in relations],
+             "reactions": reactions,
+             "sections": [
+                 {**section.attrib, "text": section.text} for section in sections
+             ],
+         }
+
+     def _bigbio_kb_features_from_xml(self, element_tree):
+         source_features = self._source_features_from_xml(
+             element_tree=element_tree,
+         )
+         entity_normalizations = defaultdict(list)
+
+         for reaction in source_features["reactions"]:
+             entity_name = reaction["str"]
+             for normalization in reaction["normalizations"]:
+
+                 # commenting this out for now
+                 # if there is no db_name then it's not a useful normalization
+                 # if normalization["meddra_pt_id"]:
+                 #     entity_normalizations[entity_name].append(
+                 #         {"db_name": None, "db_id": f"pt_{normalization['meddra_pt_id']}"}
+                 #     )
+
+                 if normalization["meddra_llt_id"]:
+                     entity_normalizations[entity_name].append(
+                         {
+                             "db_name": "MedDRA v18.1",
+                             "db_id": f"llt_{normalization['meddra_llt_id']}",
+                         }
+                     )
+
+         section_lengths = list(
+             accumulate(len(section["text"]) for section in source_features["sections"])
+         )
+
+         section_offsets = [
+             (start + index, end + index)
+             for index, (start, end) in enumerate(
+                 zip([0] + section_lengths[:-1], section_lengths)
+             )
+         ]
+
+         section_start_offset_map = {
+             f"S{section_index}": offsets[0]
+             for section_index, offsets in enumerate(section_offsets, 1)
+         }
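+
+         # Worked example of the bookkeeping above (hypothetical numbers): two
+         # sections with text lengths 10 and 20 give section_lengths = [10, 30],
+         # section_offsets = [(0, 10), (11, 31)] and section_start_offset_map =
+         # {"S1": 0, "S2": 11}. The "+ index" term accounts for the single space
+         # inserted between sections when their texts are joined into one string
+         # below, so a mention with section="S2", start="4", len="3" ends up at
+         # document-level offsets [(15, 18)].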
+
+         entities = []
+
+         for mention in source_features["mentions"]:
+             start_points = [
+                 int(start_point) + section_start_offset_map[mention["section"]]
+                 for start_point in mention["start"].split(",")
+             ]
+
+             lens = [int(len_) for len_ in mention["len"].split(",")]
+
+             offsets = [
+                 (start_point, start_point + len_)
+                 for start_point, len_ in zip(start_points, lens)
+             ]
+
+             text = " ".join(section["text"] for section in source_features["sections"])
+
+             entity_strings = [
+                 text[start_point : start_point + len_]
+                 for start_point, len_ in zip(start_points, lens)
+             ]
+
+             entities.append(
+                 {
+                     "id": f"{source_features['drug_name']}_entity_{mention['id']}",
+                     "type": mention["type"],
+                     "text": entity_strings,
+                     "offsets": offsets,
+                     "normalized": entity_normalizations[mention["str"]],
+                 }
+             )
+
+         return {
+             "document_id": source_features["drug_name"],
+             "passages": [
+                 {
+                     "id": f"{source_features['drug_name']}_section_{section['id']}",
+                     "type": section["name"],
+                     "text": [section["text"]],
+                     "offsets": [offsets],
+                 }
+                 for section, offsets in zip(
+                     source_features["sections"], section_offsets
+                 )
+             ],
+             "entities": entities,
+             "relations": [
+                 {
+                     "id": f"{source_features['drug_name']}_relation_{relation['id']}",
+                     "type": relation["type"],
+                     "arg1_id": relation["arg1"],
+                     "arg2_id": relation["arg2"],
+                     "normalized": [],
+                 }
+                 for relation in source_features["relations"]
+             ],
+             "events": [],
+             "coreferences": [],
+         }
+
+     def _generate_examples(self, filepaths: Tuple[Path]) -> Tuple[int, Dict]:
+         """Yields examples as (key, example) tuples."""
+
+         for file_index, drug_filename in enumerate(filepaths):
+             element_tree = ET.parse(drug_filename)
+
+             if self.config.schema == "source":
+                 features = self._source_features_from_xml(
+                     element_tree=element_tree,
+                 )
+             elif self.config.schema == "bigbio_kb":
+                 features = self._bigbio_kb_features_from_xml(
+                     element_tree=element_tree,
+                 )
+                 features["id"] = file_index
+             else:
+                 raise ValueError(
+                     f"Unsupported schema '{self.config.schema}' requested for "
+                     f"dataset with name '{_DATASETNAME}'."
+                 )
+
+             yield file_index, features
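
The script generates four configurations: spl_adr_200db_train_source, spl_adr_200db_train_bigbio_kb, spl_adr_200db_unannotated_source, and spl_adr_200db_unannotated_bigbio_kb. Note that DEFAULT_CONFIG_NAME ("spl_adr_200db_source") does not match any of these generated names, so passing name= explicitly is the safe route. A minimal loading sketch, assuming the script lives in a Hub dataset repo together with bigbiohub.py (the repo id below is illustrative, not confirmed by this commit):

    from datasets import load_dataset

    # Hypothetical repo id; substitute the actual Hub path of this dataset repo.
    # Recent versions of `datasets` may also require trust_remote_code=True for
    # script-based datasets like this one.
    dataset = load_dataset(
        "bigbio/spl_adr_200db",
        name="spl_adr_200db_train_bigbio_kb",
    )

    # Only a TRAIN split is defined by _split_generators.
    example = dataset["train"][0]
    print(example["document_id"])
    print(example["entities"][:2])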