Update README.md
README.md

---
annotations_creators:
- other
language_creators:
- other
multilinguality:
- monolingual
source_datasets:
- original
paperswithcode_id: superglue
arxiv: 1905.00537
pretty_name: SuperGLUE Benchmark Datasets
tags:
- superglue
- nlp
- benchmark
license: mit
language:
- en
dataset_info:
- config_name: boolq
  features:
# … (feature definitions and the remaining configs are elided in this excerpt)
- split: test
  path: wsc/test-*
---

# SuperGLUE Benchmark Datasets

This repository contains the [**SuperGLUE**](https://arxiv.org/pdf/1905.00537) benchmark datasets uploaded to the Hugging Face Hub. Each dataset is available as a separate configuration, making it easy to load individual datasets with the [datasets](https://github.com/huggingface/datasets) library.

## Datasets Included

The repository includes the following SuperGLUE datasets:

- **BoolQ**
- **CB**
- **COPA**
- **MultiRC**
- **ReCoRD**
- **RTE**
- **WiC**
- **WSC**

Each dataset has been preprocessed for consistency across the train, validation, and test splits: keys missing from the test split are filled with type-aware dummy values so that its features match those of the train and validation splits.

## Usage

You can load any of the datasets with the Hugging Face `datasets` library. For example, to load the BoolQ dataset:

```python
from datasets import load_dataset

# Load the BoolQ dataset from the SuperGLUE benchmark
dataset = load_dataset("Hyukkyu/superglue", "BoolQ")

# Access the train, validation, and test splits
train_split = dataset["train"]
validation_split = dataset["validation"]
test_split = dataset["test"]

print(train_split)
```

Replace `"BoolQ"` with the desired configuration name (e.g., `"CB"`, `"COPA"`, `"MultiRC"`) to load the other datasets.

## Data Processing

- **Schema consistency:** A recursive procedure infers the schema from the train and validation splits and fills in keys missing from the test split with dummy values. This ensures that all splits share the same features, preventing issues during model training or evaluation.
- **Type-aware dummy values:** Dummy values are chosen according to the expected type: missing boolean fields are filled with `False`, integer fields with `-1`, float fields with `-1.0`, and string fields with the empty string `""`.
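
The type-aware fill described above can be sketched roughly as follows. This is a minimal illustration, not the repository's actual preprocessing script; the function names (`dummy_like`, `fill_missing`) are hypothetical:

```python
from typing import Any

# Type-aware dummy values, as described above: False for booleans,
# -1 for integers, -1.0 for floats, and "" for strings.
_DUMMIES = {bool: False, int: -1, float: -1.0, str: ""}

def dummy_like(value: Any) -> Any:
    """Return a dummy value matching the type of `value`."""
    if isinstance(value, dict):
        return {k: dummy_like(v) for k, v in value.items()}
    if isinstance(value, list):
        return []  # an empty list stands in for any sequence feature
    for ty, dummy in _DUMMIES.items():
        if type(value) is ty:  # exact type check; bool is tried before int
            return dummy
    return None

def fill_missing(example: dict, reference: dict) -> dict:
    """Recursively add keys present in `reference` but missing from `example`."""
    out = dict(example)
    for key, ref_value in reference.items():
        if key not in out:
            out[key] = dummy_like(ref_value)
        elif isinstance(out[key], dict) and isinstance(ref_value, dict):
            out[key] = fill_missing(out[key], ref_value)
    return out

# A test example missing its "label" field is padded to match a train example.
train_example = {"question": "is the sky blue", "passage": "...", "label": True}
test_example = {"question": "is water wet", "passage": "..."}
print(fill_missing(test_example, train_example))
# → {'question': 'is water wet', 'passage': '...', 'label': False}
```

Applying such a pass to every test example yields splits with identical feature sets, which is what allows all three splits to be declared under a single schema in the dataset card above.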

## Citation

```text
@article{wang2019superglue,
  title={SuperGLUE: A stickier benchmark for general-purpose language understanding systems},
  author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel},
  journal={Advances in Neural Information Processing Systems},
  volume={32},
  year={2019}
}
```