---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test_panels
    path: data/test_panels-*
  - split: test_humans
    path: data/test_humans-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: abstract
    dtype: string
  - name: label
    sequence: string
  splits:
  - name: train
    num_bytes: 23034164
    num_examples: 15390
  - name: test_panels
    num_bytes: 2754077
    num_examples: 1365
  - name: test_humans
    num_bytes: 764554
    num_examples: 497
  download_size: 14975310
  dataset_size: 26552795
---
|
# ERC Panel Classification Dataset

## Dataset Overview

The **ERC Panel Classification Dataset** is designed for fine-tuning a multi-label classifier that predicts one or more ERC (European Research Council) panels from a research paper's title and abstract. The dataset consists of three splits: **train**, **test_panels**, and **test_humans**.

* **train**: Generated through pseudolabeling with outputs from three different large language models (LLMs). Each example carries multi-label panel assignments based on the paper's title and abstract.
* **test_panels**: ERC projects with a single panel assigned to each document, sampled so that roughly 100 examples per panel are included.
* **test_humans**: Created with Argilla. Documents from the training set on which the LLMs disagreed were reviewed by three human annotators. When two annotators could not agree on the panel(s), a third annotator who had not seen the document was consulted, and the final label was assigned by majority agreement among the annotators.
|
|
|
### Use Case

This dataset is intended for fine-tuning a multi-label classifier that predicts ERC panel(s). Training uses a multi-label classification setup, while evaluation has two parts:

* **test_panels**: Single-label evaluation; a prediction counts as correct if any of the predicted labels matches the assigned ERC panel.
* **test_humans**: Multi-label evaluation against the human annotations produced by majority agreement between annotators.
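The two evaluation modes can be sketched as follows. This is an illustrative sketch: the panel codes are toy values, and the example-based F1 metric for **test_humans** is an assumption, not a metric specified by this card:

```python
# Sketch of the two evaluation modes, with toy predictions.

def panel_match(predicted_labels, gold_label):
    """test_panels-style check: correct if the single gold panel
    appears anywhere in the predicted label list."""
    return gold_label in predicted_labels

def example_f1(predicted_labels, gold_labels):
    """test_humans-style check (assumed metric): example-based F1
    between the predicted and gold panel sets."""
    pred, gold = set(predicted_labels), set(gold_labels)
    if not pred and not gold:
        return 1.0
    overlap = len(pred & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# Toy predictions for illustration only.
print(panel_match(["PE6", "PE7"], "PE6"))   # True
print(example_f1(["PE6", "PE7"], ["PE6"]))  # 0.666...
```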
|
|
|
## Dataset Structure

Each example contains the following fields:

* **id**: A unique identifier for the document.
* **title**: The title of the research paper.
* **abstract**: The abstract of the research paper.
* **label**: The assigned panel(s), stored as a list of strings. In **train** this is a multi-label list of panels; in **test_panels** it holds a single panel per document; in **test_humans** it may contain multiple panels as assigned by the human annotators.
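Because `label` is a sequence of strings, a common preprocessing step for multi-label training is converting it into a multi-hot vector. A minimal sketch; the panel codes and the label vocabulary below are illustrative, not taken from the dataset:

```python
# Sketch: turn the `label` sequence field into a multi-hot vector for
# multi-label training. In practice, build PANELS from the train split.
PANELS = ["PE6", "PE7", "LS1", "SH2"]  # illustrative subset of panel codes
PANEL_INDEX = {panel: i for i, panel in enumerate(PANELS)}

def to_multi_hot(labels):
    """Map a list of panel codes to a fixed-length 0/1 vector."""
    vec = [0] * len(PANELS)
    for panel in labels:
        vec[PANEL_INDEX[panel]] = 1
    return vec

# A record shaped like the fields above, with hypothetical values.
example = {"id": "doc-0001", "title": "...", "abstract": "...", "label": ["PE6", "PE7"]}
print(to_multi_hot(example["label"]))  # [1, 1, 0, 0]
```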
|
|
|
## Dataset Splits

The dataset is divided into three splits:

* **train**: Training data generated by pseudolabeling with three different LLMs.
* **test_panels**: ERC projects with single panel assignments.
* **test_humans**: Papers from the training set on which the LLMs disagreed, labeled by human annotators.