Overview
Language model: deepset/roberta-base-squad2-distilled
Language: English
Training data: SQuAD 2.0 training set
Eval data: SQuAD 2.0 dev set
Infrastructure: 1x V100 GPU
Published: Apr 21st, 2021
Details
- Haystack's model distillation feature was used for training; deepset/bert-large-uncased-whole-word-masking-squad2 served as the teacher model (a conceptual sketch of the distillation objective follows below).
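Haystack's actual distillation code is not reproduced here, but the sketch below illustrates the underlying technique, prediction-layer knowledge distillation: the student's start/end logits are trained against the teacher's temperature-softened output distributions. The function name `qa_distillation_loss`, the exact loss formulation, and the dummy tensors are illustrative assumptions rather than deepset's training code; `temperature` and `distillation_loss_weight` correspond to the hyperparameters listed below.

```python
# Illustrative sketch of prediction-layer knowledge distillation for extractive QA.
# This is NOT deepset's / Haystack's training code; it only shows how a student's
# start/end logits can be trained against a teacher's softened distributions.
import torch
import torch.nn.functional as F


def qa_distillation_loss(student_start, student_end, teacher_start, teacher_end,
                         start_labels, end_labels,
                         temperature=5.0, distillation_loss_weight=1.0):
    """Blend hard-label cross entropy with KL divergence against the teacher's
    temperature-softened start/end distributions (standard Hinton-style KD)."""
    def soft_loss(student_logits, teacher_logits):
        return F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2

    hard = F.cross_entropy(student_start, start_labels) + F.cross_entropy(student_end, end_labels)
    soft = soft_loss(student_start, teacher_start) + soft_loss(student_end, teacher_end)
    # With distillation_loss_weight = 1 (as in the hyperparameters below),
    # only the teacher's soft targets contribute in this formulation.
    return distillation_loss_weight * soft + (1.0 - distillation_loss_weight) * hard


# Toy tensors standing in for one batch of model outputs (batch_size = 6, max_seq_len = 384):
batch, seq_len = 6, 384
student_start = torch.randn(batch, seq_len, requires_grad=True)
student_end = torch.randn(batch, seq_len, requires_grad=True)
teacher_start = torch.randn(batch, seq_len)  # teacher logits are treated as fixed targets
teacher_end = torch.randn(batch, seq_len)
start_labels = torch.randint(0, seq_len, (batch,))
end_labels = torch.randint(0, seq_len, (batch,))

loss = qa_distillation_loss(student_start, student_end, teacher_start, teacher_end,
                            start_labels, end_labels)
loss.backward()
```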
Hyperparameters
batch_size = 6
n_epochs = 2
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 5
distillation_loss_weight = 1
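The lr_schedule = LinearWarmup entry usually means a linear learning-rate warmup followed by linear decay. A minimal sketch of such a setup with the Hugging Face transformers helper is shown below; the AdamW optimizer, the 10% warmup fraction, and the step count are assumptions for illustration, not values taken from this card.

```python
# Minimal sketch of a linear-warmup schedule at learning_rate = 3e-5.
# AdamW and the 10% warmup fraction are assumptions; only the peak learning rate
# and the LinearWarmup schedule name come from the hyperparameters above.
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(768, 2)  # placeholder for the student QA model
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

num_training_steps = 2 * (130_000 // 6)           # ~ n_epochs * (SQuAD 2.0 train examples / batch_size)
num_warmup_steps = int(0.1 * num_training_steps)  # assumed warmup fraction

scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=num_warmup_steps,
    num_training_steps=num_training_steps,
)

# Inside the training loop, call scheduler.step() after each optimizer.step().
```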
Performance
"exact": 68.6431398972458
"f1": 72.7637083790805
Authors
- Timo Möller:
timo.moeller [at] deepset.ai
- Julian Risch:
julian.risch [at] deepset.ai
- Malte Pietsch:
malte.pietsch [at] deepset.ai
- Michel Bartels:
michel.bartels [at] deepset.ai
About us
We bring NLP to the industry via open source!
Our focus: industry-specific language models & large-scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch: Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!