---
license: cc-by-4.0
---
|
|
|
# FiD model trained on TQA
|
|
|
- This is the model checkpoint of FiD [2], based on T5 with 3B parameters and trained on the TriviaQA (TQA) dataset [1].
|
|
|
- Hyperparameters: 8 × 40 GB A100 GPUs; batch size 8; AdamW optimizer; learning rate 3e-5; 30,000 training steps.
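
The sketch below illustrates how a checkpoint like this can be queried in the fusion-in-decoder style: each retrieved passage is concatenated with the question, encoded independently, and the decoder attends over all encoder outputs at once. It is a minimal sketch, not the official inference script: it assumes the weights load into the standard `transformers` T5 classes and uses a hypothetical local path (`path/to/fid-t5-3b-tqa`); the reference reader implementation is at https://github.com/facebookresearch/FiD.

```python
# Minimal fusion-in-decoder style inference sketch using vanilla T5 classes.
# "path/to/fid-t5-3b-tqa" is a hypothetical local path, not an official model ID.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

tokenizer = AutoTokenizer.from_pretrained("t5-3b")  # base T5 tokenizer
model = T5ForConditionalGeneration.from_pretrained("path/to/fid-t5-3b-tqa")
model.eval()

question = "Who wrote the novel Dracula?"
passages = [  # retrieved (title, text) pairs, e.g. from a DPR retriever
    ("Dracula", "Dracula is an 1897 Gothic horror novel by Bram Stoker."),
    ("Bram Stoker", "Abraham Stoker was an Irish author, best known for Dracula."),
]

# FiD formats each passage as "question: ... title: ... context: ..." and
# encodes every (question, passage) pair independently.
enc = tokenizer(
    [f"question: {question} title: {title} context: {text}" for title, text in passages],
    padding=True, truncation=True, max_length=250, return_tensors="pt",
)

with torch.no_grad():
    encoder_out = model.encoder(input_ids=enc.input_ids, attention_mask=enc.attention_mask)
    hidden = encoder_out.last_hidden_state          # (n_passages, seq_len, d_model)
    fused = hidden.reshape(1, -1, hidden.size(-1))  # concatenate along the sequence axis
    fused_mask = enc.attention_mask.reshape(1, -1)
    # The decoder now cross-attends over the fused representation of all passages.
    answer_ids = model.generate(
        encoder_outputs=BaseModelOutput(last_hidden_state=fused),
        attention_mask=fused_mask,
        max_length=20,
    )

print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))
```

In the official code the encoder wrapper performs this reshaping internally, so the same model object can be trained and evaluated with many retrieved passages per question.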
|
|
|
References:
|
|
|
[1] TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. ACL 2017.
|
|
|
[2] Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering. EACL 2021.
|
|
|
## Model performance
|
|
|
We evaluate the model on the TQA dataset; it reaches an exact-match (EM) score of 66.1 on the test set.
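
For reference, EM here is the standard open-domain QA exact-match metric: predictions and gold answers are normalized (lower-cased, punctuation and articles removed, whitespace collapsed) before comparison. The sketch below assumes the usual SQuAD-style normalization rather than the exact evaluation script used for this checkpoint.

```python
import re
import string

def normalize_answer(s: str) -> str:
    """Lower-case, drop punctuation and articles, and collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, references: list) -> bool:
    """True if the normalized prediction matches any normalized gold answer."""
    return any(normalize_answer(prediction) == normalize_answer(ref) for ref in references)

# Dataset-level EM is the percentage of questions with at least one exact match:
# em = 100.0 * sum(exact_match(pred, refs) for pred, refs in results) / len(results)
```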
|
|
|
|
|
|
|
|
|