# bart-large-finetuned-question-to-answer

This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1153
- Bleu: 42.8973
- Gen Len: 18.69
## Model description
More information needed
## Intended uses & limitations
More information needed
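
No usage details are documented, so below is a minimal inference sketch rather than an officially supported example. It assumes the checkpoint is available on the Hub as `RohanHBTU/bart-large-finetuned-question-to-answer` and that the model takes a plain question string as input; the sample question and generation settings (`num_beams`, `max_new_tokens`) are illustrative choices, not values taken from the original setup.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repository id taken from this model card; the expected input format is an assumption.
model_id = "RohanHBTU/bart-large-finetuned-question-to-answer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "What is the capital of France?"
inputs = tokenizer(question, return_tensors="pt", truncation=True)

# The evaluation set's average generation length is ~19 tokens, so a modest
# max_new_tokens budget is used here; beam search settings are illustrative.
output_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```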
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training; a hedged sketch of the corresponding `Seq2SeqTrainingArguments` is shown after the list:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
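
For readers who want to reproduce the setup, here is a hedged sketch of how the hyperparameters above map onto `Seq2SeqTrainingArguments` in 🤗 Transformers. Only the listed values are taken from this card; `output_dir`, `evaluation_strategy`, and `predict_with_generate` are assumptions (the per-epoch validation table and the BLEU/Gen Len metrics suggest them), and the Adam betas/epsilon above match the optimizer defaults.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: numeric values come from the hyperparameter list above;
# entries marked "assumed" are not documented for the original run.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-finetuned-question-to-answer",  # assumed placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,                     # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",   # assumed: the table reports one eval per epoch
    predict_with_generate=True,    # assumed: required to compute BLEU / Gen Len
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the optimizer defaults,
# so no explicit optimizer arguments are needed.
```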
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.8366        | 1.0   | 516  | 0.3882          | 32.192  | 18.8467 |
| 0.7567        | 2.0   | 1032 | 0.3263          | 34.6627 | 18.8333 |
| 0.6634        | 3.0   | 1548 | 0.2838          | 34.3455 | 18.8567 |
| 0.587         | 4.0   | 2064 | 0.2207          | 37.4365 | 18.8467 |
| 0.5178        | 5.0   | 2580 | 0.2778          | 36.1141 | 19.2267 |
| 0.4555        | 6.0   | 3096 | 0.1872          | 39.1633 | 18.6967 |
| 0.4137        | 7.0   | 3612 | 0.1854          | 39.3042 | 18.98   |
| 0.3672        | 8.0   | 4128 | 0.1543          | 40.8359 | 18.68   |
| 0.331         | 9.0   | 4644 | 0.1548          | 41.0895 | 18.54   |
| 0.3056        | 10.0  | 5160 | 0.1599          | 42.3384 | 18.6767 |
| 0.2762        | 11.0  | 5676 | 0.1508          | 41.1395 | 18.8167 |
| 0.2533        | 12.0  | 6192 | 0.1224          | 42.1233 | 18.7033 |
| 0.2332        | 13.0  | 6708 | 0.1195          | 42.8086 | 18.6967 |
| 0.2209        | 14.0  | 7224 | 0.1158          | 43.0663 | 18.72   |
| 0.21          | 15.0  | 7740 | 0.1153          | 42.8973 | 18.69   |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0