# t5-small-finetuned-samsum
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the samsum dataset. It achieves the following results on the evaluation set:
- Loss: 1.7409
- Rouge1: 42.6713
- Rouge2: 19.8452
- Rougel: 35.971
- Rougelsum: 39.6113
- Gen Len: 16.6381
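A minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub under the repo id `idkgaming/t5-small-finetuned-samsum` (taken from this card's title) and that a dialogue in samsum style is passed as plain text:

```python
from transformers import pipeline

# Load the fine-tuned summarization checkpoint from the Hub.
summarizer = pipeline("summarization", model="idkgaming/t5-small-finetuned-samsum")

# A short samsum-style dialogue (illustrative input, not from the dataset).
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

summary = summarizer(dialogue, max_length=50, min_length=5)[0]["summary_text"]
print(summary)
```

The average generated length on the evaluation set was about 17 tokens, so tight `max_length` values are reasonable for this model.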
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|---|---|---|---|---|---|---|---|---|
| 2.2617 | 1.0 | 921 | 1.8712 | 40.1321 | 17.123 | 33.1845 | 37.13 | 16.5685 |
| 2.0294 | 2.0 | 1842 | 1.8208 | 41.0756 | 18.1787 | 34.4685 | 38.1966 | 16.6308 |
| 1.9769 | 3.0 | 2763 | 1.7959 | 41.3228 | 18.4732 | 34.6591 | 38.2431 | 16.3875 |
| 1.9406 | 4.0 | 3684 | 1.7740 | 41.658 | 18.7294 | 34.907 | 38.6251 | 16.7078 |
| 1.9185 | 5.0 | 4605 | 1.7638 | 41.8923 | 19.1845 | 35.2485 | 38.7469 | 16.5428 |
| 1.8981 | 6.0 | 5526 | 1.7536 | 42.3314 | 19.2761 | 35.4452 | 39.3067 | 16.7579 |
| 1.8801 | 7.0 | 6447 | 1.7472 | 42.362 | 19.4885 | 35.7207 | 39.274 | 16.5538 |
| 1.868 | 8.0 | 7368 | 1.7452 | 42.3388 | 19.4036 | 35.6189 | 39.2259 | 16.577 |
| 1.8667 | 9.0 | 8289 | 1.7413 | 42.7453 | 19.932 | 36.08 | 39.7062 | 16.6736 |
| 1.8607 | 10.0 | 9210 | 1.7409 | 42.6713 | 19.8452 | 35.971 | 39.6113 | 16.6381 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0