# pakawadeep/mt5-base-finetuned-ctfl-backtranslation_7k

This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results at the final epoch:
- Train Loss: 5.0781
- Validation Loss: 4.5021
- Train Bleu: 0.0
- Train Gen Len: 21.0
- Epoch: 16
## Model description
More information needed
## Intended uses & limitations
More information needed
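Since the card provides no usage notes, the following is a minimal inference sketch using the standard `transformers` TensorFlow API (the input sentence is a placeholder — the card does not document the source language, task prefix, or expected input format):

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "pakawadeep/mt5-base-finetuned-ctfl-backtranslation_7k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input: replace with text in the language the model was trained on.
inputs = tokenizer("Example input sentence", return_tensors="tf")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Note that given the reported BLEU of 0.0 on the final epoch, outputs from this checkpoint may not be usable without further training.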
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
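The optimizer dictionary above can be reconstructed in code; a sketch, assuming the keys map to the Keras-style `AdamWeightDecay` optimizer shipped with `transformers` (which its `name` field suggests):

```python
# Sketch only: reproduces the reported hyperparameters with
# transformers' TensorFlow AdamWeightDecay optimizer.
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)
```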
### Training results
| Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch |
|:----------:|:---------------:|:----------:|:-------------:|:-----:|
| 16.3214    | 7.9429          | 0.0008     | 127.0         | 0     |
| 9.3082     | 7.3301          | 0.0        | 127.0         | 1     |
| 8.1593     | 7.1451          | 0.0        | 127.0         | 2     |
| 7.7440     | 7.0581          | 0.0        | 8.0           | 3     |
| 7.5624     | 7.0120          | 0.0        | 3.0           | 4     |
| 7.4569     | 6.9663          | 0.0        | 3.0           | 5     |
| 7.3873     | 6.9154          | 0.0001     | 114.0         | 6     |
| 7.3206     | 6.8373          | 0.0001     | 127.0         | 7     |
| 7.2175     | 6.7063          | 0.0001     | 127.0         | 8     |
| 7.0942     | 6.4250          | 0.0003     | 127.0         | 9     |
| 6.8748     | 6.0066          | 0.0003     | 127.0         | 10    |
| 6.5383     | 5.5921          | 0.0003     | 127.0         | 11    |
| 6.2094     | 5.2557          | 0.0003     | 127.0         | 12    |
| 5.8157     | 4.9725          | 0.0003     | 127.0         | 13    |
| 5.5098     | 4.7642          | 0.0003     | 127.0         | 14    |
| 5.2683     | 4.6171          | 0.0017     | 127.0         | 15    |
| 5.0781     | 4.5021          | 0.0        | 21.0          | 16    |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.1
- Tokenizers 0.19.1