---
library_name: transformers
language:
- hre
base_model: facebook/mbart-large-50-many-to-many-mmt
tags:
- generated_from_trainer
model-index:
- name: mBART Hre Vietnamese translation 1.0
  results: []
---

# mBART Hre Vietnamese translation 1.0

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an undocumented Hre-Vietnamese parallel dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0011

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 336  | 0.1069          |
| 0.5938        | 2.0   | 672  | 0.0613          |
| 0.0798        | 3.0   | 1008 | 0.0315          |
| 0.0798        | 4.0   | 1344 | 0.0154          |
| 0.0267        | 5.0   | 1680 | 0.0098          |
| 0.0123        | 6.0   | 2016 | 0.0057          |
| 0.0123        | 7.0   | 2352 | 0.0062          |
| 0.0061        | 8.0   | 2688 | 0.0052          |
| 0.0043        | 9.0   | 3024 | 0.0033          |
| 0.0043        | 10.0  | 3360 | 0.0031          |
| 0.003         | 11.0  | 3696 | 0.0023          |
| 0.0024        | 12.0  | 4032 | 0.0023          |
| 0.0024        | 13.0  | 4368 | 0.0020          |
| 0.0018        | 14.0  | 4704 | 0.0016          |
| 0.0016        | 15.0  | 5040 | 0.0020          |
| 0.0016        | 16.0  | 5376 | 0.0015          |
| 0.0014        | 17.0  | 5712 | 0.0013          |
| 0.0011        | 18.0  | 6048 | 0.0011          |
| 0.0011        | 19.0  | 6384 | 0.0010          |
| 0.0008        | 20.0  | 6720 | 0.0011          |

### Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
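
For reference, the hyperparameters listed above map onto the `Seq2SeqTrainingArguments` API roughly as sketched below. Only the reported values are grounded in this card; the output directory is a placeholder, and the dataset loading, preprocessing, and `Seq2SeqTrainer` wiring are omitted because they are not documented here.

```python
# Hedged reconstruction of the listed training configuration; only the values
# reported under "Training hyperparameters" are grounded, the rest is a guess.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mbart-hre-vi-1.0",   # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    optim="adamw_torch",             # AdamW, betas=(0.9, 0.999), eps=1e-8
    eval_strategy="epoch",           # the results table reports one eval per epoch
)
```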
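
Since the card does not yet document inference, here is a minimal usage sketch with the standard mBART-50 classes. The checkpoint path, the source-language code, and the Hre-to-Vietnamese direction are all assumptions: Hre is not among mBART-50's built-in language codes, so `src_lang` must be replaced with whatever code was actually used during fine-tuning.

```python
# Minimal inference sketch (assumptions are marked; not the documented usage).
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_path = "path/to/this-checkpoint"  # placeholder: local path or Hub repo id

tokenizer = MBart50TokenizerFast.from_pretrained(model_path)
model = MBartForConditionalGeneration.from_pretrained(model_path)

# Hre is not one of mBART-50's built-in codes, so the source code used during
# fine-tuning is unknown here; vi_VN is shown only as a placeholder.
tokenizer.src_lang = "vi_VN"  # replace with the code actually used for Hre

text = "..."  # a Hre input sentence
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["vi_VN"],  # Vietnamese output
    max_length=128,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```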