opus-mt-en-zh-finetuned-audio-product

This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-zh on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0662
  • BLEU: 60.5401
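
The checkpoint can be loaded like any Marian translation model. A minimal inference sketch using the transformers `pipeline` API (the example sentence is illustrative only; output quality depends on the audio-product domain the model was tuned for):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub (downloads weights on first use)
translator = pipeline(
    "translation",
    model="nananatsu/opus-mt-en-zh-finetuned-audio-product",
)

result = translator("The earbuds support active noise cancellation.")
print(result[0]["translation_text"])
```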

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08); no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 20
  • mixed_precision_training: Native AMP
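
The list above maps onto `Seq2SeqTrainingArguments` roughly as follows. This is a reconstruction from the card, not the author's actual training script; `output_dir`, the per-epoch evaluation schedule, and the data/model wiring are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the listed hyperparameters; dataset and model setup omitted
training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-en-zh-finetuned-audio-product",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=20,
    fp16=True,               # "Native AMP" mixed precision
    eval_strategy="epoch",   # assumed from the per-epoch results table
    predict_with_generate=True,  # assumed, needed to compute BLEU during eval
)
```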

Training results

Training Loss | Epoch | Step | Validation Loss | BLEU
3.7442        | 1.0   | 66   | 0.1319          | 37.4394
0.1694        | 2.0   | 132  | 0.0904          | 48.8190
0.1179        | 3.0   | 198  | 0.0773          | 51.3552
0.0662        | 4.0   | 264  | 0.0716          | 51.7823
0.0542        | 5.0   | 330  | 0.0681          | 54.9762
0.0445        | 6.0   | 396  | 0.0668          | 55.1506
0.0308        | 7.0   | 462  | 0.0655          | 56.1614
0.0274        | 8.0   | 528  | 0.0656          | 57.7665
0.0246        | 9.0   | 594  | 0.0656          | 58.8837
0.0190        | 10.0  | 660  | 0.0654          | 59.1559
0.0170        | 11.0  | 726  | 0.0661          | 60.0839
0.0151        | 12.0  | 792  | 0.0658          | 58.4574
0.0130        | 13.0  | 858  | 0.0660          | 59.3195
0.0119        | 14.0  | 924  | 0.0657          | 59.5030
0.0118        | 15.0  | 990  | 0.0662          | 60.0147
0.0104        | 16.0  | 1056 | 0.0661          | 60.6406
0.0101        | 17.0  | 1122 | 0.0662          | 60.3492
0.0100        | 18.0  | 1188 | 0.0662          | 60.7125
0.0097        | 19.0  | 1254 | 0.0662          | 60.4941
0.0096        | 20.0  | 1320 | 0.0662          | 60.5401
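
The BLEU column is n-gram overlap between model output and reference translations, scaled 0-100. For intuition only, here is a toy sentence-level BLEU in pure Python; real evaluation should use sacreBLEU-style corpus BLEU with proper tokenization, so the numbers are not comparable to the table:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def toy_bleu(candidate, reference, max_n=4):
    """Toy sentence-level BLEU: geometric mean of 1..max_n-gram
    precisions times a brevity penalty. Illustration only."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        overlap = sum((ngrams(cand, n) & ngrams(ref, n)).values())
        total = max(sum(ngrams(cand, n).values()), 1)
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    # Brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

A perfect match scores 1.0; a short partial match scores close to 0 because of the brevity penalty and missing higher-order n-grams.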

Framework versions

  • Transformers 4.48.2
  • Pytorch 2.5.1+cu124
  • Datasets 3.2.0
  • Tokenizers 0.21.0