DACN2

This model is a fine-tuned version of vinai/phobert-base on an unknown dataset. It achieves the following results on the evaluation set (a loading sketch follows):

  • Loss: 3.1474
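
Since the training task and dataset are not documented, only a minimal loading sketch can be offered here. It pulls the encoder weights with the generic AutoModel class (if the checkpoint contains a task head, its weights will simply be skipped with a warning); the repo id is taken from this card's model tree, and the example sentence is a hypothetical word-segmented Vietnamese input in the format PhoBERT expects.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Repo id from this card; the downstream task head is not documented,
# so only the base encoder is loaded.
model_id = "kylePham/DACN2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# PhoBERT expects word-segmented Vietnamese input (underscore-joined compounds),
# e.g. as produced by VnCoreNLP's RDRSegmenter. Example sentence is illustrative.
text = "Chúng_tôi là những nghiên_cứu_viên ."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```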

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
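
As a sketch, these settings map onto Hugging Face TrainingArguments as below. The output_dir and evaluation_strategy values are assumptions (the results table logs one evaluation per epoch); note also that the Trainer's default optimizer is AdamW, which auto-generated cards report as "Adam".

```python
from transformers import TrainingArguments

# Hedged reconstruction of the reported hyperparameters.
args = TrainingArguments(
    output_dir="DACN2",              # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",     # assumed: the table shows one eval per epoch
)
```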

Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log        | 1.0   | 347   | 1.1891          |
| 1.3935        | 2.0   | 694   | 1.1047          |
| 0.8907        | 3.0   | 1041  | 1.0154          |
| 0.8907        | 4.0   | 1388  | 1.0854          |
| 0.593         | 5.0   | 1735  | 1.3185          |
| 0.3795        | 6.0   | 2082  | 1.5470          |
| 0.3795        | 7.0   | 2429  | 1.4931          |
| 0.2399        | 8.0   | 2776  | 1.6889          |
| 0.1596        | 9.0   | 3123  | 1.8808          |
| 0.1596        | 10.0  | 3470  | 2.0850          |
| 0.1084        | 11.0  | 3817  | 2.3343          |
| 0.0806        | 12.0  | 4164  | 2.5696          |
| 0.0472        | 13.0  | 4511  | 2.6458          |
| 0.0472        | 14.0  | 4858  | 2.7680          |
| 0.0485        | 15.0  | 5205  | 2.8165          |
| 0.0417        | 16.0  | 5552  | 2.8918          |
| 0.0417        | 17.0  | 5899  | 3.0412          |
| 0.0233        | 18.0  | 6246  | 3.0186          |
| 0.0193        | 19.0  | 6593  | 3.0639          |
| 0.0193        | 20.0  | 6940  | 3.0657          |
| 0.0191        | 21.0  | 7287  | 2.9095          |
| 0.0146        | 22.0  | 7634  | 3.0045          |
| 0.0146        | 23.0  | 7981  | 3.2984          |
| 0.013         | 24.0  | 8328  | 3.3791          |
| 0.0131        | 25.0  | 8675  | 3.2946          |
| 0.0101        | 26.0  | 9022  | 3.2814          |
| 0.0101        | 27.0  | 9369  | 3.3177          |
| 0.0114        | 28.0  | 9716  | 3.2819          |
| 0.0046        | 29.0  | 10063 | 3.2945          |
| 0.0046        | 30.0  | 10410 | 3.3072          |

Validation loss bottoms out at epoch 3 (1.0154) and trends upward for the remaining epochs, so the later checkpoints overfit the training data.

Framework versions

  • Transformers 4.35.2
  • PyTorch 2.5.1+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.2