---
library_name: transformers
license: cc-by-4.0
base_model: dccuchile/tulio-chilean-spanish-bert
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - precision
  - recall
  - f1
model-index:
  - name: gestionabilidad_v1_batch16
    results: []
---

gestionabilidad_v1_batch16

This model is a fine-tuned version of dccuchile/tulio-chilean-spanish-bert on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3453
  • Accuracy: 0.9419
  • Precision: 0.9422
  • Recall: 0.9419
  • F1: 0.9420
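The card does not yet include usage instructions, so here is a minimal inference sketch. The repo id `lmaccarini/gestionabilidad_v1_batch16` and the label names used below are assumptions, not confirmed by this card; the pure-Python helpers only illustrate how raw classifier logits become a predicted label.

```python
import math

def softmax(logits):
    """Convert raw classifier logits to probabilities (pure Python)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits, labels):
    """Return (best_label, probability) for one example's logits."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

def load_pipeline():
    # Hypothetical usage: requires transformers installed and network access;
    # the repo id below is an assumption, not confirmed by this card.
    from transformers import pipeline
    return pipeline(
        "text-classification",
        model="lmaccarini/gestionabilidad_v1_batch16",
    )
```

In practice `load_pipeline()(...)` would return the label and score directly; the `predict` helper is only there to make the logits-to-label step explicit.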

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 3
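The list above maps directly onto Hugging Face `TrainingArguments`. A hedged sketch follows: the keyword names match the Trainer API, but the actual training script is not published with this card, so treat this as an illustration rather than the author's code.

```python
# Hyperparameters from the card, expressed as keyword arguments for
# transformers.TrainingArguments (the actual training script is not published).
training_kwargs = {
    "learning_rate": 5e-5,
    "per_device_train_batch_size": 16,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "optim": "adamw_torch",          # betas=(0.9, 0.999), epsilon=1e-08
    "lr_scheduler_type": "linear",
    "num_train_epochs": 3,
}

def build_training_arguments():
    # Hypothetical helper: requires transformers; output_dir is an assumption.
    from transformers import TrainingArguments
    return TrainingArguments(
        output_dir="gestionabilidad_v1_batch16",
        **training_kwargs,
    )
```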

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|---------------|--------|------|-----------------|----------|-----------|--------|--------|
| 0.0985        | 0.2670 | 200  | 0.4983          | 0.9118   | 0.9207    | 0.9118 | 0.9135 |
| 0.1098        | 0.5340 | 400  | 0.3588          | 0.9225   | 0.9261    | 0.9225 | 0.9234 |
| 0.1049        | 0.8011 | 600  | 0.2562          | 0.9325   | 0.9322    | 0.9325 | 0.9319 |
| 0.0706        | 1.0681 | 800  | 0.4060          | 0.9309   | 0.9330    | 0.9309 | 0.9314 |
| 0.0376        | 1.3351 | 1000 | 0.3807          | 0.9339   | 0.9346    | 0.9339 | 0.9341 |
| 0.0406        | 1.6021 | 1200 | 0.3532          | 0.9365   | 0.9367    | 0.9365 | 0.9366 |
| 0.0528        | 1.8692 | 1400 | 0.2903          | 0.9452   | 0.9452    | 0.9452 | 0.9452 |
| 0.0624        | 2.1362 | 1600 | 0.3112          | 0.9439   | 0.9442    | 0.9439 | 0.9440 |
| 0.0219        | 2.4032 | 1800 | 0.3662          | 0.9419   | 0.9422    | 0.9419 | 0.9420 |
| 0.0210        | 2.6702 | 2000 | 0.3364          | 0.9452   | 0.9452    | 0.9452 | 0.9452 |
| 0.0297        | 2.9372 | 2200 | 0.3453          | 0.9419   | 0.9422    | 0.9419 | 0.9420 |
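The step and epoch columns of the table are internally consistent: dividing any step count by its epoch value gives roughly 749 optimizer steps per epoch, which at a train batch size of 16, and assuming no gradient accumulation (an assumption, since accumulation is not listed in the hyperparameters), implies a training set of roughly 12,000 examples. A quick check:

```python
# (step, epoch) pairs copied from the training-results table above.
checkpoints = [(200, 0.2670), (1400, 1.8692), (2200, 2.9372)]

# Every checkpoint should agree on the optimizer steps per epoch.
steps_per_epoch = [round(step / epoch) for step, epoch in checkpoints]

BATCH_SIZE = 16  # train_batch_size from the hyperparameters
# Assumes no gradient accumulation, which the card does not specify.
approx_train_examples = steps_per_epoch[0] * BATCH_SIZE

print(steps_per_epoch, approx_train_examples)
```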

Framework versions

  • Transformers 4.48.3
  • PyTorch 2.5.1+cu124
  • Datasets 3.3.2
  • Tokenizers 0.21.0