---
library_name: transformers
license: mit
base_model: Labira/LabiraPJOK_5x_50
tags:
- generated_from_keras_callback
model-index:
- name: Labira/LabiraPJOK_6x_50
  results: []
---

# Labira/LabiraPJOK_6x_50

This model is a fine-tuned version of Labira/LabiraPJOK_5x_50 on an unknown dataset. It achieves the following results on the evaluation set:

- Train Loss: 0.1707
- Validation Loss: 2.0365
- Epoch: 22

## Model description

More information needed

## Intended uses & limitations

More information needed
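
The card does not yet state the task this checkpoint was fine-tuned for. The sketch below is a minimal inference example that assumes it is an extractive question-answering model (an assumption, not confirmed by this card); if the fine-tuning objective differs, substitute the matching `TFAutoModel*` head class.

```python
# Minimal inference sketch. The question-answering head is an ASSUMPTION:
# the card does not state the task, so swap in the TFAutoModel* class
# that matches the actual fine-tuning objective.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

model_id = "Labira/LabiraPJOK_6x_50"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_id)

# Hypothetical inputs; replace with real question/context text.
question = "..."
context = "..."
inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(**inputs)

# Decode the most likely answer span from the start/end logits.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```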

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 150, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
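
The serialized optimizer config above corresponds to an Adam optimizer with a linear (power-1.0) polynomial learning-rate decay. A short sketch of how the same optimizer could be rebuilt in Keras (the original training script is not included in this card):

```python
# Rebuild the optimizer described by the serialized config above (sketch only).
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=150,          # linear decay to 0 over 150 optimizer steps
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```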

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.0573     | 2.3329          | 0     |
| 1.3047     | 2.0503          | 1     |
| 1.1311     | 1.8969          | 2     |
| 0.4437     | 1.8286          | 3     |
| 0.4078     | 1.8199          | 4     |
| 0.5102     | 1.8192          | 5     |
| 0.4207     | 1.8550          | 6     |
| 0.2787     | 1.9171          | 7     |
| 0.4091     | 1.9373          | 8     |
| 0.3602     | 1.9061          | 9     |
| 0.2561     | 1.8889          | 10    |
| 0.2233     | 1.8902          | 11    |
| 0.2392     | 1.8824          | 12    |
| 0.1526     | 1.8853          | 13    |
| 0.1237     | 1.9106          | 14    |
| 0.1993     | 1.9339          | 15    |
| 0.3208     | 1.9720          | 16    |
| 0.1681     | 2.0189          | 17    |
| 0.1451     | 2.0625          | 18    |
| 0.2050     | 2.0801          | 19    |
| 0.1442     | 2.0687          | 20    |
| 0.2149     | 2.0457          | 21    |
| 0.1707     | 2.0365          | 22    |

### Framework versions

- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.1
- Tokenizers 0.19.1