---
library_name: transformers
license: mit
base_model: indolem/indobert-base-uncased
tags:
  - generated_from_keras_callback
model-index:
  - name: Labira/LabiraPJOK_1_100
    results: []
---

# Labira/LabiraPJOK_1_100

This model is a fine-tuned version of indolem/indobert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set:

- Train Loss: 0.0089
- Validation Loss: 7.5009
- Epoch: 16
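
As a usage sketch, the checkpoint can be loaded with the TensorFlow classes from `transformers`. The fine-tuned task head is not documented on this card, so the example below loads only the base encoder; the Indonesian input string is illustrative.

```python
from transformers import AutoTokenizer, TFAutoModel

# Hedged sketch: the task head for this checkpoint is undocumented,
# so we load the base encoder weights only.
tokenizer = AutoTokenizer.from_pretrained("Labira/LabiraPJOK_1_100")
model = TFAutoModel.from_pretrained("Labira/LabiraPJOK_1_100")

# Illustrative Indonesian input (IndoBERT is an Indonesian-language model).
inputs = tokenizer("Apa itu PJOK?", return_tensors="tf")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```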

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 300, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
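
As a rough reconstruction, the serialized configuration above corresponds to the following `tf.keras` setup (a sketch only; version-dependent fields such as `jit_compile` and the EMA options are omitted):

```python
import tensorflow as tf

# PolynomialDecay with power=1.0 is a linear ramp from 2e-5 down to 0
# over the 300 training steps recorded in the config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=300,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```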

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5573     | 6.1112          | 0     |
| 0.2525     | 4.7174          | 1     |
| 0.4330     | 4.9959          | 2     |
| 0.2496     | 5.5703          | 3     |
| 0.1868     | 5.9061          | 4     |
| 0.1491     | 6.0854          | 5     |
| 0.1951     | 6.3417          | 6     |
| 0.0492     | 6.4913          | 7     |
| 0.0296     | 6.5959          | 8     |
| 0.0352     | 6.7091          | 9     |
| 0.0530     | 6.7985          | 10    |
| 0.0265     | 6.9577          | 11    |
| 0.0239     | 7.1006          | 12    |
| 0.0221     | 7.2133          | 13    |
| 0.0171     | 7.3099          | 14    |
| 0.0154     | 7.4198          | 15    |
| 0.0089     | 7.5009          | 16    |

### Framework versions

- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.1
- Tokenizers 0.19.1
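
For a reproducible environment, the versions above correspond to the following pip pins (assuming the standard PyPI package names):

```
transformers==4.44.2
tensorflow==2.17.0
datasets==3.0.1
tokenizers==0.19.1
```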