# Labira/LabiraPJOK_5_100_Full
This model is a fine-tuned version of [Labira/LabiraPJOK_3_100_Full](https://huggingface.co/Labira/LabiraPJOK_3_100_Full) on an unknown dataset. It achieves the following results on the evaluation set:

- Train Loss: 0.0022
- Validation Loss: 0.0008
- Epoch: 96
## Model description

More information needed

## Intended uses & limitations

More information needed
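The card does not state a task. As a hedged starting point, the sketch below loads the checkpoint with a TensorFlow question-answering head; this is an assumption based on the indolem/indobert-base-uncased (BERT) lineage, not something the card confirms, and the example question and context are placeholders rather than samples from the (unknown) training data.

```python
# Minimal usage sketch (assumption: the checkpoint carries a TF
# question-answering head; verify the actual head before relying on this).
# pip install "transformers==4.46.2" "tensorflow==2.17.0"
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

model_id = "Labira/LabiraPJOK_5_100_Full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_id)

# Placeholder inputs, not taken from the training data.
question = "Apa itu PJOK?"
context = "PJOK adalah Pendidikan Jasmani, Olahraga, dan Kesehatan."
inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(**inputs)

# Greedy decoding of the most likely answer span.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```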
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
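For readability, the serialized optimizer config above maps onto the following stock Keras construction. This is a sketch reconstructed from the listed fields only; the original training script is not published.

```python
import tensorflow as tf

# PolynomialDecay with power=1.0 is a linear decay: 2e-05 -> 0.0 over
# the first 400 optimizer steps, then held at 0.0 (cycle=False).
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=400,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the betas/epsilon listed above; no weight decay or clipping.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```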
### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5586 | 1.9568 | 0 |
| 1.7698 | 1.4494 | 1 |
| 1.2758 | 1.1921 | 2 |
| 0.9196 | 0.8832 | 3 |
| 0.9555 | 0.6157 | 4 |
| 0.6439 | 0.4235 | 5 |
| 0.4805 | 0.3079 | 6 |
| 0.2379 | 0.2399 | 7 |
| 0.2184 | 0.0946 | 8 |
| 0.1195 | 0.0436 | 9 |
| 0.0914 | 0.0233 | 10 |
| 0.0457 | 0.0143 | 11 |
| 0.0791 | 0.0107 | 12 |
| 0.0615 | 0.0084 | 13 |
| 0.0728 | 0.0071 | 14 |
| 0.0147 | 0.0061 | 15 |
| 0.0417 | 0.0058 | 16 |
| 0.0208 | 0.0064 | 17 |
| 0.0116 | 0.0074 | 18 |
| 0.0223 | 0.0055 | 19 |
| 0.0372 | 0.0046 | 20 |
| 0.0381 | 0.0046 | 21 |
| 0.0065 | 0.0049 | 22 |
| 0.0142 | 0.0048 | 23 |
| 0.0199 | 0.0036 | 24 |
| 0.0129 | 0.0025 | 25 |
| 0.0273 | 0.0019 | 26 |
| 0.0075 | 0.0016 | 27 |
| 0.0157 | 0.0015 | 28 |
| 0.0100 | 0.0015 | 29 |
| 0.0063 | 0.0015 | 30 |
| 0.0068 | 0.0015 | 31 |
| 0.0057 | 0.0015 | 32 |
| 0.0039 | 0.0015 | 33 |
| 0.0044 | 0.0015 | 34 |
| 0.0062 | 0.0014 | 35 |
| 0.0118 | 0.0013 | 36 |
| 0.0035 | 0.0011 | 37 |
| 0.0064 | 0.0009 | 38 |
| 0.0049 | 0.0008 | 39 |
| 0.0106 | 0.0008 | 40 |
| 0.0070 | 0.0009 | 41 |
| 0.0030 | 0.0010 | 42 |
| 0.0061 | 0.0011 | 43 |
| 0.0058 | 0.0011 | 44 |
| 0.0083 | 0.0012 | 45 |
| 0.0064 | 0.0014 | 46 |
| 0.0045 | 0.0014 | 47 |
| 0.0521 | 0.0014 | 48 |
| 0.0031 | 0.0015 | 49 |
| 0.0094 | 0.0014 | 50 |
| 0.0060 | 0.0012 | 51 |
| 0.0052 | 0.0010 | 52 |
| 0.0160 | 0.0008 | 53 |
| 0.0125 | 0.0007 | 54 |
| 0.0186 | 0.0007 | 55 |
| 0.0093 | 0.0011 | 56 |
| 0.0023 | 0.0019 | 57 |
| 0.0059 | 0.0023 | 58 |
| 0.0033 | 0.0022 | 59 |
| 0.0033 | 0.0020 | 60 |
| 0.0047 | 0.0017 | 61 |
| 0.0049 | 0.0015 | 62 |
| 0.0021 | 0.0013 | 63 |
| 0.0134 | 0.0012 | 64 |
| 0.0049 | 0.0012 | 65 |
| 0.0674 | 0.0013 | 66 |
| 0.0284 | 0.0013 | 67 |
| 0.0035 | 0.0012 | 68 |
| 0.0074 | 0.0011 | 69 |
| 0.0072 | 0.0010 | 70 |
| 0.0035 | 0.0010 | 71 |
| 0.0038 | 0.0009 | 72 |
| 0.0040 | 0.0009 | 73 |
| 0.0017 | 0.0008 | 74 |
| 0.0183 | 0.0008 | 75 |
| 0.0178 | 0.0007 | 76 |
| 0.0043 | 0.0007 | 77 |
| 0.0081 | 0.0007 | 78 |
| 0.0046 | 0.0007 | 79 |
| 0.0035 | 0.0007 | 80 |
| 0.0097 | 0.0007 | 81 |
| 0.0095 | 0.0007 | 82 |
| 0.0040 | 0.0008 | 83 |
| 0.0036 | 0.0008 | 84 |
| 0.0073 | 0.0008 | 85 |
| 0.0174 | 0.0008 | 86 |
| 0.0034 | 0.0009 | 87 |
| 0.0035 | 0.0009 | 88 |
| 0.0038 | 0.0009 | 89 |
| 0.0025 | 0.0008 | 90 |
| 0.0024 | 0.0008 | 91 |
| 0.0101 | 0.0008 | 92 |
| 0.0027 | 0.0008 | 93 |
| 0.0016 | 0.0008 | 94 |
| 0.0075 | 0.0008 | 95 |
| 0.0022 | 0.0008 | 96 |
### Framework versions

- Transformers 4.46.2
- TensorFlow 2.17.0
- Datasets 3.1.0
- Tokenizers 0.20.3
## Model tree

indolem/indobert-base-uncased (base model) → Labira/LabiraPJOK_1_100_Full → Labira/LabiraPJOK_2_100_Full → Labira/LabiraPJOK_3_100_Full → Labira/LabiraPJOK_5_100_Full (this model)