---
tags:
- generated_from_trainer
model-index:
- name: multi-label-class-classification-on-github-issues
  results: []
---

# multi-label-class-classification-on-github-issues
This model is a fine-tuned version of [neuralmagic/oBERT-12-upstream-pruned-unstructured-97](https://huggingface.co/neuralmagic/oBERT-12-upstream-pruned-unstructured-97) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.1041
- Micro F1: 0.6590
- Macro F1: 0.0721
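
Since usage details aren't documented below, here is a minimal inference sketch, assuming the checkpoint was saved with a standard multi-label sequence-classification head; the repo id is a placeholder, so substitute your local output directory or the actual Hub repo:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "your-username/multi-label-class-classification-on-github-issues"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "Tokenizer crashes when loading a custom vocab file"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label: a sigmoid per label with a threshold, not a softmax over labels.
probs = torch.sigmoid(logits).squeeze(0).tolist()
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```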
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `Trainer` setup mirroring them is sketched after the list):
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
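
This is a sketch of a `Trainer` configuration that reproduces the hyperparameters above, not the original training script; dataset preparation is omitted, and `NUM_LABELS` and the dataset variables are placeholders:

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

NUM_LABELS = 3  # placeholder: set to the number of issue labels in your data

model = AutoModelForSequenceClassification.from_pretrained(
    "neuralmagic/oBERT-12-upstream-pruned-unstructured-97",
    problem_type="multi_label_classification",
    num_labels=NUM_LABELS,
)

args = TrainingArguments(
    output_dir="multi-label-class-classification-on-github-issues",
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",   # the Adam betas/epsilon above are the defaults
    num_train_epochs=30,
    fp16=True,                    # Native AMP mixed precision
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # placeholder: tokenized dataset with float label vectors
    eval_dataset=eval_dataset,    # placeholder
)
trainer.train()
```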
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro F1 | Macro F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| No log        | 1.0   | 49   | 0.2840          | 0.3791   | 0.0172   |
| No log        | 2.0   | 98   | 0.1717          | 0.3791   | 0.0172   |
| No log        | 3.0   | 147  | 0.1436          | 0.3796   | 0.0173   |
| No log        | 4.0   | 196  | 0.1335          | 0.4608   | 0.0299   |
| No log        | 5.0   | 245  | 0.1250          | 0.5254   | 0.0371   |
| No log        | 6.0   | 294  | 0.1179          | 0.6312   | 0.0674   |
| No log        | 7.0   | 343  | 0.1125          | 0.6097   | 0.0549   |
| No log        | 8.0   | 392  | 0.1089          | 0.6368   | 0.0659   |
| No log        | 9.0   | 441  | 0.1061          | 0.6562   | 0.0715   |
| No log        | 10.0  | 490  | 0.1055          | 0.6525   | 0.0706   |
| 0.1604        | 11.0  | 539  | 0.1030          | 0.6636   | 0.0723   |
| 0.1604        | 12.0  | 588  | 0.1043          | 0.6526   | 0.0708   |
| 0.1604        | 13.0  | 637  | 0.1039          | 0.6561   | 0.0709   |
| 0.1604        | 14.0  | 686  | 0.1050          | 0.6576   | 0.0712   |
| 0.1604        | 15.0  | 735  | 0.1060          | 0.6530   | 0.0749   |
| 0.1604        | 16.0  | 784  | 0.1056          | 0.6606   | 0.0827   |
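
Micro F1 pools true/false positives across all labels, while macro F1 averages per-label F1 scores unweighted, so rarely predicted labels drag it down; the large micro/macro gap above is consistent with heavy label imbalance. A minimal sketch of how these metrics are computed with scikit-learn (toy data, not the evaluation set):

```python
import numpy as np
from sklearn.metrics import f1_score

# Toy binary indicator matrices: rows are issues, columns are labels.
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 0]])

# Micro F1 pools true/false positives across all labels.
print(f1_score(y_true, y_pred, average="micro"))
# Macro F1 averages per-label F1 scores, weighting every label equally.
print(f1_score(y_true, y_pred, average="macro"))
```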
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2