# bert-base-chinese-finetuned-QA-b4
This model is a fine-tuned version of ckiplab/bert-base-chinese on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.4979
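As a quick sanity check, the checkpoint can be loaded with the `question-answering` pipeline. This is a hedged usage sketch: the "QA" suffix suggests an extractive span head, but the card does not state the task or dataset, and the question/context pair below is invented for illustration.

```python
from transformers import pipeline

# Assumes this checkpoint carries a span-extraction (question-answering) head.
qa = pipeline(
    "question-answering",
    model="sharkMeow/bert-base-chinese-finetuned-QA-b4",
)

# Invented example: "Which country's city is Taipei?" over a short Chinese context.
result = qa(
    question="台北是哪個國家的城市？",
    context="台北是臺灣的首都，也是臺灣的政治、經濟與文化中心。",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': '臺灣'}
```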
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
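For reference, here is a minimal `TrainingArguments` sketch mirroring the list above. This is an assumed reconstruction, not the author's script; the model, dataset, and `Trainer` wiring are omitted, and argument names follow the transformers 4.34 API.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bert-base-chinese-finetuned-QA-b4",
    learning_rate=3e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,  # effective train batch size of 4
    num_train_epochs=4,
    lr_scheduler_type="linear",
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08, as reported above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```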
### Training results
Training Loss | Epoch | Step | Validation Loss |
---|---|---|---|
2.3371 | 0.07 | 500 | 1.3143 |
1.1681 | 0.14 | 1000 | 1.2073 |
1.019 | 0.22 | 1500 | 1.2466 |
0.9532 | 0.29 | 2000 | 0.9045 |
0.927 | 0.36 | 2500 | 0.8376 |
0.858 | 0.43 | 3000 | 0.8539 |
0.911 | 0.51 | 3500 | 0.7801 |
0.7875 | 0.58 | 4000 | 0.9157 |
0.8543 | 0.65 | 4500 | 0.8728 |
0.8743 | 0.72 | 5000 | 0.8431 |
0.8004 | 0.79 | 5500 | 0.7866 |
0.828 | 0.87 | 6000 | 0.7686 |
0.7597 | 0.94 | 6500 | 0.7637 |
0.728 | 1.01 | 7000 | 0.8749 |
0.4751 | 1.08 | 7500 | 0.8225 |
0.5158 | 1.16 | 8000 | 1.1740 |
0.5046 | 1.23 | 8500 | 0.8435 |
0.501 | 1.3 | 9000 | 0.9899 |
0.4962 | 1.37 | 9500 | 1.0151 |
0.5345 | 1.45 | 10000 | 0.9104 |
0.5102 | 1.52 | 10500 | 0.9287 |
0.519 | 1.59 | 11000 | 0.9018 |
0.5032 | 1.66 | 11500 | 0.9816 |
0.4776 | 1.73 | 12000 | 1.1031 |
0.5099 | 1.81 | 12500 | 0.8666 |
0.4616 | 1.88 | 13000 | 0.8950 |
0.5127 | 1.95 | 13500 | 0.8260 |
0.4322 | 2.02 | 14000 | 1.0062 |
0.258 | 2.1 | 14500 | 1.1331 |
0.2794 | 2.17 | 15000 | 1.2275 |
0.2325 | 2.24 | 15500 | 1.2657 |
0.2742 | 2.31 | 16000 | 1.0803 |
0.2315 | 2.38 | 16500 | 1.2236 |
0.2782 | 2.46 | 17000 | 1.1722 |
0.2706 | 2.53 | 17500 | 1.2902 |
0.2817 | 2.6 | 18000 | 1.1814 |
0.2541 | 2.67 | 18500 | 1.1149 |
0.2965 | 2.75 | 19000 | 1.0627 |
0.2511 | 2.82 | 19500 | 1.1695 |
0.2675 | 2.89 | 20000 | 1.1538 |
0.2743 | 2.96 | 20500 | 1.2044 |
0.183 | 3.04 | 21000 | 1.3375 |
0.1657 | 3.11 | 21500 | 1.3305 |
0.1112 | 3.18 | 22000 | 1.4240 |
0.1316 | 3.25 | 22500 | 1.3410 |
0.0971 | 3.32 | 23000 | 1.4676 |
0.1227 | 3.4 | 23500 | 1.4158 |
0.1064 | 3.47 | 24000 | 1.4724 |
0.1012 | 3.54 | 24500 | 1.5001 |
0.0755 | 3.61 | 25000 | 1.4947 |
0.1 | 3.69 | 25500 | 1.4799 |
0.0913 | 3.76 | 26000 | 1.4940 |
0.0962 | 3.83 | 26500 | 1.4657 |
0.0664 | 3.9 | 27000 | 1.4992 |
0.0724 | 3.97 | 27500 | 1.4979 |
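Validation loss bottoms out around 0.76 late in epoch 1 and climbs steadily through epochs 2 to 4, a typical overfitting pattern, so the final loss of 1.4979 is well above the best observed. A hedged sketch of letting the `Trainer` keep the lowest-eval-loss checkpoint instead (assumed arguments, extending the configuration above):

```python
from transformers import TrainingArguments

# Assumed extension: evaluate and save every 500 steps, then restore the
# checkpoint with the lowest validation loss when training finishes.
args = TrainingArguments(
    output_dir="bert-base-chinese-finetuned-QA-b4",
    evaluation_strategy="steps",
    eval_steps=500,
    save_strategy="steps",
    save_steps=500,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,  # lower validation loss is better
)
```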
### Framework versions
- Transformers 4.34.0
- Pytorch 1.13.1+cu116
- Datasets 2.14.5
- Tokenizers 0.14.1