jamiehudson committed (verified)
Commit 685a5f4 · 1 Parent(s): 584e7cc

End of training

README.md CHANGED
@@ -17,18 +17,18 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.6120
- - Accuracy: 0.9496
- - F1 Weighted: 0.9509
- - Precision Hopes: 0.6757
- - Recall Hopes: 0.7622
- - F1 Hopes: 0.7163
- - Precision Fears: 0.8672
- - Recall Fears: 0.9328
- - F1 Fears: 0.8988
- - Precision Neither: 0.9798
- - Recall Neither: 0.9655
- - F1 Neither: 0.9726
+ - Loss: 0.3593
+ - Accuracy: 0.9434
+ - F1 Weighted: 0.9453
+ - Precision Fears: 0.7053
+ - Recall Fears: 0.8171
+ - F1 Fears: 0.7571
+ - Precision Hopes: 0.7458
+ - Recall Hopes: 0.88
+ - F1 Hopes: 0.8073
+ - Precision Neither: 0.9795
+ - Recall Neither: 0.9579
+ - F1 Neither: 0.9685
 
  ## Model description
 
@@ -54,28 +54,23 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 600
- - num_epochs: 10
+ - num_epochs: 5
  - mixed_precision_training: Native AMP
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Weighted | Precision Hopes | Recall Hopes | F1 Hopes | Precision Fears | Recall Fears | F1 Fears | Precision Neither | Recall Neither | F1 Neither |
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Weighted | Precision Fears | Recall Fears | F1 Fears | Precision Hopes | Recall Hopes | F1 Hopes | Precision Neither | Recall Neither | F1 Neither |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:-----------------:|:--------------:|:----------:|
- | No log | 1.0 | 171 | 1.0201 | 0.8791 | 0.8226 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8791 | 1.0 | 0.9357 |
- | 0.9075 | 2.0 | 342 | 0.3018 | 0.9146 | 0.9220 | 0.5251 | 0.8293 | 0.6430 | 0.7020 | 0.8908 | 0.7852 | 0.9834 | 0.9227 | 0.9521 |
- | 0.9075 | 3.0 | 513 | 0.2450 | 0.9141 | 0.9229 | 0.4947 | 0.8476 | 0.6247 | 0.7724 | 0.9412 | 0.8485 | 0.9864 | 0.9179 | 0.9509 |
- | 0.2782 | 4.0 | 684 | 0.3093 | 0.9402 | 0.9436 | 0.6 | 0.8232 | 0.6941 | 0.9182 | 0.8487 | 0.8821 | 0.9796 | 0.9548 | 0.9670 |
- | 0.2782 | 5.0 | 855 | 0.2838 | 0.9030 | 0.9145 | 0.4744 | 0.9024 | 0.6218 | 0.6970 | 0.9664 | 0.8099 | 0.9930 | 0.8994 | 0.9439 |
- | 0.1561 | 6.0 | 1026 | 0.4644 | 0.9479 | 0.9495 | 0.6548 | 0.7866 | 0.7147 | 0.9273 | 0.8571 | 0.8908 | 0.9774 | 0.9660 | 0.9717 |
- | 0.1561 | 7.0 | 1197 | 0.4976 | 0.9496 | 0.9512 | 0.6788 | 0.7988 | 0.7339 | 0.8516 | 0.9160 | 0.8826 | 0.9817 | 0.9636 | 0.9725 |
- | 0.0743 | 8.0 | 1368 | 0.5949 | 0.9517 | 0.9528 | 0.6978 | 0.7744 | 0.7341 | 0.8651 | 0.9160 | 0.8898 | 0.9798 | 0.9679 | 0.9738 |
- | 0.0297 | 9.0 | 1539 | 0.5912 | 0.9483 | 0.9498 | 0.6649 | 0.7622 | 0.7102 | 0.8615 | 0.9412 | 0.8996 | 0.9802 | 0.9636 | 0.9718 |
- | 0.0297 | 10.0 | 1710 | 0.6120 | 0.9496 | 0.9509 | 0.6757 | 0.7622 | 0.7163 | 0.8672 | 0.9328 | 0.8988 | 0.9798 | 0.9655 | 0.9726 |
+ | No log | 1.0 | 214 | 0.7739 | 0.8930 | 0.8651 | 0.4776 | 0.2602 | 0.3368 | 0.0 | 0.0 | 0.0 | 0.9129 | 0.9876 | 0.9488 |
+ | 0.8895 | 2.0 | 428 | 0.2800 | 0.8960 | 0.9087 | 0.4736 | 0.9106 | 0.6231 | 0.7417 | 0.89 | 0.8091 | 0.9893 | 0.8949 | 0.9397 |
+ | 0.2905 | 3.0 | 642 | 0.3252 | 0.9492 | 0.9496 | 0.7879 | 0.7398 | 0.7631 | 0.7143 | 0.95 | 0.8155 | 0.9759 | 0.9691 | 0.9725 |
+ | 0.2905 | 4.0 | 856 | 0.2671 | 0.9281 | 0.9340 | 0.5813 | 0.8862 | 0.7021 | 0.8018 | 0.89 | 0.8436 | 0.9869 | 0.9335 | 0.9595 |
+ | 0.1741 | 5.0 | 1070 | 0.3593 | 0.9434 | 0.9453 | 0.7053 | 0.8171 | 0.7571 | 0.7458 | 0.88 | 0.8073 | 0.9795 | 0.9579 | 0.9685 |
 
 
  ### Framework versions
 
  - Transformers 4.41.2
  - Pytorch 2.3.0+cu121
- - Datasets 2.19.1
+ - Datasets 2.19.2
  - Tokenizers 0.19.1
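The updated model card describes a three-class sequence classifier (Hopes / Fears / Neither, judging by the per-class metrics) fine-tuned from roberta-base. A minimal inference sketch, assuming the checkpoint is loaded by its Hub repo id; the repo id below is a placeholder, since this diff does not show the repository name:

```python
# Rough usage sketch for the fine-tuned roberta-base classifier described in the
# README above. The model id is a placeholder; the label names printed at the end
# come from the checkpoint's id2label mapping, which this diff does not show.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="<namespace>/<model-id>",  # placeholder, substitute the actual repo id
)

print(classifier("I really hope this works out next year."))
# -> [{'label': ..., 'score': ...}]
```

If id2label was not set during training, the printed label may be a generic index such as LABEL_0 rather than Hopes/Fears/Neither.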
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ee5c23f1bf41d645962522598424ef5d46f0dd7a7ac213f4138ac606a8b85a36
+ oid sha256:09ee5737b5902bbb2a77606c99a7b29684653eaf9489ceb82765fde75eaafa07
  size 498615900
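Only the LFS pointer's sha256 oid changes here; the file size stays at 498615900 bytes, so the weights were replaced by a same-sized checkpoint. A small standard-library sketch for checking that a locally downloaded model.safetensors matches the oid in the new pointer (the local path is an assumption):

```python
# Verify that a downloaded LFS file matches the sha256 oid recorded in its pointer.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "09ee5737b5902bbb2a77606c99a7b29684653eaf9489ceb82765fde75eaafa07"  # new oid from this commit
assert sha256_of("model.safetensors") == expected  # path assumes a local download
```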
runs/Jun03_09-22-20_ec6854305179/events.out.tfevents.1717406548.ec6854305179.4427.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:978c215022d7925ea46f7af41185fcd75b3ab4dcb5e0546e8c1755762277d016
+ size 5027
runs/Jun03_09-22-42_ec6854305179/events.out.tfevents.1717406565.ec6854305179.4427.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e21d26ba8f1b80cc227e3e89d7cebd7ff088d50862a07760fbaa28dfbd8ae46e
+ size 10487
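The two added events.out.tfevents.* files are TensorBoard logs from this run. A sketch for reading the logged scalars back with TensorBoard's EventAccumulator; the run directory comes from the paths above, while the "eval/loss" tag name is an assumption about what the training loop logged:

```python
# Read scalars from the TensorBoard event files added in this commit.
# Requires the `tensorboard` package; the tag name "eval/loss" is an assumption.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator("runs/Jun03_09-22-42_ec6854305179")
ea.Reload()

print(ea.Tags()["scalars"])            # list the scalar tags actually present
for event in ea.Scalars("eval/loss"):  # assumed tag name; pick one from the list above
    print(event.step, event.value)
```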
tokenizer.json CHANGED
@@ -1,7 +1,19 @@
  {
    "version": "1.0",
-   "truncation": null,
-   "padding": null,
+   "truncation": {
+     "direction": "Right",
+     "max_length": 512,
+     "strategy": "LongestFirst",
+     "stride": 0
+   },
+   "padding": {
+     "strategy": "BatchLongest",
+     "direction": "Right",
+     "pad_to_multiple_of": null,
+     "pad_id": 1,
+     "pad_type_id": 0,
+     "pad_token": "<pad>"
+   },
    "added_tokens": [
      {
        "id": 0,
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8fbef072c612d7107032faeb9cdee392e788d88d45c7a2fbe2ee395165f45374
+ oid sha256:dde407b53aa1358f880d3006bd345e3f4adee0aa3f9afc07b8fbf7ce97b02553
  size 5176
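training_args.bin is the serialized TrainingArguments object that transformers.Trainer writes next to the model, so its new oid is consistent with the README's num_epochs changing from 10 to 5. A sketch for inspecting it, plus a partial reconstruction limited to the hyperparameters visible in the README hunk (learning rate and batch sizes fall outside the hunk and are not guessed here); the path and output_dir are placeholders:

```python
# Inspect the serialized TrainingArguments saved by transformers.Trainer.
import torch
from transformers import TrainingArguments

args = torch.load("training_args.bin", weights_only=False)  # local path assumed
print(args.num_train_epochs, args.warmup_steps, args.lr_scheduler_type, args.fp16)

# Partial reconstruction from the hyperparameters shown in the README diff above.
# Fields not visible in the hunk are left at their defaults.
reconstructed = TrainingArguments(
    output_dir="out",                  # placeholder
    num_train_epochs=5,                # changed from 10 in this commit
    warmup_steps=600,
    lr_scheduler_type="linear",
    fp16=torch.cuda.is_available(),    # "Native AMP" mixed precision; needs a GPU
)
```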