saattrupdan committed
Commit 200201b · 1 Parent(s): 070a700

update model card README.md

Files changed (1)
  1. README.md +54 -46
README.md CHANGED
```diff
@@ -1,32 +1,40 @@
 ---
 license: mit
-language: en
 tags:
 - generated_from_trainer
 model-index:
 - name: verdict-classifier-en
-  results:
-  - task:
-      type: text-classification
-      name: Verdict Classification
-widget:
-- "One might think that this is true, but it's taken out of context."
+  results: []
 ---
 
-# English Verdict Classifier
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
 
-This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on 2,100 deduplicated verdicts from [Google Fact Check Tools API](https://developers.google.com/fact-check/tools/api/reference/rest/v1alpha1/claims/search), translated into English with the [Google Cloud Translation API](https://cloud.google.com/translate/docs/reference/rest/).
-It achieves the following results on the evaluation set, being 1,000 such verdicts translated into English, but here including duplicates to represent the true distribution:
-- Loss: 0.2262
-- F1 Macro: 0.8813
-- F1 Misinformation: 0.9807
-- F1 Factual: 0.9846
-- F1 Other: 0.6786
-- Prec Macro: 0.8514
-- Prec Misinformation: 0.9908
-- Prec Factual: 0.9697
-- Prec Other: 0.5938
-
+# verdict-classifier-en
+
+This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.1520
+- F1 Macro: 0.9013
+- F1 Misinformation: 0.9841
+- F1 Factual: 0.9697
+- F1 Other: 0.75
+- Prec Macro: 0.8643
+- Prec Misinformation: 0.9954
+- Prec Factual: 0.9412
+- Prec Other: 0.6562
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
 
 ## Training procedure
 
@@ -41,38 +49,38 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 525
+- lr_scheduler_warmup_steps: 550
 - num_epochs: 1000
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Misinformation | F1 Factual | F1 Other | Prec Macro | Prec Misinformation | Prec Factual | Prec Other |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:----------:|:--------:|:----------:|:-------------------:|:------------:|:----------:|
-| 1.0781 | 0.76 | 50 | 1.0941 | 0.0305 | 0.0 | 0.0 | 0.0914 | 0.0160 | 0.0 | 0.0 | 0.0479 |
-| 1.0077 | 1.53 | 100 | 0.9698 | 0.3136 | 0.9408 | 0.0 | 0.0 | 0.2961 | 0.8882 | 0.0 | 0.0 |
-| 0.9402 | 2.3 | 150 | 0.6143 | 0.3136 | 0.9408 | 0.0 | 0.0 | 0.2961 | 0.8882 | 0.0 | 0.0 |
-| 0.8817 | 3.08 | 200 | 0.3901 | 0.4580 | 0.9453 | 0.0 | 0.4286 | 0.4320 | 0.9211 | 0.0 | 0.375 |
-| 0.6771 | 3.84 | 250 | 0.2236 | 0.4836 | 0.9508 | 0.0 | 0.5 | 0.4373 | 0.9465 | 0.0 | 0.3654 |
-| 0.4794 | 4.61 | 300 | 0.1373 | 0.8467 | 0.9738 | 0.9697 | 0.5965 | 0.8142 | 0.9862 | 0.9412 | 0.5152 |
-| 0.3493 | 5.38 | 350 | 0.1394 | 0.8633 | 0.9761 | 0.9697 | 0.6441 | 0.8249 | 0.9907 | 0.9412 | 0.5429 |
-| 0.318 | 6.15 | 400 | 0.1203 | 0.8418 | 0.9739 | 0.9697 | 0.5818 | 0.8138 | 0.9839 | 0.9412 | 0.5161 |
-| 0.2508 | 6.91 | 450 | 0.1374 | 0.8674 | 0.9772 | 0.9697 | 0.6552 | 0.8303 | 0.9908 | 0.9412 | 0.5588 |
-| 0.1674 | 7.69 | 500 | 0.1904 | 0.8418 | 0.9689 | 0.9412 | 0.6154 | 0.7899 | 0.9929 | 0.8889 | 0.4878 |
-| 0.1829 | 8.46 | 550 | 0.1593 | 0.8759 | 0.9795 | 0.9697 | 0.6786 | 0.8419 | 0.9908 | 0.9412 | 0.5938 |
-| 0.1399 | 9.23 | 600 | 0.1616 | 0.8842 | 0.9795 | 0.9846 | 0.6885 | 0.8442 | 0.9954 | 0.9697 | 0.5676 |
-| 0.111 | 9.99 | 650 | 0.1656 | 0.8949 | 0.9817 | 0.9697 | 0.7333 | 0.8500 | 0.9977 | 0.9412 | 0.6111 |
-| 0.083 | 10.76 | 700 | 0.1874 | 0.8459 | 0.9763 | 0.9846 | 0.5769 | 0.8291 | 0.9818 | 0.9697 | 0.5357 |
-| 0.075 | 11.53 | 750 | 0.2262 | 0.8813 | 0.9807 | 0.9846 | 0.6786 | 0.8514 | 0.9908 | 0.9697 | 0.5938 |
-| 0.073 | 12.3 | 800 | 0.2647 | 0.8647 | 0.9761 | 0.9846 | 0.6333 | 0.8294 | 0.9907 | 0.9697 | 0.5278 |
-| 0.0585 | 13.08 | 850 | 0.2356 | 0.8720 | 0.9807 | 0.9688 | 0.6667 | 0.8451 | 0.9908 | 0.9688 | 0.5758 |
-| 0.0549 | 13.84 | 900 | 0.2521 | 0.8720 | 0.9796 | 0.9697 | 0.6667 | 0.8432 | 0.9886 | 0.9412 | 0.6 |
-| 0.0572 | 14.61 | 950 | 0.2730 | 0.8738 | 0.9783 | 0.9412 | 0.7018 | 0.8293 | 0.9931 | 0.8889 | 0.6061 |
-| 0.0487 | 15.38 | 1000 | 0.2744 | 0.8807 | 0.9795 | 0.9846 | 0.6780 | 0.8447 | 0.9931 | 0.9697 | 0.5714 |
-| 0.0653 | 16.15 | 1050 | 0.2522 | 0.8758 | 0.9807 | 0.9688 | 0.6780 | 0.8444 | 0.9931 | 0.9688 | 0.5714 |
-| 0.0467 | 16.91 | 1100 | 0.2914 | 0.8591 | 0.9761 | 0.9697 | 0.6316 | 0.8250 | 0.9885 | 0.9412 | 0.5455 |
-| 0.0293 | 17.69 | 1150 | 0.3072 | 0.8593 | 0.9749 | 0.9697 | 0.6333 | 0.8199 | 0.9907 | 0.9412 | 0.5278 |
-| 0.0402 | 18.46 | 1200 | 0.2922 | 0.8712 | 0.9772 | 0.9697 | 0.6667 | 0.8299 | 0.9930 | 0.9412 | 0.5556 |
-| 0.0209 | 19.23 | 1250 | 0.3046 | 0.8822 | 0.9795 | 0.9552 | 0.7119 | 0.8365 | 0.9954 | 0.9143 | 0.6 |
+| 1.072 | 0.73 | 50 | 1.0233 | 0.3136 | 0.9408 | 0.0 | 0.0 | 0.2961 | 0.8882 | 0.0 | 0.0 |
+| 1.0077 | 1.47 | 100 | 0.8870 | 0.3136 | 0.9408 | 0.0 | 0.0 | 0.2961 | 0.8882 | 0.0 | 0.0 |
+| 0.9439 | 2.2 | 150 | 0.6889 | 0.3136 | 0.9408 | 0.0 | 0.0 | 0.2961 | 0.8882 | 0.0 | 0.0 |
+| 0.8743 | 2.93 | 200 | 0.3857 | 0.3129 | 0.9386 | 0.0 | 0.0 | 0.2959 | 0.8878 | 0.0 | 0.0 |
+| 0.7564 | 3.67 | 250 | 0.2474 | 0.4630 | 0.9716 | 0.0 | 0.4176 | 0.4225 | 0.9839 | 0.0 | 0.2836 |
+| 0.5366 | 4.41 | 300 | 0.1819 | 0.8054 | 0.9713 | 0.8772 | 0.5676 | 0.8043 | 0.9930 | 1.0 | 0.42 |
+| 0.4043 | 5.15 | 350 | 0.1344 | 0.8425 | 0.9738 | 0.9538 | 0.6 | 0.8093 | 0.9884 | 0.9394 | 0.5 |
+| 0.3792 | 5.87 | 400 | 0.1259 | 0.8645 | 0.9761 | 0.9841 | 0.6333 | 0.8388 | 0.9885 | 1.0 | 0.5278 |
+| 0.2756 | 6.61 | 450 | 0.1344 | 0.8576 | 0.9774 | 0.9538 | 0.6415 | 0.8366 | 0.9841 | 0.9394 | 0.5862 |
+| 0.2589 | 7.35 | 500 | 0.1188 | 0.8738 | 0.9783 | 0.9412 | 0.7018 | 0.8293 | 0.9931 | 0.8889 | 0.6061 |
+| 0.2175 | 8.09 | 550 | 0.1436 | 0.8573 | 0.9798 | 0.9538 | 0.6383 | 0.8571 | 0.9798 | 0.9394 | 0.6522 |
+| 0.1888 | 8.81 | 600 | 0.1566 | 0.8613 | 0.9761 | 0.9412 | 0.6667 | 0.8185 | 0.9907 | 0.8889 | 0.5758 |
+| 0.15 | 9.55 | 650 | 0.1549 | 0.8542 | 0.9773 | 0.9538 | 0.6316 | 0.8245 | 0.9885 | 0.9394 | 0.5455 |
+| 0.1464 | 10.29 | 700 | 0.1608 | 0.8633 | 0.9773 | 0.9697 | 0.6429 | 0.8307 | 0.9885 | 0.9412 | 0.5625 |
+| 0.0954 | 11.03 | 750 | 0.1520 | 0.9013 | 0.9841 | 0.9697 | 0.75 | 0.8643 | 0.9954 | 0.9412 | 0.6562 |
+| 0.1074 | 11.76 | 800 | 0.1655 | 0.8810 | 0.9819 | 0.9552 | 0.7059 | 0.8565 | 0.9886 | 0.9143 | 0.6667 |
+| 0.1078 | 12.49 | 850 | 0.1937 | 0.8989 | 0.9829 | 0.9552 | 0.7586 | 0.8530 | 0.9977 | 0.9143 | 0.6471 |
+| 0.098 | 13.23 | 900 | 0.2098 | 0.8767 | 0.9794 | 0.9412 | 0.7097 | 0.8226 | 1.0 | 0.8889 | 0.5789 |
+| 0.0931 | 13.96 | 950 | 0.1591 | 0.8755 | 0.9819 | 0.9538 | 0.6909 | 0.8477 | 0.9908 | 0.9394 | 0.6129 |
+| 0.0701 | 14.7 | 1000 | 0.2121 | 0.8926 | 0.9805 | 0.9552 | 0.7419 | 0.8398 | 1.0 | 0.9143 | 0.6053 |
+| 0.0692 | 15.44 | 1050 | 0.2118 | 0.8989 | 0.9829 | 0.9552 | 0.7586 | 0.8530 | 0.9977 | 0.9143 | 0.6471 |
+| 0.0848 | 16.17 | 1100 | 0.2094 | 0.8913 | 0.9818 | 0.9552 | 0.7368 | 0.8487 | 0.9954 | 0.9143 | 0.6364 |
+| 0.0471 | 16.9 | 1150 | 0.2197 | 0.8919 | 0.9818 | 0.9697 | 0.7241 | 0.8514 | 0.9954 | 0.9412 | 0.6176 |
+| 0.0399 | 17.64 | 1200 | 0.1997 | 0.9019 | 0.9852 | 0.9538 | 0.7667 | 0.8594 | 1.0 | 0.9394 | 0.6389 |
+| 0.0307 | 18.38 | 1250 | 0.2873 | 0.8830 | 0.9795 | 0.9697 | 0.7000 | 0.8400 | 0.9954 | 0.9412 | 0.5833 |
 
 
 ### Framework versions
@@ -80,4 +88,4 @@ The following hyperparameters were used during training:
 - Transformers 4.11.3
 - Pytorch 1.9.0+cu102
 - Datasets 1.9.0
-- Tokenizers 0.10.2
+- Tokenizers 0.10.2
```
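One consistency check that can be read off the new card: "F1 Macro" is the unweighted mean of the three per-class F1 scores, (0.9841 + 0.9697 + 0.75) / 3 ≈ 0.9013, which matches both the headline metric and the step-750 row of the training table (the checkpoint with Loss 0.1520 that the summary reports).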
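For readers who want to reconstruct the setup, here is a minimal sketch of the corresponding `transformers.TrainingArguments`, covering only the hyperparameters visible in this diff. The per-device batch size, gradient accumulation split, output directory, and learning rate are assumptions: the card states only that the total train batch size is 32, and the learning rate is defined outside the shown hunks.

```python
from transformers import TrainingArguments

# Sketch of the training configuration visible in the diff.
# per_device_train_batch_size and gradient_accumulation_steps are
# assumptions; the card only gives total_train_batch_size = 32.
training_args = TrainingArguments(
    output_dir="verdict-classifier-en",  # hypothetical path
    per_device_train_batch_size=8,       # assumed split: 8 * 4 = 32
    gradient_accumulation_steps=4,       # assumed split
    learning_rate=2e-5,                  # placeholder, not stated in the diff
    lr_scheduler_type="linear",          # from the card
    warmup_steps=550,                    # from the card (new value)
    num_train_epochs=1000,               # from the card
    adam_beta1=0.9,                      # Adam betas from the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,                   # epsilon from the card
)
```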
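And a minimal inference sketch using the `transformers` text-classification pipeline. The hub ID below is an assumption inferred from the committer and model name, and the example sentence is the widget entry removed from the previous card.

```python
from transformers import pipeline

# Hub ID is an assumption (committer name + model name); adjust as needed.
classifier = pipeline(
    "text-classification",
    model="saattrupdan/verdict-classifier-en",
)

# Example sentence taken from the widget entry in the previous card.
print(classifier("One might think that this is true, but it's taken out of context."))
# Returns a list like [{"label": ..., "score": ...}]
```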