saattrupdan committed on
Commit d096ea8 · 1 Parent(s): 88b0835

update model card README.md

Files changed (1):
  1. README.md +55 -29

README.md CHANGED
@@ -1,31 +1,40 @@
  ---
  license: mit
- language: en
  tags:
  - generated_from_trainer
  model-index:
  - name: verdict-classifier-en
-   results:
-   - task:
-       type: text-classification
-       name: Verdict Classification
- widget:
- - "One might think that this is true, but it's taken out of context."
  ---
 
- # English Verdict Classifier
 
- This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on 1,500 deduplicated verdicts from [Google Fact Check Tools API](https://developers.google.com/fact-check/tools/api/reference/rest/v1alpha1/claims/search), translated into English with the [Google Cloud Translation API](https://cloud.google.com/translate/docs/reference/rest/).
- It achieves the following results on the evaluation set, being 1,000 such verdicts translated into English, but here including duplicates to represent the true distribution:
- - Loss: 0.1258
- - F1 Macro: 0.8408
- - F1 Misinformation: 0.9751
- - F1 Factual: 0.9508
- - F1 Other: 0.5965
- - Precision Macro: 0.8323
- - Precision Misinformation: 0.9818
- - Precision Factual: 1.0
- - Precision Other: 0.5152
 
  ## Training procedure
 
@@ -40,21 +49,38 @@ The following hyperparameters were used during training:
  - total_train_batch_size: 32
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 462
  - num_epochs: 1000
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Misinformation | F1 Factual | F1 Other | Precision Macro | Precision Misinformation | Precision Factual | Precision Other |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:----------:|:--------:|:----------:|:-------------------:|:------------:|:----------:|
- | 1.034 | 0.98 | 57 | 0.9960 | 0.3136 | 0.9408 | 0.0 | 0.0 | 0.2961 | 0.8882 | 0.0 | 0.0 |
- | 0.968 | 1.98 | 114 | 0.8945 | 0.3136 | 0.9408 | 0.0 | 0.0 | 0.2961 | 0.8882 | 0.0 | 0.0 |
- | 0.9253 | 2.98 | 171 | 0.7182 | 0.3136 | 0.9408 | 0.0 | 0.0 | 0.2961 | 0.8882 | 0.0 | 0.0 |
- | 0.8215 | 3.98 | 228 | 0.3112 | 0.4795 | 0.9454 | 0.0 | 0.4932 | 0.4351 | 0.9381 | 0.0 | 0.3673 |
- | 0.5073 | 4.98 | 285 | 0.1564 | 0.8272 | 0.9703 | 0.9355 | 0.5758 | 0.8025 | 0.9883 | 0.9667 | 0.4524 |
- | 0.3046 | 5.98 | 342 | 0.1258 | 0.8408 | 0.9751 | 0.9508 | 0.5965 | 0.8323 | 0.9818 | 1.0 | 0.5152 |
- | 0.1971 | 6.98 | 399 | 0.1540 | 0.8458 | 0.9796 | 0.9538 | 0.6038 | 0.8258 | 0.9863 | 0.9394 | 0.5517 |
- | 0.1494 | 7.98 | 456 | 0.1779 | 0.8504 | 0.9737 | 0.9524 | 0.625 | 0.8195 | 0.9907 | 0.9677 | 0.5 |
 
 
  ### Framework versions
 
  ---
  license: mit
  tags:
  - generated_from_trainer
  model-index:
  - name: verdict-classifier-en
+   results: []
  ---
 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
 
+ # verdict-classifier-en
+
+ This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2262
+ - F1 Macro: 0.8813
+ - F1 Misinformation: 0.9807
+ - F1 Factual: 0.9846
+ - F1 Other: 0.6786
+ - Prec Macro: 0.8514
+ - Prec Misinformation: 0.9908
+ - Prec Factual: 0.9697
+ - Prec Other: 0.5938
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
 
  ## Training procedure
 
  - total_train_batch_size: 32
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 525
  - num_epochs: 1000
 
  ### Training results
 
+ | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Misinformation | F1 Factual | F1 Other | Prec Macro | Prec Misinformation | Prec Factual | Prec Other |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:----------:|:--------:|:----------:|:-------------------:|:------------:|:----------:|
+ | 1.0781 | 0.76 | 50 | 1.0941 | 0.0305 | 0.0 | 0.0 | 0.0914 | 0.0160 | 0.0 | 0.0 | 0.0479 |
+ | 1.0077 | 1.53 | 100 | 0.9698 | 0.3136 | 0.9408 | 0.0 | 0.0 | 0.2961 | 0.8882 | 0.0 | 0.0 |
+ | 0.9402 | 2.3 | 150 | 0.6143 | 0.3136 | 0.9408 | 0.0 | 0.0 | 0.2961 | 0.8882 | 0.0 | 0.0 |
+ | 0.8817 | 3.08 | 200 | 0.3901 | 0.4580 | 0.9453 | 0.0 | 0.4286 | 0.4320 | 0.9211 | 0.0 | 0.375 |
+ | 0.6771 | 3.84 | 250 | 0.2236 | 0.4836 | 0.9508 | 0.0 | 0.5 | 0.4373 | 0.9465 | 0.0 | 0.3654 |
+ | 0.4794 | 4.61 | 300 | 0.1373 | 0.8467 | 0.9738 | 0.9697 | 0.5965 | 0.8142 | 0.9862 | 0.9412 | 0.5152 |
+ | 0.3493 | 5.38 | 350 | 0.1394 | 0.8633 | 0.9761 | 0.9697 | 0.6441 | 0.8249 | 0.9907 | 0.9412 | 0.5429 |
+ | 0.318 | 6.15 | 400 | 0.1203 | 0.8418 | 0.9739 | 0.9697 | 0.5818 | 0.8138 | 0.9839 | 0.9412 | 0.5161 |
+ | 0.2508 | 6.91 | 450 | 0.1374 | 0.8674 | 0.9772 | 0.9697 | 0.6552 | 0.8303 | 0.9908 | 0.9412 | 0.5588 |
+ | 0.1674 | 7.69 | 500 | 0.1904 | 0.8418 | 0.9689 | 0.9412 | 0.6154 | 0.7899 | 0.9929 | 0.8889 | 0.4878 |
+ | 0.1829 | 8.46 | 550 | 0.1593 | 0.8759 | 0.9795 | 0.9697 | 0.6786 | 0.8419 | 0.9908 | 0.9412 | 0.5938 |
+ | 0.1399 | 9.23 | 600 | 0.1616 | 0.8842 | 0.9795 | 0.9846 | 0.6885 | 0.8442 | 0.9954 | 0.9697 | 0.5676 |
+ | 0.111 | 9.99 | 650 | 0.1656 | 0.8949 | 0.9817 | 0.9697 | 0.7333 | 0.8500 | 0.9977 | 0.9412 | 0.6111 |
+ | 0.083 | 10.76 | 700 | 0.1874 | 0.8459 | 0.9763 | 0.9846 | 0.5769 | 0.8291 | 0.9818 | 0.9697 | 0.5357 |
+ | 0.075 | 11.53 | 750 | 0.2262 | 0.8813 | 0.9807 | 0.9846 | 0.6786 | 0.8514 | 0.9908 | 0.9697 | 0.5938 |
+ | 0.073 | 12.3 | 800 | 0.2647 | 0.8647 | 0.9761 | 0.9846 | 0.6333 | 0.8294 | 0.9907 | 0.9697 | 0.5278 |
+ | 0.0585 | 13.08 | 850 | 0.2356 | 0.8720 | 0.9807 | 0.9688 | 0.6667 | 0.8451 | 0.9908 | 0.9688 | 0.5758 |
+ | 0.0549 | 13.84 | 900 | 0.2521 | 0.8720 | 0.9796 | 0.9697 | 0.6667 | 0.8432 | 0.9886 | 0.9412 | 0.6 |
+ | 0.0572 | 14.61 | 950 | 0.2730 | 0.8738 | 0.9783 | 0.9412 | 0.7018 | 0.8293 | 0.9931 | 0.8889 | 0.6061 |
+ | 0.0487 | 15.38 | 1000 | 0.2744 | 0.8807 | 0.9795 | 0.9846 | 0.6780 | 0.8447 | 0.9931 | 0.9697 | 0.5714 |
+ | 0.0653 | 16.15 | 1050 | 0.2522 | 0.8758 | 0.9807 | 0.9688 | 0.6780 | 0.8444 | 0.9931 | 0.9688 | 0.5714 |
+ | 0.0467 | 16.91 | 1100 | 0.2914 | 0.8591 | 0.9761 | 0.9697 | 0.6316 | 0.8250 | 0.9885 | 0.9412 | 0.5455 |
+ | 0.0293 | 17.69 | 1150 | 0.3072 | 0.8593 | 0.9749 | 0.9697 | 0.6333 | 0.8199 | 0.9907 | 0.9412 | 0.5278 |
+ | 0.0402 | 18.46 | 1200 | 0.2922 | 0.8712 | 0.9772 | 0.9697 | 0.6667 | 0.8299 | 0.9930 | 0.9412 | 0.5556 |
+ | 0.0209 | 19.23 | 1250 | 0.3046 | 0.8822 | 0.9795 | 0.9552 | 0.7119 | 0.8365 | 0.9954 | 0.9143 | 0.6 |
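
The macro scores in these tables can be read directly off the per-class columns: "F1 Macro" is consistent with the unweighted mean of the three per-class F1 scores in the same row. A minimal sketch, using the final evaluation row (step 750) as the example; the class ordering (misinformation, factual, other) is taken from the column order:

```python
def macro_f1(per_class_f1s):
    """Unweighted average of per-class F1 scores (macro averaging)."""
    return sum(per_class_f1s) / len(per_class_f1s)

# Per-class F1 at step 750: misinformation, factual, other.
score = macro_f1([0.9807, 0.9846, 0.6786])
print(round(score, 4))  # → 0.8813, matching the reported F1 Macro
```

The same relationship holds for the precision columns, which is why the "Other" class drags both macro figures well below the misinformation and factual scores.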
 
85
 
86
  ### Framework versions
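
The hyperparameters above combine `lr_scheduler_type: linear` with `lr_scheduler_warmup_steps: 525`. As a rough sketch of what that schedule does (this mirrors the usual linear-warmup behaviour of the `transformers` Trainer; the `total_steps` value below is an illustrative assumption, not a number from this training run):

```python
def linear_warmup_multiplier(step, warmup_steps=525, total_steps=5000):
    """LR multiplier for a linear schedule with warmup: ramps 0 → 1 over
    the first `warmup_steps` updates, then decays linearly back to 0 by
    `total_steps`. `total_steps=5000` is a hypothetical value for illustration."""
    if step < warmup_steps:
        return step / warmup_steps
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_multiplier(0))     # → 0.0 (start of warmup)
print(linear_warmup_multiplier(525))   # → 1.0 (peak LR at end of warmup)
print(linear_warmup_multiplier(5000))  # → 0.0 (fully decayed)
```

The actual learning rate at any step is this multiplier times the configured peak learning rate.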