saattrupdan committed
Commit 139baa3 · 1 Parent(s): d096ea8

Update README.md

Files changed (1)
  1. README.md +11 -19
README.md CHANGED
@@ -1,19 +1,22 @@
 ---
 license: mit
+language: en
 tags:
 - generated_from_trainer
 model-index:
 - name: verdict-classifier-en
-  results: []
+  results:
+  - task:
+      type: text-classification
+      name: Verdict Classification
+widget:
+- "One might think that this is true, but it's taken out of context."
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
-# verdict-classifier-en
+# English Verdict Classifier
 
-This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
-It achieves the following results on the evaluation set:
+This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on 2,100 deduplicated verdicts from the [Google Fact Check Tools API](https://developers.google.com/fact-check/tools/api/reference/rest/v1alpha1/claims/search), translated into English with the [Google Cloud Translation API](https://cloud.google.com/translate/docs/reference/rest/).
+It achieves the following results on the evaluation set, being 1,000 such verdicts translated into English, but here including duplicates to represent the true distribution:
 - Loss: 0.2262
 - F1 Macro: 0.8813
 - F1 Misinformation: 0.9807
@@ -24,17 +27,6 @@ It achieves the following results on the evaluation set:
 - Prec Factual: 0.9697
 - Prec Other: 0.5938
 
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
 
 ## Training procedure
 
@@ -88,4 +80,4 @@ The following hyperparameters were used during training:
 - Transformers 4.11.3
 - Pytorch 1.9.0+cu102
 - Datasets 1.9.0
-- Tokenizers 0.10.2
+- Tokenizers 0.10.2
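
For reference, a minimal inference sketch for the classifier described in the updated card, using the standard 🤗 Transformers `pipeline` API. The Hub repository id `saattrupdan/verdict-classifier-en` is an assumption inferred from the commit author and the model-index name, and the label names are not shown in this diff; treat both as placeholders rather than confirmed values.

```python
# Sketch: classify a fact-check verdict with the fine-tuned roberta-base model.
# Assumptions: the model is published as "saattrupdan/verdict-classifier-en" and
# exposes a standard text-classification head; adjust the repo id if it differs.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="saattrupdan/verdict-classifier-en",
)

# The widget example from the new YAML front matter in this commit.
verdict = "One might think that this is true, but it's taken out of context."
print(classifier(verdict))
# Output shape: [{"label": "...", "score": ...}]. Going by the per-class metrics
# in the card, the labels correspond roughly to misinformation / factual / other,
# but the exact label strings come from the model's config.json.
```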