Commit c791d95 · Parent: 0df5108
Update README.md
README.md CHANGED
@@ -1,40 +1,30 @@
 ---
 license: mit
+language: en
 tags:
 - generated_from_trainer
 model-index:
 - name: verdict-classifier-en
-  results:
+  results:
+  - task:
+      type: text-classification
+      name: Verdict Classification
+widget:
+- "One might think that this is true, but it's taken out of context."
 ---
 
-
-
-
-# verdict-classifier-en
-
-This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
-It achieves the following results on the evaluation set:
+# English Verdict Classifier
+This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on 2,500 deduplicated verdicts from [Google Fact Check Tools API](https://developers.google.com/fact-check/tools/api/reference/rest/v1alpha1/claims/search), translated into English with the [Google Cloud Translation API](https://cloud.google.com/translate/docs/reference/rest/).
+It achieves the following results on the evaluation set, being 1,000 such verdicts translated into English, but here including duplicates to represent the true distribution:
 - Loss: 0.1290
 - F1 Macro: 0.9171
 - F1 Misinformation: 0.9896
 - F1 Factual: 0.9890
 - F1 Other: 0.7727
+- Precision Macro: 0.8940
+- Precision Misinformation: 0.9954
+- Precision Factual: 0.9783
+- Precision Other: 0.7083
-
-
-
-
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
 
 ## Training procedure
 
@@ -54,7 +44,7 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Misinformation | F1 Factual | F1 Other |
+| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Misinformation | F1 Factual | F1 Other | Precision Macro | Precision Misinformation | Precision Factual | Precision Other |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:----------:|:--------:|:----------:|:-------------------:|:------------:|:----------:|
 | 1.1493 | 0.16 | 50 | 1.1040 | 0.0550 | 0.0 | 0.1650 | 0.0 | 0.0300 | 0.0 | 0.0899 | 0.0 |
 | 1.0899 | 0.32 | 100 | 1.0765 | 0.0619 | 0.0203 | 0.1654 | 0.0 | 0.2301 | 0.6 | 0.0903 | 0.0 |
 
@@ -129,4 +119,4 @@ The following hyperparameters were used during training:
 - Transformers 4.11.3
 - Pytorch 1.9.0+cu102
 - Datasets 1.9.0
-- Tokenizers 0.10.2
+- Tokenizers 0.10.2
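
The updated card describes a three-way verdict classifier (misinformation / factual / other) fine-tuned from roberta-base. Below is a minimal inference sketch illustrating how such a card is typically used; it is not part of the commit. The repository id `your-namespace/verdict-classifier-en` is a placeholder, since the diff only gives the model name `verdict-classifier-en` and not its Hub namespace, and the label strings returned at inference time depend on the model's config, which this diff does not show.

```python
# Minimal sketch, assuming the model is published on the Hugging Face Hub.
# "your-namespace/verdict-classifier-en" is a hypothetical repository id; the
# commit only gives the model name "verdict-classifier-en", not its namespace.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-namespace/verdict-classifier-en",  # placeholder repo id
)

# Example sentence taken from the card's widget entry.
verdict = "One might think that this is true, but it's taken out of context."

# Returns a list like [{"label": ..., "score": ...}]; the label strings come
# from the model's config and should map to the three verdict classes
# (misinformation / factual / other) reported in the card's metrics.
print(classifier(verdict))
```

To come close to the reported evaluation numbers, pinning the framework versions listed at the end of the card (Transformers 4.11.3, PyTorch 1.9.0, Datasets 1.9.0, Tokenizers 0.10.2) is the safest starting point.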