Commit 8aa487b
1 Parent: a2ea80f

Update README.md

README.md CHANGED
```diff
@@ -7,14 +7,22 @@ language:
 metrics:
 - perplexity
 library_name: transformers
+tags:
+- distillroberta-base
+- twitter
+pipeline_tag: fill-mask
 ---
 
 ## Twitter-roBERTa-base fine-tuned using masked language modelling
-This is a RoBERTa-base model finetuned (domain adaptation) on ~2M tweets from Jin 2009.
+This is a RoBERTa-base model finetuned (domain adaptation) on ~2M tweets from Jin 2009 (sentiment140).
 This is the first step of a two-step approach to finetune for sentiment analysis (ULMFit).
 This model is suitable for English.
 
+Main characteristics:
+- pretrained model and tokenizer: distillroberta-base
+- no cleaning/processing applied to the data
+
 Reference Paper: [ULMFit](https://arxiv.org/abs/1801.06146).
+Reference dataset: [Sentiment140](https://www.kaggle.com/datasets/kazanova/sentiment140?resource=download)
 Git Repo: TBD
-Labels: 0 -> Negative; 1 -> Positive
-
+Labels: 0 -> Negative; 1 -> Positive
```
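The new `pipeline_tag: fill-mask` metadata and the card's label mapping can be sketched as a short usage example. This is a minimal sketch, not the card's own code: `MODEL_ID` is a placeholder (the card does not state the published repo id) and the `top_fill` helper is hypothetical.

```python
# Hypothetical repo id -- an assumption, not the actual model name from the card.
MODEL_ID = "your-username/twitter-distillroberta-mlm"

# Label mapping stated in the card: 0 -> Negative; 1 -> Positive.
# (Labels apply to the downstream sentiment model, not the fill-mask step.)
LABELS = {0: "Negative", 1: "Positive"}


def top_fill(text: str, model_id: str = MODEL_ID) -> str:
    """Return the highest-scoring completion for a masked input.

    Downloads the model on first call, so the heavyweight import is deferred.
    """
    from transformers import pipeline

    fill = pipeline("fill-mask", model=model_id)
    # pipeline("fill-mask") returns a list of dicts sorted by score;
    # "token_str" holds the predicted token for the <mask> position.
    return fill(text)[0]["token_str"]
```

Since the card describes domain adaptation only (ULMFit step one), the fill-mask pipeline above is the natural way to sanity-check the adapted language model before fine-tuning it for sentiment classification.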