Update readme.md
# <a name="introduction"></a> BERTweet: A pre-trained language model for English Tweets

BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained based on the [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) pre-training procedure. The corpus used to pre-train BERTweet consists of 850M English Tweets (16B word tokens ~ 80GB), containing 845M Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related to the **COVID-19** pandemic.

BERTweet outperforms strong baselines RoBERTa-base and [XLM-R-base](https://arxiv.org/abs/1911.02116), as well as previous state-of-the-art models, on three downstream Tweet NLP tasks: Part-of-speech tagging, Named-entity recognition and text classification.

The general architecture and experimental results of BERTweet can be found in our [paper](https://aclanthology.org/2020.emnlp-demos.2/):

    @inproceedings{bertweet,
        title     = {{BERTweet: A pre-trained language model for English Tweets}},
        author    = {Dat Quoc Nguyen and Thanh Vu and Anh Tuan Nguyen},
        booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
        pages     = {9--14},
        year      = {2020}
    }

For further information or requests, please go to [BERTweet's homepage](https://github.com/VinAIResearch/BERTweet)!

### <a name="install2"></a> Installation

- Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
- Install `transformers`:
    - `git clone https://github.com/huggingface/transformers.git`
    - `cd transformers`
    - `pip3 install --upgrade .`
- Install `emoji`: `pip3 install emoji`
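
A quick sanity check of the setup (a minimal sketch, assuming the packages above were installed into the active Python environment):

```python
import emoji          # used by the Tweet normalization step described below
import torch
import transformers

# If these imports succeed, print the versions to confirm they meet the requirements above
print(torch.__version__)
print(transformers.__version__)
```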

### <a name="models2"></a> Pre-trained models

Model | #params | Arch. | Pre-training data
---|---|---|---
`vinai/bertweet-base` | 135M | base | 850M English Tweets (cased)
`vinai/bertweet-covid19-base-cased` | 135M | base | 23M COVID-19 English Tweets (cased)
`vinai/bertweet-covid19-base-uncased` | 135M | base | 23M COVID-19 English Tweets (uncased)
`vinai/bertweet-large` | 355M | large | 873M English Tweets (cased)

The two models `vinai/bertweet-covid19-base-cased` and `vinai/bertweet-covid19-base-uncased` were obtained by further pre-training `vinai/bertweet-base` on a corpus of 23M COVID-19 English Tweets for 40 epochs.
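
Any checkpoint in the table can be loaded by its model name with the same API used in the example below; a minimal sketch (the choice of the uncased COVID-19 model here is purely illustrative):

```python
from transformers import AutoModel, AutoTokenizer

model_name = "vinai/bertweet-covid19-base-uncased"  # any model name from the table above
bertweet = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
```
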
### <a name="usage2"></a> Example usage

```python
import torch
from transformers import AutoModel, AutoTokenizer

bertweet = AutoModel.from_pretrained("vinai/bertweet-base")

# For transformers v4.x+:
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False)

# For transformers v3.x:
# tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")

# INPUT TWEET IS ALREADY NORMALIZED!
line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :crying_face:"

input_ids = torch.tensor([tokenizer.encode(line)])

with torch.no_grad():
    features = bertweet(input_ids)  # Models outputs are now tuples
```
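
The first element of the returned `features` holds the token-level contextual embeddings; a minimal sketch of extracting them (the 768-dimensional hidden size is an assumption that holds for the base-architecture checkpoints, while `vinai/bertweet-large` uses 1024):

```python
# Token-level contextual embeddings: shape (batch_size, sequence_length, hidden_size)
last_hidden_states = features[0]
print(last_hidden_states.shape)  # (1, sequence_length, 768) for the base models
```
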
### <a name="preprocess"></a> Normalize raw input Tweets

Before applying `fastBPE` to the pre-training corpus of 850M English Tweets, we tokenized these Tweets using `TweetTokenizer` from the NLTK toolkit and used the `emoji` package to translate emotion icons into text strings (here, each icon is referred to as a word token). We also normalized the Tweets by converting user mentions and web/url links into the special tokens `@USER` and `HTTPURL`, respectively. It is therefore recommended to apply the same pre-processing step to the raw input Tweets of BERTweet-based downstream applications. BERTweet provides this pre-processing step via the `normalization` argument, which currently only supports the models "`vinai/bertweet-base`", "`vinai/bertweet-covid19-base-cased`" and "`vinai/bertweet-covid19-base-uncased`".

```python
import torch
from transformers import AutoTokenizer

# Load the tokenizer with normalization enabled so that raw input Tweets are normalized on the fly
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False, normalization=True)
```
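
Continuing with the tokenizer loaded above, a rough illustration of the effect of normalization (the user handle and URL in this raw example are invented for demonstration):

```python
# Encode a raw Tweet; the mention and URL should be mapped to the special tokens @USER and HTTPURL
raw_line = "SC has first two presumptive cases of coronavirus , DHEC confirms https://t.co/example via @scdhec"
input_ids = torch.tensor([tokenizer.encode(raw_line)])
print(tokenizer.decode(input_ids[0]))
```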