modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
luca-martial/DialoGPT-Elon | afa2ffff96344dbe63d95f6b03451ce9459502d0 | 2021-06-10T09:35:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | luca-martial | null | luca-martial/DialoGPT-Elon | 4,891 | 2 | transformers | 900 | ---
tags:
- conversational
---
# DialoGPT-Elon: Chat with Elon Musk
This is an attempt to create an AI replica of Elon Musk. The bot's conversation abilities come from Microsoft's [DialoGPT conversational model](https://huggingface.co/microsoft/DialoGPT-medium) fine-tuned on conversation transcripts of Elon's interviews on [Clubhouse](https://zamesin.me/clubhouse-elon-musk-interview/), the [Lex Fridman podcast](https://lexfridman.com/wordpress/wp-content/uploads/2019/11/elon_musk_lex_fridman_2_transcript.pdf) and the [Joe Rogan Experience](https://www.kaggle.com/christianlillelund/joe-rogan-experience-1169-elon-musk).
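A minimal chat sketch (not part of the original card; it assumes the standard single-turn DialoGPT generation recipe, and the prompt string is only illustrative):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("luca-martial/DialoGPT-Elon")
model = AutoModelForCausalLM.from_pretrained("luca-martial/DialoGPT-Elon")

# Encode a user message (terminated with the EOS token, as DialoGPT expects) and generate a reply.
input_ids = tokenizer.encode("How do we get to Mars?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
# Decode only the newly generated tokens (the reply).
print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```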
I also built a Discord AI bot that makes use of this model. Check out my [GitHub repo](https://github.com/luca-martial/elon-bot)! |
smilegate-ai/kor_unsmile | f597423eeff8e6da99cad85cbe3a81adf5225637 | 2022-03-28T01:34:57.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | smilegate-ai | null | smilegate-ai/kor_unsmile | 4,882 | 3 | transformers | 901 | Entry not found |
Helsinki-NLP/opus-mt-en-cs | 7cba4a7e3daff13c48fc2fcd740ef0711b1dd075 | 2021-09-09T21:34:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"cs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-cs | 4,879 | null | transformers | 902 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-cs
* source languages: en
* target languages: cs
* OPUS readme: [en-cs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-cs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.en.cs | 22.8 | 0.507 |
| news-test2008.en.cs | 20.7 | 0.485 |
| newstest2009.en.cs | 21.8 | 0.500 |
| newstest2010.en.cs | 22.1 | 0.505 |
| newstest2011.en.cs | 23.2 | 0.507 |
| newstest2012.en.cs | 20.8 | 0.482 |
| newstest2013.en.cs | 24.7 | 0.514 |
| newstest2015-encs.en.cs | 24.9 | 0.527 |
| newstest2016-encs.en.cs | 26.7 | 0.540 |
| newstest2017-encs.en.cs | 22.7 | 0.503 |
| newstest2018-encs.en.cs | 22.9 | 0.504 |
| newstest2019-encs.en.cs | 24.9 | 0.518 |
| Tatoeba.en.cs | 46.1 | 0.647 |
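A minimal usage sketch (not from the original card; it assumes the standard Transformers Marian classes, and the input sentence is only illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-cs"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate an English sentence into Czech.
batch = tokenizer(["The weather is nice today."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```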
|
uer/chinese_roberta_L-2_H-128 | b24cec74c68f494dc7ee8f05959ac3aee598cf78 | 2022-07-15T08:09:56.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/chinese_roberta_L-2_H-128 | 4,866 | 1 | transformers | 903 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. To make the results easy to reproduce, we used a publicly available corpus and provide all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are scores on the development sets of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with a sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (taking RoBERTa-Medium as an example):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 1,000,000 steps with a sequence length of 128, and then for an additional 250,000 steps with a sequence length of 512. We use the same hyper-parameters across model sizes.
Taking RoBERTa-Medium as an example:
Stage 1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage 2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
uer/pegasus-base-chinese-cluecorpussmall | 50bb33cb078e9a08c56c8b65513ddc46cb9665ae | 2022-07-15T08:18:04.000Z | [
"pytorch",
"tf",
"pegasus",
"text2text-generation",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uer | null | uer/pegasus-base-chinese-cluecorpussmall | 4,853 | 1 | transformers | 904 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "内容丰富、版式设计考究、图片华丽、印制精美。[MASK]纸箱内还放了充气袋用于保护。"
---
# Chinese Pegasus
## Model description
This model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
You can download the set of Chinese PEGASUS models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | Link |
| ----------------- | :----------------------------: |
| **PEGASUS-Base** | [**L=12/H=768 (Base)**][base] |
| **PEGASUS-Large** | [**L=16/H=1024 (Large)**][large] |
## How to use
You can use this model directly with a pipeline for text2text generation (taking PEGASUS-Base as an example):
```python
>>> from transformers import BertTokenizer, PegasusForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/pegasus-base-chinese-cluecorpussmall")
>>> model = PegasusForConditionalGeneration.from_pretrained("uer/pegasus-base-chinese-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("内容丰富、版式设计考究、图片华丽、印制精美。[MASK]纸箱内还放了充气袋用于保护。", max_length=50, do_sample=False)
[{'generated_text': '书 的 质 量 很 好 。'}]
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.
## Training procedure
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 1,000,000 steps with a sequence length of 512.
Taking PEGASUS-Base as an example:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_pegasus_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--data_processor gsg --sentence_selection_strategy random
```
```
python3 pretrain.py --dataset_path cluecorpussmall_pegasus_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/pegasus/base_config.json \
--output_model_path models/cluecorpussmall_pegasus_base_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 8
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_pegasus_from_uer_to_huggingface.py --input_model_path cluecorpussmall_pegasus_base_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 12
```
### BibTeX entry and citation info
```
@inproceedings{zhang2020pegasus,
title={Pegasus: Pre-training with extracted gap-sentences for abstractive summarization},
author={Zhang, Jingqing and Zhao, Yao and Saleh, Mohammad and Liu, Peter},
booktitle={International Conference on Machine Learning},
pages={11328--11339},
year={2020},
organization={PMLR}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[base]:https://huggingface.co/uer/pegasus-base-chinese-cluecorpussmall
[large]:https://huggingface.co/uer/pegasus-large-chinese-cluecorpussmall |
dbmdz/bert-base-italian-cased | bcecdad25ce7cdd99c58c4e504ab97e6ff7222cf | 2021-05-19T14:59:44.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"it",
"dataset:wikipedia",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | dbmdz | null | dbmdz/bert-base-italian-cased | 4,818 | 2 | transformers | 905 | ---
language: it
license: mit
datasets:
- wikipedia
---
# 🤗 + 📚 dbmdz BERT and ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models 🎉
# Italian BERT
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models were trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in `config.json`. However, the model is working and all
evaluations were done under those circumstances.
See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We largely followed the ELECTRA training procedure used for
[BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/italian-bertelectra).
## Usage
With Transformers >= 2.3 our Italian BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the (recommended) Italian XXL BERT models, just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the Italian XXL ELECTRA model (discriminator), just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
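As an illustrative addition (not from the original card), the fill-mask pipeline can also be used directly; the Italian example sentence is only a placeholder:
```python
from transformers import pipeline

# Fill-mask example with the cased Italian model.
fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-italian-cased")
print(fill_mask("Roma è la [MASK] d'Italia."))
```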
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT/ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
papluca/xlm-roberta-base-language-detection | f793746a8fff7a83f266fa0df91064727c8c76a2 | 2021-11-25T12:41:12.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"arxiv:1911.02116",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | papluca | null | papluca/xlm-roberta-base-language-detection | 4,816 | 11 | transformers | 906 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-language-detection
results: []
---
# xlm-roberta-base-language-detection
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [Language Identification](https://huggingface.co/datasets/papluca/language-identification#additional-information) dataset.
## Model description
This model is an XLM-RoBERTa transformer model with a classification head on top (i.e. a linear layer on top of the pooled output).
For additional information please refer to the [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) model card or to the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al.
## Intended uses & limitations
You can directly use this model as a language detector, i.e. for sequence classification tasks. Currently, it supports the following 20 languages:
`arabic (ar), bulgarian (bg), german (de), modern greek (el), english (en), spanish (es), french (fr), hindi (hi), italian (it), japanese (ja), dutch (nl), polish (pl), portuguese (pt), russian (ru), swahili (sw), thai (th), turkish (tr), urdu (ur), vietnamese (vi), and chinese (zh)`
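A minimal usage sketch (not part of the original card; it assumes the standard text-classification pipeline, and the input sentences are only illustrative):
```python
from transformers import pipeline

# Detect the language of each input sequence.
detector = pipeline("text-classification", model="papluca/xlm-roberta-base-language-detection")
print(detector(["Brevity is the soul of wit.", "Amor, ch'a nullo amato amar perdona."]))
```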
## Training and evaluation data
The model was fine-tuned on the [Language Identification](https://huggingface.co/datasets/papluca/language-identification#additional-information) dataset, which consists of text sequences in 20 languages. The training set contains 70k samples, while the validation and test sets contain 10k samples each. The average accuracy on the test set is **99.6%** (this matches the average macro/weighted F1-score, as the test set is perfectly balanced). A more detailed evaluation is provided in the following table.
| Language | Precision | Recall | F1-score | support |
|:--------:|:---------:|:------:|:--------:|:-------:|
|ar |0.998 |0.996 |0.997 |500 |
|bg |0.998 |0.964 |0.981 |500 |
|de |0.998 |0.996 |0.997 |500 |
|el |0.996 |1.000 |0.998 |500 |
|en |1.000 |1.000 |1.000 |500 |
|es |0.967 |1.000 |0.983 |500 |
|fr |1.000 |1.000 |1.000 |500 |
|hi |0.994 |0.992 |0.993 |500 |
|it |1.000 |0.992 |0.996 |500 |
|ja |0.996 |0.996 |0.996 |500 |
|nl |1.000 |1.000 |1.000 |500 |
|pl |1.000 |1.000 |1.000 |500 |
|pt |0.988 |1.000 |0.994 |500 |
|ru |1.000 |0.994 |0.997 |500 |
|sw |1.000 |1.000 |1.000 |500 |
|th |1.000 |0.998 |0.999 |500 |
|tr |0.994 |0.992 |0.993 |500 |
|ur |1.000 |1.000 |1.000 |500 |
|vi |0.992 |1.000 |0.996 |500 |
|zh |1.000 |1.000 |1.000 |500 |
### Benchmarks
As a baseline to compare `xlm-roberta-base-language-detection` against, we have used the Python [langid](https://github.com/saffsd/langid.py) library. Since it comes pre-trained on 97 languages, we have used its `.set_languages()` method to constrain the language set to our 20 languages. The average accuracy of langid on the test set is **98.5%**. More details are provided by the table below.
| Language | Precision | Recall | F1-score | support |
|:--------:|:---------:|:------:|:--------:|:-------:|
|ar |0.990 |0.970 |0.980 |500 |
|bg |0.998 |0.964 |0.981 |500 |
|de |0.992 |0.944 |0.967 |500 |
|el |1.000 |0.998 |0.999 |500 |
|en |1.000 |1.000 |1.000 |500 |
|es |1.000 |0.968 |0.984 |500 |
|fr |0.996 |1.000 |0.998 |500 |
|hi |0.949 |0.976 |0.963 |500 |
|it |0.990 |0.980 |0.985 |500 |
|ja |0.927 |0.988 |0.956 |500 |
|nl |0.980 |1.000 |0.990 |500 |
|pl |0.986 |0.996 |0.991 |500 |
|pt |0.950 |0.996 |0.973 |500 |
|ru |0.996 |0.974 |0.985 |500 |
|sw |1.000 |1.000 |1.000 |500 |
|th |1.000 |0.996 |0.998 |500 |
|tr |0.990 |0.968 |0.979 |500 |
|ur |0.998 |0.996 |0.997 |500 |
|vi |0.971 |0.990 |0.980 |500 |
|zh |1.000 |1.000 |1.000 |500 |
## Training procedure
Fine-tuning was done via the `Trainer` API.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
The validation results on the `valid` split of the Language Identification dataset are summarised in the table below.
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2492 | 1.0 | 1094 | 0.0149 | 0.9969 | 0.9969 |
| 0.0101 | 2.0 | 2188 | 0.0103 | 0.9977 | 0.9977 |
In short, it achieves the following results on the validation set:
- Loss: 0.0101
- Accuracy: 0.9977
- F1: 0.9977
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Rostlab/prot_t5_xxl_uniref50 | 31a40d7b55caf68d7a8a8dfd913b779b99dc09a9 | 2021-03-30T19:25:17.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Rostlab | null | Rostlab/prot_t5_xxl_uniref50 | 4,800 | null | transformers | 907 | Entry not found |
dandelin/vilt-b32-finetuned-vqa | a08ecc15918f6aa056ff01b5397f10bd718af96b | 2022-07-29T13:47:48.000Z | [
"pytorch",
"vilt",
"arxiv:2102.03334",
"transformers",
"visual-question-answering",
"license:apache-2.0"
] | visual-question-answering | false | dandelin | null | dandelin/vilt-b32-finetuned-vqa | 4,795 | 3 | transformers | 908 | ---
tags:
- visual-question-answering
license: apache-2.0
widget:
- text: "What animal is it?"
src: "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
- text: "Where is it?"
src: "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg"
---
# Vision-and-Language Transformer (ViLT), fine-tuned on VQAv2
Vision-and-Language Transformer (ViLT) model fine-tuned on [VQAv2](https://visualqa.org/). It was introduced in the paper [ViLT: Vision-and-Language Transformer
Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).
Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Intended uses & limitations
You can use the raw model for visual question answering.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import ViltProcessor, ViltForQuestionAnswering
import requests
from PIL import Image
# prepare image + question
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
# forward pass
outputs = model(**encoding)
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```
## Training data
(to do)
## Training procedure
### Preprocessing
(to do)
### Pretraining
(to do)
## Evaluation results
(to do)
### BibTeX entry and citation info
```bibtex
@misc{kim2021vilt,
title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
author={Wonjae Kim and Bokyung Son and Ildoo Kim},
year={2021},
eprint={2102.03334},
archivePrefix={arXiv},
primaryClass={stat.ML}
}
``` |
Salesforce/codegen-350M-mono | a268ccf6c2f3f6c8eb582bbea2411d750253864f | 2022-06-28T17:46:25.000Z | [
"pytorch",
"codegen",
"text-generation",
"arxiv:2203.13474",
"transformers",
"license:bsd-3-clause"
] | text-generation | false | Salesforce | null | Salesforce/codegen-350M-mono | 4,795 | 1 | transformers | 909 | ---
license: bsd-3-clause
---
# CodeGen (CodeGen-Mono 350M)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Mono 350M** in the paper, where "Mono" means the model is initialized with *CodeGen-Multi 350M* and further pre-trained on a Python programming language dataset, and "350M" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Mono 350M) was firstly initialized with *CodeGen-Multi 350M*, and then pre-trained on BigPython dataset. The data consists of 71.7B tokens of Python programming language. See Section 2.1 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
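As a hedged variation on the snippet above (the prompt string and function name are purely illustrative), a comment-style prompt can be used for program synthesis:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")

# Comment-style prompt describing the desired function, followed by a partial signature.
text = "# Write a function that returns the sum of squares of a list of numbers\ndef sum_of_squares(nums):"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```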
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
microsoft/deberta-v2-xxlarge-mnli | 8f8b43ddfc6f5e93a7abd1d1506d606b080a4c06 | 2021-05-21T20:08:40.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"arxiv:2006.03654",
"transformers",
"deberta",
"deberta-mnli",
"license:mit"
] | text-classification | false | microsoft | null | microsoft/deberta-v2-xxlarge-mnli | 4,792 | 1 | transformers | 910 | ---
language: en
tags:
- deberta
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
widget:
- text: "[CLS] I love you. [SEP] I like you. [SEP]"
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 XXLarge model fine-tuned on the MNLI task, with 48 layers and a hidden size of 1536. Total parameters: 1.5B.
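A minimal inference sketch (not from the original card; the premise/hypothesis pair mirrors the widget example above, and the label mapping is read from the model config rather than assumed):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

name = "microsoft/deberta-v2-xxlarge-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Score a premise/hypothesis pair and print the predicted NLI label.
inputs = tokenizer("I love you.", "I like you.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```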
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 are also slightly improved when starting from MNLI fine-tuned models; however, we only report numbers fine-tuned from the pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed**, as it is faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=rte
output_dir="ds_results"
num_gpus=8
batch_size=4
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
run_glue.py \
--model_name_or_path microsoft/deberta-v2-xxlarge-mnli \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 256 \
--per_device_train_batch_size ${batch_size} \
--learning_rate 3e-6 \
--num_train_epochs 3 \
--output_dir $output_dir \
--overwrite_output_dir \
--logging_steps 10 \
--logging_dir $output_dir \
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=rte
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge-mnli \
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 4 \
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
```bibtex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
Langboat/mengzi-t5-base | fbd9c58ba8e1a5393668fbdfe477ec70267c01e7 | 2021-10-21T12:33:28.000Z | [
"pytorch",
"t5",
"text2text-generation",
"zh",
"arxiv:2110.06696",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | Langboat | null | Langboat/mengzi-t5-base | 4,775 | 15 | transformers | 911 | ---
language:
- zh
license: apache-2.0
---
# Mengzi-T5 model (Chinese)
Pretrained model on 300G Chinese corpus.
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Langboat/mengzi-t5-base")
model = T5ForConditionalGeneration.from_pretrained("Langboat/mengzi-t5-base")
```
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
yiyanghkust/finbert-fls | 443586dc31c765c0aaf1c4daaed8cf3643c92fa5 | 2022-06-10T23:20:05.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"financial-text-analysis",
"forward-looking-statement"
] | text-classification | false | yiyanghkust | null | yiyanghkust/finbert-fls | 4,754 | 3 | transformers | 912 | ---
language: "en"
tags:
- financial-text-analysis
- forward-looking-statement
widget:
- text: "We expect the age of our fleet to enhance availability and reliability due to reduced downtime for repairs. "
---
Forward-looking statements (FLS) inform investors of managers’ beliefs and opinions about a firm's future events or results. Identifying forward-looking statements in corporate reports can assist investors in financial analysis. FinBERT-FLS is a FinBERT model fine-tuned on 3,500 manually annotated sentences from the Management Discussion and Analysis sections of annual reports of Russell 3000 firms.
**Input**: A financial text.
**Output**: Specific FLS, Non-specific FLS, or Not FLS.
# How to use
You can use this model with the Transformers pipeline for forward-looking statement classification.
```python
# tested in transformers==4.18.0
from transformers import BertTokenizer, BertForSequenceClassification, pipeline
finbert = BertForSequenceClassification.from_pretrained('yiyanghkust/finbert-fls',num_labels=3)
tokenizer = BertTokenizer.from_pretrained('yiyanghkust/finbert-fls')
nlp = pipeline("text-classification", model=finbert, tokenizer=tokenizer)
results = nlp('We expect the age of our fleet to enhance availability and reliability due to reduced downtime for repairs.')
print(results) # [{'label': 'Specific FLS', 'score': 0.77278733253479}]
```
Visit [FinBERT.AI](https://finbert.ai/) for more details on the recent development of FinBERT. |
stas/t5-very-small-random | 988f491d1f2b837d47895885d96b1d4992a25d0e | 2021-04-21T02:34:01.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | stas | null | stas/t5-very-small-random | 4,736 | null | transformers | 913 | This is a tiny random t5 model used for testing
See `t5-make-very-small-model.py` for how it was created. |
transformersbook/bert-base-uncased-finetuned-clinc | 795b076da71dc236dde692338e21560cbbffa6e4 | 2022-02-05T16:38:54.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:1909.02027",
"transformers"
] | text-classification | false | transformersbook | null | transformersbook/bert-base-uncased-finetuned-clinc | 4,715 | null | transformers | 914 | # Intent Detection with BERT
This model was trained on the [CLINC150](https://arxiv.org/abs/1909.02027) dataset for customer intent detection. The dataset can be found on the [Hub](https://huggingface.co/datasets/clinc_oos). The model is used in Chapter 8: Making Transformers Efficient in Production in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/08_model-compression.ipynb).
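A minimal inference sketch (not part of the original card; it assumes the standard text-classification pipeline, and the query is only an illustrative banking-style intent):
```python
from transformers import pipeline

# Classify a customer query into one of the CLINC150 intents.
classifier = pipeline("text-classification", model="transformersbook/bert-base-uncased-finetuned-clinc")
print(classifier("Transfer $100 from my checking account to my savings account."))
```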
|
bigscience/bloom-6b3 | 94a4acd1e473db402c72df105b31d05c46632e5d | 2022-07-13T09:02:39.000Z | [
"pytorch",
"jax",
"bloom",
"feature-extraction",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zhs",
"zht",
"zu",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"transformers",
"license:bigscience-bloom-rail-1.0",
"text-generation"
] | text-generation | false | bigscience | null | bigscience/bloom-6b3 | 4,706 | 2 | transformers | 915 | ---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
# <span style="color:red"><b>WARNING:</b> This is an <b>intermediary checkpoint</b>. It is not fully trained yet. You might want to use [Bloom-1B3](https://huggingface.co/bigscience/bloom-1b3) if you want a model that has completed training.</span>
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Data](#training-data)
4. [Risks and Limitations](#risks-and-limitations)
5. [Evaluation](#evaluation)
6. [Recommendations](#recommendations)
7. [Glossary and Calculations](#glossary-and-calculations)
8. [More Information](#more-information)
9. [Model Card Authors](#model-card-authors)
## Model Details
### Basics
*This section provides information for anyone who wants to know about the model.*
<details>
<summary>Click to expand</summary> <br/>
**Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
**Model Type:** Transformer-based Language Model
**Version:** 1.0.0
**Languages:** Multiple; see [training data](#training-data)
**License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
**Release Date Estimate:** Monday, 11.July.2022
**Send Questions to:** [email protected]
**Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
**Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
</details>
### Technical Specifications
*This section provides information for people who work on model development.*
<details>
<summary>Click to expand</summary><br/>
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 6.3 billion parameters:
* 30 layers, 32 attention heads
* Hidden layers are 4096-dimensional
* Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 384 A100 80GB GPUs (48 nodes):
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
  * 8 GPUs per node, using NVLink 4 inter-GPU connects and 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
#### **Training**
_In progress._
Current training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11-176B-ml-logs/)
- Checkpoint size:
- Bf16 weights: 329GB
- Full checkpoint with optimizer states: 2.3TB
- Training throughput: About 150 TFLOP per GPU per second
- Number of epochs: 1 (*current target*)
- Dates:
- Started 11th March, 2022 11:42am PST
- Estimated end: 5th July, 2022
- Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments)
- Server training location: Île-de-France, France
#### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
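An illustrative sketch (not part of the original card) of inspecting the learned BPE segmentation described above, assuming the tokenizer files ship with this checkpoint:
```python
from transformers import AutoTokenizer

# Load the tokenizer from this checkpoint's repository and inspect how it segments a sentence.
tok = AutoTokenizer.from_pretrained("bigscience/bloom-6b3")
ids = tok("BigScience is an open research collaboration.")["input_ids"]
print(len(ids), tok.convert_ids_to_tokens(ids))
```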
</details>
### Environmental Impact
<details>
<summary>Click to expand</summary><br/>
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
</details>
<p> </p>
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
<details>
<summary>Click to expand</summary><br/>
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
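A minimal text-generation sketch (not part of the original card; it assumes the standard Transformers causal-LM API, and the prompt is only illustrative):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Note: this checkpoint is large; loading it requires a suitable GPU or CPU/disk offloading.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-6b3")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-6b3")

inputs = tokenizer("The main ingredients of a good model card are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```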
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
</details>
<p> </p>
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
<details>
<summary>Click to expand</summary><br/>
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

The following table shows the further distribution of Niger-Congo and Indic languages in the training data.
<details>
<summary>Click to expand</summary><br/>
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 |
| Isi Zulu | 0.001 |
| Igbo | 0.001 |
| Xhosa | 0.001 |
| Kinyarwanda | 0.003 |
| Yoruba | 0.006 |
| Swahili | 0.02 |
</details>
The following table shows the distribution of programming languages.
<details>
<summary>Click to expand</summary><br/>
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb | Ruby | 678,413 |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
</details>
</details>
<p> </p>
## Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
<details>
<summary>Click to expand</summary><br/>
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
</details>
<p> </p>
## Evaluation
*This section describes the evaluation protocols and provides the results.*
<details>
<summary>Click to expand</summary><br/>
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.0
- Validation Loss: 2.2
- Perplexity: 8.9
(More evaluation scores forthcoming at the end of model training.)
</details>
<p> </p>
## Recommendations
*This section provides information on warnings and potential mitigations.*
<details>
<summary>Click to expand</summary><br/>
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
</details>
<p> </p>
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
<details>
<summary>Click to expand</summary><br/>
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on the probability the model assigns to new data. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy (see the short sketch after this list).
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
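As a concrete illustration of how the [Loss](#loss) and [Perplexity](#perplexity) entries relate, perplexity is the exponential of the per-token cross-entropy loss, so the train-time numbers reported under [Results](#results) can be checked in a few lines of plain Python (the rounding in the comments is ours):

```python
import math

# Validation cross-entropy loss reported at the 25.May.2022 checkpoint (see Results above)
validation_loss = 2.2  # nats per token

# Perplexity is the exponential of the cross-entropy loss
perplexity = math.exp(validation_loss)
print(round(perplexity, 1))  # ~9.0, consistent with the reported perplexity of 8.9
# (the small gap comes from the loss itself being rounded to one decimal place)
```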
</details>
<p> </p>
## More Information
<details>
<summary>Click to expand</summary><br/>
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
</details>
<p> </p>
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay
|
Helsinki-NLP/opus-mt-ine-en | bd46b80a8dfef9dcd1a1201bbe2807bdec97ad9b | 2020-08-21T14:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ca",
"es",
"os",
"ro",
"fy",
"cy",
"sc",
"is",
"yi",
"lb",
"an",
"sq",
"fr",
"ht",
"rm",
"ps",
"af",
"uk",
"sl",
"lt",
"bg",
"be",
"gd",
"si",
"en",
"br",
"mk",
"or",
"mr",
"ru",
"fo",
"co",
"oc",
"pl",
"gl",
"nb",
"bn",
"id",
"hy",
"da",
"gv",
"nl",
"pt",
"hi",
"as",
"kw",
"ga",
"sv",
"gu",
"wa",
"lv",
"el",
"it",
"hr",
"ur",
"nn",
"de",
"cs",
"ine",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ine-en | 4,704 | null | transformers | 916 | ---
language:
- ca
- es
- os
- ro
- fy
- cy
- sc
- is
- yi
- lb
- an
- sq
- fr
- ht
- rm
- ps
- af
- uk
- sl
- lt
- bg
- be
- gd
- si
- en
- br
- mk
- or
- mr
- ru
- fo
- co
- oc
- pl
- gl
- nb
- bn
- id
- hy
- da
- gv
- nl
- pt
- hi
- as
- kw
- ga
- sv
- gu
- wa
- lv
- el
- it
- hr
- ur
- nn
- de
- cs
- ine
tags:
- translation
license: apache-2.0
---
### ine-eng
* source group: Indo-European languages
* target group: English
* OPUS readme: [ine-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ine-eng/README.md)
* model: transformer
* source language(s): afr aln ang_Latn arg asm ast awa bel bel_Latn ben bho bos_Latn bre bul bul_Latn cat ces cor cos csb_Latn cym dan deu dsb egl ell enm_Latn ext fao fra frm_Latn frr fry gcf_Latn gla gle glg glv gom gos got_Goth grc_Grek gsw guj hat hif_Latn hin hrv hsb hye ind isl ita jdt_Cyrl ksh kur_Arab kur_Latn lad lad_Latn lat_Latn lav lij lit lld_Latn lmo ltg ltz mai mar max_Latn mfe min mkd mwl nds nld nno nob nob_Hebr non_Latn npi oci ori orv_Cyrl oss pan_Guru pap pdc pes pes_Latn pes_Thaa pms pnb pol por prg_Latn pus roh rom ron rue rus san_Deva scn sco sgs sin slv snd_Arab spa sqi srp_Cyrl srp_Latn stq swe swg tgk_Cyrl tly_Latn tmw_Latn ukr urd vec wln yid zlm_Latn zsm_Latn zza
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.eval.txt)
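The checkpoint is also available on the Hugging Face Hub as `Helsinki-NLP/opus-mt-ine-en`; below is a minimal usage sketch with the `transformers` Marian classes (the example sentences are illustrative, and a recent `transformers` version is assumed):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ine-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Any of the supported Indo-European source languages can be mixed in one batch;
# here: one French and one Russian sentence, both translated into English.
src_texts = ["Le chien dort sur le canapé.", "Я люблю читать книги."]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```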
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-hineng.hin.eng | 11.2 | 0.375 |
| newsdev2016-enro-roneng.ron.eng | 35.5 | 0.614 |
| newsdev2017-enlv-laveng.lav.eng | 25.1 | 0.542 |
| newsdev2019-engu-gujeng.guj.eng | 16.0 | 0.420 |
| newsdev2019-enlt-liteng.lit.eng | 24.0 | 0.522 |
| newsdiscussdev2015-enfr-fraeng.fra.eng | 30.1 | 0.550 |
| newsdiscusstest2015-enfr-fraeng.fra.eng | 33.4 | 0.572 |
| newssyscomb2009-ceseng.ces.eng | 24.0 | 0.520 |
| newssyscomb2009-deueng.deu.eng | 25.7 | 0.526 |
| newssyscomb2009-fraeng.fra.eng | 27.9 | 0.550 |
| newssyscomb2009-itaeng.ita.eng | 31.4 | 0.574 |
| newssyscomb2009-spaeng.spa.eng | 28.3 | 0.555 |
| news-test2008-deueng.deu.eng | 24.0 | 0.515 |
| news-test2008-fraeng.fra.eng | 24.5 | 0.524 |
| news-test2008-spaeng.spa.eng | 25.5 | 0.533 |
| newstest2009-ceseng.ces.eng | 23.3 | 0.516 |
| newstest2009-deueng.deu.eng | 23.2 | 0.512 |
| newstest2009-fraeng.fra.eng | 27.3 | 0.545 |
| newstest2009-itaeng.ita.eng | 30.3 | 0.567 |
| newstest2009-spaeng.spa.eng | 27.9 | 0.549 |
| newstest2010-ceseng.ces.eng | 23.8 | 0.523 |
| newstest2010-deueng.deu.eng | 26.2 | 0.545 |
| newstest2010-fraeng.fra.eng | 28.6 | 0.562 |
| newstest2010-spaeng.spa.eng | 31.4 | 0.581 |
| newstest2011-ceseng.ces.eng | 24.2 | 0.521 |
| newstest2011-deueng.deu.eng | 23.9 | 0.522 |
| newstest2011-fraeng.fra.eng | 29.5 | 0.570 |
| newstest2011-spaeng.spa.eng | 30.3 | 0.570 |
| newstest2012-ceseng.ces.eng | 23.5 | 0.516 |
| newstest2012-deueng.deu.eng | 24.9 | 0.529 |
| newstest2012-fraeng.fra.eng | 30.0 | 0.568 |
| newstest2012-ruseng.rus.eng | 29.9 | 0.565 |
| newstest2012-spaeng.spa.eng | 33.3 | 0.593 |
| newstest2013-ceseng.ces.eng | 25.6 | 0.531 |
| newstest2013-deueng.deu.eng | 27.7 | 0.545 |
| newstest2013-fraeng.fra.eng | 30.0 | 0.561 |
| newstest2013-ruseng.rus.eng | 24.4 | 0.514 |
| newstest2013-spaeng.spa.eng | 30.8 | 0.577 |
| newstest2014-csen-ceseng.ces.eng | 27.7 | 0.558 |
| newstest2014-deen-deueng.deu.eng | 27.7 | 0.545 |
| newstest2014-fren-fraeng.fra.eng | 32.2 | 0.592 |
| newstest2014-hien-hineng.hin.eng | 16.7 | 0.450 |
| newstest2014-ruen-ruseng.rus.eng | 27.2 | 0.552 |
| newstest2015-encs-ceseng.ces.eng | 25.4 | 0.518 |
| newstest2015-ende-deueng.deu.eng | 28.8 | 0.552 |
| newstest2015-enru-ruseng.rus.eng | 25.6 | 0.527 |
| newstest2016-encs-ceseng.ces.eng | 27.0 | 0.540 |
| newstest2016-ende-deueng.deu.eng | 33.5 | 0.592 |
| newstest2016-enro-roneng.ron.eng | 32.8 | 0.591 |
| newstest2016-enru-ruseng.rus.eng | 24.8 | 0.523 |
| newstest2017-encs-ceseng.ces.eng | 23.7 | 0.510 |
| newstest2017-ende-deueng.deu.eng | 29.3 | 0.556 |
| newstest2017-enlv-laveng.lav.eng | 18.9 | 0.486 |
| newstest2017-enru-ruseng.rus.eng | 28.0 | 0.546 |
| newstest2018-encs-ceseng.ces.eng | 24.9 | 0.521 |
| newstest2018-ende-deueng.deu.eng | 36.0 | 0.604 |
| newstest2018-enru-ruseng.rus.eng | 23.8 | 0.517 |
| newstest2019-deen-deueng.deu.eng | 31.5 | 0.570 |
| newstest2019-guen-gujeng.guj.eng | 12.1 | 0.377 |
| newstest2019-lten-liteng.lit.eng | 26.6 | 0.555 |
| newstest2019-ruen-ruseng.rus.eng | 27.5 | 0.541 |
| Tatoeba-test.afr-eng.afr.eng | 59.0 | 0.724 |
| Tatoeba-test.ang-eng.ang.eng | 9.9 | 0.254 |
| Tatoeba-test.arg-eng.arg.eng | 41.6 | 0.487 |
| Tatoeba-test.asm-eng.asm.eng | 22.8 | 0.392 |
| Tatoeba-test.ast-eng.ast.eng | 36.1 | 0.521 |
| Tatoeba-test.awa-eng.awa.eng | 11.6 | 0.280 |
| Tatoeba-test.bel-eng.bel.eng | 42.2 | 0.597 |
| Tatoeba-test.ben-eng.ben.eng | 45.8 | 0.598 |
| Tatoeba-test.bho-eng.bho.eng | 34.4 | 0.518 |
| Tatoeba-test.bre-eng.bre.eng | 24.4 | 0.405 |
| Tatoeba-test.bul-eng.bul.eng | 50.8 | 0.660 |
| Tatoeba-test.cat-eng.cat.eng | 51.2 | 0.677 |
| Tatoeba-test.ces-eng.ces.eng | 47.6 | 0.641 |
| Tatoeba-test.cor-eng.cor.eng | 5.4 | 0.214 |
| Tatoeba-test.cos-eng.cos.eng | 61.0 | 0.675 |
| Tatoeba-test.csb-eng.csb.eng | 22.5 | 0.394 |
| Tatoeba-test.cym-eng.cym.eng | 34.7 | 0.522 |
| Tatoeba-test.dan-eng.dan.eng | 56.2 | 0.708 |
| Tatoeba-test.deu-eng.deu.eng | 44.9 | 0.625 |
| Tatoeba-test.dsb-eng.dsb.eng | 21.0 | 0.383 |
| Tatoeba-test.egl-eng.egl.eng | 6.9 | 0.221 |
| Tatoeba-test.ell-eng.ell.eng | 62.1 | 0.741 |
| Tatoeba-test.enm-eng.enm.eng | 22.6 | 0.466 |
| Tatoeba-test.ext-eng.ext.eng | 33.2 | 0.496 |
| Tatoeba-test.fao-eng.fao.eng | 28.1 | 0.460 |
| Tatoeba-test.fas-eng.fas.eng | 9.6 | 0.306 |
| Tatoeba-test.fra-eng.fra.eng | 50.3 | 0.661 |
| Tatoeba-test.frm-eng.frm.eng | 30.0 | 0.457 |
| Tatoeba-test.frr-eng.frr.eng | 15.2 | 0.301 |
| Tatoeba-test.fry-eng.fry.eng | 34.4 | 0.525 |
| Tatoeba-test.gcf-eng.gcf.eng | 18.4 | 0.317 |
| Tatoeba-test.gla-eng.gla.eng | 24.1 | 0.400 |
| Tatoeba-test.gle-eng.gle.eng | 52.2 | 0.671 |
| Tatoeba-test.glg-eng.glg.eng | 50.5 | 0.669 |
| Tatoeba-test.glv-eng.glv.eng | 5.7 | 0.189 |
| Tatoeba-test.gos-eng.gos.eng | 19.2 | 0.378 |
| Tatoeba-test.got-eng.got.eng | 0.1 | 0.022 |
| Tatoeba-test.grc-eng.grc.eng | 0.9 | 0.095 |
| Tatoeba-test.gsw-eng.gsw.eng | 23.9 | 0.390 |
| Tatoeba-test.guj-eng.guj.eng | 28.0 | 0.428 |
| Tatoeba-test.hat-eng.hat.eng | 44.2 | 0.567 |
| Tatoeba-test.hbs-eng.hbs.eng | 51.6 | 0.666 |
| Tatoeba-test.hif-eng.hif.eng | 22.3 | 0.451 |
| Tatoeba-test.hin-eng.hin.eng | 41.7 | 0.585 |
| Tatoeba-test.hsb-eng.hsb.eng | 46.4 | 0.590 |
| Tatoeba-test.hye-eng.hye.eng | 40.4 | 0.564 |
| Tatoeba-test.isl-eng.isl.eng | 43.8 | 0.605 |
| Tatoeba-test.ita-eng.ita.eng | 60.7 | 0.735 |
| Tatoeba-test.jdt-eng.jdt.eng | 5.5 | 0.091 |
| Tatoeba-test.kok-eng.kok.eng | 7.8 | 0.205 |
| Tatoeba-test.ksh-eng.ksh.eng | 15.8 | 0.284 |
| Tatoeba-test.kur-eng.kur.eng | 11.6 | 0.232 |
| Tatoeba-test.lad-eng.lad.eng | 30.7 | 0.484 |
| Tatoeba-test.lah-eng.lah.eng | 11.0 | 0.286 |
| Tatoeba-test.lat-eng.lat.eng | 24.4 | 0.432 |
| Tatoeba-test.lav-eng.lav.eng | 47.2 | 0.646 |
| Tatoeba-test.lij-eng.lij.eng | 9.0 | 0.287 |
| Tatoeba-test.lit-eng.lit.eng | 51.7 | 0.670 |
| Tatoeba-test.lld-eng.lld.eng | 22.4 | 0.369 |
| Tatoeba-test.lmo-eng.lmo.eng | 26.1 | 0.381 |
| Tatoeba-test.ltz-eng.ltz.eng | 39.8 | 0.536 |
| Tatoeba-test.mai-eng.mai.eng | 72.3 | 0.758 |
| Tatoeba-test.mar-eng.mar.eng | 32.0 | 0.554 |
| Tatoeba-test.mfe-eng.mfe.eng | 63.1 | 0.822 |
| Tatoeba-test.mkd-eng.mkd.eng | 49.5 | 0.638 |
| Tatoeba-test.msa-eng.msa.eng | 38.6 | 0.566 |
| Tatoeba-test.multi.eng | 45.6 | 0.615 |
| Tatoeba-test.mwl-eng.mwl.eng | 40.4 | 0.767 |
| Tatoeba-test.nds-eng.nds.eng | 35.5 | 0.538 |
| Tatoeba-test.nep-eng.nep.eng | 4.9 | 0.209 |
| Tatoeba-test.nld-eng.nld.eng | 54.2 | 0.694 |
| Tatoeba-test.non-eng.non.eng | 39.3 | 0.573 |
| Tatoeba-test.nor-eng.nor.eng | 50.9 | 0.663 |
| Tatoeba-test.oci-eng.oci.eng | 19.6 | 0.386 |
| Tatoeba-test.ori-eng.ori.eng | 16.2 | 0.364 |
| Tatoeba-test.orv-eng.orv.eng | 13.6 | 0.288 |
| Tatoeba-test.oss-eng.oss.eng | 9.4 | 0.301 |
| Tatoeba-test.pan-eng.pan.eng | 17.1 | 0.389 |
| Tatoeba-test.pap-eng.pap.eng | 57.0 | 0.680 |
| Tatoeba-test.pdc-eng.pdc.eng | 41.6 | 0.526 |
| Tatoeba-test.pms-eng.pms.eng | 13.7 | 0.333 |
| Tatoeba-test.pol-eng.pol.eng | 46.5 | 0.632 |
| Tatoeba-test.por-eng.por.eng | 56.4 | 0.710 |
| Tatoeba-test.prg-eng.prg.eng | 2.3 | 0.193 |
| Tatoeba-test.pus-eng.pus.eng | 3.2 | 0.194 |
| Tatoeba-test.roh-eng.roh.eng | 17.5 | 0.420 |
| Tatoeba-test.rom-eng.rom.eng | 5.0 | 0.237 |
| Tatoeba-test.ron-eng.ron.eng | 51.4 | 0.670 |
| Tatoeba-test.rue-eng.rue.eng | 26.0 | 0.447 |
| Tatoeba-test.rus-eng.rus.eng | 47.8 | 0.634 |
| Tatoeba-test.san-eng.san.eng | 4.0 | 0.195 |
| Tatoeba-test.scn-eng.scn.eng | 45.1 | 0.440 |
| Tatoeba-test.sco-eng.sco.eng | 41.9 | 0.582 |
| Tatoeba-test.sgs-eng.sgs.eng | 38.7 | 0.498 |
| Tatoeba-test.sin-eng.sin.eng | 29.7 | 0.499 |
| Tatoeba-test.slv-eng.slv.eng | 38.2 | 0.564 |
| Tatoeba-test.snd-eng.snd.eng | 12.7 | 0.342 |
| Tatoeba-test.spa-eng.spa.eng | 53.2 | 0.687 |
| Tatoeba-test.sqi-eng.sqi.eng | 51.9 | 0.679 |
| Tatoeba-test.stq-eng.stq.eng | 9.0 | 0.391 |
| Tatoeba-test.swe-eng.swe.eng | 57.4 | 0.705 |
| Tatoeba-test.swg-eng.swg.eng | 18.0 | 0.338 |
| Tatoeba-test.tgk-eng.tgk.eng | 24.3 | 0.413 |
| Tatoeba-test.tly-eng.tly.eng | 1.1 | 0.094 |
| Tatoeba-test.ukr-eng.ukr.eng | 48.0 | 0.639 |
| Tatoeba-test.urd-eng.urd.eng | 27.2 | 0.471 |
| Tatoeba-test.vec-eng.vec.eng | 28.0 | 0.398 |
| Tatoeba-test.wln-eng.wln.eng | 17.5 | 0.320 |
| Tatoeba-test.yid-eng.yid.eng | 26.9 | 0.457 |
| Tatoeba-test.zza-eng.zza.eng | 1.7 | 0.131 |
### System Info:
- hf_name: ine-eng
- source_languages: ine
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ine-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'en', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']
- src_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos_Latn', 'lad_Latn', 'lat_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm_Latn', 'srd', 'gcf_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur_Latn', 'arg', 'pes_Thaa', 'sqi', 'csb_Latn', 'fra', 'hat', 'non_Latn', 'sco', 'pnb', 'roh', 'bul_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw_Latn', 'hsb', 'tly_Latn', 'bul', 'bel', 'got_Goth', 'lat_Grek', 'ext', 'gla', 'mai', 'sin', 'hif_Latn', 'eng', 'bre', 'nob_Hebr', 'prg_Latn', 'ang_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr_Arab', 'san_Deva', 'gos', 'rus', 'fao', 'orv_Cyrl', 'bel_Latn', 'cos', 'zza', 'grc_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk_Cyrl', 'hye_Latn', 'pdc', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp_Latn', 'zlm_Latn', 'ind', 'rom', 'hye', 'scn', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus_Latn', 'jdt_Cyrl', 'gsw', 'glv', 'nld', 'snd_Arab', 'kur_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm_Latn', 'ksh', 'pan_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld_Latn', 'ces', 'egl', 'vec', 'max_Latn', 'pes_Latn', 'ltg', 'nds'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.test.txt
- src_alpha3: ine
- tgt_alpha3: eng
- short_pair: ine-en
- chrF2_score: 0.615
- bleu: 45.6
- brevity_penalty: 0.997
- ref_len: 71872.0
- src_name: Indo-European languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: ine
- tgt_alpha2: en
- prefer_old: False
- long_pair: ine-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
hf-internal-testing/tiny-albert | de839e2e6aa81798e2ac85a8ac414da41862e23a | 2021-07-16T01:27:09.000Z | [
"pytorch",
"albert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | hf-internal-testing | null | hf-internal-testing/tiny-albert | 4,675 | null | transformers | 917 | This is a tiny-albert random model to be used for basic testing.
|
microsoft/beit-base-patch16-224 | f73f827d08788b563c0b7a3eb04169a77e79a588 | 2022-01-28T10:18:01.000Z | [
"pytorch",
"jax",
"beit",
"image-classification",
"dataset:imagenet",
"dataset:imagenet-21k",
"arxiv:2106.08254",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | microsoft | null | microsoft/beit-base-patch16-224 | 4,633 | 1 | transformers | 918 | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
- imagenet-21k
---
# BEiT (base-sized model, fine-tuned on ImageNet-1k)
BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).
Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches.
Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import BeitFeatureExtractor, BeitForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-224')
model = BeitForImageClassification.from_pretrained('microsoft/beit-base-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
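As described in the model description above, the encoder can alternatively be used as a feature extractor by mean-pooling the final patch embeddings and training a small head on top. A minimal sketch of that setup follows (the linear head and `num_labels` are illustrative placeholders for a downstream dataset of your own, not part of the released model):

```python
import torch
from torch import nn
from transformers import BeitFeatureExtractor, BeitModel
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-224')
backbone = BeitModel.from_pretrained('microsoft/beit-base-patch16-224')

num_labels = 10  # placeholder: number of classes in your downstream dataset
classifier = nn.Linear(backbone.config.hidden_size, num_labels)

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    hidden_states = backbone(**inputs).last_hidden_state  # (batch, 1 + num_patches, hidden_size)
patch_embeddings = hidden_states[:, 1:, :]  # drop the [CLS] token
pooled = patch_embeddings.mean(dim=1)       # mean-pool over the patch tokens
logits = classifier(pooled)                 # train this head (and optionally the backbone) on your labels
```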
## Training data
The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254).
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```@article{DBLP:journals/corr/abs-2106-08254,
author = {Hangbo Bao and
Li Dong and
Furu Wei},
title = {BEiT: {BERT} Pre-Training of Image Transformers},
journal = {CoRR},
volume = {abs/2106.08254},
year = {2021},
url = {https://arxiv.org/abs/2106.08254},
archivePrefix = {arXiv},
eprint = {2106.08254},
timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
facebook/wmt19-en-ru | a96448f0cef6a0b22c8f73f6f0e74b610efe806f | 2020-12-11T21:39:58.000Z | [
"pytorch",
"fsmt",
"text2text-generation",
"en",
"ru",
"dataset:wmt19",
"arxiv:1907.06616",
"transformers",
"translation",
"wmt19",
"facebook",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | facebook | null | facebook/wmt19-en-ru | 4,623 | 1 | transformers | 919 | ---
language:
- en
- ru
tags:
- translation
- wmt19
- facebook
license: apache-2.0
datasets:
- wmt19
metrics:
- bleu
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
---
# FSMT
## Model description
This is a ported version of [fairseq wmt19 transformer](https://github.com/pytorch/fairseq/blob/master/examples/wmt19/README.md) for en-ru.
For more details, please see [Facebook FAIR's WMT19 News Translation Task Submission](https://arxiv.org/abs/1907.06616).
The abbreviation FSMT stands for FairSeqMachineTranslation.
All four models are available:
* [wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru)
* [wmt19-ru-en](https://huggingface.co/facebook/wmt19-ru-en)
* [wmt19-en-de](https://huggingface.co/facebook/wmt19-en-de)
* [wmt19-de-en](https://huggingface.co/facebook/wmt19-de-en)
## Intended uses & limitations
#### How to use
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer
mname = "facebook/wmt19-en-ru"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
input = "Machine learning is great, isn't it?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # Машинное обучение - это здорово, не так ли?
```
#### Limitations and bias
- The original (and this ported model) doesn't seem to handle well inputs with repeated sub-phrases, [content gets truncated](https://discuss.huggingface.co/t/issues-with-translating-inputs-containing-repeated-phrases/981)
## Training data
Pretrained weights were left identical to the original model released by fairseq. For more details, please see the [paper](https://arxiv.org/abs/1907.06616).
## Eval results
pair | fairseq | transformers
-------|---------|----------
en-ru | [36.4](http://matrix.statmt.org/matrix/output/1914?run_id=6724) | 33.47
The score is slightly below the score reported by `fairseq`, since `transformers` currently doesn't support:
- model ensemble, therefore the best performing checkpoint was ported (``model4.pt``).
- re-ranking
The score was calculated using this code:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
export PAIR=en-ru
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=15
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py facebook/wmt19-$PAIR $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
```
note: fairseq reports using a beam of 50, so you should get a slightly higher score if re-run with `--num_beams 50`.
## Data Sources
- [training, etc.](http://www.statmt.org/wmt19/)
- [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020},
title={Facebook FAIR's WMT19 News Translation Task Submission},
author={Ng, Nathan and Yee, Kyra and Baevski, Alexei and Ott, Myle and Auli, Michael and Edunov, Sergey},
booktitle={Proc. of WMT},
}
```
## TODO
- port model ensemble (fairseq uses 4 model checkpoints)
|
indolem/indobertweet-base-uncased | 32e28c05b47e33b6675d2670a1745c50a65e987a | 2021-09-18T01:24:17.000Z | [
"pytorch",
"bert",
"fill-mask",
"id",
"dataset:Twitter 2021",
"arxiv:2109.04607",
"transformers",
"Twitter",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | indolem | null | indolem/indobertweet-base-uncased | 4,614 | 3 | transformers | 920 | ---
language:
- id
tags:
- Twitter
license: apache-2.0
datasets:
- Twitter 2021
widget:
- text: "guweehh udh ga' paham lg sm [MASK]"
---
# IndoBERTweet 🐦
## 1. Paper
Fajri Koto, Jey Han Lau, and Timothy Baldwin. [_IndoBERTweet: A Pretrained Language Model for Indonesian Twitter
with Effective Domain-Specific Vocabulary Initialization_](https://arxiv.org/pdf/2109.04607.pdf).
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (**EMNLP 2021**), Dominican Republic (virtual).
## 2. About
[IndoBERTweet](https://github.com/indolem/IndoBERTweet) is the first large-scale pretrained model for Indonesian Twitter
that is trained by extending a monolingually trained Indonesian BERT model with additive domain-specific vocabulary.
In this paper, we show that initializing domain-specific vocabulary with average-pooling of BERT subword embeddings is more efficient than pretraining from scratch, and more effective than initializing based on word2vec projections.
## 3. Pretraining Data
We crawl Indonesian tweets over a 1-year period using the official Twitter API, from December 2019 to December 2020, with 60 keywords covering 4 main topics: economy, health, education, and government. We obtain a total of **409M word tokens**, two times larger than the training data used to pretrain [IndoBERT](https://aclanthology.org/2020.coling-main.66.pdf). Due to Twitter policy, this pretraining data will not be released to the public.
## 4. How to use
Load model and tokenizer (tested with transformers==3.5.1)
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("indolem/indobertweet-base-uncased")
model = AutoModel.from_pretrained("indolem/indobertweet-base-uncased")
```
**Preprocessing Steps** (a minimal sketch follows below):
* lower-case all words
* converting user mentions and URLs into @USER and HTTPURL, respectively
* translating emoticons into text using the [emoji package](https://pypi.org/project/emoji/).
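A minimal sketch of these three steps (the helper function, the regular expressions, and the exact `emoji` call are our illustrative assumptions, not code from the authors' repository):

```python
import re

import emoji


def preprocess_tweet(text: str) -> str:
    # 1. lower-case all words
    text = text.lower()
    # 2. convert user mentions and URLs into @USER and HTTPURL
    text = re.sub(r"@\w+", "@USER", text)
    text = re.sub(r"https?://\S+", "HTTPURL", text)
    # 3. translate emoticons into text with the emoji package
    return emoji.demojize(text)


print(preprocess_tweet("Keren banget! 😍 cek https://example.com @Seseorang"))
```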
## 5. Results over 7 Indonesian Twitter Datasets
<table>
<col>
<colgroup span="2"></colgroup>
<colgroup span="2"></colgroup>
<tr>
<th rowspan="2">Models</td>
<th colspan="2" scope="colgroup">Sentiment</th>
<th colspan="1" scope="colgroup">Emotion</th>
<th colspan="2" scope="colgroup">Hate Speech</th>
<th colspan="2" scope="colgroup">NER</th>
<th rowspan="2" scope="colgroup">Average</th>
</tr>
<tr>
<th scope="col">IndoLEM</th>
<th scope="col">SmSA</th>
<th scope="col">EmoT</th>
<th scope="col">HS1</th>
<th scope="col">HS2</th>
<th scope="col">Formal</th>
<th scope="col">Informal</th>
</tr>
<tr>
<td scope="row">mBERT</td>
<td>76.6</td>
<td>84.7</td>
<td>67.5</td>
<td>85.1</td>
<td>75.1</td>
<td>85.2</td>
<td>83.2</td>
<td>79.6</td>
</tr>
<tr>
<td scope="row">malayBERT</td>
<td>82.0</td>
<td>84.1</td>
<td>74.2</td>
<td>85.0</td>
<td>81.9</td>
<td>81.9</td>
<td>81.3</td>
<td>81.5</td>
</tr>
<tr>
<td scope="row">IndoBERT (Willie, et al., 2020)</td>
<td>84.1</td>
<td>88.7</td>
<td>73.3</td>
<td>86.8</td>
<td>80.4</td>
<td>86.3</td>
<td>84.3</td>
<td>83.4</td>
</tr>
<tr>
<td scope="row">IndoBERT (Koto, et al., 2020)</td>
<td>84.1</td>
<td>87.9</td>
<td>71.0</td>
<td>86.4</td>
<td>79.3</td>
<td>88.0</td>
<td><b>86.9</b></td>
<td>83.4</td>
</tr>
<tr>
<td scope="row">IndoBERTweet (1M steps from scratch)</td>
<td>86.2</td>
<td>90.4</td>
<td>76.0</td>
<td><b>88.8</b></td>
<td><b>87.5</b></td>
<td><b>88.1</b></td>
<td>85.4</td>
<td>86.1</td>
</tr>
<tr>
<td scope="row">IndoBERT + Voc adaptation + 200k steps</td>
<td><b>86.6</b></td>
<td><b>92.7</b></td>
<td><b>79.0</b></td>
<td>88.4</td>
<td>84.0</td>
<td>87.7</td>
<td><b>86.9</b></td>
<td><b>86.5</b></td>
</tr>
</table>
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{koto2021indobertweet,
title={IndoBERTweet: A Pretrained Language Model for Indonesian Twitter with Effective Domain-Specific Vocabulary Initialization},
author={Fajri Koto and Jey Han Lau and Timothy Baldwin},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)},
year={2021}
}
``` |
yjernite/bart_eli5 | 38797dd2ef06f5542c6f7db853518703f6b3da21 | 2021-03-09T22:31:11.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:eli5",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | yjernite | null | yjernite/bart_eli5 | 4,594 | 3 | transformers | 921 | ---
language: en
license: apache-2.0
datasets:
- eli5
---
## BART ELI5
Read the article at https://yjernite.github.io/lfqa.html and try the demo at https://huggingface.co/qa/
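A minimal generation sketch with `transformers` is shown below; the `question: ... context: ...` framing and the generation settings are illustrative assumptions, and the full long-form QA pipeline described in the article also retrieves supporting passages to fill the context:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("yjernite/bart_eli5")
model = AutoModelForSeq2SeqLM.from_pretrained("yjernite/bart_eli5")

# In the full LFQA setup, retrieved support documents would follow "context:"
query = "question: Why does the sky look blue during the day? context: "
inputs = tokenizer(query, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=256,
    min_length=64,
    num_beams=4,
    no_repeat_ngram_size=3,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```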
|
nboost/pt-tinybert-msmarco | b0de5c59e4779c149295b9bd0e5988a8f2cd2be7 | 2021-05-20T01:28:00.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | null | false | nboost | null | nboost/pt-tinybert-msmarco | 4,590 | null | transformers | 922 | Entry not found |
setu4993/LaBSE | 082cccca1eea3e0fab80749de8e8aded21bec253 | 2021-12-05T06:10:07.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"bo",
"bs",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"he",
"hi",
"hmn",
"hr",
"ht",
"hu",
"hy",
"id",
"ig",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"or",
"pa",
"pl",
"pt",
"ro",
"ru",
"rw",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tk",
"tl",
"tr",
"tt",
"ug",
"uk",
"ur",
"uz",
"vi",
"wo",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:CommonCrawl",
"dataset:Wikipedia",
"arxiv:2007.01852",
"transformers",
"sentence_embedding",
"multilingual",
"google",
"sentence-similarity",
"license:apache-2.0"
] | feature-extraction | false | setu4993 | null | setu4993/LaBSE | 4,587 | 18 | transformers | 923 | ---
language:
- af
- am
- ar
- as
- az
- be
- bg
- bn
- bo
- bs
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- he
- hi
- hmn
- hr
- ht
- hu
- hy
- id
- ig
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- or
- pa
- pl
- pt
- ro
- ru
- rw
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
tags:
- bert
- sentence_embedding
- multilingual
- google
- sentence-similarity
license: apache-2.0
datasets:
- CommonCrawl
- Wikipedia
---
# LaBSE
## Model description
Language-agnostic BERT Sentence Encoder (LaBSE) is a BERT-based model trained for sentence embedding for 109 languages. The pre-training process combines masked language modeling with translation language modeling. The model is useful for getting multilingual sentence embeddings and for bi-text retrieval.
- Model: [HuggingFace's model hub](https://huggingface.co/setu4993/LaBSE).
- Paper: [arXiv](https://arxiv.org/abs/2007.01852).
- Original model: [TensorFlow Hub](https://tfhub.dev/google/LaBSE/2).
- Blog post: [Google AI Blog](https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html).
- Conversion from TensorFlow to PyTorch: [GitHub](https://github.com/setu4993/convert-labse-tf-pt).
This is migrated from the v2 model on the TF Hub, which uses dict-based input. The embeddings produced by both the versions of the model are [equivalent](https://github.com/setu4993/convert-labse-tf-pt/blob/ec3a019159a54ed6493181a64486c2808c01f216/tests/test_conversion.py#L31).
## Usage
Using the model:
```python
import torch
from transformers import BertModel, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("setu4993/LaBSE")
model = BertModel.from_pretrained("setu4993/LaBSE")
model = model.eval()
english_sentences = [
"dog",
"Puppies are nice.",
"I enjoy taking long walks along the beach with my dog.",
]
english_inputs = tokenizer(english_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
english_outputs = model(**english_inputs)
```
To get the sentence embeddings, use the pooler output:
```python
english_embeddings = english_outputs.pooler_output
```
Output for other languages:
```python
italian_sentences = [
"cane",
"I cuccioli sono carini.",
"Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane.",
]
japanese_sentences = ["犬", "子犬はいいです", "私は犬と一緒にビーチを散歩するのが好きです"]
italian_inputs = tokenizer(italian_sentences, return_tensors="pt", padding=True)
japanese_inputs = tokenizer(japanese_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
italian_outputs = model(**italian_inputs)
japanese_outputs = model(**japanese_inputs)
italian_embeddings = italian_outputs.pooler_output
japanese_embeddings = japanese_outputs.pooler_output
```
For similarity between sentences, an L2-norm is recommended before calculating the similarity:
```python
import torch.nn.functional as F
def similarity(embeddings_1, embeddings_2):
normalized_embeddings_1 = F.normalize(embeddings_1, p=2)
normalized_embeddings_2 = F.normalize(embeddings_2, p=2)
return torch.matmul(
normalized_embeddings_1, normalized_embeddings_2.transpose(0, 1)
)
print(similarity(english_embeddings, italian_embeddings))
print(similarity(english_embeddings, japanese_embeddings))
print(similarity(italian_embeddings, japanese_embeddings))
```
## Details
Details about data, training, evaluation and performance metrics are available in the [original paper](https://arxiv.org/abs/2007.01852).
### BibTeX entry and citation info
```bibtex
@misc{feng2020languageagnostic,
title={Language-agnostic BERT Sentence Embedding},
author={Fangxiaoyu Feng and Yinfei Yang and Daniel Cer and Naveen Arivazhagan and Wei Wang},
year={2020},
eprint={2007.01852},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
uer/roberta-base-finetuned-cluener2020-chinese | 312ce0cd10634f238c0cd3205fa6905933394564 | 2022-02-20T07:51:53.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"zh",
"transformers",
"autotrain_compatible"
] | token-classification | false | uer | null | uer/roberta-base-finetuned-cluener2020-chinese | 4,582 | 6 | transformers | 924 | ---
language: zh
widget:
- text: "江苏警方通报特斯拉冲进店铺"
---
# Chinese RoBERTa-Base Model for NER
## Model description
The model is used for named entity recognition. You can download the model either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo) (in UER-py format), or via HuggingFace from the link [roberta-base-finetuned-cluener2020-chinese](https://huggingface.co/uer/roberta-base-finetuned-cluener2020-chinese).
## How to use
You can use this model directly with a pipeline for token classification :
```python
>>> from transformers import AutoModelForTokenClassification,AutoTokenizer,pipeline
>>> model = AutoModelForTokenClassification.from_pretrained('uer/roberta-base-finetuned-cluener2020-chinese')
>>> tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-finetuned-cluener2020-chinese')
>>> ner = pipeline('ner', model=model, tokenizer=tokenizer)
>>> ner("江苏警方通报特斯拉冲进店铺")
[
{'word': '江', 'score': 0.49153077602386475, 'entity': 'B-address', 'index': 1, 'start': 0, 'end': 1},
{'word': '苏', 'score': 0.6319217681884766, 'entity': 'I-address', 'index': 2, 'start': 1, 'end': 2},
{'word': '特', 'score': 0.5912262797355652, 'entity': 'B-company', 'index': 7, 'start': 6, 'end': 7},
{'word': '斯', 'score': 0.69145667552948, 'entity': 'I-company', 'index': 8, 'start': 7, 'end': 8},
{'word': '拉', 'score': 0.7054660320281982, 'entity': 'I-company', 'index': 9, 'start': 8, 'end': 9}
]
```
## Training data
[CLUENER2020](https://github.com/CLUEbenchmark/CLUENER2020) is used as training data. We only use the train set of the dataset.
## Training procedure
The model is fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We fine-tune for five epochs with a sequence length of 512 on the basis of the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768). At the end of each epoch, the model is saved when the best performance on the development set is achieved.
```
python3 run_ner.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
--vocab_path models/google_zh_vocab.txt \
--train_path datasets/cluener2020/train.tsv \
--dev_path datasets/cluener2020/dev.tsv \
--label2id_path datasets/cluener2020/label2id.json \
--output_model_path models/cluener2020_ner_model.bin \
--learning_rate 3e-5 --epochs_num 5 --batch_size 32 --seq_length 512
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_token_classification_from_uer_to_huggingface.py --input_model_path models/cluener2020_ner_model.bin \
--output_model_path pytorch_model.bin \
--layers_num 12
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{xu2020cluener2020,
title={CLUENER2020: Fine-grained Name Entity Recognition for Chinese},
author={Xu, Liang and Dong, Qianqian and Yu, Cong and Tian, Yin and Liu, Weitang and Li, Lu and Zhang, Xuanwei},
journal={arXiv preprint arXiv:2001.04351},
year={2020}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
``` |
Babelscape/wikineural-multilingual-ner | 65382ee6df6c881a23b02fee6337dbde658ba24c | 2022-02-19T07:32:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"de",
"en",
"es",
"fr",
"it",
"nl",
"pl",
"pt",
"ru",
"dataset:Babelscape/wikineural",
"transformers",
"named-entity-recognition",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | token-classification | false | Babelscape | null | Babelscape/wikineural-multilingual-ner | 4,576 | 5 | transformers | 925 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
widget:
- text: "My name is Wolfgang and I live in Berlin."
- text: "George Washington went to Washington."
- text: "Mi nombre es Sarah y vivo en Londres."
- text: "Меня зовут Симона, и я живу в Риме."
tags:
- named-entity-recognition
datasets:
- Babelscape/wikineural
language:
- de
- en
- es
- fr
- it
- nl
- pl
- pt
- ru
license:
- cc-by-nc-sa-4.0
pretty_name: wikineural-dataset
source_datasets:
- original
task_categories:
- structure-prediction
task_ids:
- named-entity-recognition
---
## Model Description
- **Summary:** mBERT model fine-tuned for 3 epochs on the recently-introduced WikiNEuRal dataset for Multilingual NER. The system supports the 9 languages covered by WikiNEuRal (de, en, es, fr, it, nl, pl, pt, ru), and it was trained on all 9 languages jointly. For a stronger baseline system (mBERT + Bi-LSTM + CRF) look at the official repository.
- **Official Repository:** [https://github.com/Babelscape/wikineural](https://github.com/Babelscape/wikineural)
- **Paper:** [https://aclanthology.org/wikineural](https://aclanthology.org/2021.findings-emnlp.215/)
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Babelscape/wikineural-multilingual-ner")
model = AutoModelForTokenClassification.from_pretrained("Babelscape/wikineural-multilingual-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Wolfgang and I live in Berlin"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is trained on WikiNEuRal, a state-of-the-art dataset for Multilingual NER automatically derived from Wikipedia. Therefore, it may not generalize well to all textual genres (e.g. news). On the other hand, models trained only on news articles (e.g. only on CoNLL03) have been shown to obtain much lower scores on encyclopedic articles. To obtain a more robust system, we encourage training a system on the combination of WikiNEuRal + CoNLL.
## Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents and models belongs to the original copyright holders.
## Citation Information
```bibtex
@inproceedings{tedeschi-etal-2021-wikineural-combined,
title = "{W}iki{NE}u{R}al: {C}ombined Neural and Knowledge-based Silver Data Creation for Multilingual {NER}",
author = "Tedeschi, Simone and
Maiorca, Valentino and
Campolungo, Niccol{\`o} and
Cecconi, Francesco and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.215",
pages = "2521--2533",
abstract = "Multilingual Named Entity Recognition (NER) is a key intermediate task which is needed in many areas of NLP. In this paper, we address the well-known issue of data scarcity in NER, especially relevant when moving to a multilingual scenario, and go beyond current approaches to the creation of multilingual silver data for the task. We exploit the texts of Wikipedia and introduce a new methodology based on the effective combination of knowledge-based approaches and neural models, together with a novel domain adaptation technique, to produce high-quality training corpora for NER. We evaluate our datasets extensively on standard benchmarks for NER, yielding substantial improvements up to 6 span-based F1-score points over previous state-of-the-art systems for data creation.",
}
```
|
cl-tohoku/bert-base-japanese-char-whole-word-masking | fa5374ac8a5b2c6d3f5f8e156a9892cdf687201d | 2021-09-23T13:45:26.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | cl-tohoku | null | cl-tohoku/bert-base-japanese-char-whole-word-masking | 4,575 | 2 | transformers | 926 | ---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 仙台は「[MASK]の都」と呼ばれている。
---
# BERT base Japanese (character tokenization, whole word masking enabled)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by character-level tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v1.0).
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The model is trained on Japanese Wikipedia as of September 1, 2019.
To generate the training corpus, [WikiExtractor](https://github.com/attardi/wikiextractor) is used to extract plain texts from a dump file of Wikipedia articles.
The text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.
## Tokenization
The texts are first tokenized by [MeCab](https://taku910.github.io/mecab/) morphological parser with the IPA dictionary and then split into characters.
The vocabulary size is 4000.
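A minimal sketch of loading the model and inspecting this two-step tokenization (the MeCab step requires the `fugashi` and `ipadic` packages to be installed; the masked example sentence is the widget text above):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

name = "cl-tohoku/bert-base-japanese-char-whole-word-masking"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

# MeCab word segmentation followed by a character-level split
print(tokenizer.tokenize("仙台は杜の都と呼ばれている。"))

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask("仙台は「[MASK]の都」と呼ばれている。"):
    print(prediction["token_str"], round(prediction["score"], 3))
```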
## Training
The model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For the training of the MLM (masked language modeling) objective, we introduced the **Whole Word Masking** in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
For training models, we used Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
Helsinki-NLP/opus-mt-en-ro | 6d32771161c298fb3164c208e48fea050a85ab65 | 2021-09-09T21:38:44.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ro",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-ro | 4,543 | null | transformers | 927 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ro
* source languages: en
* target languages: ro
* OPUS readme: [en-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ro/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ro/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ro/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ro/opus-2019-12-18.eval.txt)
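A minimal usage sketch via the `transformers` translation pipeline is shown below (the example sentence is illustrative):

```python
from transformers import pipeline

translator = pipeline("translation_en_to_ro", model="Helsinki-NLP/opus-mt-en-ro")
print(translator("The weather is nice today.", max_length=40))
```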
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-enro.en.ro | 30.8 | 0.592 |
| newstest2016-enro.en.ro | 28.8 | 0.571 |
| Tatoeba.en.ro | 45.3 | 0.670 |
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-ner | 40b8059a6f3bfbb49d64038e131f49b93cc37417 | 2021-10-17T11:13:00.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-mix-ner | 4,522 | 1 | transformers | 928 | ---
language:
- ar
license: apache-2.0
widget:
- text: "إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع"
---
# CAMeLBERT-Mix NER Model
## Model description
**CAMeLBERT-Mix NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix NER model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component:
```python
>>> from camel_tools.ner import NERecognizer
>>> from camel_tools.tokenizers.word import simple_word_tokenize
>>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-mix-ner')
>>> sentence = simple_word_tokenize('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع')
>>> ner.predict_sentence(sentence)
>>> ['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
```
You can also use the NER model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-ner')
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
[{'word': 'أبوظبي',
'score': 0.9895730018615723,
'entity': 'B-LOC',
'index': 2,
'start': 6,
'end': 12},
{'word': 'الإمارات',
'score': 0.8156259655952454,
'entity': 'B-LOC',
'index': 8,
'start': 33,
'end': 41},
{'word': 'العربية',
'score': 0.890906810760498,
'entity': 'I-LOC',
'index': 9,
'start': 42,
'end': 49},
{'word': 'المتحدة',
'score': 0.8169114589691162,
'entity': 'I-LOC',
'index': 10,
'start': 50,
'end': 57}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
KoboldAI/fairseq-dense-125M | ee40bc27509c93cd551f2791bb9d462bbab4d450 | 2022-02-01T22:48:03.000Z | [
"pytorch",
"xglm",
"text-generation",
"transformers"
] | text-generation | false | KoboldAI | null | KoboldAI/fairseq-dense-125M | 4,511 | null | transformers | 929 | Entry not found |
ltgoslo/norbert2 | afb829e3d0b861bd5f8cda6522b32ca0b097d7eb | 2022-02-15T16:01:58.000Z | [
"pytorch",
"tf",
"bert",
"fill-mask",
"no",
"transformers",
"norwegian",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | ltgoslo | null | ltgoslo/norbert2 | 4,511 | 3 | transformers | 930 | ---
language: no
license: cc-by-4.0
pipeline_tag: fill-mask
tags:
- norwegian
- bert
thumbnail: https://raw.githubusercontent.com/ltgoslo/NorBERT/main/Norbert.png
widget:
- text: "Nå ønsker de seg en [MASK] bolig. "
---
## Quickstart
**Release 2.0** (February 7, 2022)
Trained on a very large corpus of Norwegian text (C4 + NCC, about 15 billion word tokens).
Features a 50 000-word vocabulary and was trained using Whole Word Masking.
Download the model here:
* Cased Norwegian BERT Base 2.0 (NorBERT 2): [221.zip](http://vectors.nlpl.eu/repository/20/221.zip)
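The model can also be loaded directly from the Hugging Face Hub as `ltgoslo/norbert2`; a minimal fill-mask sketch (the example sentence is the widget text above):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("ltgoslo/norbert2")
model = AutoModelForMaskedLM.from_pretrained("ltgoslo/norbert2")

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask("Nå ønsker de seg en [MASK] bolig."):
    print(prediction["token_str"], round(prediction["score"], 3))
```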
More about NorBERT training corpora, training procedure and evaluation benchmarks: http://norlm.nlpl.eu/
Associated code: https://github.com/ltgoslo/NorBERT
Check this paper for more details:
_Andrey Kutuzov, Jeremy Barnes, Erik Velldal, Lilja Øvrelid, Stephan Oepen. [Large-Scale Contextualised Language Modelling for Norwegian](https://aclanthology.org/2021.nodalida-main.4/), NoDaLiDa'21 (2021)_
NorBERT was trained as a part of NorLM, a joint initiative of the projects [EOSC-Nordic](https://www.eosc-nordic.eu/) (European Open Science Cloud),
coordinated by the [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) (LTG) at the University of Oslo.
The computations were performed on resources provided by UNINETT Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway. |
Helsinki-NLP/opus-mt-en-fi | 627fe90df5c335be61521cd89c68f62e2bdce050 | 2021-09-09T21:35:17.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-fi | 4,505 | null | transformers | 931 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-fi
* source languages: en
* target languages: fi
* OPUS readme: [en-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fi/README.md)
* dataset: opus+bt-news
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-news-2020-03-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.zip)
* test set translations: [opus+bt-news-2020-03-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.test.txt)
* test set scores: [opus+bt-news-2020-03-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.eval.txt)
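A minimal usage sketch with the MarianMT classes in `transformers` (the English input sentence is just an example):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a batch of English sentences into Finnish
batch = tokenizer(["The weather is nice today."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```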
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2019-enfi.en.fi | 25.7 | 0.578 |
|
facebook/blenderbot-1B-distill | a10c0a628734b4cd46da57520bd35c7ca8965201 | 2021-06-17T12:02:20.000Z | [
"pytorch",
"blenderbot",
"text2text-generation",
"en",
"dataset:blended_skill_talk",
"arxiv:1907.06616",
"transformers",
"convAI",
"conversational",
"facebook",
"license:apache-2.0",
"autotrain_compatible"
] | conversational | false | facebook | null | facebook/blenderbot-1B-distill | 4,497 | 9 | transformers | 932 | ---
language:
- en
thumbnail:
tags:
- convAI
- conversational
- facebook
license: apache-2.0
datasets:
- blended_skill_talk
metrics:
- perplexity
---
## Model description
+ Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616)
+ [Original PARLAI Code](https://parl.ai/projects/recipes/)
### Abstract
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
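A minimal chat sketch with the Blenderbot classes in `transformers` (a sketch only; generation arguments are left at their defaults):
```python
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

name = "facebook/blenderbot-1B-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

# Encode one user turn and let the model generate a reply
inputs = tokenizer("Hello, how are you doing today?", return_tensors="pt")
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```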
|
indigo-ai/BERTino | 4539d128176da9bbadf98ab813c9acaf68e8d8f1 | 2021-09-22T08:51:24.000Z | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"it",
"transformers",
"DISTILbert",
"Italian",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | indigo-ai | null | indigo-ai/BERTino | 4,468 | 5 | transformers | 933 | ---
language: it
tags:
- DISTILbert
- Italian
license: mit
widget:
- text: Vado al [MASK] a fare la spesa
- text: Vado al parco a guardare le [MASK]
- text: Il cielo è [MASK] di stelle.
---
# BERTino: an Italian DistilBERT model
This repository hosts BERTino, an Italian DistilBERT model pre-trained by
[indigo.ai](https://indigo.ai/en/)
on a large general-domain Italian corpus. BERTino is task-agnostic and can be
fine-tuned for every downstream task.
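A minimal fill-mask sketch (the example sentence is one of the widget texts in the metadata above; predictions are illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="indigo-ai/BERTino")

for pred in fill_mask("Vado al [MASK] a fare la spesa"):
    print(pred["token_str"], round(pred["score"], 3))
```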
### Corpus
The pre-training corpus that we used is the union of the
[Paisa](https://www.corpusitaliano.it/) and
[ItWaC](https://corpora.dipintra.it/public/run.cgi/corp_info?corpname=itwac_full)
corpora. The final corpus counts 14 million sentences, for a total of 12 GB of text.
### Downstream Results
To validate the pre-training that we conducted, we evaluated BERTino on the
[Italian ParTUT](https://universaldependencies.org/treebanks/it_partut/index.html),
[Italian ISDT](https://universaldependencies.org/treebanks/it_isdt/index.html),
[Italian WikiNER](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500)
and multi-class sentence classification tasks. We report for comparison results
obtained by the [teacher model](https://huggingface.co/dbmdz/bert-base-italian-xxl-uncased)
fine-tuned in the same tasks and for the same number of epochs.
**Italian ISDT:**
| Model | F1 score | Fine-tuning time | Evaluation time |
|--------------|----------|------------------|-----------------|
| BERTino | 0,9801 | 9m, 4s | 3s |
| Teacher | 0,983 | 16m, 28s | 5s |
**Italian ParTUT:**
| Model | F1 score | Fine-tuning time | Evaluation time |
|--------------|----------|------------------|-----------------|
| BERTino | 0,9268 | 1m, 18s | 1s |
| Teacher | 0,9688 | 2m, 18s | 1s |
**Italian WikiNER:**
| Model | F1 score | Fine-tuning time | Evaluation time |
|--------------|----------|------------------|-----------------|
| BERTino | 0,9038 | 35m, 35s | 3m, 1s |
| Teacher | 0,9178 | 67m, 8s | 5m, 16s |
**Multi-class sentence classification:**
| Model | F1 score | Fine-tuning time | Evaluation time |
|--------------|----------|------------------|-----------------|
| BERTino | 0,7788 | 4m, 40s | 6s |
| Teacher | 0,7986 | 8m, 52s | 9s |
|
microsoft/unispeech-sat-base | ca010627e5d232d9179ebdcd615eaa02cea382ed | 2021-11-05T12:41:05.000Z | [
"pytorch",
"unispeech-sat",
"pretraining",
"en",
"dataset:librispeech_asr",
"arxiv:2110.05752",
"transformers",
"speech"
] | null | false | microsoft | null | microsoft/unispeech-sat-base | 4,458 | null | transformers | 934 | ---
language:
- en
datasets:
- librispeech_asr
tags:
- speech
---
# UniSpeech-SAT-Base
[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)
The base model was pretrained on 16kHz sampled speech audio with an utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
The model was pre-trained on:
- 960 hours of [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
[Paper: UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER
AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752)
Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu
**Abstract**
*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks..*
The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT.
# Usage
This is an English pre-trained speech model that has to be fine-tuned on a downstream task like speech recognition or audio classification before it can be
used for inference. The model was pre-trained in English and should therefore perform well only in English. It has been shown to work well on tasks such as speaker verification, speaker identification, and speaker diarization.
**Note**: The model was pre-trained on phonemes rather than characters. This means that one should make sure that the input text is converted to a sequence
of phonemes before fine-tuning.
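A minimal feature-extraction sketch (a sketch only, assuming the `datasets` library is installed; the pre-trained model outputs frame-level hidden states, not transcriptions):
```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatModel

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("microsoft/unispeech-sat-base")
model = UniSpeechSatModel.from_pretrained("microsoft/unispeech-sat-base")

# Audio is decoded on the fly; input must be sampled at 16kHz
inputs = feature_extractor(
    dataset[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt"
)
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```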
## Speech Recognition
To fine-tune the model for speech recognition, see [the official speech recognition example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition).
## Speech Classification
To fine-tune the model for speech classification, see [the official audio classification example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/audio-classification).
## Speaker Verification
TODO
## Speaker Diarization
TODO
# Contribution
The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten).
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
 |
hf-internal-testing/tiny-xlm-roberta | 915669c23332981bc5936cf93abbd395f487ac38 | 2021-07-16T01:27:42.000Z | [
"pytorch",
"xlm-roberta",
"text-generation",
"transformers"
] | text-generation | false | hf-internal-testing | null | hf-internal-testing/tiny-xlm-roberta | 4,446 | null | transformers | 935 | This is a tiny random {mname_tiny} model to be used for basic testing |
HooshvareLab/bert-base-parsbert-uncased | d73a0e2c7492c33bd5819bcdb23eba207404dd19 | 2021-05-18T20:47:21.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"arxiv:2005.12515",
"transformers",
"autotrain_compatible"
] | fill-mask | false | HooshvareLab | null | HooshvareLab/bert-base-parsbert-uncased | 4,431 | 4 | transformers | 936 | ## ParsBERT: Transformer-based Model for Persian Language Understanding
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)
All the models (downstream tasks) are uncased and trained with whole word masking (coming soon, stay tuned).
---
## Introduction
This model is pre-trained on a large Persian corpus with various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 2M documents. A large subset of this corpus was crawled manually.
As a part of ParsBERT methodology, an extensive pre-processing combining POS tagging and WordPiece segmentation was carried out to bring the corpus into a proper format. This process produces more than 40M true sentences.
## Evaluation
ParsBERT is evaluated on three NLP downstream tasks: Sentiment Analysis (SA), Text Classification, and Named Entity Recognition (NER). For this matter and due to insufficient resources, two large datasets for SA and two for text classification were manually composed, which are available for public use and benchmarking. ParsBERT outperformed all other language models, including multilingual BERT and other hybrid deep learning models for all tasks, improving the state-of-the-art performance in Persian language modeling.
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
### Sentiment Analysis (SA) task
| Dataset | ParsBERT | mBERT | DeepSentiPers |
|:--------------------------:|:---------:|:-----:|:-------------:|
| Digikala User Comments | 81.74* | 80.74 | - |
| SnappFood User Comments | 88.12* | 87.87 | - |
| SentiPers (Multi Class) | 71.11* | - | 69.33 |
| SentiPers (Binary Class) | 92.13* | - | 91.98 |
### Text Classification (TC) task
| Dataset | ParsBERT | mBERT |
|:-----------------:|:--------:|:-----:|
| Digikala Magazine | 93.59* | 90.72 |
| Persian News | 97.19* | 95.79 |
### Named Entity Recognition (NER) task
| Dataset | ParsBERT | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|:-------:|:--------:|:--------:|:----------:|:--------------:|:----------:|:----------------:|:------------:|
| PEYMA | 93.10* | 86.64 | - | 90.59 | - | 84.00 | - |
| ARMAN | 98.79* | 95.89 | 89.9 | 84.03 | 86.55 | - | 77.45 |
**If you tested ParsBERT on a public dataset and you want to add your results to the table above, open a pull request or contact us. Also make sure to have your code available online so we can add it as a reference**
## How to use
### TensorFlow 2.0
```python
from transformers import AutoConfig, AutoTokenizer, TFAutoModel
config = AutoConfig.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
model = TFAutoModel.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
text = "ما در هوشواره معتقدیم با انتقال صحیح دانش و آگاهی، همه افراد میتوانند از ابزارهای هوشمند استفاده کنند. شعار ما هوش مصنوعی برای همه است."
tokenizer.tokenize(text)
>>> ['ما', 'در', 'هوش', '##واره', 'معتقدیم', 'با', 'انتقال', 'صحیح', 'دانش', 'و', 'اگاهی', '،', 'همه', 'افراد', 'میتوانند', 'از', 'ابزارهای', 'هوشمند', 'استفاده', 'کنند', '.', 'شعار', 'ما', 'هوش', 'مصنوعی', 'برای', 'همه', 'است', '.']
```
### Pytorch
```python
from transformers import AutoConfig, AutoTokenizer, AutoModel
config = AutoConfig.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
model = AutoModel.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
```
## NLP Tasks Tutorial
Coming soon stay tuned
## Cite
Please cite the following paper in your publication if you are using [ParsBERT](https://arxiv.org/abs/2005.12515) in your research:
```markdown
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
  author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Acknowledgments
We hereby, express our gratitude to the [Tensorflow Research Cloud (TFRC) program](https://tensorflow.org/tfrc) for providing us with the necessary computation resources. We also thank [Hooshvare](https://hooshvare.com) Research Group for facilitating dataset gathering and scraping online text resources.
## Contributors
- Mehrdad Farahani: [Linkedin](https://www.linkedin.com/in/m3hrdadfi/), [Twitter](https://twitter.com/m3hrdadfi), [Github](https://github.com/m3hrdadfi)
- Mohammad Gharachorloo: [Linkedin](https://www.linkedin.com/in/mohammad-gharachorloo/), [Twitter](https://twitter.com/MGharachorloo), [Github](https://github.com/baarsaam)
- Marzieh Farahani: [Linkedin](https://www.linkedin.com/in/marziehphi/), [Twitter](https://twitter.com/marziehphi), [Github](https://github.com/marziehphi)
- Mohammad Manthouri: [Linkedin](https://www.linkedin.com/in/mohammad-manthouri-aka-mansouri-07030766/), [Twitter](https://twitter.com/mmanthouri), [Github](https://github.com/mmanthouri)
- Hooshvare Team: [Official Website](https://hooshvare.com/), [Linkedin](https://www.linkedin.com/company/hooshvare), [Twitter](https://twitter.com/hooshvare), [Github](https://github.com/hooshvare), [Instagram](https://www.instagram.com/hooshvare/)
## Releases
### Release v0.1 (May 27, 2019)
This is the first version of our ParsBERT based on BERT<sub>BASE</sub>
|
arampacha/roberta-tiny | f55131409ab048f087bf793ad4009b8a463e8a7b | 2022-05-20T22:07:50.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | arampacha | null | arampacha/roberta-tiny | 4,425 | null | transformers | 937 | # roberta-tiny
Tiny untrained model for testing purposes |
dumitrescustefan/bert-base-romanian-uncased-v1 | 9eb44d518953103bdc9f088217ec978c6ec9a9e4 | 2021-11-02T15:26:10.000Z | [
"pytorch",
"jax",
"bert",
"ro",
"transformers"
] | null | false | dumitrescustefan | null | dumitrescustefan/bert-base-romanian-uncased-v1 | 4,422 | 3 | transformers | 938 | ---
language: ro
---
# bert-base-romanian-uncased-v1
The BERT **base**, **uncased** model for Romanian, trained on a 15GB corpus, version 
### How to use
```python
from transformers import AutoTokenizer, AutoModel
import torch
# load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("dumitrescustefan/bert-base-romanian-uncased-v1", do_lower_case=True)
model = AutoModel.from_pretrained("dumitrescustefan/bert-base-romanian-uncased-v1")
# tokenize a sentence and run through the model
input_ids = torch.tensor(tokenizer.encode("Acesta este un test.", add_special_tokens=True)).unsqueeze(0) # Batch size 1
outputs = model(input_ids)
# get encoding
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
```
Remember to always sanitize your text! Replace the ``s`` and ``t`` cedilla letters with their comma-below counterparts:
```
text = text.replace("ţ", "ț").replace("ş", "ș").replace("Ţ", "Ț").replace("Ş", "Ș")
```
because the model was **NOT** trained on cedilla ``s`` and ``t``s. If you don't, you will see decreased performance due to <UNK>s and an increased number of tokens per word.
### Evaluation
Evaluation is performed on Universal Dependencies [Romanian RRT](https://universaldependencies.org/treebanks/ro_rrt/index.html) UPOS, XPOS and LAS, and on a NER task based on [RONEC](https://github.com/dumitrescustefan/ronec). Details, as well as more in-depth tests not shown here, are given in the dedicated [evaluation page](https://github.com/dumitrescustefan/Romanian-Transformers/tree/master/evaluation/README.md).
The baseline is the [Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) model ``bert-base-multilingual-(un)cased``, as at the time of writing it was the only available BERT model that works on Romanian.
| Model | UPOS | XPOS | NER | LAS |
|--------------------------------|:-----:|:------:|:-----:|:-----:|
| bert-base-multilingual-uncased | 97.65 | 95.72 | 83.91 | 87.65 |
| bert-base-romanian-uncased-v1 | **98.18** | **96.84** | **85.26** | **89.61** |
### Corpus
The model is trained on the following corpora (stats in the table below are after cleaning):
| Corpus | Lines(M) | Words(M) | Chars(B) | Size(GB) |
|----------- |:--------: |:--------: |:--------: |:--------: |
| OPUS | 55.05 | 635.04 | 4.045 | 3.8 |
| OSCAR | 33.56 | 1725.82 | 11.411 | 11 |
| Wikipedia | 1.54 | 60.47 | 0.411 | 0.4 |
| **Total** | **90.15** | **2421.33** | **15.867** | **15.2** |
#### Acknowledgements
- We'd like to thank [Sampo Pyysalo](https://github.com/spyysalo) from TurkuNLP for helping us out with the compute needed to pretrain the v1.0 BERT models. He's awesome!
|
cardiffnlp/twitter-roberta-base-hate | 82cfd645130b125b8e8de63d90b0376620a0bfbd | 2021-05-20T15:02:45.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"arxiv:2010.12421",
"transformers"
] | text-classification | false | cardiffnlp | null | cardiffnlp/twitter-roberta-base-hate | 4,413 | 3 | transformers | 939 | # Twitter-roBERTa-base for Hate Speech Detection
This is a roBERTa-base model trained on ~58M tweets and finetuned for hate speech detection with the TweetEval benchmark.
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='hate'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) not-hate 0.9168
2) hate 0.0832
```
|
gogamza/kobart-base-v1 | d7e64abd841bc1fa5d2939d14161124c51f29e8b | 2021-11-11T07:43:48.000Z | [
"pytorch",
"bart",
"feature-extraction",
"ko",
"transformers",
"license:mit"
] | feature-extraction | false | gogamza | null | gogamza/kobart-base-v1 | 4,384 | null | transformers | 940 | ---
language: ko
tags:
- bart
license: mit
---
## KoBART-base-v1
```python
from transformers import PreTrainedTokenizerFast, BartModel
tokenizer = PreTrainedTokenizerFast.from_pretrained('gogamza/kobart-base-v1')
model = BartModel.from_pretrained('gogamza/kobart-base-v1')
```
|
Helsinki-NLP/opus-mt-hu-en | 62599d9d6eca96022d26974486779623f09ebc9c | 2021-09-09T22:10:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"hu",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-hu-en | 4,372 | null | transformers | 941 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-hu-en
* source languages: hu
* target languages: en
* OPUS readme: [hu-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hu-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/hu-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-en/opus-2019-12-18.eval.txt)
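A minimal usage sketch with the `transformers` translation pipeline (the Hungarian example sentence is our own):
```python
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-hu-en")
print(translate("Ma szép idő van.")[0]["translation_text"])
```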
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.hu.en | 52.9 | 0.683 |
|
monologg/biobert_v1.1_pubmed | 3abac4e8d78fbeb1ccdfa0079dcfe678f3337e0a | 2021-05-19T23:50:54.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | monologg | null | monologg/biobert_v1.1_pubmed | 4,366 | 1 | transformers | 942 | Entry not found |
Salesforce/grappa_large_jnt | 0d2500a00f3a4b71addaf09a0c1abe30788362dd | 2021-05-20T12:23:41.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Salesforce | null | Salesforce/grappa_large_jnt | 4,361 | 1 | transformers | 943 | Entry not found |
nreimers/albert-small-v2 | 18045fa83de53fd7d4548fdc2473862914cbc7d5 | 2021-05-31T12:26:52.000Z | [
"pytorch",
"albert",
"feature-extraction",
"transformers"
] | feature-extraction | false | nreimers | null | nreimers/albert-small-v2 | 4,351 | null | transformers | 944 | # albert-small-v2
This is a 6-layer version of [albert-base-v2](https://huggingface.co/albert-base-v2). |
EMBEDDIA/crosloengual-bert | 750255b6915cf42623143690d8ea79ceab8ee2e8 | 2021-05-18T18:21:38.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"hr",
"sl",
"en",
"multilingual",
"arxiv:2006.07890",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | EMBEDDIA | null | EMBEDDIA/crosloengual-bert | 4,336 | 1 | transformers | 945 | ---
language:
- hr
- sl
- en
- multilingual
license: cc-by-4.0
---
# CroSloEngual BERT
CroSloEngual BERT is a trilingual model, using the bert-base architecture, trained on Croatian, Slovenian, and English corpora. By focusing on three languages, the model performs better than [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased), while still offering an option for cross-lingual knowledge transfer that a monolingual model would not.
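The model can be loaded with the standard `transformers` API; a minimal sketch:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/crosloengual-bert")
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/crosloengual-bert")
```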
Evaluation is presented in our article:
```
@Inproceedings{ulcar-robnik2020finest,
author = "Ulčar, M. and Robnik-Šikonja, M.",
year = 2020,
title = "{FinEst BERT} and {CroSloEngual BERT}: less is more in multilingual models",
editor = "Sojka, P and Kopeček, I and Pala, K and Horák, A",
booktitle = "Text, Speech, and Dialogue {TSD 2020}",
series = "Lecture Notes in Computer Science",
volume = 12284,
publisher = "Springer",
url = "https://doi.org/10.1007/978-3-030-58323-1_11",
}
```
The preprint is available at [arxiv.org/abs/2006.07890](https://arxiv.org/abs/2006.07890). |
microsoft/wavlm-base-plus-sv | feb593a6c23c1cc3d9510425c29b0a14d2b07b1e | 2022-03-25T10:39:41.000Z | [
"pytorch",
"wavlm",
"audio-xvector",
"en",
"arxiv:1912.07875",
"arxiv:2106.06909",
"arxiv:2101.00390",
"arxiv:2110.13900",
"transformers",
"speech"
] | null | false | microsoft | null | microsoft/wavlm-base-plus-sv | 4,325 | 4 | transformers | 946 | ---
language:
- en
tags:
- speech
---
# WavLM-Base-Plus for Speaker Verification
[Microsoft's WavLM](https://github.com/microsoft/unilm/tree/master/wavlm)
The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
The model was pre-trained on:
- 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875)
- 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909)
- 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390)
[Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)
Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei
**Abstract**
*Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.*
The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm.
# Fine-tuning details
The model is fine-tuned on the [VoxCeleb1 dataset](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) using an X-Vector head with an Additive Margin Softmax loss
[X-Vectors: Robust DNN Embeddings for Speaker Recognition](https://www.danielpovey.com/files/2018_icassp_xvectors.pdf)
# Usage
## Speaker Verification
```python
from transformers import Wav2Vec2FeatureExtractor, WavLMForXVector
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/wavlm-base-plus-sv')
model = WavLMForXVector.from_pretrained('microsoft/wavlm-base-plus-sv')
# audio files are decoded on the fly
audio = [x["array"] for x in dataset[:2]["audio"]]
inputs = feature_extractor(audio, padding=True, return_tensors="pt")
embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()
# the resulting embeddings can be used for cosine similarity-based retrieval
cosine_sim = torch.nn.CosineSimilarity(dim=-1)
similarity = cosine_sim(embeddings[0], embeddings[1])
threshold = 0.86 # the optimal threshold is dataset-dependent
if similarity < threshold:
print("Speakers are not the same!")
```
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
 |
sampathkethineedi/industry-classification | 9914232048445adee24e4d4683c4d1873a9bafb4 | 2020-07-16T15:27:38.000Z | [
"pytorch",
"tf",
"distilbert",
"text-classification",
"en",
"transformers",
"tensorflow",
"industry",
"buisiness",
"description",
"multi-class",
"classification"
] | text-classification | false | sampathkethineedi | null | sampathkethineedi/industry-classification | 4,322 | 3 | transformers | 947 | ---
language: "en"
thumbnail: "https://huggingface.co/sampathkethineedi"
tags:
- distilbert
- pytorch
- tensorflow
- text-classification
- industry
- buisiness
- description
- multi-class
- classification
license: "mit"
inference: false
---
# industry-classification
## Model description
A DistilBERT model that classifies a business description into one of **62 industry tags**.
Trained on 7,000 samples of business descriptions and associated labels of companies in India.
## How to use
PyTorch and TF models available
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sampathkethineedi/industry-classification")
model = AutoModelForSequenceClassification.from_pretrained("sampathkethineedi/industry-classification")
industry_tags = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
industry_tags("Stellar Capital Services Limited is an India-based non-banking financial company ... loan against property, management consultancy, personal loans and unsecured loans.")
'''Output'''
[{'label': 'Consumer Finance', 'score': 0.9841355681419373}]
```
## Limitations and bias
The training data covers Indian companies only, so predictions for companies in other markets may be less reliable.
|
monologg/kobigbird-bert-base | ceacda477e20abef2c929adfa4a07c6f811323be | 2021-11-05T01:27:29.000Z | [
"pytorch",
"big_bird",
"fill-mask",
"ko",
"transformers",
"korean",
"autotrain_compatible"
] | fill-mask | false | monologg | null | monologg/kobigbird-bert-base | 4,319 | 11 | transformers | 948 | ---
language: ko
tags:
- korean
mask_token: "[MASK]"
widget:
- text: 대한민국의 수도는 [MASK] 입니다.
---
# KoBigBird
<img src="https://user-images.githubusercontent.com/28896432/140442206-e34b02d5-e279-47e5-9c2a-db1278b1c14d.png" width="200"/>
Pretrained BigBird Model for Korean (**kobigbird-bert-base**)
## About
BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences.
BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT.
The model is warm-started from the Korean BERT checkpoint.
## How to use
*NOTE:* Use `BertTokenizer` instead of BigBirdTokenizer. (`AutoTokenizer` will load `BertTokenizer`)
```python
from transformers import AutoModel, AutoTokenizer
# by default its in `block_sparse` mode with num_random_blocks=3, block_size=64
model = AutoModel.from_pretrained("monologg/kobigbird-bert-base")
# you can change `attention_type` to full attention like this:
model = AutoModel.from_pretrained("monologg/kobigbird-bert-base", attention_type="original_full")
# you can change `block_size` & `num_random_blocks` like this:
model = AutoModel.from_pretrained("monologg/kobigbird-bert-base", block_size=16, num_random_blocks=2)
tokenizer = AutoTokenizer.from_pretrained("monologg/kobigbird-bert-base")
text = "한국어 BigBird 모델을 공개합니다!"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
|
Hate-speech-CNERG/dehatebert-mono-spanish | 2b9664ac59ee7f0b054fc0b1433cbedff3c2bdba | 2021-09-25T14:00:12.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"es",
"arxiv:2004.06465",
"transformers",
"license:apache-2.0"
] | text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/dehatebert-mono-spanish | 4,307 | 2 | transformers | 949 | ---
language: es
license: apache-2.0
---
This model is used for detecting **hate speech** in the **Spanish language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Spanish-language data. It is finetuned on the multilingual BERT model.
The model is trained with different learning rates and the best validation score achieved is 0.740287 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
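A minimal usage sketch with the text-classification pipeline (the label names returned depend on the label mapping stored in the checkpoint config, so treat them as illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification", model="Hate-speech-CNERG/dehatebert-mono-spanish"
)
print(classifier("Qué día tan bonito hace hoy."))
```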
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
nferruz/ProtGPT2 | 959b229096b77edeb7656373bf9e379a3b6458ea | 2022-05-25T09:31:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | nferruz | null | nferruz/ProtGPT2 | 4,302 | 11 | transformers | 950 | # **ProtGPT2**
ProtGPT2 ([preprint](https://www.biorxiv.org/content/10.1101/2022.03.09.483666v1)) is a language model that speaks the protein language and can be used for de novo protein design and engineering. ProtGPT2-generated sequences conserve natural proteins' critical features (amino acid propensities, secondary structural content, and globularity) while exploring unseen regions of the protein space.
## **Model description**
ProtGPT2 is based on the GPT2 Transformer architecture and contains 36 layers with a model dimensionality of 1280, totalling 738 million parameters.
ProtGPT2 is a decoder-only transformer model pre-trained on the protein space database UniRef50 (version 2021_04). The pre-training was done on the raw sequences without FASTA headers. Details of training and datasets can be found here: https://huggingface.co/datasets/nferruz/UR50_2021_04
ProtGPT2 was trained in a self-supervised fashion, i.e., the raw sequence data was used during training without including the annotation of sequences. In particular, ProtGPT2 was trained using a causal modelling objective, in which the model is trained to predict the next token (or, in this case, oligomer) in the sequence.
By doing so, the model learns an internal representation of proteins and is able to <em>speak</em> the protein language.
### **How to use ProtGPT2**
ProtGPT2 can be used with the HuggingFace transformer python package. Detailed installation instructions can be found here: https://huggingface.co/docs/transformers/installation
Since ProtGPT2 has been trained on the classical language model objective, it excels at generating protein sequences. It can be used to generate sequences in a zero-shot fashion or to generate sequences of a particular type after finetuning on a user-defined dataset.
**Example 1: Generating _de novo_ proteins in a zero-shot fashion**
In the example below, ProtGPT2 generates sequences that follow the amino acid 'M'. Any other amino acid, oligomer, fragment, or protein of choice can be selected instead. The model will generate the most probable sequences that follow the input. Alternatively, the input field can also be left empty and it will choose the starting tokens.
```
>>> from transformers import pipeline
>>> protgpt2 = pipeline('text-generation', model="nferruz/ProtGPT2")
>>> sequences = protgpt2("M", max_length=100, do_sample=True, top_k=950, repetition_penalty=1.2, num_return_sequences=10, eos_token_id=0)
>>> for seq in sequences:
...     print(seq)
{'generated_text': 'MINDLLDISRIISGKMTLDRAEVNLTAIARQVVEEQRQAAEAKSIQLLCSTPDTNHYVFG\nDFDRLKQTLWNLLSNAVKFTPSGGTVELELGYNAEGMEVYVKDSGIGIDPAFLPYVFDRF\nRQSDAADSRNYGGLGLGLAIVKHLLDLHEGNVSAQSEGFGKGATFTVLLPLKPLKRELAA\nVNRHTAVQQSAPLNDNLAGMKILIVEDRPDTNEMVSYILEEAGAIVETAESGAAALTSLK\nSYSPDLVLSDIGMPMMDGYEMIEYIREWKTTKGG'}
{'generated_text': 'MQGDSSISSSNRMFT\nLCKPLTVANETSTLSTTRNSKSNKRVSKQRVNLAESPERNAPSPASIKTNETEEFSTIKT\nTNNEVLGYEPNYVSYDFVPMEKCNLCNENCSIELASLNEETFVKKTICCHECRKKAIENA\nENNNTKGSAVSNNSVTSSSGRKKIIVSGSQILRNLDSLTSSKSNISTLLNPNHLAKLAKN\nGNLSSLSSLQSSASSISKSSSTSSTPTTSPKVSSPTNSPSSSPINSPTP'}
{'generated_text': 'M\nSTHVSLENTLASLQATFFSLEARHTALETQLLSTRTELAATKQELVRVQAEISRADAQAQ\nDLKAQILTLKEKADQAEVEAAAATQRAEESQAALEAQTAELAQLRLEKQAPQHVAEEGDP\nQPAAPTTQAQSPVTSAAAAASSAASAEPSKPELTFPAYTKRKPPTITHAPKAPTKVALNP\nSTLSTSGSGGGAKADPTPTTPVPSSSAGLIPKALRLPPPVTPAASGAKPAPSARSKLRGP\nDAPLSPSTQS'}
{'generated_text': 'MVLLSTGPLPILFLGPSLAELNQKYQVVSDTLLRFTNTV\nTFNTLKFLGSDS\n'}
{'generated_text': 'M\nNNDEQPFIMSTSGYAGNTTSSMNSTSDFNTNNKSNTWSNRFSNFIAYFSGVGWFIGAISV\nIFFIIYVIVFLSRKTKPSGQKQYSRTERNNRDVDSIKRANYYG\n'}
{'generated_text': 'M\nEAVYSFTITETGTGTVEVTPLDRTISGADIVYPPDTACVPLTVQPVINANGTWTLGSGCT\nGHFSVDTTGHVNCLTGGFGAAGVHTVIYTVETPYSGNSFAVIDVNVTEPSGPGDGGNGNG\nDRGDGPDNGGGNNPGPDPDPSTPPPPGDCSSPLPVVCSDRDCADFDTQAQVQIYLDRYGG\nTCDLDGNHDGTPCENLPNNSGGQSSDSGNGGGNPGTGSTHQVVTGDCLWNIASRNNGQGG\nQAWPALLAANNESITNP'}
{'generated_text': 'M\nGLTTSGGARGFCSLAVLQELVPRPELLFVIDRAFHSGKHAVDMQVVDQEGLGDGVATLLY\nAHQGLYTCLLQAEARLLGREWAAVPALEPNFMESPLIALPRQLLEGLEQNILSAYGSEWS\nQDVAEPQGDTPAALLATALGLHEPQQVAQRRRQLFEAAEAALQAIRASA\n'}
{'generated_text': 'M\nGAAGYTGSLILAALKQNPDIAVYALNRNDEKLKDVCGQYSNLKGQVCDLSNESQVEALLS\nGPRKTVVNLVGPYSFYGSRVLNACIEANCHYIDLTGEVYWIPQMIKQYHHKAVQSGARIV\nPAVGFDSTPAELGSFFAYQQCREKLKKAHLKIKAYTGQSGGASGGTILTMIQHGIENGKI\nLREIRSMANPREPQSDFKHYKEKTFQDGSASFWGVPFVMKGINTPVVQRSASLLKKLYQP\nFDYKQCFSFSTLLNSLFSYIFNAI'}
{'generated_text': 'M\nKFPSLLLDSYLLVFFIFCSLGLYFSPKEFLSKSYTLLTFFGSLLFIVLVAFPYQSAISAS\nKYYYFPFPIQFFDIGLAENKSNFVTSTTILIFCFILFKRQKYISLLLLTVVLIPIISKGN\nYLFIILILNLAVYFFLFKKLYKKGFCISLFLVFSCIFIFIVSKIMYSSGIEGIYKELIFT\nGDNDGRFLIIKSFLEYWKDNLFFGLGPSSVNLFSGAVSGSFHNTYFFIFFQSGILGAFIF\nLLPFVYFFISFFKDNSSFMKLF'}
{'generated_text': 'M\nRRAVGNADLGMEAARYEPSGAYQASEGDGAHGKPHSLPFVALERWQQLGPEERTLAEAVR\nAVLASGQYLLGEAVRRFETAVAAWLGVPFALGVASGTAALTLALRAYGVGPGDEVIVPAI\nTFIATSNAITAAGARPVLVDIDPSTWNMSVASLAARLTPKTKAILAVHLWGQPVDMHPLL\nDIAAQANLAVIEDCAQALGASIAGTKVGTFGDAAAFSFYPTKNMTTGEGGMLVTNARDLA\nQAARMLRSHGQDPPTAYMHSQVGFN'}
```
**Example 2: Finetuning on a set of user-defined sequences**
This alternative option to the zero-shot generation permits introducing direction in the generation process. User-defined training and validation files containing the sequences of interest are provided to the model. After a short update of the model's weights, ProtGPT2 will generate sequences that follow the input properties.
To create the validation and training file, it is necessary to (1) substitute the FASTA headers for each sequence with the expression "<|endoftext|>" and (2) split the originating dataset into training and validation files (this is often done with the ratio 90/10, 80/20 or 95/5). Then, to finetune the model to the input sequences, we can use the example below. Here we show a learning rate of 1e-06, but ideally, the learning rate should be optimised in separate runs. After training, the finetuned model will be stored in the ./output folder. Lastly, ProtGPT2 can generate the tailored sequences as shown in Example 1:
```
python run_clm.py --model_name_or_path nferruz/ProtGPT2 --train_file training.txt --validation_file validation.txt --tokenizer_name nferruz/ProtGPT2
--do_train --do_eval --output_dir output --learning_rate 1e-06
```
The HuggingFace script run_clm.py can be found here: https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py
### **Training specs**
The model was trained on 128 NVIDIA A100 GPUs for 50 epochs, using a block size of 512, and a total batch size of 1024 (65,536 tokens per batch). The optimizer used was Adam (beta1 = 0.9, beta2 = 0.999) with a learning rate of 1e-3. |
microsoft/layoutlm-base-cased | 91acf0f93186fbcebb9f80c2e2259754f0ef922b | 2021-09-27T05:55:31.000Z | [
"pytorch",
"layoutlm",
"arxiv:1912.13318",
"transformers"
] | null | false | microsoft | null | microsoft/layoutlm-base-cased | 4,301 | 5 | transformers | 951 | # LayoutLM
**Multimodal (text + layout/format + image) pre-training for document AI**
[Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://aka.ms/layoutlm)
## Model description
LayoutLM is a simple but effective pre-training method of text and layout for document image understanding and information extraction tasks, such as form understanding and receipt understanding. LayoutLM achieves SOTA results on multiple datasets. For more details, please refer to our paper:
[LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318)
Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, [KDD 2020](https://www.kdd.org/kdd2020/accepted-papers)
## Different Tokenizer
Note that LayoutLM-Cased requires a different tokenizer, based on RobertaTokenizer. You can
initialize it as follows:
~~~
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('microsoft/layoutlm-base-cased')
~~~
## Citation
If you find LayoutLM useful in your research, please cite the following paper:
``` latex
@misc{xu2019layoutlm,
title={LayoutLM: Pre-training of Text and Layout for Document Image Understanding},
author={Yiheng Xu and Minghao Li and Lei Cui and Shaohan Huang and Furu Wei and Ming Zhou},
year={2019},
eprint={1912.13318},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
nateraw/vit-base-beans | cecf50fd479911ff2b12b74d85347bbaf91a3e1c | 2022-07-08T07:04:19.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"en",
"dataset:beans",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | nateraw | null | nateraw/vit-base-beans | 4,300 | 3 | transformers | 952 | ---
language: en
license: apache-2.0
tags:
- generated_from_trainer
- image-classification
datasets:
- beans
metrics:
- accuracy
widget:
- src: https://huggingface.co/nateraw/vit-base-beans/resolve/main/healthy.jpeg
example_title: Healthy
- src: https://huggingface.co/nateraw/vit-base-beans/resolve/main/angular_leaf_spot.jpeg
example_title: Angular Leaf Spot
- src: https://huggingface.co/nateraw/vit-base-beans/resolve/main/bean_rust.jpeg
example_title: Bean Rust
model-index:
- name: vit-base-beans
results:
- task:
type: image-classification
name: Image Classification
dataset:
type: beans
name: beans
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9774436090225563
- task:
type: image-classification
name: Image Classification
dataset:
name: beans
type: beans
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9453125
verified: true
- name: Precision Macro
type: precision
value: 0.9453325082933705
verified: true
- name: Precision Micro
type: precision
value: 0.9453125
verified: true
- name: Precision Weighted
type: precision
value: 0.9452605321507761
verified: true
- name: Recall Macro
type: recall
value: 0.945736434108527
verified: true
- name: Recall Micro
type: recall
value: 0.9453125
verified: true
- name: Recall Weighted
type: recall
value: 0.9453125
verified: true
- name: F1 Macro
type: f1
value: 0.9451827242524917
verified: true
- name: F1 Micro
type: f1
value: 0.9453125
verified: true
- name: F1 Weighted
type: f1
value: 0.944936150332226
verified: true
- name: loss
type: loss
value: 0.26030588150024414
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0942
- Accuracy: 0.9774
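A minimal inference sketch with the image-classification pipeline (the image URL is one of the widget examples above; `Pillow` must be installed):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="nateraw/vit-base-beans")
preds = classifier(
    "https://huggingface.co/nateraw/vit-base-beans/resolve/main/angular_leaf_spot.jpeg"
)
print(preds)  # list of {"label": ..., "score": ...} dicts
```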
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2809 | 1.0 | 130 | 0.2287 | 0.9699 |
| 0.1097 | 2.0 | 260 | 0.1676 | 0.9624 |
| 0.1027 | 3.0 | 390 | 0.0942 | 0.9774 |
| 0.0923 | 4.0 | 520 | 0.1104 | 0.9699 |
| 0.1726 | 5.0 | 650 | 0.1030 | 0.9699 |
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.9.0+cu102
- Datasets 1.11.1.dev0
- Tokenizers 0.10.3
|
cross-encoder/ms-marco-MiniLM-L-4-v2 | 1f1ab0943a42a52afd702e7e8337bec985c189ea | 2021-08-05T08:39:32.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | cross-encoder | null | cross-encoder/ms-marco-MiniLM-L-4-v2 | 4,278 | null | transformers | 953 | ---
license: apache-2.0
---
# Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Usage with SentenceTransformers
The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')])
```
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
junnyu/roformer_chinese_base | f780ca57e5eab4627f334bed71a8e5a353b33d69 | 2022-01-04T11:46:28.000Z | [
"pytorch",
"tf",
"jax",
"roformer",
"fill-mask",
"zh",
"arxiv:2104.09864",
"transformers",
"tf2.0",
"autotrain_compatible"
] | fill-mask | false | junnyu | null | junnyu/roformer_chinese_base | 4,264 | 8 | transformers | 954 | ---
language: zh
tags:
- roformer
- pytorch
- tf2.0
widget:
- text: "今天[MASK]很好,我想去公园玩!"
---
## Introduction
### TensorFlow version
https://github.com/ZhuiyiTechnology/roformer
### PyTorch + TensorFlow 2.0 version
https://github.com/JunnYu/RoFormer_pytorch
## PyTorch usage
```python
import torch
from transformers import RoFormerForMaskedLM, RoFormerTokenizer
text = "今天[MASK]很好,我想去公园玩!"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_base")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
pt_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 今天[天气||天||阳光||太阳||空气]很好,我想去公园玩!
```
## TensorFlow 2.0 usage
```python
import tensorflow as tf
from transformers import RoFormerTokenizer, TFRoFormerForMaskedLM
text = "今天[MASK]很好,我想去公园玩!"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
tf_model = TFRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_base")
tf_inputs = tokenizer(text, return_tensors="tf")
tf_outputs = tf_model(**tf_inputs, training=False).logits[0]
tf_outputs_sentence = "tf2.0: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(
tf.math.top_k(tf_outputs[i], k=5)[1])
tf_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
tf_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(tf_outputs_sentence)
# tf2.0: 今天[天气||天||阳光||太阳||空气]很好,我想去公园玩!
```
## Citation
Bibtex:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
nlpaueb/bert-base-uncased-eurlex | 77adac5a840314c4d688f565a925db1bdd2e87e7 | 2022-04-28T14:44:15.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"en",
"transformers",
"legal",
"license:cc-by-sa-4.0",
"fill-mask"
] | fill-mask | false | nlpaueb | null | nlpaueb/bert-base-uncased-eurlex | 4,251 | 4 | transformers | 955 | ---
language: en
pipeline_tag: fill-mask
license: cc-by-sa-4.0
thumbnail: https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png
tags:
- legal
widget:
- text: "Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products."
---
# LEGAL-BERT: The Muppets straight out of Law School
<img align="left" src="https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png" width="100"/>
LEGAL-BERT is a family of BERT models for the legal domain, intended to assist legal NLP research, computational law, and legal technology applications. To pre-train the different variations of LEGAL-BERT, we collected 12 GB of diverse English legal text from several fields (e.g., legislation, court cases, contracts) scraped from publicly available resources. Sub-domain variants (CONTRACTS-, EURLEX-, ECHR-) and/or general LEGAL-BERT perform better than using BERT out of the box for domain-specific tasks.<br>
This is the sub-domain variant pre-trained on EU legislation.
<br/><br/>
---
I. Chalkidis, M. Fergadiotis, P. Malakasiotis, N. Aletras and I. Androutsopoulos. "LEGAL-BERT: The Muppets straight out of Law School". In Findings of Empirical Methods in Natural Language Processing (EMNLP 2020) (Short Papers), to be held online, 2020. (https://aclanthology.org/2020.findings-emnlp.261)
---
## Pre-training corpora
The pre-training corpora of LEGAL-BERT include:
* 116,062 documents of EU legislation, publicly available from EURLEX (http://eur-lex.europa.eu), the repository of EU Law running under the EU Publication Office.
* 61,826 documents of UK legislation, publicly available from the UK legislation portal (http://www.legislation.gov.uk).
* 19,867 cases from the European Court of Justice (ECJ), also available from EURLEX.
* 12,554 cases from HUDOC, the repository of the European Court of Human Rights (ECHR) (http://hudoc.echr.coe.int/eng).
* 164,141 cases from various courts across the USA, hosted in the Case Law Access Project portal (https://case.law).
* 76,366 US contracts from EDGAR, the database of US Securities and Exchange Commission (SECOM) (https://www.sec.gov/edgar.shtml).
## Pre-training details
* We trained BERT using the official code provided in Google BERT's GitHub repository (https://github.com/google-research/bert).
* We released a model similar to the English BERT-BASE model (12-layer, 768-hidden, 12-heads, 110M parameters).
* We chose to follow the same training set-up: 1 million training steps with batches of 256 sequences of length 512 with an initial learning rate 1e-4.
* We were able to use a single Google Cloud TPU v3-8 provided for free from [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc), while also utilizing [GCP research credits](https://edu.google.com/programs/credits/research). Huge thanks to both Google programs for supporting us!
## Models list
| Model name | Model Path | Training corpora |
| ------------------- | ------------------------------------ | ------------------- |
| CONTRACTS-BERT-BASE | `nlpaueb/bert-base-uncased-contracts` | US contracts |
| EURLEX-BERT-BASE | `nlpaueb/bert-base-uncased-eurlex` | EU legislation |
| ECHR-BERT-BASE | `nlpaueb/bert-base-uncased-echr` | ECHR cases |
| LEGAL-BERT-BASE * | `nlpaueb/legal-bert-base-uncased` | All |
| LEGAL-BERT-SMALL | `nlpaueb/legal-bert-small-uncased` | All |
\* LEGAL-BERT-BASE is the model referred to as LEGAL-BERT-SC in Chalkidis et al. (2020); a model trained from scratch on the legal corpora listed above, using a vocabulary newly created by a sentence-piece tokenizer trained on those same corpora.
\*\* As many of you expressed interest in the LEGAL-BERT-FP models (those relying on the original BERT-BASE checkpoint), they have been released in Archive.org (https://archive.org/details/legal_bert_fp), as these models are secondary and possibly only interesting for those who aim to dig deeper in the open questions of Chalkidis et al. (2020).
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/bert-base-uncased-eurlex")
model = AutoModel.from_pretrained("nlpaueb/bert-base-uncased-eurlex")
```
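The same checkpoint can also be queried through the fill-mask pipeline; a minimal sketch (the example sentence is the EURLEX one from the table in the next section, and the exact scores may vary with the library version):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="nlpaueb/bert-base-uncased-eurlex")
predictions = unmasker(
    "Establishing a system for the identification and registration of "
    "[MASK] animals and regarding the labelling of beef and beef products."
)
for p in predictions:
    # each prediction carries the filled token and its probability
    print(p["token_str"], round(p["score"], 2))
```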
## Use LEGAL-BERT variants as Language Models
| Corpus | Model | Masked token | Predictions |
| --------------------------------- | ---------------------------------- | ------------ | ------------ |
| | **BERT-BASE-UNCASED** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('new', '0.09'), ('current', '0.04'), ('proposed', '0.03'), ('marketing', '0.03'), ('joint', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.32'), ('rape', '0.22'), ('abuse', '0.14'), ('death', '0.04'), ('violence', '0.03')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('farm', '0.25'), ('livestock', '0.08'), ('draft', '0.06'), ('domestic', '0.05'), ('wild', '0.05')
| | **CONTRACTS-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('letter', '0.38'), ('dealer', '0.04'), ('employment', '0.03'), ('award', '0.03'), ('contribution', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('death', '0.39'), ('imprisonment', '0.07'), ('contempt', '0.05'), ('being', '0.03'), ('crime', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('domestic', '0.18'), ('laboratory', '0.07'), ('household', '0.06'), ('personal', '0.06'), ('the', '0.04')
| | **EURLEX-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('supply', '0.11'), ('cooperation', '0.08'), ('service', '0.07'), ('licence', '0.07'), ('distribution', '0.05')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.66'), ('death', '0.07'), ('imprisonment', '0.07'), ('murder', '0.04'), ('rape', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.43'), ('pet', '0.28'), ('certain', '0.05'), ('fur', '0.03'), ('the', '0.02')
| | **ECHR-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('second', '0.24'), ('latter', '0.10'), ('draft', '0.05'), ('bilateral', '0.05'), ('arbitration', '0.04')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.99'), ('death', '0.01'), ('inhuman', '0.00'), ('beating', '0.00'), ('rape', '0.00')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('pet', '0.17'), ('all', '0.12'), ('slaughtered', '0.10'), ('domestic', '0.07'), ('individual', '0.05')
| | **LEGAL-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('settlement', '0.26'), ('letter', '0.23'), ('dealer', '0.04'), ('master', '0.02'), ('supplemental', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '1.00'), ('detention', '0.00'), ('arrest', '0.00'), ('rape', '0.00'), ('death', '0.00')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.67'), ('beef', '0.17'), ('farm', '0.03'), ('pet', '0.02'), ('dairy', '0.01')
| | **LEGAL-BERT-SMALL** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('license', '0.09'), ('transition', '0.08'), ('settlement', '0.04'), ('consent', '0.03'), ('letter', '0.03')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.59'), ('pain', '0.05'), ('ptsd', '0.05'), ('death', '0.02'), ('tuberculosis', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('all', '0.08'), ('live', '0.07'), ('certain', '0.07'), ('the', '0.07'), ('farm', '0.05')
## Evaluation on downstream tasks
For evaluation results on downstream tasks, see the experiments in the article "LEGAL-BERT: The Muppets straight out of Law School" (Chalkidis et al., 2020, https://aclanthology.org/2020.findings-emnlp.261).
## Author - Publication
```
@inproceedings{chalkidis-etal-2020-legal,
title = "{LEGAL}-{BERT}: The Muppets straight out of Law School",
author = "Chalkidis, Ilias and
Fergadiotis, Manos and
Malakasiotis, Prodromos and
Aletras, Nikolaos and
Androutsopoulos, Ion",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
doi = "10.18653/v1/2020.findings-emnlp.261",
pages = "2898--2904"
}
```
## About Us
[AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts.
The group's current research interests include:
* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,
* natural language generation from databases and ontologies, especially Semantic Web ontologies,
* text classification, including filtering spam and abusive content,
* information extraction and opinion mining, including legal text analytics and sentiment analysis,
* natural language processing tools for Greek, for example parsers and named-entity recognizers,
* machine learning in natural language processing, especially deep learning.
The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.
[Ilias Chalkidis](https://iliaschalkidis.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
bert-large-cased-whole-word-masking | 8b5d59881077680d99e2cac339412f20a180bd45 | 2021-05-18T16:30:05.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | null | null | bert-large-cased-whole-word-masking | 4,250 | 1 | transformers | 956 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT large model (cased) whole word masking
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is cased: it makes a difference between english and English.
Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once, while the overall masking rate remains the same.
The training objective is otherwise identical -- each masked WordPiece token is still predicted independently.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-cased-whole-word-masking')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] Hello I'm a fashion model. [SEP]",
"score":0.1474294513463974,
"token":4633,
"token_str":"fashion"
},
{
"sequence":"[CLS] Hello I'm a magazine model. [SEP]",
"score":0.05430116504430771,
"token":2435,
"token_str":"magazine"
},
{
"sequence":"[CLS] Hello I'm a male model. [SEP]",
"score":0.039395421743392944,
"token":2581,
"token_str":"male"
},
{
"sequence":"[CLS] Hello I'm a former model. [SEP]",
"score":0.036936815828084946,
"token":1393,
"token_str":"former"
},
{
"sequence":"[CLS] Hello I'm a professional model. [SEP]",
"score":0.03663451969623566,
"token":1848,
"token_str":"professional"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
model = BertModel.from_pretrained("bert-large-cased-whole-word-masking")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
model = TFBertModel.from_pretrained("bert-large-cased-whole-word-masking")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-cased-whole-word-masking')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] The man worked as a carpenter. [SEP]",
"score":0.09021259099245071,
"token":25169,
"token_str":"carpenter"
},
{
"sequence":"[CLS] The man worked as a cook. [SEP]",
"score":0.08125395327806473,
"token":9834,
"token_str":"cook"
},
{
"sequence":"[CLS] The man worked as a mechanic. [SEP]",
"score":0.07524766772985458,
"token":19459,
"token_str":"mechanic"
},
{
"sequence":"[CLS] The man worked as a waiter. [SEP]",
"score":0.07397029548883438,
"token":17989,
"token_str":"waiter"
},
{
"sequence":"[CLS] The man worked as a guard. [SEP]",
"score":0.05848982185125351,
"token":3542,
"token_str":"guard"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] The woman worked as a maid. [SEP]",
"score":0.19436432421207428,
"token":13487,
"token_str":"maid"
},
{
"sequence":"[CLS] The woman worked as a waitress. [SEP]",
"score":0.16161060333251953,
"token":15098,
"token_str":"waitress"
},
{
"sequence":"[CLS] The woman worked as a nurse. [SEP]",
"score":0.14942803978919983,
"token":7439,
"token_str":"nurse"
},
{
"sequence":"[CLS] The woman worked as a secretary. [SEP]",
"score":0.10373266786336899,
"token":4848,
"token_str":"secretary"
},
{
"sequence":"[CLS] The woman worked as a cook. [SEP]",
"score":0.06384387612342834,
"token":9834,
"token_str":"cook"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece with a vocabulary size of 30,000 (no lowercasing, since this is a cased model). The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
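The following is a minimal illustrative sketch of the 80/10/10 replacement rule described above; it is not the original TensorFlow preprocessing code, and it omits the whole-word grouping step that masks all WordPieces of a selected word together:
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply the 80/10/10 masking rule to a list of WordPiece tokens."""
    inputs, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            labels.append(tok)  # the model must predict the original token
            r = random.random()
            if r < 0.8:
                inputs.append("[MASK]")              # 80%: replace with [MASK]
            elif r < 0.9:
                inputs.append(random.choice(vocab))  # 10%: random token
            else:
                inputs.append(tok)                   # 10%: keep unchanged
        else:
            labels.append(None)  # ignored by the MLM loss
            inputs.append(tok)
    return inputs, labels
```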
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Model | SQUAD 1.1 F1/EM | Multi NLI Accuracy
---------------------------------------- | :-------------: | :----------------:
BERT-Large, Cased (Whole Word Masking) | 92.9/86.7 | 86.46
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
google/fnet-base | d764d799e9bb72d0429a64d1512416a47e2246b3 | 2021-10-31T07:33:21.000Z | [
"pytorch",
"rust",
"fnet",
"pretraining",
"en",
"dataset:c4",
"arxiv:2105.03824",
"transformers",
"license:apache-2.0"
] | null | false | google | null | google/fnet-base | 4,247 | 6 | transformers | 957 | ---
language: en
tags:
- fnet
license: apache-2.0
datasets:
- c4
---
# FNet base model
Pretrained model on English language using a masked language modeling (MLM) and next sentence prediction (NSP) objective. It was
introduced in [this paper](https://arxiv.org/abs/2105.03824) and first released in [this repository](https://github.com/google-research/google-research/tree/master/f_net).
This model is cased: it makes a difference between english and English. The model achieves 0.58 accuracy on MLM objective and 0.80 on NSP objective.
Disclaimer: This model card has been written by [gchhablani](https://huggingface.co/gchhablani).
## Model description
FNet is a transformers model in which the self-attention layers are replaced with Fourier transforms. Hence, the inputs do not contain an `attention_mask`. It is pretrained on a large corpus of
English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling
them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and
labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the FNet model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=fnet) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
## Training data
The FNet model was pretrained on [C4](https://huggingface.co/datasets/c4), a cleaned version of the Common Crawl dataset.
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 32,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
FNet-base was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 512 tokens. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
FNet-base was fine-tuned and evaluated on the validation data of the [GLUE benchmark](https://huggingface.co/datasets/glue). The results of the official model (written in Flax) can be seen in Table 1 on page 7 of [the official paper](https://arxiv.org/abs/2105.03824).
For comparison, this model (ported to PyTorch) was fine-tuned and evaluated using the [official Hugging Face GLUE evaluation scripts](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification#glue-tasks) alongside [bert-base-cased](https://hf.co/models/bert-base-cased) for comparison.
The training was done on a single 16GB NVIDIA Tesla V100 GPU. For MRPC/WNLI, the models were trained for 5 epochs, while for other tasks, the models were trained for 3 epochs. A sequence length of 512 was used with batch size 16 and learning rate 2e-5.
The following table summarizes the results for [fnet-base](https://huggingface.co/google/fnet-base) (called *FNet (PyTorch) - Reproduced*) and [bert-base-cased](https://hf.co/models/bert-base-cased) (called *Bert (PyTorch) - Reproduced*) in terms of **fine-tuning** speed. The format is *hour:min:seconds*. **Note** that the authors compared **pre-training** speed in [the official paper](https://arxiv.org/abs/2105.03824) instead.
| Task/Model | FNet-base (PyTorch) |Bert-base (PyTorch)|
|:----:|:-----------:|:----:|
| MNLI-(m/mm) | [06:40:55](https://huggingface.co/gchhablani/fnet-base-finetuned-mnli) | [09:52:33](https://huggingface.co/gchhablani/bert-base-cased-finetuned-mnli)|
| QQP | [06:21:16](https://huggingface.co/gchhablani/fnet-base-finetuned-qqp) | [09:25:01](https://huggingface.co/gchhablani/bert-base-cased-finetuned-qqp) |
| QNLI | [01:48:22](https://huggingface.co/gchhablani/fnet-base-finetuned-qnli) | [02:40:22](https://huggingface.co/gchhablani/bert-base-cased-finetuned-qnli)|
| SST-2 | [01:09:27](https://huggingface.co/gchhablani/fnet-base-finetuned-sst2) | [01:42:17](https://huggingface.co/gchhablani/bert-base-cased-finetuned-sst2)|
| CoLA | [00:09:47](https://huggingface.co/gchhablani/fnet-base-finetuned-cola) | [00:14:20](https://huggingface.co/gchhablani/bert-base-cased-finetuned-cola)|
| STS-B | [00:07:09](https://huggingface.co/gchhablani/fnet-base-finetuned-stsb) | [00:10:24](https://huggingface.co/gchhablani/bert-base-cased-finetuned-stsb)|
| MRPC | [00:07:48](https://huggingface.co/gchhablani/fnet-base-finetuned-mrpc) | [00:11:12](https://huggingface.co/gchhablani/bert-base-cased-finetuned-mrpc)|
| RTE | [00:03:24](https://huggingface.co/gchhablani/fnet-base-finetuned-rte) | [00:04:51](https://huggingface.co/gchhablani/bert-base-cased-finetuned-rte)|
| WNLI | [00:02:37](https://huggingface.co/gchhablani/fnet-base-finetuned-wnli) | [00:03:23](https://huggingface.co/gchhablani/bert-base-cased-finetuned-wnli)|
| SUM | 16:30:45 | 24:23:56 |
On average the PyTorch version of FNet-base requires *ca.* 32% less time for GLUE fine-tuning on GPU.
The following table summarizes the results for [fnet-base](https://huggingface.co/google/fnet-base) (called *FNet (PyTorch) - Reproduced*) and [bert-base-cased](https://hf.co/models/bert-base-cased) (called *Bert (PyTorch) - Reproduced*) in terms of performance and compares it to the reported performance of the official FNet-base model (called *FNet (Flax) - Official*). Note that the training hyperparameters of the reproduced models were not the same as the official model, so the performance may differ significantly for some tasks (for example: CoLA).
| Task/Model | Metric | FNet-base (PyTorch) | Bert-base (PyTorch) | FNet-Base (Flax - official) |
|:----:|:-----------:|:----:|:-----------:|:----:|
| MNLI-(m/mm) | Accuracy or Match/Mismatch | [76.75](https://huggingface.co/gchhablani/fnet-base-finetuned-mnli) | [84.10](https://huggingface.co/gchhablani/bert-base-cased-finetuned-mnli) | 72/73 |
| QQP | mean(Accuracy,F1) | [86.5](https://huggingface.co/gchhablani/fnet-base-finetuned-qqp) | [89.26](https://huggingface.co/gchhablani/bert-base-cased-finetuned-qqp) | 83 |
| QNLI | Accuracy | [84.39](https://huggingface.co/gchhablani/fnet-base-finetuned-qnli) | [90.99](https://huggingface.co/gchhablani/bert-base-cased-finetuned-qnli) | 80 |
| SST-2 | Accuracy | [89.45](https://huggingface.co/gchhablani/fnet-base-finetuned-sst2) | [92.32](https://huggingface.co/gchhablani/bert-base-cased-finetuned-sst2) | 95 |
| CoLA | Matthews corr or Accuracy | [35.94](https://huggingface.co/gchhablani/fnet-base-finetuned-cola) | [59.57](https://huggingface.co/gchhablani/bert-base-cased-finetuned-cola) | 69 |
| STS-B | Spearman corr. | [82.19](https://huggingface.co/gchhablani/fnet-base-finetuned-stsb) | [88.98](https://huggingface.co/gchhablani/bert-base-cased-finetuned-stsb) | 79 |
| MRPC | mean(F1/Accuracy) | [81.15](https://huggingface.co/gchhablani/fnet-base-finetuned-mrpc) | [88.15](https://huggingface.co/gchhablani/bert-base-cased-finetuned-mrpc) | 76 |
| RTE | Accuracy | [62.82](https://huggingface.co/gchhablani/fnet-base-finetuned-rte) | [67.15](https://huggingface.co/gchhablani/bert-base-cased-finetuned-rte) | 63 |
| WNLI | Accuracy | [54.93](https://huggingface.co/gchhablani/fnet-base-finetuned-wnli) | [46.48](https://huggingface.co/gchhablani/bert-base-cased-finetuned-wnli) | - |
| Avg | - | 72.7 | 78.6 | 76.7 |
We can see that FNet-base achieves around 93% of BERT-base's performance on average.
For more details, please refer to the checkpoints linked with the scores. An overview of all the fine-tuned checkpoints in the tables above can be accessed [here](https://huggingface.co/models?other=fnet-bert-base-comparison).
### How to use
You can use this model directly with a pipeline for masked language modeling:
**Note: the mask-filling pipeline does not behave exactly like the original model, which performs masking after converting text to tokens; in the pipeline, an additional space is added after the [MASK].**
```python
>>> from transformers import FNetForMaskedLM, FNetTokenizer, pipeline
>>> tokenizer = FNetTokenizer.from_pretrained("google/fnet-base")
>>> model = FNetForMaskedLM.from_pretrained("google/fnet-base")
>>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> unmasker("Hello I'm a [MASK] model.")
[
{"sequence": "hello i'm a new model.", "score": 0.12073223292827606, "token": 351, "token_str": "new"},
{"sequence": "hello i'm a first model.", "score": 0.08501081168651581, "token": 478, "token_str": "first"},
{"sequence": "hello i'm a next model.", "score": 0.060546260327100754, "token": 1037, "token_str": "next"},
{"sequence": "hello i'm a last model.", "score": 0.038265593349933624, "token": 813, "token_str": "last"},
{"sequence": "hello i'm a sister model.", "score": 0.033868927508592606, "token": 6232, "token_str": "sister"},
]
```
Here is how to use this model to get the features of a given text in PyTorch:
**Note: You must specify the maximum sequence length to be 512 and truncate/pad to the same length because the original model has no attention mask and considers all the hidden states during forward pass.**
```python
from transformers import FNetTokenizer, FNetModel
tokenizer = FNetTokenizer.from_pretrained("google/fnet-base")
model = FNetModel.from_pretrained("google/fnet-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt', padding='max_length', truncation=True, max_length=512)
output = model(**encoded_input)
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-03824,
author = {James Lee{-}Thorp and
Joshua Ainslie and
Ilya Eckstein and
Santiago Onta{\~{n}}{\'{o}}n},
title = {FNet: Mixing Tokens with Fourier Transforms},
journal = {CoRR},
volume = {abs/2105.03824},
year = {2021},
url = {https://arxiv.org/abs/2105.03824},
archivePrefix = {arXiv},
eprint = {2105.03824},
timestamp = {Fri, 14 May 2021 12:13:30 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-03824.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## Contributions
Thanks to [@gchhablani](https://huggingface.co/gchhablani) for adding this model. |
hfl/chinese-macbert-large | 1cf2677c782975600ce58e2961656b1b29eddbae | 2021-05-19T19:14:18.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"arxiv:2004.13922",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | hfl | null | hfl/chinese-macbert-large | 4,240 | 4 | transformers | 958 | ---
language:
- zh
tags:
- bert
license: "apache-2.0"
---
<p align="center">
<br>
<img src="https://github.com/ymcui/MacBERT/raw/master/pics/banner.png" width="500"/>
<br>
</p>
<p align="center">
<a href="https://github.com/ymcui/MacBERT/blob/master/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/ymcui/MacBERT.svg?color=blue&style=flat-square">
</a>
</p>
# Please use 'Bert' related functions to load this model!
This repository contains the resources in our paper **"Revisiting Pre-trained Models for Chinese Natural Language Processing"**, which will be published in "[Findings of EMNLP](https://2020.emnlp.org)". You can read our camera-ready paper through [ACL Anthology](#) or [arXiv pre-print](https://arxiv.org/abs/2004.13922).
**[Revisiting Pre-trained Models for Chinese Natural Language Processing](https://arxiv.org/abs/2004.13922)**
*Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu*
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Introduction
**MacBERT** is an improved BERT with a novel **M**LM **a**s **c**orrection pre-training task, which mitigates the discrepancy between pre-training and fine-tuning.
Instead of masking with the [MASK] token, which never appears in the fine-tuning stage, **we propose to use similar words for the masking purpose**. A similar word is obtained by using the [Synonyms toolkit (Wang and Hu, 2017)](https://github.com/chatopera/Synonyms), which is based on word2vec (Mikolov et al., 2013) similarity calculations. If an N-gram is selected for masking, we find a similar word for each token individually. In rare cases, when there is no similar word, we fall back to random word replacement.
Here is an example of our pre-training task.
| | Example |
| -------------- | ----------------- |
| **Original Sentence** | we use a language model to predict the probability of the next word. |
| **MLM** | we use a language [M] to [M] ##di ##ct the pro [M] ##bility of the next word . |
| **Whole word masking** | we use a language [M] to [M] [M] [M] the [M] [M] [M] of the next word . |
| **N-gram masking** | we use a [M] [M] to [M] [M] [M] the [M] [M] [M] [M] [M] next word . |
| **MLM as correction** | we use a text system to ca ##lc ##ulate the po ##si ##bility of the next word . |
Except for the new pre-training task, we also incorporate the following techniques.
- Whole Word Masking (WWM)
- N-gram masking
- Sentence-Order Prediction (SOP)
**Note that our MacBERT can be directly replaced with the original BERT as there are no differences in the main neural architecture.**
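Since the architecture is unchanged, the checkpoint loads with the standard BERT classes; a minimal sketch (the example sentence is only illustrative):
```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("hfl/chinese-macbert-large")
model = BertModel.from_pretrained("hfl/chinese-macbert-large")

inputs = tokenizer("我们使用语言模型来预测下一个词的概率。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```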
For more technical details, please check our paper: [Revisiting Pre-trained Models for Chinese Natural Language Processing](https://arxiv.org/abs/2004.13922)
## Citation
If you find our resource or paper is useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` |
jonatasgrosman/wav2vec2-large-xlsr-53-dutch | 9e474f681b88972e3a9e5065877e4cfca8b599e2 | 2022-07-27T23:36:29.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"transformers",
"audio",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/wav2vec2-large-xlsr-53-dutch | 4,195 | null | transformers | 959 | ---
language: nl
license: apache-2.0
datasets:
- common_voice
- mozilla-foundation/common_voice_6_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
- mozilla-foundation/common_voice_6_0
- nl
- robust-speech-event
- speech
- xlsr-fine-tuning-week
model-index:
- name: XLSR Wav2Vec2 Dutch by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice nl
type: common_voice
args: nl
metrics:
- name: Test WER
type: wer
value: 15.72
- name: Test CER
type: cer
value: 5.35
- name: Test WER (+LM)
type: wer
value: 12.84
- name: Test CER (+LM)
type: cer
value: 4.64
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Dev WER
type: wer
value: 35.79
- name: Dev CER
type: cer
value: 17.67
- name: Dev WER (+LM)
type: wer
value: 31.54
- name: Dev CER (+LM)
type: cer
value: 16.37
---
# Fine-tuned XLSR-53 large model for speech recognition in Dutch
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Dutch using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-dutch")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "nl"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-dutch"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| DE ABORIGINALS ZIJN DE OORSPRONKELIJKE BEWONERS VAN AUSTRALIË. | DE ABBORIGENALS ZIJN DE OORSPRONKELIJKE BEWONERS VAN AUSTRALIË |
| MIJN TOETSENBORD ZIT VOL STOF. | MIJN TOETSENBORD ZIT VOL STOF |
| ZE HAD DE BANK BESCHADIGD MET HAAR SKATEBOARD. | ZE HAD DE BANK BESCHADIGD MET HAAR SCHEETBOORD |
| WAAR LAAT JIJ JE ONDERHOUD DOEN? | WAAR LAAT JIJ HET ONDERHOUD DOEN |
| NA HET LEZEN VAN VELE BEOORDELINGEN HAD ZE EINDELIJK HAAR OOG LATEN VALLEN OP EEN LAPTOP MET EEN QWERTY TOETSENBORD. | NA HET LEZEN VAN VELE BEOORDELINGEN HAD ZE EINDELIJK HAAR OOG LATEN VALLEN OP EEN LAPTOP MET EEN QUERTITOETSEMBORD |
| DE TAMPONS ZIJN OP. | DE TAPONT ZIJN OP |
| MARIJKE KENT OLIVIER NU AL MEER DAN TWEE JAAR. | MAARRIJKEN KENT OLIEVIER NU AL MEER DAN TWEE JAAR |
| HET VOEREN VAN BROOD AAN EENDEN IS EIGENLIJK ONGEZOND VOOR DE BEESTEN. | HET VOEREN VAN BEUROT AAN EINDEN IS EIGENLIJK ONGEZOND VOOR DE BEESTEN |
| PARKET MOET JE STOFZUIGEN, TEGELS MOET JE DWEILEN. | PARKET MOET JE STOF ZUIGEN MAAR TEGELS MOET JE DWEILEN |
| IN ONZE BUURT KENT IEDEREEN ELKAAR. | IN ONZE BUURT KENT IEDEREEN ELKAAR |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-dutch --dataset mozilla-foundation/common_voice_6_0 --config nl --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-dutch --dataset speech-recognition-community-v2/dev_data --config nl --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-dutch,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {D}utch},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-dutch}},
year={2021}
}
```
|
hf-internal-testing/tiny-vilt-random-vqa | f923b0b312e4ded9ed2c3e2128f54c6b46742331 | 2022-05-16T14:49:45.000Z | [
"pytorch",
"vilt",
"question-answering",
"arxiv:2102.03334",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | hf-internal-testing | null | hf-internal-testing/tiny-vilt-random-vqa | 4,185 | null | transformers | 960 | ---
license: apache-2.0
---
A tiny randomly-initialized [ViLT](https://arxiv.org/abs/2102.03334) used for unit tests in the Transformers VQA pipeline |
lysandre/tiny-vit-random | 4108f68f9fe6e09c515d9c22c4453a5c892ec97f | 2021-05-05T14:04:37.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers"
] | image-classification | false | lysandre | null | lysandre/tiny-vit-random | 4,157 | null | transformers | 961 | Entry not found |
jeniya/BERTOverflow | 0361ca9842fd3f88b2e4eea1626d56c2e1265bce | 2021-05-19T20:47:17.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | jeniya | null | jeniya/BERTOverflow | 4,155 | 2 | transformers | 962 |
# BERTOverflow
## Model description
We pre-trained a BERT-base model on 152 million sentences from StackOverflow's 10-year archive. More details of this model can be found in our ACL 2020 paper: [Code and Named Entity Recognition in StackOverflow](https://www.aclweb.org/anthology/2020.acl-main.443/). We would like to thank [Wuwei Lan](https://lanwuwei.github.io/) for helping us in training this model.
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("jeniya/BERTOverflow")
model = AutoModelForTokenClassification.from_pretrained("jeniya/BERTOverflow")
```
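If only the pre-trained encoder is needed (e.g. for feature extraction), the same checkpoint can be loaded with the base model class; a minimal sketch:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jeniya/BERTOverflow")
encoder = AutoModel.from_pretrained("jeniya/BERTOverflow")

inputs = tokenizer("How do I sort a dict by value in Python?", return_tensors="pt")
outputs = encoder(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```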
### BibTeX entry and citation info
```bibtex
@inproceedings{tabassum2020code,
title={Code and Named Entity Recognition in StackOverflow},
author={Tabassum, Jeniya and Maddela, Mounica and Xu, Wei and Ritter, Alan },
booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)},
url={https://www.aclweb.org/anthology/2020.acl-main.443/},
year = {2020},
}
``` |
Helsinki-NLP/opus-mt-cs-en | 186ab5dff3e18ca970a492525c0ca4b398d525ab | 2021-09-09T21:29:22.000Z | [
"pytorch",
"marian",
"text2text-generation",
"cs",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-cs-en | 4,148 | null | transformers | 963 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-cs-en
* source languages: cs
* target languages: en
* OPUS readme: [cs-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/cs-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/cs-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-en/opus-2019-12-18.eval.txt)
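A minimal usage sketch with the Marian classes in Transformers (the Czech example sentence is only illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-cs-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Dnes je krásné počasí."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```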
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2014-csen.cs.en | 34.1 | 0.612 |
| newstest2015-encs.cs.en | 30.4 | 0.565 |
| newstest2016-encs.cs.en | 31.8 | 0.584 |
| newstest2017-encs.cs.en | 28.7 | 0.556 |
| newstest2018-encs.cs.en | 30.3 | 0.566 |
| Tatoeba.cs.en | 58.0 | 0.721 |
|
monologg/bert-base-cased-goemotions-ekman | 77ca9484d57dfd55bca8ec15b503ce7485d207e9 | 2021-05-19T23:47:57.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | monologg | null | monologg/bert-base-cased-goemotions-ekman | 4,133 | 2 | transformers | 964 | Entry not found |
peterchou/simbert-chinese-base | 6a6ebb9f9d9b2264a8a012f96de01067f304476d | 2021-06-07T05:21:51.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | peterchou | null | peterchou/simbert-chinese-base | 4,118 | 2 | transformers | 965 | Entry not found |
facebook/data2vec-text-base | bd0db19c3500ee7a0b626791db67fa6e9fda9a0b | 2022-04-18T16:03:20.000Z | [
"pytorch",
"data2vec-text",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2202.03555",
"arxiv:1806.02847",
"transformers",
"exbert",
"license:mit"
] | feature-extraction | false | facebook | null | facebook/data2vec-text-base | 4,111 | 5 | transformers | 966 | ---
language: en
tags:
- exbert
license: mit
datasets:
- bookcorpus
- wikipedia
---
# Data2Vec-Text base model
Pretrained model on English language using the *data2vec* objective. It was introduced in
[this paper](https://arxiv.org/abs/2202.03555) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/main/examples/data2vec). This model is case-sensitive: it
makes a difference between english and English.
Disclaimer: The team releasing Data2Vec-Text did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
## Abstract
*While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.*
## Intended uses & limitations
The model is intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=data2vec-text) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
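A minimal sketch of extracting contextual features with the Auto classes; fine-tuning for a downstream task would add a task-specific head on top:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
model = AutoModel.from_pretrained("facebook/data2vec-text-base")

inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```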
## Training data
The RoBERTa model was pretrained on the union of five datasets:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books;
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers) ;
- [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/), a dataset containing 63 millions English news
articles crawled between September 2016 and February 2019.
- [OpenWebText](https://github.com/jcpeterson/openwebtext), an opensource recreation of the WebText dataset used to
train GPT-2,
- [Stories](https://arxiv.org/abs/1806.02847) a dataset containing a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas.
Together these datasets weigh 160GB of text.
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2202.03555,
doi = {10.48550/ARXIV.2202.03555},
url = {https://arxiv.org/abs/2202.03555},
author = {Baevski, Alexei and Hsu, Wei-Ning and Xu, Qiantong and Babu, Arun and Gu, Jiatao and Auli, Michael},
keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
uer/roberta-base-chinese-extractive-qa | d5e37a8228fa9d396ff4b093c21e8f0082ff11e1 | 2022-02-20T07:50:56.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"question-answering",
"zh",
"transformers",
"autotrain_compatible"
] | question-answering | false | uer | null | uer/roberta-base-chinese-extractive-qa | 4,104 | 12 | transformers | 967 | ---
language: zh
widget:
- text: "著名诗歌《假如生活欺骗了你》的作者是"
context: "普希金从那里学习人民的语言,吸取了许多有益的养料,这一切对普希金后来的创作产生了很大的影响。这两年里,普希金创作了不少优秀的作品,如《囚徒》、《致大海》、《致凯恩》和《假如生活欺骗了你》等几十首抒情诗,叙事诗《努林伯爵》,历史剧《鲍里斯·戈都诺夫》,以及《叶甫盖尼·奥涅金》前六章。"
---
# Chinese RoBERTa-Base Model for QA
## Model description
The model is used for extractive question answering. You can download the model from the link [roberta-base-chinese-extractive-qa](https://huggingface.co/uer/roberta-base-chinese-extractive-qa).
## How to use
You can use the model directly with a pipeline for extractive question answering:
```python
>>> from transformers import AutoModelForQuestionAnswering,AutoTokenizer,pipeline
>>> model = AutoModelForQuestionAnswering.from_pretrained('uer/roberta-base-chinese-extractive-qa')
>>> tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-chinese-extractive-qa')
>>> QA = pipeline('question-answering', model=model, tokenizer=tokenizer)
>>> QA_input = {'question': "著名诗歌《假如生活欺骗了你》的作者是",'context': "普希金从那里学习人民的语言,吸取了许多有益的养料,这一切对普希金后来的创作产生了很大的影响。这两年里,普希金创作了不少优秀的作品,如《囚徒》、《致大海》、《致凯恩》和《假如生活欺骗了你》等几十首抒情诗,叙事诗《努林伯爵》,历史剧《鲍里斯·戈都诺夫》,以及《叶甫盖尼·奥涅金》前六章。"}
>>> QA(QA_input)
{'score': 0.9766426682472229, 'start': 0, 'end': 3, 'answer': '普希金'}
```
## Training data
Training data comes from three sources: [cmrc2018](https://github.com/ymcui/cmrc2018), [webqa](https://spaces.ac.cn/archives/4338), and [laisi](https://www.kesci.com/home/competition/5d142d8cbb14e6002c04e14a/content/0). We use only the train sets of these three datasets.
## Training procedure
The model is fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We fine-tune three epochs with a sequence length of 512 on the basis of the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768). At the end of each epoch, the model is saved when the best performance on development set is achieved.
```
python3 run_cmrc.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
--vocab_path models/google_zh_vocab.txt \
--train_path extractive_qa.json \
--dev_path datasets/cmrc2018/dev.json \
--output_model_path models/extractive_qa_model.bin \
--learning_rate 3e-5 --epochs_num 3 --batch_size 32 --seq_length 512
```
Finally, we convert the fine-tuned model into Huggingface's format:
```
python3 scripts/convert_bert_extractive_qa_from_uer_to_huggingface.py --input_model_path extractive_qa_model.bin \
--output_model_path pytorch_model.bin \
--layers_num 12
```
### BibTeX entry and citation info
```
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
``` |
JamesStratford/Pidrow-bot-DialoGPT-Large | 93c968f2e1170bd410e447cde821057310bbfe27 | 2022-07-04T21:54:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | JamesStratford | null | JamesStratford/Pidrow-bot-DialoGPT-Large | 4,095 | null | transformers | 968 | ---
tags:
- conversational
---
# Pidrow bot - large |
vinai/phobert-large | 6d9abcb5c5a28afca335255baace25ef76c1d5bf | 2022-06-08T04:44:41.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"arxiv:2003.00744",
"transformers",
"autotrain_compatible"
] | fill-mask | false | vinai | null | vinai/phobert-large | 4,084 | 1 | transformers | 969 | # <a name="introduction"></a> PhoBERT: Pre-trained language models for Vietnamese
Pre-trained PhoBERT models are the state-of-the-art language models for Vietnamese ([Pho](https://en.wikipedia.org/wiki/Pho), i.e. "Phở", is a popular food in Vietnam):
- Two PhoBERT versions of "base" and "large" are the first public large-scale monolingual language models pre-trained for Vietnamese. PhoBERT pre-training approach is based on [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) which optimizes the [BERT](https://github.com/google-research/bert) pre-training procedure for more robust performance.
- PhoBERT outperforms previous monolingual and multilingual approaches, obtaining new state-of-the-art performances on four downstream Vietnamese NLP tasks of Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference.
The general architecture and experimental results of PhoBERT can be found in our EMNLP-2020 Findings [paper](https://arxiv.org/abs/2003.00744):
@article{phobert,
title = {{PhoBERT: Pre-trained language models for Vietnamese}},
author = {Dat Quoc Nguyen and Anh Tuan Nguyen},
journal = {Findings of EMNLP},
year = {2020}
}
**Please CITE** our paper when PhoBERT is used to help produce published results or is incorporated into other software.
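A minimal usage sketch; note that PhoBERT expects word-segmented Vietnamese input (e.g. from VnCoreNLP), and the example sentence below is assumed to be segmented already:
```python
import torch
from transformers import AutoModel, AutoTokenizer

phobert = AutoModel.from_pretrained("vinai/phobert-large")
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-large")

# Multi-syllable words are joined with underscores by the word segmenter
sentence = "Chúng_tôi là những nghiên_cứu_viên ."
input_ids = torch.tensor([tokenizer.encode(sentence)])

with torch.no_grad():
    features = phobert(input_ids)
print(features.last_hidden_state.shape)
```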
For further information or requests, please go to [PhoBERT's homepage](https://github.com/VinAIResearch/PhoBERT)!
|
IlyaGusev/news_tg_rubert | 074a7437d070e8fefd0e10890d418fb26a409403 | 2021-06-16T19:43:26.000Z | [
"pytorch",
"ru",
"license:apache-2.0"
] | null | false | IlyaGusev | null | IlyaGusev/news_tg_rubert | 4,066 | null | null | 970 | ---
language:
- ru
license: apache-2.0
---
# NewsTgRuBERT
Training script: https://github.com/dialogue-evaluation/Russian-News-Clustering-and-Headline-Generation/blob/main/train_mlm.py |
Hate-speech-CNERG/bert-base-uncased-hatexplain | e487c81b768c7532bf474bd5e486dedea4cf3848 | 2021-05-25T09:53:05.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:hatexplain",
"transformers",
"license:apache-2.0"
] | text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/bert-base-uncased-hatexplain | 4,064 | 5 | transformers | 971 | ---
language: en
license: apache-2.0
datasets:
- hatexplain
---
The model is used for classifying a text as **Hatespeech**, **Offensive**, or **Normal**. The model is trained using data from Gab and Twitter and *Human Rationales* were included as part of the training data to boost the performance.
The dataset and models are available here: https://github.com/punyajoy/HateXplain
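A minimal classification sketch, assuming the checkpoint loads as a standard sequence-classification model; the label names and their order come from the model's config, so read them from `model.config.id2label` rather than hard-coding them:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_id = "Hate-speech-CNERG/bert-base-uncased-hatexplain"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(model.config.id2label)                      # index-to-label mapping
print(classifier("You are a wonderful person."))  # predicted label and score
```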
**For more details about our paper**
Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. "[HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection](https://arxiv.org/abs/2012.10289)". Accepted at AAAI 2021.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{mathew2020hatexplain,
title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
author={Mathew, Binny and Saha, Punyajoy and Yimam, Seid Muhie and Biemann, Chris and Goyal, Pawan and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2012.10289},
year={2020}
}
~~~
|
sentence-transformers/quora-distilbert-multilingual | 43c541d8cdd793eed04a9c2d66c6f971198681da | 2022-06-15T20:31:58.000Z | [
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/quora-distilbert-multilingual | 4,048 | null | sentence-transformers | 972 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/quora-distilbert-multilingual
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/quora-distilbert-multilingual')
embeddings = model.encode(sentences)
print(embeddings)
```
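Because the model was trained on Quora duplicate questions, a typical application is duplicate-question mining; a small sketch using `util.paraphrase_mining` (assumed available in recent sentence-transformers versions):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/quora-distilbert-multilingual")
questions = [
    "How can I learn Python quickly?",
    "What is the fastest way to learn Python?",
    "How do I bake sourdough bread?",
]
# Returns (score, i, j) triples sorted by decreasing cosine similarity
pairs = util.paraphrase_mining(model, questions)
print(pairs[0])
```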
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/quora-distilbert-multilingual')
model = AutoModel.from_pretrained('sentence-transformers/quora-distilbert-multilingual')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/quora-distilbert-multilingual)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
mrm8488/spanbert-finetuned-squadv2 | 5cc65f623285da8704169ac2d88cb8941db5520c | 2021-05-20T00:56:45.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"en",
"arxiv:1907.10529",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/spanbert-finetuned-squadv2 | 4,041 | 2 | transformers | 973 | ---
language: en
thumbnail:
---
# SpanBERT (spanbert-base-cased) fine-tuned on SQuAD v2
[SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task.
## Details of SpanBERT
[SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529)
## Details of the downstream task (Q&A) - Dataset
[SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD2.0 | train | 130k |
| SQuAD2.0 | eval | 12.3k |
## Model training
The model was trained on a Tesla P100 GPU and 25GB of RAM.
The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)
## Results:
| Metric | # Value |
| ------ | --------- |
| **EM** | **78.80** |
| **F1** | **82.22** |
### Raw metrics:
```json
{
"exact": 78.80064010780762,
"f1": 82.22801347271162,
"total": 11873,
"HasAns_exact": 78.74493927125506,
"HasAns_f1": 85.60951483831069,
"HasAns_total": 5928,
"NoAns_exact": 78.85618166526493,
"NoAns_f1": 78.85618166526493,
"NoAns_total": 5945,
"best_exact": 78.80064010780762,
"best_exact_thresh": 0.0,
"best_f1": 82.2280134727116,
"best_f1_thresh": 0.0
}
```
## Comparison:
| Model | EM | F1 score |
| ----------------------------------------------------------------------------------------- | --------- | --------- |
| [SpanBert official repo](https://github.com/facebookresearch/SpanBERT#pre-trained-models) | - | 83.6\* |
| [spanbert-finetuned-squadv2](https://huggingface.co/mrm8488/spanbert-finetuned-squadv2) | **78.80** | **82.22** |
## Model in action
Fast usage with **pipelines**:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="mrm8488/spanbert-finetuned-squadv2",
tokenizer="mrm8488/spanbert-finetuned-squadv2"
)
qa_pipeline({
'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately",
'question': "Who has been working hard for hugginface/transformers lately?"
})
# Output: {'answer': 'Manuel Romero','end': 13,'score': 6.836378586818937e-09, 'start': 0}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
valhalla/longformer-base-4096-finetuned-squadv1 | 159b6205769b6f41a68d8d190af8d5df43ef16ca | 2021-02-10T16:35:40.000Z | [
"pytorch",
"tf",
"rust",
"longformer",
"question-answering",
"dataset:squad_v1",
"arxiv:2004.05150",
"transformers",
"license:mit",
"autotrain_compatible"
] | question-answering | false | valhalla | null | valhalla/longformer-base-4096-finetuned-squadv1 | 4,034 | 5 | transformers | 974 | ---
datasets:
- squad_v1
license: mit
---
# LONGFORMER-BASE-4096 fine-tuned on SQuAD v1
This is longformer-base-4096 model fine-tuned on SQuAD v1 dataset for question answering task.
[Longformer](https://arxiv.org/abs/2004.05150) model created by Iz Beltagy, Matthew E. Peters, Arman Coha from AllenAI. As the paper explains it
> `Longformer` is a BERT-like model for long documents.
The pre-trained model can handle sequences with up to 4096 tokens.
## Model Training
This model was trained on google colab v100 GPU. You can find the fine-tuning colab here [](https://colab.research.google.com/drive/1zEl5D-DdkBKva-DdreVOmN0hrAfzKG1o?usp=sharing).
A few things to keep in mind while training Longformer for the QA task:
by default, Longformer uses sliding-window local attention on all tokens, but for QA all question tokens should have global attention. For more details on this, please refer to the paper. The `LongformerForQuestionAnswering` model automatically does that for you. To allow it to do that:
1. The input sequence must have three sep tokens, i.e. the sequence should be encoded like this:
 ` <s> question</s></s> context</s>`. If you encode the question and context as an input pair, the tokenizer already takes care of that, so you shouldn't worry about it.
2. `input_ids` should always be a batch of examples.
## Results
|Metric | # Value |
|-------------|---------|
| Exact Match | 85.1466 |
| F1 | 91.5415 |
## Model in Action 🚀
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done ?"
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]
# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]
start_scores, end_scores = model(input_ids, attention_mask=attention_mask)
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores) :torch.argmax(end_scores)+1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# output => democratized NLP
```
The `LongformerForQuestionAnswering` isn't yet supported in `pipeline`. I'll update this card once the support has been added.
> Created with ❤️ by Suraj Patil [](https://github.com/patil-suraj/)
[](https://twitter.com/psuraj28)
|
Jonesy/DialoGPT-small_JT | 15890d5e6e8b1966ef6b5ef1ff5c37c27acceda0 | 2021-10-15T18:56:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Jonesy | null | Jonesy/DialoGPT-small_JT | 4,033 | null | transformers | 975 | ---
tags:
- conversational
---
# Johnny Test DialoGPT Model |
pranavpsv/gpt2-genre-story-generator | d617d856b2aae05be1c253a2e9daf8c99f1d9c6d | 2021-05-23T11:02:06.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | pranavpsv | null | pranavpsv/gpt2-genre-story-generator | 4,020 | 6 | transformers | 976 |
# GPT2 Genre Based Story Generator
## Model description
GPT2 fine-tuned on genre-based story generation.
## Intended uses
Used to generate stories based on user inputted genre and starting prompts.
## How to use
#### Supported Genres
superhero, action, drama, horror, thriller, sci_fi
#### Input text format
\<BOS> \<genre> Some optional text...
**Example**: \<BOS> \<sci_fi> After discovering time travel,
```python
# Example of usage
from transformers import pipeline
story_gen = pipeline("text-generation", "pranavpsv/gpt2-genre-story-generator")
print(story_gen("<BOS> <superhero> Batman"))
```
## Training data
Initialized with pre-trained weights of "gpt2" checkpoint. Fine-tuned the model on stories of various genres.
|
mrm8488/longformer-base-4096-finetuned-squadv2 | d11ef97abccc471765cb85c8ffb1ca0de878adf5 | 2022-07-23T01:27:36.000Z | [
"pytorch",
"tf",
"longformer",
"question-answering",
"en",
"dataset:squad_v2",
"arxiv:2004.05150",
"transformers",
"QA",
"long context",
"Q&A",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/longformer-base-4096-finetuned-squadv2 | 4,015 | 3 | transformers | 977 | ---
language: en
datasets:
- squad_v2
tags:
- QA
- long context
- Q&A
---
# Longformer-base-4096 fine-tuned on SQuAD v2
[Longformer-base-4096 model](https://huggingface.co/allenai/longformer-base-4096) fine-tuned on [SQuAD v2](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task.
## Longformer-base-4096
[Longformer](https://arxiv.org/abs/2004.05150) is a transformer model for long documents.
`longformer-base-4096` is a BERT-like model started from the RoBERTa checkpoint and pretrained for MLM on long documents. It supports sequences of length up to 4,096.
Longformer uses a combination of a sliding window (local) attention and global attention. Global attention is user-configured based on the task to allow the model to learn task-specific representations.
## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓
Dataset ID: ```squad_v2``` from [HuggingFace/Datasets](https://github.com/huggingface/datasets)
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| squad_v2 | train | 130319 |
| squad_v2 | valid | 11873 |
How to load it from [datasets](https://github.com/huggingface/datasets)
```python
!pip install datasets
from datasets import load_dataset
dataset = load_dataset('squad_v2')
```
Check out more about this dataset and others in [Datasets Viewer](https://huggingface.co/datasets/viewer/)
## Model fine-tuning 🏋️
The training script is a slightly modified version of [this one](https://colab.research.google.com/drive/1zEl5D-DdkBKva-DdreVOmN0hrAfzKG1o?usp=sharing)
## Model in Action 🚀
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
ckpt = "mrm8488/longformer-base-4096-finetuned-squadv2"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForQuestionAnswering.from_pretrained(ckpt)
text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done ?"
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]
# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]
start_scores, end_scores = model(input_ids, attention_mask=attention_mask)
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores) :torch.argmax(end_scores)+1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# output => democratized NLP
```
## Usage with HF `pipeline`
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
ckpt = "mrm8488/longformer-base-4096-finetuned-squadv2"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForQuestionAnswering.from_pretrained(ckpt)
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done?"
qa({"question": question, "context": text})
```
If given the same context we ask something that is not there, the output for **no answer** will be ```<s>```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
[](https://ko-fi.com/Y8Y3VYYE)
|
valhalla/bart-large-finetuned-squadv1 | 39066834d88b80b83e85b88e37471cee02e1a8c7 | 2021-06-14T10:20:35.000Z | [
"pytorch",
"jax",
"bart",
"question-answering",
"dataset:squad",
"arxiv:1910.13461",
"transformers",
"autotrain_compatible"
] | question-answering | false | valhalla | null | valhalla/bart-large-finetuned-squadv1 | 4,012 | 1 | transformers | 978 | ---
datasets:
- squad
---
# BART-LARGE finetuned on SQuADv1
This is bart-large model finetuned on SQuADv1 dataset for question answering task
## Model details
BART was proposed in the [paper](https://arxiv.org/abs/1910.13461) **BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension**.
BART is a seq2seq model intended for both NLG and NLU tasks.
To use BART for question answering tasks, we feed the complete document into the encoder and decoder, and use the top
hidden state of the decoder as a representation for each
word. This representation is used to classify the token. As reported in the paper, bart-large achieves results comparable to RoBERTa on SQuAD.
Another notable thing about BART is that it can handle sequences with up to 1024 tokens.
| Param | #Value |
|---------------------|--------|
| encoder layers | 12 |
| decoder layers | 12 |
| hidden size | 4096 |
| num attention heads | 16 |
| on disk size | 1.63GB |
## Model training
This model was trained on google colab v100 GPU.
You can find the fine-tuning colab here
[](https://colab.research.google.com/drive/1I5cK1M_0dLaf5xoewh6swcm5nAInfwHy?usp=sharing).
## Results
The results are actually slightly worse than given in the paper.
In the paper, the authors mention that bart-large achieves 88.8 EM and 94.6 F1.
| Metric | #Value |
|--------|--------|
| EM | 86.8022|
| F1 | 92.7342|
## Model in Action 🚀
```python3
from transformers import BartTokenizer, BartForQuestionAnswering
import torch
tokenizer = BartTokenizer.from_pretrained('valhalla/bart-large-finetuned-squadv1')
model = BartForQuestionAnswering.from_pretrained('valhalla/bart-large-finetuned-squadv1')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
encoding = tokenizer(question, text, return_tensors='pt')
input_ids = encoding['input_ids']
attention_mask = encoding['attention_mask']
start_scores, end_scores = model(input_ids, attention_mask=attention_mask, output_attentions=False)[:2]
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])
answer = tokenizer.convert_tokens_to_ids(answer.split())
answer = tokenizer.decode(answer)
#answer => 'a nice puppet'
```
> Created with ❤️ by Suraj Patil [](https://github.com/patil-suraj/)
[](https://twitter.com/psuraj28)
|
jjzha/jobbert-base-cased | e6fcd777778602f9cdfd0f9a0fdff14e4e643a1f | 2022-07-26T08:14:26.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"transformers",
"JobBERT",
"job postings",
"autotrain_compatible"
] | fill-mask | false | jjzha | null | jjzha/jobbert-base-cased | 4,005 | 3 | transformers | 979 | ---
language:
- en
tags:
- JobBERT
- job postings
---
# JobBERT
This is the JobBERT model from:
Mike Zhang, Kristian Nørgaard Jensen, Sif Dam Sonniks, and Barbara Plank. __SkillSpan: Hard and Soft Skill Extraction from Job Postings__. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
This model is continuously pre-trained from a `bert-base-cased` checkpoint on ~3.2M sentences from job postings. More information can be found in the paper.
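A minimal usage sketch (not part of the original card), treating the checkpoint as a standard BERT-style masked language model:
```python
# Minimal sketch (assumption: the checkpoint exposes a standard masked-LM head).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jjzha/jobbert-base-cased")

# BERT-style checkpoints use the [MASK] token; the example sentence is illustrative only.
for prediction in fill_mask("Experience with [MASK] is required for this position."):
    print(prediction["token_str"], round(prediction["score"], 3))
```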
If you use this model, please cite the following paper:
```
@inproceedings{zhang-etal-2022-skillspan,
title = "{S}kill{S}pan: Hard and Soft Skill Extraction from {E}nglish Job Postings",
author = "Zhang, Mike and
Jensen, Kristian N{\o}rgaard and
Sonniks, Sif and
Plank, Barbara",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.366",
pages = "4962--4984",
abstract = "Skill Extraction (SE) is an important and widely-studied task useful to gain insights into labor market dynamics. However, there is a lacuna of datasets and annotation guidelines; available datasets are few and contain crowd-sourced labels on the span-level or labels from a predefined skill inventory. To address this gap, we introduce SKILLSPAN, a novel SE dataset consisting of 14.5K sentences and over 12.5K annotated spans. We release its respective guidelines created over three different sources annotated for hard and soft skills by domain experts. We introduce a BERT baseline (Devlin et al., 2019). To improve upon this baseline, we experiment with language models that are optimized for long spans (Joshi et al., 2020; Beltagy et al., 2020), continuous pre-training on the job posting domain (Han and Eisenstein, 2019; Gururangan et al., 2020), and multi-task learning (Caruana, 1997). Our results show that the domain-adapted models significantly outperform their non-adapted counterparts, and single-task outperforms multi-task learning.",
}
```
|
mrm8488/bert-italian-finedtuned-squadv1-it-alfa | 829047a5ce1a8ef73bde97666eff10b8e128b42e | 2021-05-20T00:24:19.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"it",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/bert-italian-finedtuned-squadv1-it-alfa | 4,001 | 5 | transformers | 980 | ---
language: it
thumbnail:
---
# Italian BERT fine-tuned on SQuAD_it v1
[Italian BERT base cased](https://huggingface.co/dbmdz/bert-base-italian-cased) fine-tuned on [italian SQuAD](https://github.com/crux82/squad-it) for **Q&A** downstream task.
## Details of Italian BERT
The source data for the Italian BERT model consists of a recent Wikipedia dump and various texts from the OPUS corpora collection. The final training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy). Our cased and uncased models are trained with an initial sequence length of 512 subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend it with data from the Italian part of the OSCAR corpus. Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
More in its official [model card](https://huggingface.co/dbmdz/bert-base-italian-cased)
Created by [Stefan](https://huggingface.co/stefan-it) at [MDZ](https://huggingface.co/dbmdz)
## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓
[Italian SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/) is obtained through semi-automatic translation of the SQuAD dataset
into Italian. It represents a large-scale dataset for open question answering on factoid questions in Italian.
**The dataset contains more than 60,000 question/answer pairs derived from the original English dataset.** The dataset is split into training and test sets to support the replicability of the benchmarking of QA systems:
- `SQuAD_it-train.json`: it contains training examples derived from the original SQuAD 1.1 training material.
- `SQuAD_it-test.json`: it contains test/benchmarking examples derived from the original SQuAD 1.1 development material.
More details about SQuAD-it can be found in [Croce et al. 2018]. The original paper can be found at this [link](https://link.springer.com/chapter/10.1007/978-3-030-03840-3_29).
## Model training 🏋️
The model was trained on a Tesla P100 GPU and 25GB of RAM.
The script for fine tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)
## Results 📝
| Metric | # Value |
| ------ | --------- |
| **EM** | **62.51** |
| **F1** | **74.16** |
### Raw metrics
```json
{
"exact": 62.5180707057432,
"f1": 74.16038329042492,
"total": 7609,
"HasAns_exact": 62.5180707057432,
"HasAns_f1": 74.16038329042492,
"HasAns_total": 7609,
"best_exact": 62.5180707057432,
"best_exact_thresh": 0.0,
"best_f1": 74.16038329042492,
"best_f1_thresh": 0.0
}
```
## Comparison ⚖️
| Model | EM | F1 score |
| -------------------------------------------------------------------------------------------------------------------------------- | --------- | --------- |
| [DrQA-it trained on SQuAD-it ](https://github.com/crux82/squad-it/blob/master/README.md#evaluating-a-neural-model-over-squad-it) | 56.1 | 65.9 |
| This one | **62.51** | **74.16** |
## Model in action 🚀
Fast usage with **pipelines** 🧪
```python
from transformers import pipeline
nlp_qa = pipeline(
'question-answering',
model='mrm8488/bert-italian-finedtuned-squadv1-it-alfa',
tokenizer='mrm8488/bert-italian-finedtuned-squadv1-it-alfa'
)
nlp_qa(
{
'question': 'Per quale lingua stai lavorando?',
'context': 'Manuel Romero è colaborando attivamente con HF / trasformatori per il trader del poder de las últimas ' +
'técnicas di procesamiento de lenguaje natural al idioma español'
}
)
# Output: {'answer': 'español', 'end': 174, 'score': 0.9925341537498156, 'start': 168}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
Dataset citation
<details>
@InProceedings{10.1007/978-3-030-03840-3_29,
author="Croce, Danilo and Zelenanska, Alexandra and Basili, Roberto",
editor="Ghidini, Chiara and Magnini, Bernardo and Passerini, Andrea and Traverso, Paolo",
title="Neural Learning for Question Answering in Italian",
booktitle="AI*IA 2018 -- Advances in Artificial Intelligence",
year="2018",
publisher="Springer International Publishing",
address="Cham",
pages="389--402",
isbn="978-3-030-03840-3"
}
</details>
|
facebook/wav2vec2-xlsr-53-espeak-cv-ft | 2c733782da5604684829819a5eb744c193fe9398 | 2021-12-10T17:18:39.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"multi-lingual",
"dataset:common_voice",
"arxiv:2109.11680",
"transformers",
"speech",
"audio",
"phoneme-recognition",
"license:apache-2.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-xlsr-53-espeak-cv-ft | 3,995 | 5 | transformers | 981 | ---
language: multi-lingual
datasets:
- common_voice
tags:
- speech
- audio
- automatic-speech-recognition
- phoneme-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
license: apache-2.0
---
# Wav2Vec2-Large-XLSR-53 finetuned on multi-lingual Common Voice
This checkpoint leverages the pretrained checkpoint [wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
and is fine-tuned on [CommonVoice](https://huggingface.co/datasets/common_voice) to recognize phonetic labels in multiple languages.
When using the model make sure that your speech input is sampled at 16kHz.
Note that the model outputs a string of phonetic labels. A dictionary mapping phonetic labels to words
has to be used to map the phonetic output labels to output words.
[Paper: Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680)
Authors: Qiantong Xu, Alexei Baevski, Michael Auli
**Abstract**
Recent progress in self-training, self-supervised pretraining and unsupervised learning enabled well performing speech recognition systems without any labeled data. However, in many cases there is labeled data available for related languages which is not utilized by these methods. This paper extends previous work on zero-shot cross-lingual transfer learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by mapping phonemes of the training languages to the target language using articulatory features. Experiments show that this simple method significantly outperforms prior work which introduced task-specific architectures and used only part of a monolingually pretrained model.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-xlsr-53-espeak-cv-ft")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-xlsr-53-espeak-cv-ft")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values
# retrieve logits
with torch.no_grad():
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
# => should give ['m ɪ s t ɚ k w ɪ l t ɚ ɪ z ð ɪ ɐ p ɑː s əl l ʌ v ð ə m ɪ d əl k l æ s ɪ z æ n d w iː aʊ ɡ l æ d t ə w ɛ l k ə m h ɪ z ɡ ɑː s p ə']
``` |
microsoft/trocr-base-stage1 | 5c48f939de25655eeca55d31e7893ada48d300d9 | 2022-07-01T07:36:00.000Z | [
"pytorch",
"vision-encoder-decoder",
"arxiv:2109.10282",
"transformers",
"trocr",
"image-to-text"
] | image-to-text | false | microsoft | null | microsoft/trocr-base-stage1 | 3,987 | 4 | transformers | 982 | ---
tags:
- trocr
- image-to-text
---
# TrOCR (base-sized model, pre-trained only)
TrOCR pre-trained only model. It was introduced in the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/trocr).
Disclaimer: The team releasing TrOCR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The TrOCR model is an encoder-decoder model, consisting of an image Transformer as encoder, and a text Transformer as decoder. The image encoder was initialized from the weights of BEiT, while the text decoder was initialized from the weights of RoBERTa.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Next, the Transformer text decoder autoregressively generates tokens.
## Intended uses & limitations
You can use the raw model for optical character recognition (OCR) on single text-line images. See the [model hub](https://huggingface.co/models?search=microsoft/trocr) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
import requests
import torch
# load image from the IAM database
url = 'https://fki.tic.heia-fr.ch/static/img/a01-122-02-00.jpg'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-stage1')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-stage1')
# training
pixel_values = processor(image, return_tensors="pt").pixel_values # Batch size 1
decoder_input_ids = torch.tensor([[model.config.decoder.decoder_start_token_id]])
outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids)
```
### BibTeX entry and citation info
```bibtex
@misc{li2021trocr,
title={TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models},
author={Minghao Li and Tengchao Lv and Lei Cui and Yijuan Lu and Dinei Florencio and Cha Zhang and Zhoujun Li and Furu Wei},
year={2021},
eprint={2109.10282},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
ckiplab/gpt2-base-chinese | 5b35975016d00703e7d812b9197ea81f295b65c3 | 2022-05-10T03:28:12.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"zh",
"transformers",
"lm-head",
"license:gpl-3.0"
] | text-generation | false | ckiplab | null | ckiplab/gpt2-base-chinese | 3,972 | 5 | transformers | 983 | ---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- lm-head
- gpt2
- zh
license: gpl-3.0
---
# CKIP GPT2 Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributers
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/gpt2-base-chinese')
```
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
gsarti/biobert-nli | 4fe2765eca1ae00a266615626313a94192701870 | 2021-05-19T17:45:15.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | gsarti | null | gsarti/biobert-nli | 3,959 | 1 | transformers | 984 | # BioBERT-NLI
This is the model [BioBERT](https://github.com/dmis-lab/biobert) [1] fine-tuned on the [SNLI](https://nlp.stanford.edu/projects/snli/) and the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) datasets using the [`sentence-transformers` library](https://github.com/UKPLab/sentence-transformers/) to produce universal sentence embeddings [2].
The model uses the original BERT wordpiece vocabulary and was trained using the **average pooling strategy** and a **softmax loss**.
**Base model**: `monologg/biobert_v1.1_pubmed` from HuggingFace's `AutoModel`.
**Training time**: ~6 hours on the NVIDIA Tesla P100 GPU provided in Kaggle Notebooks.
**Parameters**:
| Parameter | Value |
|------------------|-------|
| Batch size | 64 |
| Training steps | 30000 |
| Warmup steps | 1450 |
| Lowercasing | False |
| Max. Seq. Length | 128 |
**Performances**: The performance was evaluated on the test portion of the [STS dataset](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) using Spearman rank correlation and compared to the performances of a general BERT base model obtained with the same procedure to verify their similarity.
| Model | Score |
|-------------------------------|-------------|
| `biobert-nli` (this) | 73.40 |
| `gsarti/scibert-nli` | 74.50 |
| `bert-base-nli-mean-tokens`[3]| 77.12 |
An example usage for similarity-based scientific paper retrieval is provided in the [Covid Papers Browser](https://github.com/gsarti/covid-papers-browser) repository.
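For a quick start without that repository, below is a minimal sketch (not part of the original card) of computing mean-pooled sentence embeddings with `transformers`; the example sentences are illustrative only.
```python
# Minimal sketch: mean-pooled sentence embeddings, matching the average
# pooling strategy described above (assumption: checkpoint loads via AutoModel).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("gsarti/biobert-nli")
model = AutoModel.from_pretrained("gsarti/biobert-nli")

sentences = ["Coronaviruses are enveloped RNA viruses.", "SARS-CoV-2 causes COVID-19."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # (batch, seq_len, hidden)

# Average token embeddings, ignoring padding positions.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(embeddings.shape)  # (2, 768)
```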
**References:**
[1] J. Lee et al, [BioBERT: a pre-trained biomedical language representation model for biomedical text mining](https://academic.oup.com/bioinformatics/article/36/4/1234/5566506)
[2] A. Conneau et al., [Supervised Learning of Universal Sentence Representations from Natural Language Inference Data](https://www.aclweb.org/anthology/D17-1070/)
[3] N. Reimers et I. Gurevych, [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://www.aclweb.org/anthology/D19-1410/)
|
CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment | b0df630d23e5aa5f8a326017605337f6d0863ecd | 2021-10-17T11:15:54.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment | 3,952 | 4 | transformers | 985 | ---
language:
- ar
license: apache-2.0
widget:
- text: "أنا بخير"
---
# CAMeLBERT-DA SA Model
## Model description
**CAMeLBERT-DA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
>>> ['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
izumi-lab/electra-base-japanese-discriminator | 9a50f726d330062f6055f9db584820796839f53c | 2022-03-19T09:43:01.000Z | [
"pytorch",
"electra",
"pretraining",
"ja",
"dataset:wikipedia",
"arxiv:2003.10555",
"transformers",
"license:cc-by-sa-4.0"
] | null | false | izumi-lab | null | izumi-lab/electra-base-japanese-discriminator | 3,952 | 1 | transformers | 986 | ---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東京大学で[MASK]の研究をしています。
---
# ELECTRA base Japanese discriminator
This is a [ELECTRA](https://github.com/google-research/electra) model pretrained on texts in the Japanese language.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as ELECTRA base in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Japanese version of Wikipedia, using Wikipedia dump file as of June 1, 2021.
The corpus file is 2.9GB, consisting of approximately 20M sentences.
## Tokenization
The texts are first tokenized by MeCab with IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
## Training
The models are trained with the same configuration as ELECTRA base in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 512 tokens per instance, 256 instances per batch, and 766k training steps.
The size of the generator is 1/3 of the size of the discriminator.
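A minimal usage sketch (not part of the original card): scoring which input tokens the discriminator flags as replaced. It assumes the checkpoint loads with `ElectraForPreTraining` and that the MeCab-based tokenizer dependencies (e.g. `fugashi` plus an IPA dictionary package) are installed.
```python
# Minimal sketch (assumptions: ElectraForPreTraining head; MeCab-based Japanese
# tokenizer, which typically needs `pip install fugashi ipadic`).
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

name = "izumi-lab/electra-base-japanese-discriminator"
tokenizer = AutoTokenizer.from_pretrained(name)
model = ElectraForPreTraining.from_pretrained(name)

inputs = tokenizer("東京大学で自然言語処理の研究をしています。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one score per token: higher = "replaced"

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, torch.sigmoid(logits[0]).tolist()):
    print(f"{token}\t{score:.3f}")
```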
## Citation
**There will be another paper for this pretrained model. Be sure to check here again when you cite.**
```
@inproceedings{suzuki2022additional-fin,
title={事前学習と追加事前学習による金融言語モデルの構築と検証},
% title={Construction and Validation of a Pre-Training and Additional Pre-Training Financial Language Model},
author={鈴木 雅弘 and 坂地 泰紀 and 平野 正徳 and 和泉 潔},
% author={Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
booktitle={人工知能学会第28回金融情報学研究会(SIG-FIN)},
% booktitle={Proceedings of JSAI Special Interest Group on Financial Infomatics (SIG-FIN) 28},
pages={132-137},
year={2022}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
|
facebook/deit-small-patch16-224 | 164deee347853469b97442b3817f22eece80c7e3 | 2022-07-13T11:41:40.000Z | [
"pytorch",
"tf",
"vit",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2012.12877",
"arxiv:2006.03677",
"transformers",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/deit-small-patch16-224 | 3,945 | 1 | transformers | 987 | ---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet-1k
---
# Data-efficient Image Transformer (small-sized model)
Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman.
Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This model is actually a more efficiently trained Vision Transformer (ViT).
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained and fine-tuned on a large collection of images in a supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for
fine-tuned versions on a task that interests you.
### How to use
Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-small-patch16-224')
model = ViTForImageClassification.from_pretrained('facebook/deit-small-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.
## Training data
The ViT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78).
At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------|
| DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 |
| **DeiT-small** | **79.9** | **95.0** | **22M** | **https://huggingface.co/facebook/deit-small-patch16-224** |
| DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 |
| DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 |
| DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 |
| DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 |
| DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 |
| DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 |
Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{touvron2021training,
title={Training data-efficient image transformers & distillation through attention},
author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou},
year={2021},
eprint={2012.12877},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
monologg/kocharelectra-base-discriminator | bb615f43065306f6d155448d717c90f2367810b1 | 2020-05-27T17:34:11.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | monologg | null | monologg/kocharelectra-base-discriminator | 3,939 | null | transformers | 988 | Entry not found |
KETI-AIR/ke-t5-base | 3fdf631a2986e928ad8b22863f9da16da74d9906 | 2021-06-23T02:50:50.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | KETI-AIR | null | KETI-AIR/ke-t5-base | 3,933 | 2 | transformers | 989 | Entry not found |
sachaarbonel/bert-italian-cased-finetuned-pos | aaa6d3c06defeaac28cd16ccc4206e4e83b741b8 | 2021-05-19T00:47:05.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"it",
"dataset:xtreme",
"transformers",
"autotrain_compatible"
] | token-classification | false | sachaarbonel | null | sachaarbonel/bert-italian-cased-finetuned-pos | 3,925 | 2 | transformers | 990 | ---
language: it
datasets:
- xtreme
---
# Italian-Bert (Italian Bert) + POS 🎃🏷
This model is a fine-tuned on [xtreme udpos Italian](https://huggingface.co/nlp/viewer/?dataset=xtreme&config=udpos.Italian) version of [Bert Base Italian](https://huggingface.co/dbmdz/bert-base-italian-cased) for **POS** downstream task.
## Details of the downstream task (POS) - Dataset
- [Dataset: xtreme udpos Italian](https://huggingface.co/nlp/viewer/?dataset=xtreme&config=udpos.Italian) 📚
| Dataset | # Examples |
| ---------------------- | ----- |
| Train | 716 K |
| Dev | 85 K |
- [Fine-tune on NER script provided by @stefan-it](https://raw.githubusercontent.com/stefan-it/fine-tuned-berts-seq/master/scripts/preprocess.py)
- Labels covered:
```
ADJ
ADP
ADV
AUX
CCONJ
DET
INTJ
NOUN
NUM
PART
PRON
PROPN
PUNCT
SCONJ
SYM
VERB
X
```
## Metrics on evaluation set 🧾
| Metric | # score |
| :------------------------------------------------------------------------------------: | :-------: |
| F1 | **97.25** |
| Precision | **97.15** |
| Recall | **97.36** |
## Model in action 🔨
Example of usage
```python
from transformers import pipeline
nlp_pos = pipeline(
"ner",
model="sachaarbonel/bert-italian-cased-finetuned-pos",
tokenizer=(
    'sachaarbonel/bert-italian-cased-finetuned-pos',
{"use_fast": False}
))
text = "Roma è la Capitale d'Italia."
nlp_pos(text)
'''
Output:
--------
[{'entity': 'PROPN', 'index': 1, 'score': 0.9995346665382385, 'word': 'roma'},
{'entity': 'AUX', 'index': 2, 'score': 0.9966597557067871, 'word': 'e'},
{'entity': 'DET', 'index': 3, 'score': 0.9994786977767944, 'word': 'la'},
{'entity': 'NOUN',
'index': 4,
'score': 0.9995198249816895,
'word': 'capitale'},
{'entity': 'ADP', 'index': 5, 'score': 0.9990894198417664, 'word': 'd'},
{'entity': 'PART', 'index': 6, 'score': 0.57159024477005, 'word': "'"},
{'entity': 'PROPN',
'index': 7,
'score': 0.9994804263114929,
'word': 'italia'},
{'entity': 'PUNCT', 'index': 8, 'score': 0.9772886633872986, 'word': '.'}]
'''
```
Yeah! Not too bad 🎉
> Created by [Sacha Arbonel/@sachaarbonel](https://twitter.com/sachaarbonel) | [LinkedIn](https://www.linkedin.com/in/sacha-arbonel)
> Made with <span style="color: #e25555;">♥</span> in Paris
|
indobenchmark/indobert-base-p2 | 94b4e0a82081fa57f227fcc2024d1ea89b57ac1f | 2021-05-19T20:24:07.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"id",
"dataset:Indo4B",
"arxiv:2009.05387",
"transformers",
"indobert",
"indobenchmark",
"indonlu",
"license:mit"
] | feature-extraction | false | indobenchmark | null | indobenchmark/indobert-base-p2 | 3,920 | null | transformers | 991 | ---
language: id
tags:
- indobert
- indobenchmark
- indonlu
license: mit
inference: false
datasets:
- Indo4B
---
# IndoBERT Base Model (phase2 - uncased)
[IndoBERT](https://arxiv.org/abs/2009.05387) is a state-of-the-art language model for Indonesian based on the BERT model. The pretrained model is trained using a masked language modeling (MLM) objective and next sentence prediction (NSP) objective.
## All Pre-trained Models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `indobenchmark/indobert-base-p1` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-base-p2` | 124.5M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p1` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-large-p2` | 335.2M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p1` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-base-p2` | 11.7M | Base | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p1` | 17.7M | Large | Indo4B (23.43 GB of text) |
| `indobenchmark/indobert-lite-large-p2` | 17.7M | Large | Indo4B (23.43 GB of text) |
## How to use
### Load model and tokenizer
```python
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("indobenchmark/indobert-base-p2")
model = AutoModel.from_pretrained("indobenchmark/indobert-base-p2")
```
### Extract contextual representation
```python
import torch

x = torch.LongTensor(tokenizer.encode('aku adalah anak [MASK]')).view(1,-1)
print(x, model(x)[0].sum())
```
## Authors
<b>IndoBERT</b> was trained and evaluated by Bryan Wilie\*, Karissa Vincentio\*, Genta Indra Winata\*, Samuel Cahyawijaya\*, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
|
inokufu/flaubert-base-uncased-xnli-sts | e4fcf0bc37d8d55fbbe1cc8096eaeaa4057d2560 | 2022-02-18T16:50:56.000Z | [
"pytorch",
"flaubert",
"feature-extraction",
"fr",
"dataset:xnli",
"dataset:stsb_multi_mt",
"arxiv:1809.05053",
"sentence-transformers",
"sentence-similarity",
"transformers",
"xnli",
"stsb_multi_mt"
] | sentence-similarity | false | inokufu | null | inokufu/flaubert-base-uncased-xnli-sts | 3,915 | 2 | sentence-transformers | 992 | ---
pipeline_tag: sentence-similarity
language: fr
tags:
- sentence-similarity
- transformers
- fr
- flaubert
- sentence-transformers
- feature-extraction
- xnli
- stsb_multi_mt
datasets:
- xnli
- stsb_multi_mt
---
# inokufu/flaubert-base-uncased-xnli-sts
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Details
This model is based on the French flaubert-base-uncased pre-trained model [1, 2].
It was then fine-tuned on a natural language inference task (XNLI) [3]. This task consists in training the model to recognize relations between sentences (contradiction, neutral, implication).
It was then fine-tuned on a text semantic similarity task (on STS-fr data) [4]. This task consists in training the model to estimate the similarity between two sentences.
This fine-tuning process allows our model to have a semantic representation of words that is much better than the one proposed by the base model.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Apprendre le python", "Devenir expert en comptabilité"]
model = SentenceTransformer('inokufu/flaubert-base-uncased-xnli-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Apprendre le python", "Devenir expert en comptabilité"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('inokufu/flaubert-base-uncased-xnli-sts')
model = AutoModel.from_pretrained('inokufu/flaubert-base-uncased-xnli-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
STS (fr) score: 83.07%
## Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: FlaubertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## References
[1] https://hal.archives-ouvertes.fr/hal-02784776v3/document <br>
[2] https://huggingface.co/flaubert/flaubert_base_uncased <br>
[3] https://arxiv.org/abs/1809.05053 <br>
[4] https://huggingface.co/datasets/stsb_multi_mt <br>
|
hireddivas/dialoGPT-small-sonic2 | 313bb626a1abb8a416bc552909c060057227ffb7 | 2022-07-30T04:00:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | hireddivas | null | hireddivas/dialoGPT-small-sonic2 | 3,896 | null | transformers | 993 | ---
tags:
- conversational
---
GPT-2 chatbot - talk to Sonic (improved?) |
izumi-lab/electra-small-japanese-fin-discriminator | 1bcdc65cbe828ebc2d33a369b4c51f8ffa2da1a9 | 2022-03-19T09:39:07.000Z | [
"pytorch",
"electra",
"pretraining",
"ja",
"dataset:wikipedia",
"dataset:securities reports",
"dataset:summaries of financial results",
"arxiv:2003.10555",
"transformers",
"finance",
"license:cc-by-sa-4.0"
] | null | false | izumi-lab | null | izumi-lab/electra-small-japanese-fin-discriminator | 3,892 | null | transformers | 994 | ---
language: ja
license: cc-by-sa-4.0
tags:
- finance
datasets:
- wikipedia
- securities reports
- summaries of financial results
widget:
- text: 流動[MASK]は1億円となりました。
---
# ELECTRA small Japanese finance discriminator
This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on Japanese texts.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as ELECTRA small in the [original ELECTRA implementation](https://github.com/google-research/electra); 12 layers, 256 dimensions of hidden states, and 4 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia and on a Japanese financial corpus.
The Wikipedia corpus is generated from the Japanese Wikipedia dump as of June 1, 2021; the corpus file is 2.9GB, consisting of approximately 20M sentences.
The financial corpus consists of 2 corpora:
- Summaries of financial results from October 9, 2012, to December 31, 2020
- Securities reports from February 8, 2018, to December 31, 2020
The financial corpus file is 5.2GB, consisting of approximately 27M sentences.
## Tokenization
The texts are first tokenized by MeCab with the IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
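As a rough illustration of how the discriminator is used after tokenization, the sketch below loads the model with `ElectraForPreTraining` and scores each token for being "replaced". It assumes the tokenizer resolves to a Japanese BERT-style tokenizer, which typically requires the `fugashi` and `ipadic` packages; the input sentence is an arbitrary example in the style of the widget text.
```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

# Loading the Japanese tokenizer usually requires: pip install fugashi ipadic
tokenizer = AutoTokenizer.from_pretrained("izumi-lab/electra-small-japanese-fin-discriminator")
model = ElectraForPreTraining.from_pretrained("izumi-lab/electra-small-japanese-fin-discriminator")

sentence = "流動資産は1億円となりました。"  # "Current assets amounted to 100 million yen."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per token

# Positive scores suggest the discriminator considers the token "replaced"
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, logits[0]):
    print(f"{token}\t{score.item():.2f}")
```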
## Training
The models are trained with the same configuration as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555), except for the size settings: 128 tokens per instance, 128 instances per batch, and 1M training steps.
The size of the generator is the same as that of the discriminator.
## Citation
**There will be another paper for this pretrained model. Be sure to check here again when you cite.**
```
@inproceedings{suzuki2021fin-bert-electra,
title={金融文書を用いた事前学習言語モデルの構築と検証},
% title={Construction and Validation of a Pre-Trained Language Model Using Financial Documents},
author={鈴木 雅弘 and 坂地 泰紀 and 平野 正徳 and 和泉 潔},
% author={Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
booktitle={人工知能学会第27回金融情報学研究会(SIG-FIN)},
% booktitle={Proceedings of JSAI Special Interest Group on Financial Infomatics (SIG-FIN) 27},
pages={5-10},
year={2021}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
|
twmkn9/albert-base-v2-squad2 | eb613ac0a88a507c701b46de597aff50a1089dc1 | 2020-12-11T22:02:54.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | twmkn9 | null | twmkn9/albert-base-v2-squad2 | 3,881 | 2 | transformers | 995 | This model is [ALBERT base v2](https://huggingface.co/albert-base-v2) trained on SQuAD v2 as:
```
export SQUAD_DIR=../../squad2
python3 run_squad.py \
    --model_type albert \
    --model_name_or_path albert-base-v2 \
    --do_train \
    --do_eval \
    --overwrite_cache \
    --do_lower_case \
    --version_2_with_negative \
    --save_steps 100000 \
    --train_file $SQUAD_DIR/train-v2.0.json \
    --predict_file $SQUAD_DIR/dev-v2.0.json \
    --per_gpu_train_batch_size 8 \
    --num_train_epochs 3 \
    --learning_rate 3e-5 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ./tmp/albert_fine/
```
Performance on a dev subset is close to the original paper:
```
Results:
{
'exact': 78.71010200723923,
'f1': 81.89228117126069,
'total': 6078,
'HasAns_exact': 75.39518900343643,
'HasAns_f1': 82.04167868004215,
'HasAns_total': 2910,
'NoAns_exact': 81.7550505050505,
'NoAns_f1': 81.7550505050505,
'NoAns_total': 3168,
'best_exact': 78.72655478775913,
'best_exact_thresh': 0.0,
'best_f1': 81.90873395178066,
'best_f1_thresh': 0.0
}
```
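For quick experimentation, a minimal inference sketch with the `transformers` question-answering pipeline might look like this; the question and context are arbitrary examples:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="twmkn9/albert-base-v2-squad2",
    tokenizer="twmkn9/albert-base-v2-squad2",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This ALBERT base v2 checkpoint was fine-tuned on SQuAD v2, "
            "a reading comprehension dataset with unanswerable questions.",
)
print(result)  # dict with 'score', 'start', 'end' and 'answer' keys
```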
We are hopeful this might save you time, energy, and compute. Cheers! |
google/bert_uncased_L-8_H-512_A-8 | 53b17ea0d90728ac770dfd13fcb3abdc75e4f4bf | 2021-05-19T17:35:53.000Z | [
"pytorch",
"jax",
"bert",
"arxiv:1908.08962",
"transformers",
"license:apache-2.0"
] | null | false | google | null | google/bert_uncased_L-8_H-512_A-8 | 3,858 | 1 | transformers | 996 | ---
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
BERT Miniatures
===
This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking).
We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
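The distillation recipe itself is not part of this release, but as a rough sketch, the student can be trained against temperature-scaled teacher logits with a soft cross-entropy; the snippet below illustrates only that loss term, and assumes the teacher, student, and task head are defined elsewhere.
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft cross-entropy between temperature-scaled teacher and student logits."""
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean() * temperature ** 2
```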
Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.
You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below:
| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|
Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
Here are the corresponding GLUE scores on the test set:
|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs:
- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5
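As a starting point for such fine-tuning, a minimal loading sketch for one of the checkpoints (here the 8/512 BERT-Medium variant) might look like the following; the task, number of labels, and training loop are placeholders:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "google/bert_uncased_L-8_H-512_A-8"  # BERT-Medium
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize a toy batch; plug this into your preferred Trainer or training loop
batch = tokenizer(["a small example sentence"], padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch)
print(outputs.logits.shape)  # torch.Size([1, 2])
```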
If you use these models, please cite the following paper:
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
```
[2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
[2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4
[2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8
[2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12
[4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2
[4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
[4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
[4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12
[6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2
[6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4
[6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8
[6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12
[8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2
[8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4
[8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
[8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12
[10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2
[10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4
[10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8
[10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12
[12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2
[12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4
[12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8
[12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
|
Helsinki-NLP/opus-mt-en-da | 9786126ba34f1f86636af779ef13557bd9d1b246 | 2021-09-09T21:34:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"da",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-da | 3,853 | 1 | transformers | 997 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-da
* source languages: en
* target languages: da
* OPUS readme: [en-da](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-da/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-da/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-da/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-da/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.da | 60.4 | 0.745 |
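For reference, a minimal translation sketch with the Hugging Face MarianMT classes might look like this; the input sentence is an arbitrary example:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-da"
tokenizer = MarianTokenizer.from_pretrained(model_name)  # requires the sentencepiece package
model = MarianMTModel.from_pretrained(model_name)

src = ["The weather is nice today."]
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```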
|
wtrClover/DialoGPT-small-TwilightBot | c586ae8e0122f0287427a2087cf67fe5c55bc51b | 2022-01-28T00:29:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | wtrClover | null | wtrClover/DialoGPT-small-TwilightBot | 3,838 | null | transformers | 998 | ---
tags:
- conversational
---
# MLP DialoGPT Model based on Twilight Sparkle |
castorini/doc2query-t5-base-msmarco | 53926b5364548f0c81b39a31723184d3664aaf06 | 2021-11-24T17:57:59.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | castorini | null | castorini/doc2query-t5-base-msmarco | 3,832 | 5 | transformers | 999 | For more information, check [doc2query.ai](http://doc2query.ai) |