Column types: modelId (string, 4 to 112 chars); sha (string, 40 chars); lastModified (string, 24 chars); tags (sequence); pipeline_tag (string, 29 classes); private (bool, 1 class); author (string, 2 to 38 chars, nullable); config (null); id (string, 4 to 112 chars); downloads (float64, 0 to 36.8M, nullable); likes (float64, 0 to 712, nullable); library_name (string, 17 classes); __index_level_0__ (int64, 0 to 38.5k); readme (string, 0 to 186k chars).
modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
mrm8488/RuPERTa-base | 677ce6c9e527ad01d4d7c4cb6a3d88b6feca4009 | 2021-05-20T18:15:46.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"es",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mrm8488 | null | mrm8488/RuPERTa-base | 380 | null | transformers | 2,600 | ---
language: es
thumbnail: https://i.imgur.com/DUlT077.jpg
widget:
- text: "España es un país muy <mask> en la UE"
---
# RuPERTa: the Spanish RoBERTa 🎃<img src="https://abs-0.twimg.com/emoji/v2/svg/1f1ea-1f1f8.svg" alt="spain flag" width="25"/>
RuPERTa-base (uncased) is a [RoBERTa model](https://github.com/pytorch/fairseq/tree/master/examples/roberta) trained on an *uncased* version of a [big Spanish corpus](https://github.com/josecannete/spanish-corpora).
RoBERTa iterates on BERT's pretraining procedure, including training the model longer, with bigger batches over more data; removing the next sentence prediction objective; training on longer sequences; and dynamically changing the masking pattern applied to the training data.
The architecture is the same as `roberta-base`:
`roberta.base:` **RoBERTa** using the **BERT-base architecture** (125M params)
## Benchmarks 🧾
Work in progress (I am still working on it) 🚧
| Task/Dataset | F1 | Precision | Recall | Fine-tuned model | Reproduce it |
| -------- | ----: | --------: | -----: | --------------------------------------------------------------------------------------: | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| POS | 97.39 | 97.47 | 97.32 | [RuPERTa-base-finetuned-pos](https://huggingface.co/mrm8488/RuPERTa-base-finetuned-pos) | [](https://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/RuPERTa_base_finetuned_POS.ipynb)
| NER | 77.55 | 75.53 | 79.68 | [RuPERTa-base-finetuned-ner](https://huggingface.co/mrm8488/RuPERTa-base-finetuned-ner) | |
| SQUAD-es v1 | to-do | | | [RuPERTa-base-finetuned-squadv1](https://huggingface.co/mrm8488/RuPERTa-base-finetuned-squadv1) | |
| SQUAD-es v2 | to-do | | | [RuPERTa-base-finetuned-squadv2](https://huggingface.co/mrm8488/RuPERTa-base-finetuned-squadv2) | |
## Model in action 🔨
### Usage for POS and NER 🏷
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer
id2label = {
"0": "B-LOC",
"1": "B-MISC",
"2": "B-ORG",
"3": "B-PER",
"4": "I-LOC",
"5": "I-MISC",
"6": "I-ORG",
"7": "I-PER",
"8": "O"
}
tokenizer = AutoTokenizer.from_pretrained('mrm8488/RuPERTa-base-finetuned-ner')
model = AutoModelForTokenClassification.from_pretrained('mrm8488/RuPERTa-base-finetuned-ner')
text ="Julien, CEO de HF, nació en Francia."
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
outputs = model(input_ids)
last_hidden_states = outputs[0]
for m in last_hidden_states:
for index, n in enumerate(m):
if(index > 0 and index <= len(text.split(" "))):
print(text.split(" ")[index-1] + ": " + id2label[str(torch.argmax(n).item())])
# Output:
'''
Julien,: I-PER
CEO: O
de: O
HF,: B-ORG
nació: I-PER
en: I-PER
Francia.: I-LOC
'''
```
For **POS** just change the `id2label` dictionary and the model path to [mrm8488/RuPERTa-base-finetuned-pos](https://huggingface.co/mrm8488/RuPERTa-base-finetuned-pos)
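For reference, a minimal sketch of that swap (not part of the original card) is shown below; it reads the POS label names from the checkpoint's own config rather than hard-coding a dictionary, and reuses the same illustrative sentence:
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# assumption: the fine-tuned POS checkpoint ships its id2label mapping in its config
model_path = 'mrm8488/RuPERTa-base-finetuned-pos'
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForTokenClassification.from_pretrained(model_path)

text = "Julien, CEO de HF, nació en Francia."  # illustrative sentence
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
logits = model(input_ids)[0]

# map each predicted class index (including the special <s>/</s> positions) to its tag name
predicted_ids = torch.argmax(logits, dim=2)[0].tolist()
print([model.config.id2label[i] for i in predicted_ids])
```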
### Fast usage for LM with `pipelines` 🧪
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
model = AutoModelWithLMHead.from_pretrained('mrm8488/RuPERTa-base')
tokenizer = AutoTokenizer.from_pretrained("mrm8488/RuPERTa-base", do_lower_case=True)
from transformers import pipeline
pipeline_fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
pipeline_fill_mask("España es un país muy <mask> en la UE")
```
```json
[
{
"score": 0.1814306527376175,
"sequence": "<s> españa es un país muy importante en la ue</s>",
"token": 1560
},
{
"score": 0.024842597544193268,
"sequence": "<s> españa es un país muy fuerte en la ue</s>",
"token": 2854
},
{
"score": 0.02473250962793827,
"sequence": "<s> españa es un país muy pequeño en la ue</s>",
"token": 2948
},
{
"score": 0.023991240188479424,
"sequence": "<s> españa es un país muy antiguo en la ue</s>",
"token": 5240
},
{
"score": 0.0215945765376091,
"sequence": "<s> españa es un país muy popular en la ue</s>",
"token": 5782
}
]
```
## Acknowledgments
I thank [🤗/transformers team](https://github.com/huggingface/transformers) for answering my doubts and Google for helping me with the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) program.
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
alisawuffles/roberta-large-wanli | b435dba21967eb5aef9744b84c61b5b103123ee0 | 2022-06-07T21:02:23.000Z | [
"pytorch",
"roberta",
"text-classification",
"arxiv:2201.05955",
"transformers"
] | text-classification | false | alisawuffles | null | alisawuffles/roberta-large-wanli | 380 | 2 | transformers | 2,601 | ---
widget:
- text: "I almost forgot to eat lunch.</s></s>I didn't forget to eat lunch."
- text: "I almost forgot to eat lunch.</s></s>I forgot to eat lunch."
- text: "I ate lunch.</s></s>I almost forgot to eat lunch."
---
This is an off-the-shelf roberta-large model finetuned on WANLI, the Worker-AI Collaborative NLI dataset ([Liu et al., 2022](https://arxiv.org/abs/2201.05955)). It outperforms the `roberta-large-mnli` model on seven out-of-domain test sets, including by 11% on HANS and 9% on Adversarial NLI.
### How to use
```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

model = RobertaForSequenceClassification.from_pretrained('alisawuffles/roberta-large-wanli')
tokenizer = RobertaTokenizer.from_pretrained('alisawuffles/roberta-large-wanli')

# encode the premise and the hypothesis as a single sentence pair
x = tokenizer("I almost forgot to eat lunch.", "I didn't forget to eat lunch.", return_tensors='pt', max_length=128, truncation=True)
logits = model(**x).logits
probs = logits.softmax(dim=1).squeeze(0)
label_id = torch.argmax(probs).item()
prediction = model.config.id2label[label_id]
```
### Citation
```
@misc{liu-etal-2022-wanli,
title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
author = "Liu, Alisa and
Swayamdipta, Swabha and
Smith, Noah A. and
Choi, Yejin",
month = jan,
year = "2022",
url = "https://arxiv.org/pdf/2201.05955",
}
``` |
Deniskin/gpt3_medium | e338af13162e52e22f5a89a9986e5f7f3f66fbec | 2021-05-21T09:41:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Deniskin | null | Deniskin/gpt3_medium | 379 | null | transformers | 2,602 | Entry not found |
Stevo/DiagloGPT-medium-spamton | 3667391bd5125de5c048b3934bb5fb86d514cc7c | 2021-11-17T17:42:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Stevo | null | Stevo/DiagloGPT-medium-spamton | 379 | 1 | transformers | 2,603 | ---
tags:
- conversational
---
# Deltarune Spamton DialoGPT Model |
anton-l/wav2vec2-base-lang-id | 1d4eda836bb7b7c53053393b65ddfbe1811e4d10 | 2021-10-01T12:36:49.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:common_language",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | anton-l | null | anton-l/wav2vec2-base-lang-id | 379 | 3 | transformers | 2,604 | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- common_language
metrics:
- accuracy
model-index:
- name: wav2vec2-base-lang-id
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-lang-id
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the anton-l/common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9836
- Accuracy: 0.7945
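As a quick, hedged usage sketch (not part of the auto-generated card), the checkpoint can be queried through the standard `audio-classification` pipeline; the file name below is a placeholder:
```python
from transformers import pipeline

# "speech_sample.wav" is a hypothetical local file; wav2vec2 models expect 16 kHz mono audio
classifier = pipeline("audio-classification", model="anton-l/wav2vec2-base-lang-id")
predictions = classifier("speech_sample.wav", top_k=5)

# each entry is a dict with the predicted language label and its confidence score
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```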
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 4
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.9568 | 1.0 | 173 | 3.2866 | 0.1146 |
| 1.9243 | 2.0 | 346 | 2.1241 | 0.3840 |
| 1.2923 | 3.0 | 519 | 1.5498 | 0.5489 |
| 0.8659 | 4.0 | 692 | 1.4953 | 0.6126 |
| 0.5539 | 5.0 | 865 | 1.2431 | 0.6926 |
| 0.4101 | 6.0 | 1038 | 1.1443 | 0.7232 |
| 0.2945 | 7.0 | 1211 | 1.0870 | 0.7544 |
| 0.1552 | 8.0 | 1384 | 1.1080 | 0.7661 |
| 0.0968 | 9.0 | 1557 | 0.9836 | 0.7945 |
| 0.0623 | 10.0 | 1730 | 1.0252 | 0.7993 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
bioformers/bioformer-cased-v1.0 | 18d58ede143f4a89cc7a512e55dad960e4d3ad9f | 2021-11-11T22:13:21.000Z | [
"pytorch",
"tf",
"bert",
"fill-mask",
"en",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | bioformers | null | bioformers/bioformer-cased-v1.0 | 379 | 3 | transformers | 2,605 | ---
language:
- en
license: apache-2.0
---
Bioformer is a lightweight BERT model for biomedical text mining. Bioformer uses a biomedical vocabulary and is pre-trained from scratch only on biomedical domain corpora. Our experiments show that Bioformer is 3x as fast as BERT-base, and achieves comparable or even better performance than BioBERT/PubMedBERT on downstream NLP tasks.
Bioformer has 8 layers (transformer blocks) with a hidden embedding size of 512, and the number of self-attention heads is 8. Its total number of parameters is 42,820,610.
## Vocabulary of Bioformer
Bioformer uses a cased WordPiece vocabulary trained from a biomedical corpus, which included all PubMed abstracts (33 million, as of Feb 1, 2021) and 1 million PMC full-text articles. PMC has 3.6 million articles but we down-sampled them to 1 million such that the total size of PubMed abstracts and PMC full-text articles are approximately equal. To mitigate the out-of-vocabulary issue and include special symbols (e.g. male and female symbols) in biomedical literature, we trained Bioformer’s vocabulary from the Unicode text of the two resources. The vocabulary size of Bioformer is 32768 (2^15), which is similar to that of the original BERT.
## Pre-training of Bioformer
Bioformer was pre-trained from scratch on the same corpus as the vocabulary (33 million PubMed abstracts + 1 million PMC full-text articles). For the masked language modeling (MLM) objective, we used whole-word masking with a masking rate of 15%. There are debates on whether the next sentence prediction (NSP) objective could improve the performance on downstream tasks. We include it in our pre-training experiment in case the prediction of the next sentence is needed by end-users. Sentence segmentation of all training text was performed using [SciSpacy](https://allenai.github.io/scispacy/).
Pre-training of Bioformer was performed on a single Cloud TPU device (TPUv2, 8 cores, 8GB memory per core). The maximum input sequence length was fixed to 512, and the batch size was set to 256. We pre-trained Bioformer for 2 million steps, which took about 8.3 days.
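As a small usage sketch (ours, not from the original card; the sentence is purely illustrative), the checkpoint can be exercised with the standard fill-mask pipeline:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bioformers/bioformer-cased-v1.0")

# build the prompt around the tokenizer's own mask token instead of hard-coding it
masked = f"Aspirin is commonly used to reduce {unmasker.tokenizer.mask_token}."
for prediction in unmasker(masked):
    print(prediction["token_str"], round(prediction["score"], 4))
```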
## Awards
Bioformer achieved top performance (highest micro-F1 score) in the BioCreative VII COVID-19 multi-label topic classification challenge (https://biocreative.bioinformatics.udel.edu/media/store/files/2021/TRACK5_pos_1_BC7_submission_221.pdf).
## Acknowledgment
Bioformer is partly supported by the Google TPU Research Cloud (TRC) program.
## Questions
If you have any questions, please submit an issue here: https://github.com/WGLab/bioformer/issues
You can also contact Li Fang by email. |
textattack/bert-base-uncased-STS-B | 8a1d8f1cc0523b7af746daf104ddd0c1ce5911c1 | 2021-05-20T07:38:28.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/bert-base-uncased-STS-B | 379 | null | transformers | 2,606 | Entry not found |
sijunhe/nezha-cn-base | bc68bfa5072f5c06055a2b59ba4858e9eea5183f | 2022-06-24T03:53:56.000Z | [
"pytorch",
"nezha",
"fill-mask",
"arxiv:1909.00204",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | fill-mask | false | sijunhe | null | sijunhe/nezha-cn-base | 379 | 2 | transformers | 2,607 | ---
license: afl-3.0
---
**Please use 'Bert' related tokenizer classes and 'Nezha' related model classes**
[NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204)
Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
The original checkpoints can be found [here](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-PyTorch)
## Example Usage
```python
from transformers import BertTokenizer, NezhaModel
tokenizer = BertTokenizer.from_pretrained('sijunhe/nezha-cn-base')
model = NezhaModel.from_pretrained("sijunhe/nezha-cn-base")
text = "我爱北京天安门"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
``` |
cambridgeltl/mirror-bert-base-uncased-sentence-drophead | a61b9b9024f427b7a562209fac022e85986e9f00 | 2021-09-19T22:47:41.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.08027",
"transformers"
] | feature-extraction | false | cambridgeltl | null | cambridgeltl/mirror-bert-base-uncased-sentence-drophead | 378 | null | transformers | 2,608 | ---
language: en
tags:
- sentence-embeddings
- sentence-similarity
---
### cambridgeltl/mirror-bert-base-uncased-sentence-drophead
An unsupervised sentence encoder proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2104.08027.pdf), using [drophead](https://aclanthology.org/2020.findings-emnlp.178.pdf) instead of dropout as feature space augmentation. Trained with unlabelled raw sentences, using [bert-base-uncased](https://huggingface.co/bert-base-uncased) as the base model. Please use mean-pooling over *all tokens* as the representation of the input.
Note that the model does not exactly replicate the numbers in the paper, since the reported numbers there are the average of three runs.
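A minimal sketch of that mean-pooling recipe (our own illustration; the two sentences are invented) could look as follows:
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "cambridgeltl/mirror-bert-base-uncased-sentence-drophead"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentences = ["A cat sits on the mat.", "A kitten is resting on a rug."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # (batch, seq_len, hidden)

# mean-pool over all non-padding tokens, as the card recommends
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

similarity = torch.nn.functional.cosine_similarity(
    sentence_embeddings[0], sentence_embeddings[1], dim=0
)
print(similarity.item())
```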
### Citation
```bibtex
@inproceedings{
liu2021fast,
title={Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders},
author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel},
booktitle={EMNLP 2021},
year={2021}
}
```
|
cointegrated/rubert-base-cased-dp-paraphrase-detection | 2288d9e23758fc028af565680d451d21f7693390 | 2022-06-29T12:54:01.000Z | [
"pytorch",
"bert",
"text-classification",
"ru",
"dataset:merionum/ru_paraphraser",
"transformers",
"sentence-similarity"
] | text-classification | false | cointegrated | null | cointegrated/rubert-base-cased-dp-paraphrase-detection | 378 | null | transformers | 2,609 | ---
language: ["ru"]
tags:
- sentence-similarity
- text-classification
datasets:
- merionum/ru_paraphraser
---
This is a version of the paraphrase detector by DeepPavlov ([details in the documentation](http://docs.deeppavlov.ai/en/master/features/overview.html#ranking-model-docs)) ported to the `Transformers` format.
All credit goes to the authors of DeepPavlov.
The model has been trained on the dataset from http://paraphraser.ru/.
It classifies texts as paraphrases (class 1) or non-paraphrases (class 0).
```python
import torch
from transformers import AutoModelForSequenceClassification, BertTokenizer
model_name = 'cointegrated/rubert-base-cased-dp-paraphrase-detection'
model = AutoModelForSequenceClassification.from_pretrained(model_name).cuda()
tokenizer = BertTokenizer.from_pretrained(model_name)
def compare_texts(text1, text2):
batch = tokenizer(text1, text2, return_tensors='pt').to(model.device)
with torch.inference_mode():
proba = torch.softmax(model(**batch).logits, -1).cpu().numpy()
return proba[0] # p(non-paraphrase), p(paraphrase)
print(compare_texts('Сегодня на улице хорошая погода', 'Сегодня на улице отвратительная погода'))
# [0.7056226 0.2943774]
print(compare_texts('Сегодня на улице хорошая погода', 'Отличная погодка сегодня выдалась'))
# [0.16524374 0.8347562 ]
```
P.S. In the DeepPavlov repository, the tokenizer uses `max_seq_length=64`.
This model, however, uses `model_max_length=512`.
Therefore, the results on long texts may be inadequate. |
Sheerwin02/DialoGPT-small-isla | 0b6dff78f1bfbb1cc4d50c6d099420b57a4968da | 2022-02-28T06:03:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Sheerwin02 | null | Sheerwin02/DialoGPT-small-isla | 378 | null | transformers | 2,610 | ---
tags:
- conversational
---
# isla DialoGPT Model
|
scales-okn/docket-language-model | 1339be44c4185b118318131a7fbca1d65f12c4ac | 2022-06-04T15:09:27.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | scales-okn | null | scales-okn/docket-language-model | 378 | null | transformers | 2,611 | ---
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-large-ddlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-ddlm
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5241
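For orientation only (our own hedged sketch, not part of the generated card; the docket-style sentence is invented), the checkpoint can be queried as a masked language model:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="scales-okn/docket-language-model")

# use the tokenizer's mask token so the prompt matches whatever the model expects
prompt = f"The court granted the {fill_mask.tokenizer.mask_token} to dismiss."
for prediction in fill_mask(prompt):
    print(prediction["token_str"], round(prediction["score"], 4))
```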
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.9823 | 0.01 | 1000 | 0.9163 |
| 0.8817 | 0.02 | 2000 | 0.9022 |
| 0.9647 | 0.03 | 3000 | 0.8879 |
| 0.8646 | 0.04 | 4000 | 0.8577 |
| 0.9159 | 0.06 | 5000 | 0.8677 |
| 0.8449 | 0.07 | 6000 | 0.8221 |
| 0.8681 | 0.08 | 7000 | 0.8332 |
| 0.8738 | 0.09 | 8000 | 0.8334 |
| 0.8638 | 0.1 | 9000 | 0.8236 |
| 0.9066 | 0.11 | 10000 | 0.8200 |
| 0.8686 | 0.12 | 11000 | 0.8092 |
| 0.7736 | 0.13 | 12000 | 0.8199 |
| 0.8054 | 0.14 | 13000 | 0.7972 |
| 0.8934 | 0.16 | 14000 | 0.7998 |
| 0.7884 | 0.17 | 15000 | 0.7895 |
| 0.8278 | 0.18 | 16000 | 0.7586 |
| 0.8482 | 0.19 | 17000 | 0.7562 |
| 0.8716 | 0.2 | 18000 | 0.7819 |
| 0.8881 | 0.21 | 19000 | 0.7878 |
| 0.8397 | 0.22 | 20000 | 0.7989 |
| 0.811 | 0.23 | 21000 | 0.7846 |
| 0.7762 | 0.24 | 22000 | 0.7753 |
| 0.7778 | 0.25 | 23000 | 0.7878 |
| 0.737 | 0.27 | 24000 | 0.7473 |
| 0.8451 | 0.28 | 25000 | 0.7460 |
| 0.823 | 0.29 | 26000 | 0.7300 |
| 0.7472 | 0.3 | 27000 | 0.7292 |
| 0.8048 | 0.31 | 28000 | 0.7697 |
| 0.7962 | 0.32 | 29000 | 0.7359 |
| 0.8048 | 0.33 | 30000 | 0.7409 |
| 0.8095 | 0.34 | 31000 | 0.7434 |
| 0.7451 | 0.35 | 32000 | 0.7534 |
| 0.6997 | 0.37 | 33000 | 0.7602 |
| 0.8116 | 0.38 | 34000 | 0.7566 |
| 0.7963 | 0.39 | 35000 | 0.7245 |
| 0.786 | 0.4 | 36000 | 0.7311 |
| 0.7991 | 0.41 | 37000 | 0.7230 |
| 0.723 | 0.42 | 38000 | 0.7209 |
| 0.789 | 0.43 | 39000 | 0.7418 |
| 0.7296 | 0.44 | 40000 | 0.7325 |
| 0.7363 | 0.45 | 41000 | 0.7134 |
| 0.758 | 0.47 | 42000 | 0.6948 |
| 0.711 | 0.48 | 43000 | 0.6992 |
| 0.7984 | 0.49 | 44000 | 0.7055 |
| 0.8402 | 0.5 | 45000 | 0.7108 |
| 0.8553 | 0.51 | 46000 | 0.7005 |
| 0.7538 | 0.52 | 47000 | 0.7208 |
| 0.7169 | 0.53 | 48000 | 0.7291 |
| 0.7345 | 0.54 | 49000 | 0.7195 |
| 0.758 | 0.55 | 50000 | 0.6694 |
| 0.7868 | 0.56 | 51000 | 0.6938 |
| 0.6966 | 0.58 | 52000 | 0.6867 |
| 0.7389 | 0.59 | 53000 | 0.6862 |
| 0.7529 | 0.6 | 54000 | 0.7175 |
| 0.7345 | 0.61 | 55000 | 0.6970 |
| 0.766 | 0.62 | 56000 | 0.7017 |
| 0.7043 | 0.63 | 57000 | 0.6916 |
| 0.6474 | 0.64 | 58000 | 0.7129 |
| 0.7456 | 0.65 | 59000 | 0.6802 |
| 0.7512 | 0.66 | 60000 | 0.6951 |
| 0.6816 | 0.68 | 61000 | 0.7072 |
| 0.7206 | 0.69 | 62000 | 0.6967 |
| 0.6439 | 0.7 | 63000 | 0.6798 |
| 0.7309 | 0.71 | 64000 | 0.7163 |
| 0.6925 | 0.72 | 65000 | 0.6794 |
| 0.6833 | 0.73 | 66000 | 0.6637 |
| 0.6643 | 0.74 | 67000 | 0.6855 |
| 0.6433 | 0.75 | 68000 | 0.7035 |
| 0.7595 | 0.76 | 69000 | 0.7008 |
| 0.7214 | 0.78 | 70000 | 0.6618 |
| 0.7111 | 0.79 | 71000 | 0.6850 |
| 0.7375 | 0.8 | 72000 | 0.6909 |
| 0.6779 | 0.81 | 73000 | 0.7042 |
| 0.6646 | 0.82 | 74000 | 0.6634 |
| 0.6616 | 0.83 | 75000 | 0.7020 |
| 0.6762 | 0.84 | 76000 | 0.6638 |
| 0.7509 | 0.85 | 77000 | 0.6541 |
| 0.6963 | 0.86 | 78000 | 0.6781 |
| 0.6949 | 0.87 | 79000 | 0.6576 |
| 0.6781 | 0.89 | 80000 | 0.6900 |
| 0.65 | 0.9 | 81000 | 0.6835 |
| 0.7205 | 0.91 | 82000 | 0.6712 |
| 0.6901 | 0.92 | 83000 | 0.6699 |
| 0.6972 | 0.93 | 84000 | 0.6456 |
| 0.7041 | 0.94 | 85000 | 0.6497 |
| 0.6864 | 0.95 | 86000 | 0.6432 |
| 0.7308 | 0.96 | 87000 | 0.6497 |
| 0.6886 | 0.97 | 88000 | 0.6674 |
| 0.6947 | 0.99 | 89000 | 0.6638 |
| 0.6567 | 1.0 | 90000 | 0.6242 |
| 0.7185 | 1.01 | 91000 | 0.6704 |
| 0.7435 | 1.02 | 92000 | 0.6681 |
| 0.7108 | 1.03 | 93000 | 0.6619 |
| 0.6942 | 1.04 | 94000 | 0.6306 |
| 0.6998 | 1.05 | 95000 | 0.6409 |
| 0.6481 | 1.06 | 96000 | 0.6476 |
| 0.727 | 1.07 | 97000 | 0.6354 |
| 0.647 | 1.09 | 98000 | 0.6222 |
| 0.6622 | 1.1 | 99000 | 0.6119 |
| 0.6346 | 1.11 | 100000 | 0.6471 |
| 0.6203 | 1.12 | 101000 | 0.6655 |
| 0.6765 | 1.13 | 102000 | 0.6473 |
| 0.6703 | 1.14 | 103000 | 0.6308 |
| 0.6793 | 1.15 | 104000 | 0.6531 |
| 0.683 | 1.16 | 105000 | 0.6693 |
| 0.6654 | 1.17 | 106000 | 0.6241 |
| 0.6626 | 1.18 | 107000 | 0.6215 |
| 0.6976 | 1.2 | 108000 | 0.6479 |
| 0.7494 | 1.21 | 109000 | 0.6345 |
| 0.691 | 1.22 | 110000 | 0.6322 |
| 0.6568 | 1.23 | 111000 | 0.6265 |
| 0.705 | 1.24 | 112000 | 0.6281 |
| 0.6307 | 1.25 | 113000 | 0.6202 |
| 0.6828 | 1.26 | 114000 | 0.6158 |
| 0.6403 | 1.27 | 115000 | 0.6495 |
| 0.6615 | 1.28 | 116000 | 0.6298 |
| 0.6237 | 1.3 | 117000 | 0.6234 |
| 0.6707 | 1.31 | 118000 | 0.6267 |
| 0.6823 | 1.32 | 119000 | 0.6299 |
| 0.6333 | 1.33 | 120000 | 0.6169 |
| 0.685 | 1.34 | 121000 | 0.6371 |
| 0.6941 | 1.35 | 122000 | 0.6245 |
| 0.6358 | 1.36 | 123000 | 0.6291 |
| 0.6754 | 1.37 | 124000 | 0.6400 |
| 0.6286 | 1.38 | 125000 | 0.6148 |
| 0.7036 | 1.4 | 126000 | 0.6033 |
| 0.645 | 1.41 | 127000 | 0.6295 |
| 0.6578 | 1.42 | 128000 | 0.6348 |
| 0.651 | 1.43 | 129000 | 0.6222 |
| 0.5558 | 1.44 | 130000 | 0.6231 |
| 0.6601 | 1.45 | 131000 | 0.6302 |
| 0.6304 | 1.46 | 132000 | 0.6127 |
| 0.6177 | 1.47 | 133000 | 0.6047 |
| 0.5933 | 1.48 | 134000 | 0.6169 |
| 0.6307 | 1.49 | 135000 | 0.6012 |
| 0.6018 | 1.51 | 136000 | 0.5900 |
| 0.6724 | 1.52 | 137000 | 0.6086 |
| 0.6367 | 1.53 | 138000 | 0.6414 |
| 0.6515 | 1.54 | 139000 | 0.6267 |
| 0.5902 | 1.55 | 140000 | 0.5913 |
| 0.6523 | 1.56 | 141000 | 0.5992 |
| 0.6005 | 1.57 | 142000 | 0.6128 |
| 0.6179 | 1.58 | 143000 | 0.6089 |
| 0.6154 | 1.59 | 144000 | 0.6353 |
| 0.6298 | 1.61 | 145000 | 0.5997 |
| 0.5623 | 1.62 | 146000 | 0.5974 |
| 0.5787 | 1.63 | 147000 | 0.6165 |
| 0.6099 | 1.64 | 148000 | 0.6246 |
| 0.658 | 1.65 | 149000 | 0.6116 |
| 0.6567 | 1.66 | 150000 | 0.5938 |
| 0.6227 | 1.67 | 151000 | 0.5948 |
| 0.5858 | 1.68 | 152000 | 0.5822 |
| 0.6227 | 1.69 | 153000 | 0.5802 |
| 0.6699 | 1.71 | 154000 | 0.6067 |
| 0.5989 | 1.72 | 155000 | 0.6073 |
| 0.6184 | 1.73 | 156000 | 0.6124 |
| 0.6404 | 1.74 | 157000 | 0.6169 |
| 0.639 | 1.75 | 158000 | 0.5997 |
| 0.6433 | 1.76 | 159000 | 0.5989 |
| 0.5574 | 1.77 | 160000 | 0.5796 |
| 0.5983 | 1.78 | 161000 | 0.6036 |
| 0.6532 | 1.79 | 162000 | 0.5888 |
| 0.6679 | 1.8 | 163000 | 0.6038 |
| 0.62 | 1.82 | 164000 | 0.5984 |
| 0.5541 | 1.83 | 165000 | 0.6003 |
| 0.6192 | 1.84 | 166000 | 0.5786 |
| 0.6613 | 1.85 | 167000 | 0.6064 |
| 0.5923 | 1.86 | 168000 | 0.6018 |
| 0.5894 | 1.87 | 169000 | 0.5912 |
| 0.6462 | 1.88 | 170000 | 0.5902 |
| 0.5811 | 1.89 | 171000 | 0.6030 |
| 0.6358 | 1.9 | 172000 | 0.5915 |
| 0.614 | 1.92 | 173000 | 0.5886 |
| 0.5969 | 1.93 | 174000 | 0.6084 |
| 0.6146 | 1.94 | 175000 | 0.6003 |
| 0.6051 | 1.95 | 176000 | 0.5835 |
| 0.6268 | 1.96 | 177000 | 0.5999 |
| 0.6436 | 1.97 | 178000 | 0.5965 |
| 0.6167 | 1.98 | 179000 | 0.5789 |
| 0.5647 | 1.99 | 180000 | 0.5669 |
| 0.6038 | 2.0 | 181000 | 0.6009 |
| 0.6082 | 2.02 | 182000 | 0.5799 |
| 0.6483 | 2.03 | 183000 | 0.5716 |
| 0.5503 | 2.04 | 184000 | 0.5806 |
| 0.6231 | 2.05 | 185000 | 0.5699 |
| 0.5892 | 2.06 | 186000 | 0.5979 |
| 0.5933 | 2.07 | 187000 | 0.5709 |
| 0.594 | 2.08 | 188000 | 0.5719 |
| 0.5838 | 2.09 | 189000 | 0.5879 |
| 0.6039 | 2.1 | 190000 | 0.5984 |
| 0.5911 | 2.11 | 191000 | 0.5953 |
| 0.563 | 2.13 | 192000 | 0.5772 |
| 0.5671 | 2.14 | 193000 | 0.5771 |
| 0.6051 | 2.15 | 194000 | 0.5972 |
| 0.5852 | 2.16 | 195000 | 0.5917 |
| 0.5757 | 2.17 | 196000 | 0.5819 |
| 0.6557 | 2.18 | 197000 | 0.5655 |
| 0.6055 | 2.19 | 198000 | 0.5820 |
| 0.6067 | 2.2 | 199000 | 0.5801 |
| 0.6422 | 2.21 | 200000 | 0.5590 |
| 0.624 | 2.23 | 201000 | 0.5573 |
| 0.6222 | 2.24 | 202000 | 0.5661 |
| 0.5597 | 2.25 | 203000 | 0.5786 |
| 0.5746 | 2.26 | 204000 | 0.5622 |
| 0.6269 | 2.27 | 205000 | 0.5804 |
| 0.6241 | 2.28 | 206000 | 0.5696 |
| 0.6519 | 2.29 | 207000 | 0.5367 |
| 0.6161 | 2.3 | 208000 | 0.5666 |
| 0.5415 | 2.31 | 209000 | 0.5633 |
| 0.633 | 2.33 | 210000 | 0.5499 |
| 0.5566 | 2.34 | 211000 | 0.5822 |
| 0.6158 | 2.35 | 212000 | 0.5826 |
| 0.5574 | 2.36 | 213000 | 0.5429 |
| 0.5748 | 2.37 | 214000 | 0.5736 |
| 0.5818 | 2.38 | 215000 | 0.5599 |
| 0.6226 | 2.39 | 216000 | 0.5407 |
| 0.5733 | 2.4 | 217000 | 0.5759 |
| 0.6268 | 2.41 | 218000 | 0.5725 |
| 0.5885 | 2.42 | 219000 | 0.5771 |
| 0.5708 | 2.44 | 220000 | 0.5654 |
| 0.5783 | 2.45 | 221000 | 0.5756 |
| 0.61 | 2.46 | 222000 | 0.5647 |
| 0.5848 | 2.47 | 223000 | 0.5532 |
| 0.5869 | 2.48 | 224000 | 0.5519 |
| 0.5717 | 2.49 | 225000 | 0.5621 |
| 0.5675 | 2.5 | 226000 | 0.5446 |
| 0.6321 | 2.51 | 227000 | 0.5812 |
| 0.568 | 2.52 | 228000 | 0.5673 |
| 0.5577 | 2.54 | 229000 | 0.5590 |
| 0.5888 | 2.55 | 230000 | 0.5628 |
| 0.6389 | 2.56 | 231000 | 0.5828 |
| 0.5782 | 2.57 | 232000 | 0.5543 |
| 0.5871 | 2.58 | 233000 | 0.5575 |
| 0.5593 | 2.59 | 234000 | 0.5625 |
| 0.6167 | 2.6 | 235000 | 0.5450 |
| 0.5828 | 2.61 | 236000 | 0.5627 |
| 0.5411 | 2.62 | 237000 | 0.5498 |
| 0.6168 | 2.64 | 238000 | 0.5891 |
| 0.6508 | 2.65 | 239000 | 0.5811 |
| 0.6322 | 2.66 | 240000 | 0.5649 |
| 0.6131 | 2.67 | 241000 | 0.5473 |
| 0.5419 | 2.68 | 242000 | 0.5583 |
| 0.5685 | 2.69 | 243000 | 0.5635 |
| 0.5267 | 2.7 | 244000 | 0.5481 |
| 0.5357 | 2.71 | 245000 | 0.5474 |
| 0.585 | 2.72 | 246000 | 0.5281 |
| 0.5894 | 2.73 | 247000 | 0.5457 |
| 0.5665 | 2.75 | 248000 | 0.5579 |
| 0.5409 | 2.76 | 249000 | 0.5412 |
| 0.6051 | 2.77 | 250000 | 0.5447 |
| 0.5866 | 2.78 | 251000 | 0.5535 |
| 0.5348 | 2.79 | 252000 | 0.5377 |
| 0.5606 | 2.8 | 253000 | 0.5524 |
| 0.5142 | 2.81 | 254000 | 0.5441 |
| 0.543 | 2.82 | 255000 | 0.5499 |
| 0.5763 | 2.83 | 256000 | 0.5241 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.11.0
|
flair/upos-multi-fast | 4610445ba7d097da7e0ff23c3a782e9c474382c8 | 2021-03-02T22:22:55.000Z | [
"pytorch",
"en",
"de",
"fr",
"it",
"nl",
"pl",
"es",
"sv",
"da",
"no",
"fi",
"cs",
"dataset:ontonotes",
"flair",
"token-classification",
"sequence-tagger-model"
] | token-classification | false | flair | null | flair/upos-multi-fast | 377 | 4 | flair | 2,612 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language:
- en
- de
- fr
- it
- nl
- pl
- es
- sv
- da
- no
- fi
- cs
datasets:
- ontonotes
widget:
- text: "Ich liebe Berlin, as they say."
---
## Multilingual Universal Part-of-Speech Tagging in Flair (fast model)
This is the fast multilingual universal part-of-speech tagging model that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **92,88** (12 UD Treebanks covering English, German, French, Italian, Dutch, Polish, Spanish, Swedish, Danish, Norwegian, Finnish and Czech)
Predicts universal POS tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
|ADJ | adjective |
| ADP | adposition |
| ADV | adverb |
| AUX | auxiliary |
| CCONJ | coordinating conjunction |
| DET | determiner |
| INTJ | interjection |
| NOUN | noun |
| NUM | numeral |
| PART | particle |
| PRON | pronoun |
| PROPN | proper noun |
| PUNCT | punctuation |
| SCONJ | subordinating conjunction |
| SYM | symbol |
| VERB | verb |
| X | other |
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/upos-multi-fast")
# make example sentence
sentence = Sentence("Ich liebe Berlin, as they say. ")
# predict POS tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted POS spans
print('The following POS tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('pos'):
print(entity)
```
This yields the following output:
```
Span [1]: "Ich" [− Labels: PRON (0.9999)]
Span [2]: "liebe" [− Labels: VERB (0.9999)]
Span [3]: "Berlin" [− Labels: PROPN (0.9997)]
Span [4]: "," [− Labels: PUNCT (1.0)]
Span [5]: "as" [− Labels: SCONJ (0.9991)]
Span [6]: "they" [− Labels: PRON (0.9998)]
Span [7]: "say" [− Labels: VERB (0.9998)]
Span [8]: "." [− Labels: PUNCT (1.0)]
```
So, the words "*Ich*" and "*they*" are labeled as **pronouns** (PRON), while "*liebe*" and "*say*" are labeled as **verbs** (VERB) in the multilingual sentence "*Ich liebe Berlin, as they say*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import MultiCorpus
from flair.datasets import UD_ENGLISH, UD_GERMAN, UD_FRENCH, UD_ITALIAN, UD_POLISH, UD_DUTCH, UD_CZECH, \
UD_DANISH, UD_SPANISH, UD_SWEDISH, UD_NORWEGIAN, UD_FINNISH
from flair.embeddings import StackedEmbeddings, FlairEmbeddings
# 1. make a multi corpus consisting of 12 UD treebanks (in_memory=False here because this corpus becomes large)
corpus = MultiCorpus([
UD_ENGLISH(in_memory=False),
UD_GERMAN(in_memory=False),
UD_DUTCH(in_memory=False),
UD_FRENCH(in_memory=False),
UD_ITALIAN(in_memory=False),
UD_SPANISH(in_memory=False),
UD_POLISH(in_memory=False),
UD_CZECH(in_memory=False),
UD_DANISH(in_memory=False),
UD_SWEDISH(in_memory=False),
UD_NORWEGIAN(in_memory=False),
UD_FINNISH(in_memory=False),
])
# 2. what tag do we want to predict?
tag_type = 'upos'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
# contextual string embeddings, forward
FlairEmbeddings('multi-forward-fast'),
# contextual string embeddings, backward
FlairEmbeddings('multi-backward-fast'),
]
# embedding stack consists of the two Flair embeddings defined above
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type,
use_crf=False)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/upos-multi-fast',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
uclanlp/visualbert-nlvr2-coco-pre | 155352ba56549d808a84e5c1f891300bf24f019b | 2021-05-31T11:11:50.000Z | [
"pytorch",
"visual_bert",
"pretraining",
"transformers"
] | null | false | uclanlp | null | uclanlp/visualbert-nlvr2-coco-pre | 377 | null | transformers | 2,613 | Entry not found |
kakife3586/bad | 19f022d577b7ff8b69f87a3c89aa066405398985 | 2022-07-22T19:16:26.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | kakife3586 | null | kakife3586/bad | 377 | 1 | transformers | 2,614 | Entry not found |
KoboldAI/GPT-Neo-1.3B-Adventure | 803261b84356c383860f1140667bc49c7cfff2dc | 2022-03-22T09:49:48.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers",
"license:mit"
] | text-generation | false | KoboldAI | null | KoboldAI/GPT-Neo-1.3B-Adventure | 376 | 1 | transformers | 2,615 | ---
language: en
license: mit
pipeline_tag: text-generation
---
# GPT-Neo 1.3B - Adventure
## Model Description
GPT-Neo 1.3B-Adventure is a finetune created using EleutherAI's GPT-Neo 1.3B model.
## Training data
The training data is a direct copy of the "cys" dataset by VE, a CYOA-based dataset.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-1.3B-Adventure')
>>> generator("> You wake up.", do_sample=True, min_length=50)
[{'generated_text': '> You wake up"\nYou get out of bed, don your armor and get out of the door in search for new adventures.'}]
```
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### BibTeX entry and citation info
The model is made using the following software:
```bibtex
@software{gpt-neo,
author = {Black, Sid and
Leo, Gao and
Wang, Phil and
Leahy, Connor and
Biderman, Stella},
title = {{GPT-Neo: Large Scale Autoregressive Language
Modeling with Mesh-Tensorflow}},
month = mar,
year = 2021,
note = {{If you use this software, please cite it using
these metadata.}},
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.5297715},
url = {https://doi.org/10.5281/zenodo.5297715}
}
``` |
Adapting/dialogue_agent_nlplab2022 | 51060a1364717812d951374c778573eae185f053 | 2022-06-30T07:46:33.000Z | [
"pytorch",
"blenderbot",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Adapting | null | Adapting/dialogue_agent_nlplab2022 | 376 | null | transformers | 2,616 | Dataset trained on: https://huggingface.co/datasets/Adapting/empathetic_dialogues_with_special_tokens
Commit hashes of the model versions:
1. blenderbot-400M-distill - 10 epochs fine-tuning: **b86f62986872b4c1a9921acdb8cd226761d736cf**
2. blenderbot-400M-distill - 20 epochs fine-tuning: **e803a10542ea7e4f116e89aca0f7250fb71a8a04**
3. blenderbot-400M-distill - 30 epochs fine-tuning: **4e9e1331124134dc879adcbad6c0cad06d55ef1e** |
cahya/distilbert-base-indonesian | 9e948656420019310c5334dfcc3dd086c67d405a | 2021-02-08T09:06:09.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"id",
"dataset:wikipedia",
"dataset:id_newspapers_2018",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | cahya | null | cahya/distilbert-base-indonesian | 375 | 1 | transformers | 2,617 | ---
language: "id"
license: "mit"
datasets:
- wikipedia
- id_newspapers_2018
widget:
- text: "ayahku sedang bekerja di sawah untuk [MASK] padi."
---
# Indonesian DistilBERT base model (uncased)
## Model description
This model is a distilled version of the [Indonesian BERT base model](https://huggingface.co/cahya/bert-base-indonesian-1.5G).
This model is uncased.
This is one of several other language models that have been pre-trained with indonesian datasets. More detail about
its usage on downstream tasks (text classification, text generation, etc) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers)
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='cahya/distilbert-base-indonesian')
>>> unmasker("Ayahku sedang bekerja di sawah untuk [MASK] padi")
[
{
"sequence": "[CLS] ayahku sedang bekerja di sawah untuk menanam padi [SEP]",
"score": 0.6853187084197998,
"token": 12712,
"token_str": "menanam"
},
{
"sequence": "[CLS] ayahku sedang bekerja di sawah untuk bertani padi [SEP]",
"score": 0.03739545866847038,
"token": 15484,
"token_str": "bertani"
},
{
"sequence": "[CLS] ayahku sedang bekerja di sawah untuk memetik padi [SEP]",
"score": 0.02742469497025013,
"token": 30338,
"token_str": "memetik"
},
{
"sequence": "[CLS] ayahku sedang bekerja di sawah untuk penggilingan padi [SEP]",
"score": 0.02214187942445278,
"token": 28252,
"token_str": "penggilingan"
},
{
"sequence": "[CLS] ayahku sedang bekerja di sawah untuk tanam padi [SEP]",
"score": 0.0185895636677742,
"token": 11308,
"token_str": "tanam"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import DistilBertTokenizer, DistilBertModel
model_name='cahya/distilbert-base-indonesian'
tokenizer = DistilBertTokenizer.from_pretrained(model_name)
model = DistilBertModel.from_pretrained(model_name)
text = "Silakan diganti dengan text apa saja."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in Tensorflow:
```python
from transformers import DistilBertTokenizer, TFDistilBertModel
model_name='cahya/distilbert-base-indonesian'
tokenizer = DistilBertTokenizer.from_pretrained(model_name)
model = TFDistilBertModel.from_pretrained(model_name)
text = "Silakan diganti dengan text apa saja."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
This model was distiled with 522MB of indonesian Wikipedia and 1GB of
[indonesian newspapers](https://huggingface.co/datasets/id_newspapers_2018).
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are
then of the form:
```[CLS] Sentence A [SEP] Sentence B [SEP]```
|
clagator/biobert_squad2_cased | e7545839557ddb772e1d4df26c5fd26de41592a2 | 2021-05-19T14:22:23.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | clagator | null | clagator/biobert_squad2_cased | 375 | null | transformers | 2,618 | Entry not found |
smmzhu/DialoGPT-medium-sam | 3c41a977ed304546e1185275414a4ab2225562a1 | 2022-07-11T06:58:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | smmzhu | null | smmzhu/DialoGPT-medium-sam | 375 | null | transformers | 2,619 | ---
tags:
- conversational
---
# My Awesome Model |
Doquey/DialoGPT-small-Luisbot1 | 6da56bf264c4fb5f49e835aed78342293bc7da40 | 2021-09-01T23:20:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Doquey | null | Doquey/DialoGPT-small-Luisbot1 | 373 | null | transformers | 2,620 | ---
tags:
- conversational
---
# Rick DialoGPT model |
Geotrend/bert-base-en-zh-cased | eebc9a2d0689a2073d0b0dd401e5d0decfa2864c | 2021-05-18T19:52:44.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-zh-cased | 373 | null | transformers | 2,621 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-zh-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-zh-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-zh-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
valurank/distilroberta-clickbait | 294a3fc0c737db43110a157136edb0e7728c28ac | 2022-06-08T20:24:26.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-classification | false | valurank | null | valurank/distilroberta-clickbait | 373 | null | transformers | 2,622 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilroberta-clickbait
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-clickbait
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on a dataset of headlines.
It achieves the following results on the evaluation set:
- Loss: 0.0268
- Acc: 0.9963
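As an illustrative sketch (ours, not from the original card; the headline is invented and the label names come from the checkpoint's own config, which this card does not document):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="valurank/distilroberta-clickbait")

# the returned label string depends on the checkpoint's config (it may be e.g. LABEL_0 / LABEL_1)
result = classifier("You won't believe what this celebrity did next!")
print(result)
```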
## Training and evaluation data
The following data sources were used:
* 32k headlines classified as clickbait/not-clickbait from [kaggle](https://www.kaggle.com/amananandrai/clickbait-dataset)
* A dataset of headlines from https://github.com/MotiBaadror/Clickbait-Detection
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0195 | 1.0 | 981 | 0.0192 | 0.9954 |
| 0.0026 | 2.0 | 1962 | 0.0172 | 0.9963 |
| 0.0031 | 3.0 | 2943 | 0.0275 | 0.9945 |
| 0.0003 | 4.0 | 3924 | 0.0268 | 0.9963 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Xuhui/ToxDect-roberta-large | 7b97c89938cb241d3ae9235257bbe4916d4f0c75 | 2021-11-07T16:35:03.000Z | [
"pytorch",
"roberta",
"text-classification",
"arxiv:2102.00086",
"transformers"
] | text-classification | false | Xuhui | null | Xuhui/ToxDect-roberta-large | 372 | 2 | transformers | 2,623 | ---
---
# Toxic language detection
## Model description
A toxic language detection model trained on tweets, with Roberta-large as the base model. For more information,
including the **training data** and the **limitations and bias**, please refer to the [paper](https://arxiv.org/pdf/2102.00086.pdf) and the
Github [repo](https://github.com/XuhuiZhou/Toxic_Debias).
#### How to use
Note that LABEL_1 means toxic and LABEL_0 means non-toxic in the output.
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='Xuhui/ToxDect-roberta-large', return_all_scores=True)
prediction = classifier("You are f**king stupid!", )
print(prediction)
"""
Output:
[[{'label': 'LABEL_0', 'score': 0.002632011892274022}, {'label': 'LABEL_1', 'score': 0.9973680377006531}]]
"""
```
## Training procedure
The random seed for this model is 22. For other details, please refer to the Github [repo](https://github.com/XuhuiZhou/Toxic_Debias).
### BibTeX entry and citation info
```bibtex
@inproceedings{zhou-etal-2020-debiasing,
title = {Challenges in Automated Debiasing for Toxic Language Detection},
author = {Zhou, Xuhui and Sap, Maarten and Swayamdipta, Swabha and Choi, Yejin and Smith, Noah A.},
booktitle = {EACL},
abbr = {EACL},
html = {https://www.aclweb.org/anthology/2021.eacl-main.274.pdf},
code = {https://github.com/XuhuiZhou/Toxic_Debias},
year = {2021},
bibtex_show = {true},
selected = {true}
}
``` |
asahi417/tner-xlm-roberta-base-uncased-ontonotes5 | 580551e1a8cf979711f3956848e4479345bd045f | 2021-02-13T00:08:01.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | asahi417 | null | asahi417/tner-xlm-roberta-base-uncased-ontonotes5 | 372 | 1 | transformers | 2,624 | # XLM-RoBERTa for NER
XLM-RoBERTa finetuned on NER. Check more detail at [TNER repository](https://github.com/asahi417/tner).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-ontonotes5")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-ontonotes5")
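# Illustrative extension (not part of the original card): wrap the loaded model and
# tokenizer in a token-classification pipeline and tag a sample sentence.
from transformers import pipeline
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Jeff Dean works at Google in California."))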
``` |
cahya/bert-base-indonesian-1.5G | a4400ab68607dea3f7f1522f9fed74909980bd77 | 2021-05-19T13:37:31.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"id",
"dataset:wikipedia",
"dataset:id_newspapers_2018",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | cahya | null | cahya/bert-base-indonesian-1.5G | 372 | 0 | transformers | 2,625 | ---
language: "id"
license: "mit"
datasets:
- wikipedia
- id_newspapers_2018
widget:
- text: "Ibu ku sedang bekerja [MASK] sawah."
---
# Indonesian BERT base model (uncased)
## Model description
It is BERT-base model pre-trained with indonesian Wikipedia and indonesian newspapers using a masked language modeling (MLM) objective. This
model is uncased.
This is one of several other language models that have been pre-trained with indonesian datasets. More detail about
its usage on downstream tasks (text classification, text generation, etc) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers)
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='cahya/bert-base-indonesian-1.5G')
>>> unmasker("Ibu ku sedang bekerja [MASK] supermarket")
[{'sequence': '[CLS] ibu ku sedang bekerja di supermarket [SEP]',
'score': 0.7983310222625732,
'token': 1495},
{'sequence': '[CLS] ibu ku sedang bekerja. supermarket [SEP]',
'score': 0.090003103017807,
'token': 17},
{'sequence': '[CLS] ibu ku sedang bekerja sebagai supermarket [SEP]',
'score': 0.025469014421105385,
'token': 1600},
{'sequence': '[CLS] ibu ku sedang bekerja dengan supermarket [SEP]',
'score': 0.017966199666261673,
'token': 1555},
{'sequence': '[CLS] ibu ku sedang bekerja untuk supermarket [SEP]',
'score': 0.016971781849861145,
'token': 1572}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
model_name='cahya/bert-base-indonesian-1.5G'
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertModel.from_pretrained(model_name)
text = "Silakan diganti dengan text apa saja."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in Tensorflow:
```python
from transformers import BertTokenizer, TFBertModel
model_name='cahya/bert-base-indonesian-1.5G'
tokenizer = BertTokenizer.from_pretrained(model_name)
model = TFBertModel.from_pretrained(model_name)
text = "Silakan diganti dengan text apa saja."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
This model was pre-trained with 522MB of indonesian Wikipedia and 1GB of
[indonesian newspapers](https://huggingface.co/datasets/id_newspapers_2018).
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are
then of the form:
```[CLS] Sentence A [SEP] Sentence B [SEP]``` |
Maxwere/DiabloGPT-medium-maxbot | 618d143972c3cfb64458586ab6a26fa62de8e5e3 | 2022-03-03T00:41:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Maxwere | null | Maxwere/DiabloGPT-medium-maxbot | 372 | null | transformers | 2,626 | ---
tags:
- conversational
---
# Max DiabloGPT Model |
Artem1/t5_squad_v1 | 8a6a2db32c91a1c1fa48c05b1740b22cb9fcdbd6 | 2022-07-12T11:25:18.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Artem1 | null | Artem1/t5_squad_v1 | 372 | null | transformers | 2,627 | Entry not found |
ChrisVCB/DialoGPT-medium-cmjs | 705a3520a4cb682316d11b238d89be3135da4f34 | 2021-12-10T20:02:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ChrisVCB | null | ChrisVCB/DialoGPT-medium-cmjs | 371 | null | transformers | 2,628 | ---
tags:
- conversational
---
# CMJS DialoGPT Model |
SajjadAyoubi/bert-base-fa-qa | 610d047db3fcef30d0811d44bef629c69beba299 | 2021-05-18T22:30:21.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SajjadAyoubi | null | SajjadAyoubi/bert-base-fa-qa | 371 | 5 | transformers | 2,629 | ### How to use
#### Requirements
This model requires the `transformers` and `sentencepiece` packages, both of which can be
installed using `pip`.
```sh
pip install transformers sentencepiece
```
#### Pipelines 🚀
In case you are not familiar with Transformers, you can use pipelines instead.
Note that pipelines cannot return _no answer_ for a question.
```python
from transformers import pipeline
model_name = "SajjadAyoubi/bert-base-fa-qa"
qa_pipeline = pipeline("question-answering", model=model_name, tokenizer=model_name)
text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"]
for question in questions:
print(qa_pipeline({"context": text, "question": question}))
>>> {'score': 0.4839823544025421, 'start': 8, 'end': 18, 'answer': 'سجاد ایوبی'}
>>> {'score': 0.3747948706150055, 'start': 24, 'end': 32, 'answer': '۲۰ سالمه'}
>>> {'score': 0.5945395827293396, 'start': 38, 'end': 55, 'answer': 'پردازش زبان طبیعی'}
```
#### Manual approach 🔥
Using the manual approach, it is possible to return _no answer_, with even better
performance.
- PyTorch
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from src.utils import AnswerPredictor
model_name = "SajjadAyoubi/bert-base-fa-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"]
# this class is from src/utils.py and you can read more about it
predictor = AnswerPredictor(model, tokenizer, device="cpu", n_best=10)
preds = predictor(questions, [text] * 3, batch_size=3)
for k, v in preds.items():
print(v)
```
Produces an output such below:
```
100%|██████████| 1/1 [00:00<00:00, 3.56it/s]
{'score': 8.040637016296387, 'text': 'سجاد ایوبی'}
{'score': 9.901972770690918, 'text': '۲۰'}
{'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'}
```
- TensorFlow 2.X
```python
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering
from src.utils import TFAnswerPredictor
model_name = "SajjadAyoubi/bert-base-fa-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_name)
text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"]
# this class is from src/utils.py, you can read more about it
predictor = TFAnswerPredictor(model, tokenizer, n_best=10)
preds = predictor(questions, [text] * 3, batch_size=3)
for k, v in preds.items():
print(v)
```
Produces an output such below:
```text
100%|██████████| 1/1 [00:00<00:00, 3.56it/s]
{'score': 8.040637016296387, 'text': 'سجاد ایوبی'}
{'score': 9.901972770690918, 'text': '۲۰'}
{'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'}
```
Or you can access the whole demonstration using [HowToUse iPython Notebook on Google Colab](https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/HowToUse.ipynb)
|
castorini/ance-msmarco-doc-firstp | 5b60aa5a8612e3db04390a0e91fda0d3154b0a8e | 2021-05-20T15:17:20.000Z | [
"pytorch",
"roberta",
"arxiv:2007.00808",
"transformers"
] | null | false | castorini | null | castorini/ance-msmarco-doc-firstp | 371 | null | transformers | 2,630 | This model is converted from the original ANCE [repo](https://github.com/microsoft/ANCE) and fitted into Pyserini:
> Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. [Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval](https://arxiv.org/pdf/2007.00808.pdf)
For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-ance.md)
|
elgeish/cs224n-squad2.0-albert-base-v2 | 95a8cdd3beb799d79384b8b50f310edefeb97492 | 2020-12-11T21:38:54.000Z | [
"pytorch",
"albert",
"question-answering",
"arxiv:2004.07067",
"transformers",
"exbert",
"autotrain_compatible"
] | question-answering | false | elgeish | null | elgeish/cs224n-squad2.0-albert-base-v2 | 371 | null | transformers | 2,631 | ---
tags:
- exbert
---
## CS224n SQuAD2.0 Project Dataset
The goal of this model is to save CS224n students GPU time when establishing
baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf).
The training set used to fine-tune this model is the same as
the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however,
evaluation and model selection were performed using roughly half of the official
dev set, 6078 examples, picked at random. The data files can be found at
<https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020
version. Given that the official SQuAD2.0 dev set contains the project's test
set, students must make sure not to use the official SQuAD2.0 dev set in any way
— including the use of models fine-tuned on the official SQuAD2.0, since they
used the official SQuAD2.0 dev set for model selection.
<a href="https://huggingface.co/exbert/?model=elgeish/cs224n-squad2.0-albert-base-v2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
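For a quick baseline run, a minimal usage sketch with the `transformers` question-answering pipeline is shown below; the question/context pair is made up for illustration.
```python
# Hedged usage sketch: extractive QA with this fine-tuned checkpoint.
# The question/context pair below is illustrative only.
from transformers import pipeline

qa = pipeline("question-answering", model="elgeish/cs224n-squad2.0-albert-base-v2")

result = qa(
    question="What is the goal of this model?",
    context=(
        "The goal of this model is to save CS224n students GPU time when "
        "establishing baselines to beat for the default final project."
    ),
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```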
## Results
```json
{
"exact": 78.94044093451794,
"f1": 81.7724930324639,
"total": 6078,
"HasAns_exact": 76.28865979381443,
"HasAns_f1": 82.20385314478195,
"HasAns_total": 2910,
"NoAns_exact": 81.37626262626263,
"NoAns_f1": 81.37626262626263,
"NoAns_total": 3168,
"best_exact": 78.95689371503784,
"best_exact_thresh": 0.0,
"best_f1": 81.78894581298378,
"best_f1_thresh": 0.0
}
```
## Notable Arguments
```json
{
"do_lower_case": true,
"doc_stride": 128,
"fp16": false,
"fp16_opt_level": "O1",
"gradient_accumulation_steps": 24,
"learning_rate": 3e-05,
"max_answer_length": 30,
"max_grad_norm": 1,
"max_query_length": 64,
"max_seq_length": 384,
"model_name_or_path": "albert-base-v2",
"model_type": "albert",
"num_train_epochs": 3,
"per_gpu_train_batch_size": 8,
"save_steps": 5000,
"seed": 42,
"train_batch_size": 8,
"version_2_with_negative": true,
"warmup_steps": 0,
"weight_decay": 0
}
```
## Environment Setup
```json
{
"transformers": "2.5.1",
"pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0",
"python": "3.6.5=hc3d631a_2",
"os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux",
"gpu": "Tesla V100-SXM2-16GB"
}
```
## How to Cite
```BibTeX
@misc{elgeish2020gestalt,
title={Gestalt: a Stacking Ensemble for SQuAD2.0},
author={Mohamed El-Geish},
journal={arXiv e-prints},
archivePrefix={arXiv},
eprint={2004.07067},
year={2020},
}
```
## Related Models
* [elgeish/cs224n-squad2.0-albert-large-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-large-v2)
* [elgeish/cs224n-squad2.0-albert-xxlarge-v1](https://huggingface.co/elgeish/cs224n-squad2.0-albert-xxlarge-v1)
* [elgeish/cs224n-squad2.0-distilbert-base-uncased](https://huggingface.co/elgeish/cs224n-squad2.0-distilbert-base-uncased)
* [elgeish/cs224n-squad2.0-roberta-base](https://huggingface.co/elgeish/cs224n-squad2.0-roberta-base)
|
sentence-transformers/facebook-dpr-ctx_encoder-single-nq-base | 3320b11eacb143b6e6e6d71b727224dbd8f8b65a | 2021-08-05T08:22:43.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/facebook-dpr-ctx_encoder-single-nq-base | 371 | null | sentence-transformers | 2,632 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/facebook-dpr-ctx_encoder-single-nq-base
This is a port of the [DPR Model](https://github.com/facebookresearch/DPR) to [sentence-transformers](https://www.SBERT.net): it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/facebook-dpr-ctx_encoder-single-nq-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/facebook-dpr-ctx_encoder-single-nq-base')
model = AutoModel.from_pretrained('sentence-transformers/facebook-dpr-ctx_encoder-single-nq-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, CLS pooling (take the embedding of the [CLS] token).
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/facebook-dpr-ctx_encoder-single-nq-base)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 509, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Have a look at: [DPR Model](https://github.com/facebookresearch/DPR) |
ShreyaR/finetuned-roberta-depression | a34232531498f0975e3c67ce0ce02ebd9488945c | 2022-05-20T04:38:42.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | ShreyaR | null | ShreyaR/finetuned-roberta-depression | 371 | 5 | transformers | 2,633 | ---
license: mit
tags:
- generated_from_trainer
widget:
- text: "I feel so low and numb, don't feel like doing anything. Just passing my days"
- text: "Sleep is my greatest and most comforting escape whenever I wake up these days. The literal very first emotion I feel is just misery and reminding myself of all my problems."
- text: "I went to a movie today. It was below my expectations but the day was fine."
- text: "The first day of work was a little hectic but met pretty good colleagues, we went for a team dinner party at the end of the day."
metrics:
- accuracy
model-index:
- name: finetuned-roberta-depression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-roberta-depression
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1385
- Accuracy: 0.9745
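A minimal usage sketch with the `transformers` text-classification pipeline is shown below; the mapping of the raw output labels to classes is not documented in this card, so verify it against the widget examples above.
```python
# Hedged usage sketch: score a sentence with the fine-tuned classifier.
# Assumption: the raw labels (e.g. LABEL_0 / LABEL_1) map to the two classes;
# verify the mapping against the widget examples above.
from transformers import pipeline

classifier = pipeline("text-classification", model="ShreyaR/finetuned-roberta-depression")

print(classifier("I went to a movie today. It was below my expectations but the day was fine."))
```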
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0238 | 1.0 | 625 | 0.1385 | 0.9745 |
| 0.0333 | 2.0 | 1250 | 0.1385 | 0.9745 |
| 0.0263 | 3.0 | 1875 | 0.1385 | 0.9745 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
dbmdz/bert-base-cased-finetuned-conll03-english | 4a108fba55732fd6570a15f7475ba13e83c6b8c5 | 2021-05-19T14:43:37.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | dbmdz | null | dbmdz/bert-base-cased-finetuned-conll03-english | 369 | null | transformers | 2,634 | Entry not found |
Starry/KARENTRIES | a256558b696d33b1aa2ed352ed15aad2b0c53116 | 2022-03-10T17:59:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Starry | null | Starry/KARENTRIES | 369 | null | transformers | 2,635 | ---
tags:
- conversational
---
# DialoGPT model |
KoboldAI/OPT-6B-nerys-v2 | 9e1f1498391df2c28ce35a9290a5a24b8022a43b | 2022-07-04T07:45:47.000Z | [
"pytorch",
"opt",
"text-generation",
"en",
"arxiv:2205.01068",
"transformers",
"license:other"
] | text-generation | false | KoboldAI | null | KoboldAI/OPT-6B-nerys-v2 | 369 | 2 | transformers | 2,636 | ---
language: en
license: other
commercial: no
---
# OPT 6B - Nerys
## Model Description
OPT 6B-Nerys is a finetune created using Facebook's OPT model.
## Training data
The training data contains around 2500 ebooks in various genres (the "Pike" dataset), a CYOA dataset called "CYS" and 50 Asian "Light Novels" (the "Manga-v1" dataset).
Most parts of the dataset have been prepended using the following text: `[Genre: <genre1>, <genre2>]`
This dataset has been cleaned in the same way as fairseq-dense-13B-Nerys-v2
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/OPT-6B-Nerys-v2')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
```
### Limitations and Biases
Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).
### License
OPT-6B is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
### BibTeX entry and citation info
```
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
csebuetnlp/mT5_m2o_russian_crossSum | 6eedefa5d23a200078f75cb69430fa663562ff36 | 2022-04-22T15:05:47.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"am",
"ar",
"az",
"bn",
"my",
"zh",
"en",
"fr",
"gu",
"ha",
"hi",
"ig",
"id",
"ja",
"rn",
"ko",
"ky",
"mr",
"ne",
"om",
"ps",
"fa",
"pcm",
"pt",
"pa",
"ru",
"gd",
"sr",
"si",
"so",
"es",
"sw",
"ta",
"te",
"th",
"ti",
"tr",
"uk",
"ur",
"uz",
"vi",
"cy",
"yo",
"arxiv:2112.08804",
"transformers",
"summarization",
"mT5",
"autotrain_compatible"
] | summarization | false | csebuetnlp | null | csebuetnlp/mT5_m2o_russian_crossSum | 368 | null | transformers | 2,637 | ---
tags:
- summarization
- mT5
language:
- am
- ar
- az
- bn
- my
- zh
- en
- fr
- gu
- ha
- hi
- ig
- id
- ja
- rn
- ko
- ky
- mr
- ne
- om
- ps
- fa
- pcm
- pt
- pa
- ru
- gd
- sr
- si
- so
- es
- sw
- ta
- te
- th
- ti
- tr
- uk
- ur
- uz
- vi
- cy
- yo
licenses:
- cc-by-nc-sa-4.0
widget:
- text: "Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs \"spill over into misinformation about vaccines in general\". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. \"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO,\" the post said, referring to the World Health Organization."
---
# mT5-m2o-russian-CrossSum
This repository contains the many-to-one (m2o) mT5 checkpoint finetuned on all cross-lingual pairs of the [CrossSum](https://huggingface.co/datasets/csebuetnlp/CrossSum) dataset, where the target summary was in **russian**, i.e. this model tries to **summarize text written in any language in Russian.** For finetuning details and scripts, see the [paper](https://arxiv.org/abs/2112.08804) and the [official repository](https://github.com/csebuetnlp/CrossSum).
## Using this model in `transformers` (tested on 4.11.0.dev0)
```python
import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
WHITESPACE_HANDLER = lambda k: re.sub(r'\s+', ' ', re.sub(r'\n+', ' ', k.strip()))
article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization."""
model_name = "csebuetnlp/mT5_m2o_russian_crossSum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
input_ids = tokenizer(
[WHITESPACE_HANDLER(article_text)],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
max_length=84,
no_repeat_ngram_size=2,
num_beams=4
)[0]
summary = tokenizer.decode(
output_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)
print(summary)
```
## Citation
If you use this model, please cite the following paper:
```
@article{hasan2021crosssum,
author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar},
title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs},
journal = {CoRR},
volume = {abs/2112.08804},
year = {2021},
url = {https://arxiv.org/abs/2112.08804},
eprinttype = {arXiv},
eprint = {2112.08804}
}
``` |
google/ddpm-celebahq-256 | cd5c944777ea2668051904ead6cc120739b86c4d | 2022-07-21T15:00:31.000Z | [
"diffusers",
"arxiv:2006.11239",
"pytorch",
"unconditional-image-generation",
"license:apache-2.0"
] | unconditional-image-generation | false | google | null | google/ddpm-celebahq-256 | 368 | 2 | diffusers | 2,638 | ---
license: apache-2.0
tags:
- pytorch
- diffusers
- unconditional-image-generation
---
# Denoising Diffusion Probabilistic Models (DDPM)
**Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
**Authors**: Jonathan Ho, Ajay Jain, Pieter Abbeel
**Abstract**:
*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*
## Inference
**DDPM** models can use *discrete noise schedulers* such as:
- [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py)
- [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py)
- [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py)
for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest.
For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead.
See the following code:
```python
# !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = "google/ddpm-celebahq-256"
# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference
# run pipeline in inference (sample random noise and denoise)
image = ddpm()["sample"]
# save image
image[0].save("ddpm_generated_image.png")
```
For more in-detail information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)
## Training
If you want to train your own model, please have a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
## Samples
1. 
2. 
3. 
4.  |
Lauler/deformer | 8c196932a1fee0f57293b4a9ad0e0e49fc469145 | 2021-12-29T07:21:08.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Lauler | null | Lauler/deformer | 367 | null | transformers | 2,639 | ---
widget:
- text: "dem har sökt upp de för att prata."
example_title: "de/dem exempel 1"
- text: "Jag såg de komma runt hörnet och gå i riktning mot dem byggnaderna."
example_title: "de/dem exempel 2"
---
## DeFormer
DeFormer is a model trained to distinguish between `de` and `dem` in Swedish sentences. The model can be tested directly in the panels on the right under **Hosted Inference API** by typing in a sentence and pressing **Compute**.
**Instructions (IMPORTANT):**
Only use de/dem in lowercase when testing. When training the model, every "De" and "Dem" was converted to lowercase. End the sentence with a punctuation mark (period, question mark, etc.) for the best possible results.
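For programmatic use, a minimal sketch with the `transformers` token-classification pipeline is shown below; the exact label strings returned (e.g. `DE`, `DEM`, `ord`) come from the checkpoint's config, so inspect the raw output on a first run.
```python
# Hedged usage sketch: run the checkpoint as a token classifier on a Swedish
# sentence. Assumption: the label strings (e.g. "DE", "DEM", "ord") come from
# the checkpoint's config, so inspect the raw output on a first run.
from transformers import pipeline

deformer = pipeline("token-classification", model="Lauler/deformer")

sentence = "dem har sökt upp de för att prata."
for token in deformer(sentence):
    print(token["word"], token["entity"], round(token["score"], 3))
```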
## Training data
DeFormer was trained on sentences from the European Parliament and Swedish-language Wikimedia. These were retrieved from [OPUS](https://opus.nlpl.eu/). The sources were chosen because they were assumed to use correct language.
Only sentences containing `de` or `dem` -- or both -- were kept when constructing the training dataset. The table below provides descriptive statistics on the number of sentences kept from each dataset, along with the frequency of `de/dem`.
| Data source | Sentences | # De | # Dem | De/Dem ratio |
| ----------- | ----------- | ------- | ------- | ------------ |
| [Europaparl sv.txt.gz](https://opus.nlpl.eu/download.php?f=Europarl/v8/mono/sv.txt.gz) | 500660 | 465977 | 54331 | 8.57x |
| [JRC-Acquis raw.sv.gz](https://opus.nlpl.eu/download.php?f=JRC-Acquis/mono/JRC-Acquis.raw.sv.gz) | 417951 | 408576 | 17028 | 23.99x |
| [Wikimedia sv.txt.gz](https://opus.nlpl.eu/download.php?f=wikimedia/v20210402/mono/sv.txt.gz) | 630601 | 602393 | 38852 | 15.48x |
| **Total** | **1549212** | **1476946** | **110211** | **13.40x** |
During the training of DeFormer, random substitutions were introduced in which `de` or `dem` was swapped for the opposite form. The model was then challenged to classify whether a given word belonged to one of the following categories:
1. **`ord`** (all background words that are not de/dem belong to this category)
2. **`DE`**
3. **`DEM`**
Before the observations were passed to model training, `de` was swapped for `dem` with 47 percent probability, while `dem` was swapped for `de` in 40 percent of cases.
## Accuracy
DeFormer was evaluated on a validation set of 31,200 sentences drawn from the same data sources (Swedish Wikipedia + European Parliament + JRC) that the model was trained on. Random errors were introduced to challenge the model: 47 percent of the occurrences of `de` in the original sentences were changed to `dem`, while 40 percent of the occurrences of `dem` were changed to `de`. The table below shows that DeFormer is very accurate. The few "incorrect" predictions the model outputs are almost all `de/dem som` constructions with subordinate clauses. The majority of these should not really be regarded as errors, since [both forms are accepted](https://www4.isof.se/cgi-bin/srfl/visasvar.py?sok=dem%20som&svar=79718&log_id=705355).
| | Accuracy |
| ----------- | ----------- |
| de | 99.9\% |
| dem | 98.6\% | |
Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit | 62a5cc04518c1339a2c88bdaa63f1dedaa61146a | 2022-06-19T06:34:12.000Z | [
"pytorch",
"gptj",
"feature-extraction",
"arxiv:2202.08904",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | Muennighoff | null | Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit | 367 | 2 | sentence-transformers | 2,640 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# SGPT-5.8B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
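Since the checkpoint is packaged as a sentence-transformers model (see the architecture section below), a minimal encoding sketch looks like the following. Note that the `specb` variant wraps queries and documents in special bracket tokens for asymmetric search, which the official codebase handles, so treat this plain symmetric encoding as an approximation.
```python
# Hedged sketch: the checkpoint is packaged as a sentence-transformers model
# (see the architecture section below). Caveats: the 5.8B parameters require a
# lot of memory, and the "specb" variant wraps queries/documents in special
# bracket tokens for asymmetric search (handled in the official codebase), so
# this plain symmetric encoding is only an approximation.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit")
embeddings = model.encode(
    ["How do dense retrievers work?", "Dense retrievers map text to vectors."]
)
print(embeddings.shape)
```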
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 249592 with parameters:
```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTJModel
(1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
kornosk/bert-election2020-twitter-stance-biden-KE-MLM | b8913cc4d8d2e857667f20908c0c028bd3d1183d | 2022-05-02T22:58:37.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"transformers",
"twitter",
"stance-detection",
"election2020",
"politics",
"license:gpl-3.0"
] | text-classification | false | kornosk | null | kornosk/bert-election2020-twitter-stance-biden-KE-MLM | 367 | 1 | transformers | 2,641 | ---
language: "en"
tags:
- twitter
- stance-detection
- election2020
- politics
license: "gpl-3.0"
---
# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (KE-MLM)
Pre-trained weights for the **KE-MLM model** in [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election, and then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Joe Biden.
# Training Objective
This model is initialized with BERT-base and trained with the normal MLM objective, with a classification layer fine-tuned for stance detection towards Joe Biden.
# Usage
This pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden.
Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np
# choose GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# select model path here
pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-biden-KE-MLM"
# load model
tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path)
id2label = {
0: "AGAINST",
1: "FAVOR",
2: "NONE"
}
##### Prediction Neutral #####
sentence = "Hello World."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Favor #####
sentence = "Go Go Biden!!!"
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Against #####
sentence = "Biden is the worst."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
# please consider citing our paper if you feel this is useful :)
```
# Reference
- [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Citation
```bibtex
@inproceedings{kawintiranon2021knowledge,
title={Knowledge Enhanced Masked Language Model for Stance Detection},
author={Kawintiranon, Kornraphop and Singh, Lisa},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year={2021},
publisher={Association for Computational Linguistics},
url={https://www.aclweb.org/anthology/2021.naacl-main.376}
}
``` |
roberta-base-openai-detector | 2de46c869ee117c00af7b2e9e4cba743c2cbc778 | 2022-07-22T08:00:35.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1904.09751",
"arxiv:1910.09700",
"transformers",
"exbert",
"license:mit"
] | text-classification | false | null | null | roberta-base-openai-detector | 366 | 3 | transformers | 2,642 | ---
language: en
license: mit
tags:
- exbert
datasets:
- bookcorpus
- wikipedia
---
# RoBERTa Base OpenAI Detector
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-author)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:** RoBERTa base OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa base model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the [largest GPT-2 model](https://huggingface.co/gpt2-xl), the 1.5B parameter version.
- **Developed by:** OpenAI, see [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector) and [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for full author list
- **Model Type:** Fine-tuned transformer-based language model
- **Language(s):** English
- **License:** MIT
- **Related Models:** [RoBERTa base](https://huggingface.co/roberta-base), [GPT-XL (1.5B parameter version)](https://huggingface.co/gpt2-xl), [GPT-Large (the 774M parameter version)](https://huggingface.co/gpt2-large), [GPT-Medium (the 355M parameter version)](https://huggingface.co/gpt2-medium) and [GPT-2 (the 124M parameter version)](https://huggingface.co/gpt2)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) (see, in particular, the section beginning on page 12 about Automated ML-based detection).
- [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector)
- [OpenAI Blog Post](https://openai.com/blog/gpt-2-1-5b-release/)
  - [Explore the detector model here](https://huggingface.co/openai-detector)
## Uses
#### Direct Use
The model is a classifier that can be used to detect text generated by GPT-2 models.
#### Downstream Use
The model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further discussion.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
In their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research.
In a related [blog post](https://openai.com/blog/gpt-2-1-5b-release/), the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write:
> We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective.
The model developers also [report](https://openai.com/blog/gpt-2-1-5b-release/) finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness.
#### Bias
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by RoBERTa base and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the [RoBERTa base](https://huggingface.co/roberta-base) and [GPT-2 XL](https://huggingface.co/gpt2-xl) model cards for more information). The developers of this model discuss these issues further in their [paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).
## Training
#### Training Data
The model is a sequence classifier based on RoBERTa base (see the [RoBERTa base model card](https://huggingface.co/roberta-base) for more details on the RoBERTa base training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available [here](https://github.com/openai/gpt-2-output-dataset)).
#### Training Procedure
The model developers write that:
> We based a sequence classifier on RoBERTaBASE (125 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model.
They later state:
> To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance.
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the training procedure.
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).
#### Testing Data, Factors and Metrics
The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by:
> testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training.
#### Results
The model developers [find](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf):
> Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling ([Holtzman et al., 2019](https://arxiv.org/abs/1904.09751)). Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling.
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), Figure 1 (on page 14) and Figure 2 (on page 16) for full results.
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the modeling architecture and training procedure.
## Citation Information
```bibtex
@article{solaiman2019release,
title={Release strategies and the social impacts of language models},
author={Solaiman, Irene and Brundage, Miles and Clark, Jack and Askell, Amanda and Herbert-Voss, Ariel and Wu, Jeff and Radford, Alec and Krueger, Gretchen and Kim, Jong Wook and Kreps, Sarah and others},
journal={arXiv preprint arXiv:1908.09203},
year={2019}
}
```
APA:
- Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.
## Model Card Authors
This model card was written by the team at Hugging Face.
## How to Get Started with the Model
More information needed.
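Until more official guidance is added, the hedged sketch below shows one way to query the detector with the `transformers` text-classification pipeline; the label strings returned come from the checkpoint's config, so check them on a first run.
```python
# Hedged sketch: score a passage with the detector as a binary text classifier.
# Assumption: the label strings returned come from the checkpoint's config,
# so check them on a first run.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")
print(detector("This passage may or may not have been written by a GPT-2 model."))
```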
|
ivanlau/language-detection-fine-tuned-on-xlm-roberta-base | 4207aaa00b26aea91d99cb1abc2d7f56814fbe05 | 2021-12-17T10:33:13.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:common_language",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | ivanlau | null | ivanlau/language-detection-fine-tuned-on-xlm-roberta-base | 366 | 1 | transformers | 2,643 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- common_language
metrics:
- accuracy
model-index:
- name: language-detection-fine-tuned-on-xlm-roberta-base
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: common_language
type: common_language
args: full
metrics:
- name: Accuracy
type: accuracy
value: 0.9738386718094919
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# language-detection-fine-tuned-on-xlm-roberta-base
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [common_language](https://huggingface.co/datasets/common_language) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1886
- Accuracy: 0.9738
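A minimal usage sketch for language identification with the `transformers` pipeline is shown below; the label strings come from the checkpoint's config (they should correspond to the `common_language` class names, but verify on a first run).
```python
# Hedged usage sketch: predict the language of a sentence.
# Assumption: the label strings correspond to the common_language class names
# stored in the checkpoint's config; verify on a first run.
from transformers import pipeline

lang_id = pipeline(
    "text-classification",
    model="ivanlau/language-detection-fine-tuned-on-xlm-roberta-base",
)
print(lang_id("Bonjour, comment allez-vous ?"))
```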
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1 | 1.0 | 22194 | 0.1886 | 0.9738 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
### Notebook
[notebook](https://github.com/IvanLauLinTiong/language-detector/blob/main/xlm_roberta_base_commonlanguage_language_detector.ipynb) |
valurank/finetuned-distilbert-news-article-categorization | 3046ba5378260ac2ed8b7c49265fd1f4e9e68d97 | 2022-07-03T21:23:38.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-classification | false | valurank | null | valurank/finetuned-distilbert-news-article-categorization | 366 | null | transformers | 2,644 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: finetuned-distilbert-news-article-categorization
results: []
---
### finetuned-distilbert-news-article-categorization
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the news_article_categorization dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1548
- F1_score(weighted): 0.96
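A minimal usage sketch with the `transformers` text-classification pipeline is shown below; the category label names are defined in the checkpoint's config and are not listed in this card, so inspect the output on a first run.
```python
# Hedged usage sketch: categorize a news snippet.
# Assumption: the category label names are defined in the checkpoint's config
# (they are not listed in this card), so inspect the output on a first run.
from transformers import pipeline

categorizer = pipeline(
    "text-classification",
    model="valurank/finetuned-distilbert-news-article-categorization",
)
print(categorizer("The central bank raised interest rates by 50 basis points on Tuesday."))
```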
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
The model was trained on some subset of the news_article_categorization dataset and it was validated on the remaining subset of the data
### Training procedure
More information needed
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-5
- train_batch_size: 3
- eval_batch_size: 3
- seed: 17
- optimizer: AdamW(lr=1e-5 and epsilon=1e-08)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0
- num_epochs: 2
### Training results
| Training Loss | Epoch | Validation Loss | f1 score |
|:-------------:|:-----:|:---------------: |:------:|
| 0.6359 | 1.0 | 0.1739 | 0.9619 |
| 0.1548 | 2.0 | 0.1898 | 0.9648 |
|
SI2M-Lab/DarijaBERT | f88ae61231ac5b42ced6733310b92a2133ea67a7 | 2022-06-20T14:55:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"ar",
"transformers",
"autotrain_compatible"
] | fill-mask | false | SI2M-Lab | null | SI2M-Lab/DarijaBERT | 365 | 3 | transformers | 2,645 | ---
language: ar
widget:
- text: " جاب ليا [MASK] ."
- text: "مشيت نجيب[MASK] فالفرماسيان ."
---
AIOX Lab and SI2M Lab INSEA have joined forces to offer researchers, industrialists and the NLP (Natural Language Processing) community the first intelligent open-source system that understands the Moroccan dialectal language "Darija".
**DarijaBERT** is the first BERT model for the Moroccan Arabic dialect called “Darija”. It is based on the same architecture as BERT-base, but without the Next Sentence Prediction (NSP) objective. This model was trained on a total of ~3 Million sequences of Darija dialect representing 691MB of text or a total of ~100M tokens.
The model was trained on a dataset issued from three different sources:
* Stories written in Darija scrapped from a dedicated website
* Youtube comments from 40 different Moroccan channels
* Tweets crawled based on a list of Darija keywords.
More details about DarijaBert are available in the dedicated GitHub [repository](https://github.com/AIOXLABS/DBert)
**Loading the model**
The model can be loaded directly using the Huggingface library:
```python
from transformers import AutoTokenizer, AutoModel
DarijaBERT_tokenizer = AutoTokenizer.from_pretrained("SI2M-Lab/DarijaBERT")
DarijaBert_model = AutoModel.from_pretrained("SI2M-Lab/DarijaBERT")
```
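Since the checkpoint is tagged for fill-mask, the masked-word predictions shown in the widget can also be reproduced with the pipeline API; a minimal sketch reusing one of the widget examples:
```python
# Minimal sketch: reproduce the widget's masked-word predictions with the
# fill-mask pipeline, reusing one of the widget examples above.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="SI2M-Lab/DarijaBERT")
for prediction in fill_mask("مشيت نجيب [MASK] فالفرماسيان ."):
    print(prediction["token_str"], round(prediction["score"], 3))
```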
**Acknowledgments**
We gratefully acknowledge Google’s TensorFlow Research Cloud (TRC) program for providing us with free Cloud TPUs.
|
NlpHUST/t5-small-vi-summarization | d579ec553e3bd8574413a7736a005acea50f8508 | 2021-06-23T03:36:33.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | NlpHUST | null | NlpHUST/t5-small-vi-summarization | 365 | 1 | transformers | 2,646 | # T5-SMALL-SUMMARIZATION :Pretraining Text-To-Text Transfer Transformer for Vietnamese Text Summarization
#### Example Using
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
if torch.cuda.is_available():
device = torch.device("cuda")
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-small-vi-summarization")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-small-vi-summarization")
model.to(device)
src = "Theo BHXH Việt Nam, nhiều doanh nghiệp vẫn chỉ đóng BHXH cho người lao động theo mức lương. \\\\
Dù quy định từ 1/1/2018, tiền lương tháng đóng BHXH gồm mức lương và thêm khoản bổ sung khác. \\\\
BHXH Việt Nam vừa có báo cáo về tình hình thực hiện chính sách BHXH thời gian qua. \\\\
Theo đó, tình trạng nợ, trốn đóng BHXH, BHTN vẫn xảy ra ở hầu hết các tỉnh, thành. \\\\
Thống kê tới ngày 31/12/2020, tổng số nợ BHXH, BHYT, BHTN là hơn 13.500 tỷ đồng, \\\\
chiếm 3,35 % số phải thu, trong đó: Số nợ BHXH bắt buộc là hơn 8.600 tỷ đồng, \\\\
nợ BHTN là 335 tỷ đồng. Liên quan tới tiền lương đóng BHXH, báo cáo của \\\\
BHXH Việt Nam cho thấy: Nhiều doanh nghiệp vẫn chủ yếu xây dựng thang, \\\\
bảng lương để đóng BHXH bằng mức thấp nhất. Tức là bằng mức lương tối \\\\
thiểu vùng, cộng thêm 7 % đối với lao động đã qua đào tạo nghề và cộng \\\\
thêm 5 % hoặc 7 % đối với lao động làm nghề hoặc công việc nặng nhọc, \\\\
độc hại, nguy hiểm, đặc biệt nặng nhọc độc hại và nguy hiểm. Đối với \\\\
lao động giữ chức vụ, khoảng 80 % doanh nghiệp đã xây dựng thang, \\\\
bảng lương cụ thể theo chức danh. Đơn cử như với chức vụ giám đốc \\\\
sản xuất, giám đốc điều hành, trưởng phòng. Còn lại các doanh nghiệp \\\\
xây dựng đối với lao động giữ chức vụ theo thang lương, bảng lương \\\\
chuyên môn nghiệp vụ và bảng phụ cấp chức vụ, phụ cấp trách nhiệm. \\\\
Thống kê của BHXH Việt Nam cũng cho thấy, đa số doanh nghiệp đã đăng \\\\
ký đóng BHXH cho người lao động theo mức lương mà không có khoản bổ \\\\
sung khác. Mặc dù quy định từ ngày 1/1/2018, tiền lương tháng đóng BHXH \\\\
gồm mức lương và thêm khoản bổ sung khác."
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)
model.eval()
summary_ids = model.generate(
tokenized_text,
max_length=256,
num_beams=5,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
```
#### Output
```text
Nhiều doanh nghiệp vẫn chủ yếu xây dựng thang, bảng lương để đóng BHXH bằng mức thấp nhất. \\
Dù quy định từ 1/1/2018, tiền lương tháng đóng BHXH gồm mức lương và thêm khoản bổ sung khác. \\
Thống kê của BHXH Việt Nam cho thấy, nhiều doanh nghiệp vẫn chỉ đóng BHXH \\
cho người lao động theo mức lương mà không có khoản bổ sung khác.
```
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]). |
ajitrajasekharan/biomedical | dfd137b5429ef4e3ca250053967f4384b6a99c02 | 2022-02-05T08:44:05.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | ajitrajasekharan | null | ajitrajasekharan/biomedical | 364 | 1 | transformers | 2,647 | ---
language:
- {en} # Example: fr
license: mit
widget:
- text: "Lou Gehrig who works for XCorp and lives in New York suffers from [MASK]"
example_title: "Test for entity type: Disease"
- text: "Overexpression of [MASK] occurs across a wide range of cancers"
example_title: "Test for entity type: Gene"
- text: "Patients treated with [MASK] are vulnerable to infectious diseases"
example_title: "Test for entity type: Drug"
- text: "A eGFR level below [MASK] indicates chronic kidney disease"
example_title: "Test for entity type: Measure "
- text: "In the [MASK], increased daily imatinib dose induced MMR"
example_title: "Test for entity type: STUDY/TRIAL"
- text: "Paul Erdos died at [MASK]"
example_title: "Test for entity type: TIME"
inference:
parameters:
top_k: 10
tags:
- fill-mask
- exbert
---
This **cased model** was pretrained from scratch using a custom vocabulary on the following corpora
- Pubmed
- Clinical trials corpus
- and a small subset of Bookcorpus
The pretrained model was used to do NER **as is, with no fine-tuning**. The approach is described [in this post](https://ajitrajasekharan.github.io/2021/01/02/my-first-post.html). [Towards Data Science review](https://twitter.com/TDataScience/status/1486300137366466560?s=20)
[App in Spaces](https://huggingface.co/spaces/ajitrajasekharan/self-supervised-ner-biomedical) demonstrates this approach.
[Github link](https://github.com/ajitrajasekharan/unsupervised_NER) to perform NER using this model in an ensemble with bert-base cased.
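As a minimal illustration of the fill-mask step that the unsupervised NER approach builds on (the full ensemble and [CLS]-prediction logic live in the linked repository), one of the widget examples can be run through the pipeline API:
```python
# Minimal sketch of the fill-mask step that the unsupervised NER approach
# builds on; the full ensemble and [CLS]-prediction logic live in the linked
# repository. The sentence is one of the widget examples above.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ajitrajasekharan/biomedical", top_k=10)
for prediction in fill_mask("Overexpression of [MASK] occurs across a wide range of cancers"):
    print(prediction["token_str"], round(prediction["score"], 3))
```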
The ensemble detects 69 entity subtypes (17 broad entity groups)
<img src="https://ajitrajasekharan.github.io/images/1.png" width="600">
### Ensemble model performance
<img src="https://ajitrajasekharan.github.io/images/6.png" width="600">
### Additional notes
- The model predictions on the right do not include [CLS] predictions. Hosted inference API only returns the masked position predictions. In practice, the [CLS] predictions are just as useful as the model predictions for the masked position _(if the next sentence prediction loss was low during pretraining)_ and are used for NER.
- Some of the top model predictions like "a", "the", punctuations, etc. while valid predictions, bear no entity information. These are filtered when harvesting descriptors for NER. The examples on the right are unfiltered results.
- [Use this link](https://huggingface.co/spaces/ajitrajasekharan/Qualitative-pretrained-model-evaluation) to examine both fill-mask prediction and [CLS] predictions
### License
MIT license
<a href="https://huggingface.co/exbert/?model=ajitrajasekharan/biomedical&modelKind=bidirectional&sentence=Gefitinib%20is%20an%20EGFR%20tyrosine%20kinase%20inhibitor,%20which%20is%20often%20used%20for%20breast%20cancer%20and%20NSCLC%20treatment.&layer=3&heads=..0,1,2,3,4,5,6,7,8,9,10,11&threshold=0.7&tokenInd=17&tokenSide=right&maskInds=..&hideClsSep=true">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
minimaxir/reddit | d8caacc8da7948b068192e398f0d9b4d5137d815 | 2021-05-23T09:36:11.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | minimaxir | null | minimaxir/reddit | 364 | 0 | transformers | 2,648 | Entry not found |
TurkuNLP/wikibert-base-en-cased | aba47c7a4b7f05ea5ec5e645e0843d31d9018f18 | 2020-05-24T19:59:24.000Z | [
"pytorch",
"transformers"
] | null | false | TurkuNLP | null | TurkuNLP/wikibert-base-en-cased | 363 | null | transformers | 2,649 | Entry not found |
facebook/wav2vec2-xls-r-1b-21-to-en | 286d0470e6ed5468b5a4ef0a9bd15b0aebe1a034 | 2022-05-26T22:24:32.000Z | [
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"multilingual",
"fr",
"de",
"es",
"ca",
"it",
"ru",
"zh",
"pt",
"fa",
"et",
"mn",
"nl",
"tr",
"ar",
"sv",
"lv",
"sl",
"ta",
"ja",
"id",
"cy",
"en",
"dataset:common_voice",
"dataset:multilingual_librispeech",
"dataset:covost2",
"arxiv:2111.09296",
"transformers",
"speech",
"xls_r",
"xls_r_translation",
"license:apache-2.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-xls-r-1b-21-to-en | 363 | 1 | transformers | 2,650 | ---
language:
- multilingual
- fr
- de
- es
- ca
- it
- ru
- zh
- pt
- fa
- et
- mn
- nl
- tr
- ar
- sv
- lv
- sl
- ta
- ja
- id
- cy
- en
datasets:
- common_voice
- multilingual_librispeech
- covost2
tags:
- speech
- xls_r
- automatic-speech-recognition
- xls_r_translation
pipeline_tag: automatic-speech-recognition
license: apache-2.0
widget:
- example_title: Swedish
src: https://cdn-media.huggingface.co/speech_samples/cv_swedish_1.mp3
- example_title: Arabic
src: https://cdn-media.huggingface.co/speech_samples/common_voice_ar_19058308.mp3
- example_title: Russian
src: https://cdn-media.huggingface.co/speech_samples/common_voice_ru_18849022.mp3
- example_title: German
src: https://cdn-media.huggingface.co/speech_samples/common_voice_de_17284683.mp3
- example_title: French
src: https://cdn-media.huggingface.co/speech_samples/common_voice_fr_17299386.mp3
- example_title: Indonesian
src: https://cdn-media.huggingface.co/speech_samples/common_voice_id_19051309.mp3
- example_title: Italian
src: https://cdn-media.huggingface.co/speech_samples/common_voice_it_17415776.mp3
- example_title: Japanese
src: https://cdn-media.huggingface.co/speech_samples/common_voice_ja_19482488.mp3
- example_title: Mongolian
src: https://cdn-media.huggingface.co/speech_samples/common_voice_mn_18565396.mp3
- example_title: Dutch
src: https://cdn-media.huggingface.co/speech_samples/common_voice_nl_17691471.mp3
- example_title: Russian
src: https://cdn-media.huggingface.co/speech_samples/common_voice_ru_18849022.mp3
- example_title: Turkish
src: https://cdn-media.huggingface.co/speech_samples/common_voice_tr_17341280.mp3
- example_title: Catalan
src: https://cdn-media.huggingface.co/speech_samples/common_voice_ca_17367522.mp3
- example_title: English
src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_18301577.mp3
- example_title: Dutch
src: https://cdn-media.huggingface.co/speech_samples/common_voice_nl_17691471.mp3
---
# Wav2Vec2-XLS-R-1B-21-EN
Facebook's Wav2Vec2 XLS-R fine-tuned for **Speech Translation.**

This is a [SpeechEncoderDecoderModel](https://huggingface.co/transformers/model_doc/speechencoderdecoder.html) model.
The encoder was warm-started from the [**`facebook/wav2vec2-xls-r-1b`**](https://huggingface.co/facebook/wav2vec2-xls-r-1b) checkpoint and
the decoder from the [**`facebook/mbart-large-50`**](https://huggingface.co/facebook/mbart-large-50) checkpoint.
Consequently, the encoder-decoder model was fine-tuned on 21 `{lang}` -> `en` translation pairs of the [Covost2 dataset](https://huggingface.co/datasets/covost2).
The model can translate from the following spoken languages `{lang}` -> `en` (English):
{`fr`, `de`, `es`, `ca`, `it`, `ru`, `zh-CN`, `pt`, `fa`, `et`, `mn`, `nl`, `tr`, `ar`, `sv-SE`, `lv`, `sl`, `ta`, `ja`, `id`, `cy`} -> `en`
For more information, please refer to Section *5.1.2* of the [official XLS-R paper](https://arxiv.org/abs/2111.09296).
## Usage
### Demo
The model can be tested directly on the speech recognition widget on this model card!
Simply record some audio in one of the possible spoken languages or pick an example audio file to see how well the checkpoint can translate the input.
### Example
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate
transcripts by passing the speech features to the model.
You can use the model directly via the ASR pipeline
```python
from datasets import load_dataset
from transformers import pipeline
# replace following lines to load an audio file of your choice
librispeech_en = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
audio_file = librispeech_en[0]["file"]
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-xls-r-1b-21-to-en", feature_extractor="facebook/wav2vec2-xls-r-1b-21-to-en")
translation = asr(audio_file)
```
or step-by-step as follows:
```python
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-1b-21-to-en")
processor = Speech2Text2Processor.from_pretrained("facebook/wav2vec2-xls-r-1b-21-to-en")
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)
```
## Results `{lang}` -> `en`
See the row of **XLS-R (1B)** for the performance on [Covost2](https://huggingface.co/datasets/covost2) for this model.

## More XLS-R models for `{lang}` -> `en` Speech Translation
- [Wav2Vec2-XLS-R-300M-21-EN](https://huggingface.co/facebook/wav2vec2-xls-r-300m-21-to-en)
- [Wav2Vec2-XLS-R-1B-21-EN](https://huggingface.co/facebook/wav2vec2-xls-r-1b-21-to-en)
- [Wav2Vec2-XLS-R-2B-21-EN](https://huggingface.co/facebook/wav2vec2-xls-r-2b-21-to-en)
- [Wav2Vec2-XLS-R-2B-22-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16)
|
google/realm-cc-news-pretrained-embedder | ba2b2d766d08235b15608e0b95f2bbbe3f8dfed6 | 2022-01-05T18:47:59.000Z | [
"pytorch",
"realm",
"en",
"transformers",
"license:apache-2.0"
] | null | false | google | null | google/realm-cc-news-pretrained-embedder | 363 | null | transformers | 2,651 | ---
language: en
license: apache-2.0
---
# realm-cc-news-pretrained-embedder
## Model description
The REALM checkpoint pretrained with CC-News as target corpus and Wikipedia as knowledge corpus, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found [here](https://github.com/google-research/language/tree/master/language/realm).
## Usage
```python
from transformers import RealmEmbedder
embedder = RealmEmbedder.from_pretrained("google/realm-cc-news-pretrained-embedder")
```
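To extract retrieval embeddings, a minimal sketch along these lines should work (the `RealmTokenizer` class and the `projected_score` output field are assumptions based on the REALM implementation in Transformers):
```python
import torch
from transformers import RealmTokenizer, RealmEmbedder

tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")
embedder = RealmEmbedder.from_pretrained("google/realm-cc-news-pretrained-embedder")

inputs = tokenizer("What is the capital of France?", return_tensors="pt")
with torch.no_grad():
    outputs = embedder(**inputs)

# Low-dimensional projected embedding used for retrieval (MIPS) in REALM
print(outputs.projected_score.shape)
```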
|
anton-l/wav2vec2-base-superb-sd | 3b9021739e0bb551b176f8acb7e5a8a0cf33d944 | 2021-12-14T09:57:00.000Z | [
"pytorch",
"wav2vec2",
"audio-frame-classification",
"transformers"
] | null | false | anton-l | null | anton-l/wav2vec2-base-superb-sd | 362 | null | transformers | 2,652 | |
google/tapas-small-finetuned-sqa | 7b8f70a63c913442114f3b6b516d48657f044f0a | 2021-11-29T13:09:34.000Z | [
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:msr_sqa",
"arxiv:2004.02349",
"arxiv:2010.00571",
"transformers",
"license:apache-2.0"
] | table-question-answering | false | google | null | google/tapas-small-finetuned-sqa | 362 | null | transformers | 2,653 | ---
language: en
tags:
- tapas
license: apache-2.0
datasets:
- msr_sqa
---
# TAPAS small model fine-tuned on Sequential Question Answering (SQA)
This model has 2 versions which can be used. The default version corresponds to the `tapas_sqa_inter_masklm_small_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_sqa_inter_masklm_small` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results on SQA - Dev Accuracy
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
LARGE | noreset | 0.7223 | [tapas-large-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/no_reset)
LARGE | reset | 0.7289 | [tapas-large-finetuned-sqa](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/main)
BASE | noreset | 0.6737 | [tapas-base-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/no_reset)
BASE | reset | 0.6874 | [tapas-base-finetuned-sqa](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/main)
MEDIUM | noreset | 0.6464 | [tapas-medium-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/no_reset)
MEDIUM | reset | 0.6561 | [tapas-medium-finetuned-sqa](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/main)
**SMALL** | **noreset** | **0.5876** | [tapas-small-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/no_reset)
**SMALL** | **reset** | **0.6155** | [tapas-small-finetuned-sqa](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/main)
MINI | noreset | 0.4574 | [tapas-mini-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/no_reset)
MINI | reset | 0.5148 | [tapas-mini-finetuned-sqa](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/main)
TINY | noreset | 0.2004 | [tapas-tiny-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/no_reset)
TINY | reset | 0.2375 | [tapas-tiny-finetuned-sqa](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head on top of the pre-trained model, and then jointly
training this randomly initialized classification head with the base model on SQA.
## Intended uses & limitations
You can use this model for answering questions related to a table in a conversational set-up.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
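For instance, a minimal sketch with the `table-question-answering` pipeline could look like the following (the table content is illustrative; note that cells are passed as strings):
```python
import pandas as pd
from transformers import pipeline

tqa = pipeline("table-question-answering", model="google/tapas-small-finetuned-sqa")

table = pd.DataFrame({
    "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
    "Number of movies": ["87", "53", "69"],
})

# SQA is conversational, so follow-up questions about the same table are expected
print(tqa(table=table, query="How many movies has George Clooney played in?"))
```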
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 200,000 steps with maximum sequence length 512 and batch size of 128.
In this setup, fine-tuning takes around 20 hours. The optimizer used is Adam with a learning rate of 1.25e-5, and a warmup ratio
of 0.2. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See also table 12 of the [original paper](https://arxiv.org/abs/2004.02349).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@InProceedings{iyyer2017search-based,
author = {Iyyer, Mohit and Yih, Scott Wen-tau and Chang, Ming-Wei},
title = {Search-based Neural Structured Learning for Sequential Question Answering},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics},
year = {2017},
month = {July},
abstract = {Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.},
publisher = {Association for Computational Linguistics},
url = {https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/},
}
``` |
microsoft/beit-large-patch16-512 | f03bc4a94ad012c74bdd32d80ec7169a751034f9 | 2022-01-28T10:20:07.000Z | [
"pytorch",
"jax",
"beit",
"image-classification",
"dataset:imagenet",
"dataset:imagenet-21k",
"arxiv:2106.08254",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | microsoft | null | microsoft/beit-large-patch16-512 | 362 | 1 | transformers | 2,654 | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
- imagenet-21k
---
# BEiT (large-sized model, fine-tuned on ImageNet-1k)
BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 512x512. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).
Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches.
Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import BeitFeatureExtractor, BeitForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-512')
model = BeitForImageClassification.from_pretrained('microsoft/beit-large-patch16-512')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254).
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-08254,
author = {Hangbo Bao and
Li Dong and
Furu Wei},
title = {BEiT: {BERT} Pre-Training of Image Transformers},
journal = {CoRR},
volume = {abs/2106.08254},
year = {2021},
url = {https://arxiv.org/abs/2106.08254},
archivePrefix = {arXiv},
eprint = {2106.08254},
timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
BrunoNogueira/DialoGPT-kungfupanda | 89c1a1f2bb2c5ab517ec9319094a8c0405de9240 | 2021-09-23T18:49:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BrunoNogueira | null | BrunoNogueira/DialoGPT-kungfupanda | 361 | null | transformers | 2,655 | ---
tags:
- conversational
---
# DialoGPT-kungfupanda |
aluserhuggingface/DialoGPT-small-harrypotter | 99262a7670e2d0053c5c46724110f4135fc33e67 | 2022-02-18T19:41:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | aluserhuggingface | null | aluserhuggingface/DialoGPT-small-harrypotter | 361 | null | transformers | 2,656 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
lassl/gpt2-ko-small | e7d6eaeacf4937a07a1c57394b9d70448b0a0141 | 2022-02-20T00:13:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ko",
"transformers",
"korean",
"lassl",
"license:apache-2.0"
] | text-generation | false | lassl | null | lassl/gpt2-ko-small | 361 | 3 | transformers | 2,657 | ---
license: apache-2.0
language: ko
tags:
- korean
- lassl
---
# LASSL gpt2-ko-small
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("lassl/gpt2-ko-small")
tokenizer = AutoTokenizer.from_pretrained("lassl/gpt2-ko-small")
```
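For text generation, a minimal sketch with the standard causal-LM `generate` API (the prompt and sampling settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lassl/gpt2-ko-small")
model = AutoModelForCausalLM.from_pretrained("lassl/gpt2-ko-small")

input_ids = tokenizer("대한민국의 수도는", return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```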
## Evaluation
Evaluation results will be released soon.
## Corpora
This model was trained on 6,831,079 examples (3,497,512,448 tokens in total), extracted from the corpora below. For the exact training configuration, see `config.json`.
```bash
corpora/
├── [707M] kowiki_latest.txt
├── [ 26M] modu_dialogue_v1.2.txt
├── [1.3G] modu_news_v1.1.txt
├── [9.7G] modu_news_v2.0.txt
├── [ 15M] modu_np_v1.1.txt
├── [1008M] modu_spoken_v1.2.txt
├── [6.5G] modu_written_v1.0.txt
└── [413M] petition.txt
```
|
poom-sci/bert-base-uncased-multi-emotion | 09b04517929ddbb0a90d439018418d47377bcac3 | 2021-11-14T16:22:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"en",
"dataset:go_emotions",
"transformers",
"translation",
"license:apache-2.0"
] | text-classification | false | poom-sci | null | poom-sci/bert-base-uncased-multi-emotion | 361 | null | transformers | 2,658 | ---
language:
- en
tags:
- translation
license: apache-2.0
datasets:
- go_emotions
---
created for study |
Narrativa/mT5-base-finetuned-tydiQA-question-generation | 5e36ab2e87781ca9cbd4e7101e6904c7e5cf7568 | 2021-08-23T10:05:14.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"multilingual",
"dataset:tydiqa",
"arxiv:2010.11934",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Narrativa | null | Narrativa/mT5-base-finetuned-tydiQA-question-generation | 360 | 1 | transformers | 2,659 | ---
language: multilingual
datasets:
- tydiqa
widget:
- text: "answer: monitoring and managing PR strategy including relations with the media and journalists context: Sofía has a degree in Communications and public relations agency experience where she was in charge of monitoring and managing PR strategy including relations with the media and journalists."
---
# mT5-base fine-tuned on TyDiQA for multilingual Question Generation 🗺📖❓
[Google's mT5-base](https://huggingface.co/google/mt5-base) fine-tuned on [TyDi QA](https://huggingface.co/nlp/viewer/?dataset=tydiqa&config=secondary_task) (secondary task) for the **multilingual Question Generation** downstream task (by answer prepending).
## Details of mT5
[Google's mT5](https://github.com/google-research/multilingual-t5)
mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
**Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task.
Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5)
Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)
Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel*
## Details of the dataset 📚
**TyDi QA** is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer but don't know it yet (unlike SQuAD and its descendants), and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD).
| Dataset | Task | Split | # samples |
| -------- | ----- |------| --------- |
| TyDi QA | GoldP | train| 49881 |
| TyDi QA | GoldP | valid| 5077 |
## Results on validation dataset 📝
### WIP
## Model in Action 🚀
### WIP
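While this section is still WIP, a minimal generation sketch based on the answer-prepending input format shown in the widget above could look like this (generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ckpt = "Narrativa/mT5-base-finetuned-tydiQA-question-generation"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

text = (
    "answer: monitoring and managing PR strategy including relations with the media and journalists "
    "context: Sofía has a degree in Communications and public relations agency experience where she "
    "was in charge of monitoring and managing PR strategy including relations with the media and journalists."
)
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```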
Created by: [Narrativa](https://www.narrativa.com/)
About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI |
cpierse/gpt2_film_scripts | 423a12f965f84b34e8e9bd85cfb32e7cec634e7d | 2021-05-21T15:09:47.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | cpierse | null | cpierse/gpt2_film_scripts | 360 | null | transformers | 2,660 | Entry not found |
google/multiberts-seed_10 | 2eb55a013190f8ec91466b5cd9404699b379f48a | 2021-11-05T22:26:09.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_10",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_10 | 360 | null | transformers | 2,661 | ---
language: en
tags:
- multiberts
- multiberts-seed_10
license: apache-2.0
---
# MultiBERTs - Seed 10
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #10.
## Model Description
This model is a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_10')
model = TFBertModel.from_pretrained("google/multiberts-seed_10")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_10')
model = BertModel.from_pretrained("google/multiberts-seed_10")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/muril-large-cased | ace319f0d17524297957e249aff028b0b7357ab5 | 2021-10-16T03:28:16.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:1810.04805",
"arxiv:1911.02116",
"arxiv:2003.11080",
"arxiv:2009.05166",
"arxiv:2103.10730",
"transformers"
] | feature-extraction | false | google | null | google/muril-large-cased | 360 | 10 | transformers | 2,662 | # MuRIL Large
Multilingual Representations for Indian Languages: a BERT Large (24L) model pre-trained on 17 Indian languages and their transliterated counterparts.
## Overview
This model uses a BERT large architecture [1] pretrained from scratch using the
Wikipedia [2], Common Crawl [3], PMINDIA [4] and Dakshina [5] corpora for 17 [6]
Indian languages.
We use a training paradigm similar to multilingual bert, with a few
modifications as listed:
* We include translation and transliteration segment pairs in training as
well.
* We keep an exponent value of 0.3 and not 0.7 for upsampling, shown to
enhance low-resource performance. [7]
See the Training section for more details.
## Training
The MuRIL model is pre-trained on monolingual segments as well as parallel
segments as detailed below :
* Monolingual Data : We make use of publicly available corpora from Wikipedia
and Common Crawl for 17 Indian languages.
* Parallel Data : We have two types of parallel data :
* Translated Data : We obtain translations of the above monolingual
corpora using the Google NMT pipeline. We feed translated segment pairs
as input. We also make use of the publicly available PMINDIA corpus.
* Transliterated Data : We obtain transliterations of Wikipedia using the
IndicTrans [8] library. We feed transliterated segment pairs as input.
We also make use of the publicly available Dakshina dataset.
We keep an exponent value of 0.3 to calculate duplication multiplier values for
upsampling of lower resourced languages and set dupe factors accordingly. Note,
we limit transliterated pairs to Wikipedia only.
The model was trained using a self-supervised masked language modeling task. We
do whole word masking with a maximum of 80 predictions. The model was trained
for 1500K steps, with a batch size of 8192, and a max sequence length of 512.
### Trainable parameters
All parameters in the module are trainable, and fine-tuning all parameters is
the recommended practice.
## Uses & Limitations
This model is intended to be used for a variety of downstream NLP tasks for
Indian languages. This model is trained on transliterated data as well, a
phenomenon commonly observed in the Indian context. This model is not expected
to perform well on languages other than the ones used in pre-training, i.e. 17
Indian languages.
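For feature extraction, a minimal sketch with the standard Auto classes (the input sentence is illustrative; fine-tuning code is not shown):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/muril-large-cased")
model = AutoModel.from_pretrained("google/muril-large-cased")

# Works for native-script as well as transliterated (romanized) text
inputs = tokenizer("यह एक उदाहरण वाक्य है।", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```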
## Evaluation
We provide the results of fine-tuning this model on a set of downstream tasks.<br/>
We choose these tasks from the XTREME benchmark, with evaluation done on Indian language test-sets.<br/>
All results are computed in a zero-shot setting, with English being the high resource training set language.<br/>
The results for XLM-R (Large) are taken from the XTREME paper [9].
* Shown below are results on datasets from the XTREME benchmark (in %)
<br/>
PANX (F1) | bn | en | hi | ml | mr | ta | te | ur | Average
:------------ | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ------:
XLM-R (large) | 78.8 | 84.7 | 73.0 | 67.8 | 68.1 | 59.5 | 55.8 | 56.4 | 68.0
MuRIL (large) | 85.8 | 85.0 | 78.3 | 75.6 | 77.3 | 71.1 | 65.6 | 83.0 | 77.7
<br/>
UDPOS (F1) | en | hi | mr | ta | te | ur | Average
:------------ | ---: | ---: | ---: | ---: | ---: | ---: | ------:
XLM-R (large) | 96.1 | 76.4 | 80.8 | 65.2 | 86.6 | 70.3 | 79.2
MuRIL (large) | 95.7 | 71.3 | 85.7 | 62.6 | 85.8 | 62.8 | 77.3
<br/>
XNLI (Accuracy) | en | hi | ur | Average
:-------------- | ---: | ---: | ---: | ------:
XLM-R (large) | 88.7 | 75.6 | 71.7 | 78.7
MuRIL (large) | 88.4 | 75.8 | 71.7 | 78.6
<br/>
XQUAD (F1/EM) | en | hi | Average
:------------ | --------: | --------: | --------:
XLM-R (large) | 86.5/75.7 | 76.7/59.7 | 81.6/67.7
MuRIL (large) | 88.2/77.8 | 78.4/62.4 | 83.3/70.1
<br/>
MLQA (F1/EM) | en | hi | Average
:------------ | --------: | --------: | --------:
XLM-R (large) | 83.5/70.6 | 70.6/53.1 | 77.1/61.9
MuRIL (large) | 84.4/71.7 | 72.2/54.1 | 78.3/62.9
<br/>
TyDiQA (F1/EM) | en | bn | te | Average
:------------- | --------: | --------: | --------: | --------:
XLM-R (large) | 71.5/56.8 | 64.0/47.8 | 70.1/43.6 | 68.5/49.4
MuRIL (large) | 75.9/66.8 | 67.1/53.1 | 71.5/49.8 | 71.5/56.6
<br/>
The fine-tuning hyperparameters are as follows:
Task | Batch Size | Learning Rate | Epochs | Warm-up Ratio
:----- | ---------: | ------------: | -----: | ------------:
PANX | 32 | 2e-5 | 10 | 0.1
UDPOS | 64 | 5e-6 | 10 | 0.1
XNLI | 128 | 2e-5 | 5 | 0.1
XQuAD | 32 | 3e-5 | 2 | 0.1
MLQA | 32 | 3e-5 | 2 | 0.1
TyDiQA | 32 | 3e-5 | 3 | 0.1
## References
\[1]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. [BERT:
Pre-training of Deep Bidirectional Transformers for Language
Understanding](https://arxiv.org/abs/1810.04805). arXiv preprint
arXiv:1810.04805, 2018.
\[2]: [Wikipedia](https://www.tensorflow.org/datasets/catalog/wikipedia)
\[3]: [Common Crawl](http://commoncrawl.org/the-data/)
\[4]:
[PMINDIA](http://lotus.kuee.kyoto-u.ac.jp/WAT/indic-multilingual/index.html)
\[5]: [Dakshina](https://github.com/google-research-datasets/dakshina)
\[6]: Assamese (as), Bengali (bn), English (en), Gujarati (gu), Hindi (hi),
Kannada (kn), Kashmiri (ks), Malayalam (ml), Marathi (mr), Nepali (ne), Oriya
(or), Punjabi (pa), Sanskrit (sa), Sindhi (sd), Tamil (ta), Telugu (te) and Urdu
(ur).
\[7]: Conneau, Alexis, et al.
[Unsupervised cross-lingual representation learning at scale](https://arxiv.org/pdf/1911.02116.pdf).
arXiv preprint arXiv:1911.02116 (2019).
\[8]: [IndicTrans](https://github.com/libindic/indic-trans)
\[9]: Hu, J., Ruder, S., Siddhant, A., Neubig, G., Firat, O., & Johnson, M.
(2020). [Xtreme: A massively multilingual multi-task benchmark for evaluating
cross-lingual generalization.](https://arxiv.org/pdf/2003.11080.pdf) arXiv
preprint arXiv:2003.11080.
\[10]: Fang, Y., Wang, S., Gan, Z., Sun, S., & Liu, J. (2020).
[FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding.](https://arxiv.org/pdf/2009.05166.pdf)
arXiv preprint arXiv:2009.05166.
## Citation
If you find MuRIL useful in your applications, please cite the following paper:
```
@misc{khanuja2021muril,
title={MuRIL: Multilingual Representations for Indian Languages},
author={Simran Khanuja and Diksha Bansal and Sarvesh Mehtani and Savya Khosla and Atreyee Dey and Balaji Gopalan and Dilip Kumar Margam and Pooja Aggarwal and Rajiv Teja Nagipogu and Shachi Dave and Shruti Gupta and Subhash Chandra Bose Gali and Vish Subramanian and Partha Talukdar},
year={2021},
eprint={2103.10730},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact
Please mail your queries/feedback to [email protected].
|
google/realm-orqa-nq-openqa | ea416e495785cd9612f659b69af3a7857c91fe2a | 2022-01-05T18:00:40.000Z | [
"pytorch",
"realm",
"en",
"transformers",
"license:apache-2.0"
] | null | false | google | null | google/realm-orqa-nq-openqa | 360 | 2 | transformers | 2,663 | ---
language: en
license: apache-2.0
---
# realm-orqa-nq-openqa
## Model description
The REALM checkpoint fine-tuned on the Natural Questions (NQ) dataset, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found [here](https://github.com/google-research/language/tree/master/language/realm).
## Usage
```python
from transformers import RealmForOpenQA
openqa = RealmForOpenQA.from_pretrained("google/realm-orqa-nq-openqa")
``` |
voidful/bart-distractor-generation | 31759a44a1319ed9804ba667cbb9b0cc03faff11 | 2021-04-04T16:18:19.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:race",
"transformers",
"distractor",
"generation",
"seq2seq",
"autotrain_compatible"
] | text2text-generation | false | voidful | null | voidful/bart-distractor-generation | 360 | 2 | transformers | 2,664 | ---
language: en
tags:
- bart
- distractor
- generation
- seq2seq
datasets:
- race
metrics:
- bleu
- rouge
pipeline_tag: text2text-generation
widget:
- text: "When you ' re having a holiday , one of the main questions to ask is which hotel or apartment to choose . However , when it comes to France , you have another special choice : treehouses . In France , treehouses are offered to travelers as a new choice in many places . The price may be a little higher , but you do have a chance to _ your childhood memories . Alain Laurens , one of France ' s top treehouse designers , said , ' Most of the people might have the experience of building a den when they were young . And they like that feeling of freedom when they are children . ' Its fairy - tale style gives travelers a special feeling . It seems as if they are living as a forest king and enjoying the fresh air in the morning . Another kind of treehouse is the ' star cube ' . It gives travelers the chance of looking at the stars shining in the sky when they are going to sleep . Each ' star cube ' not only offers all the comfortable things that a hotel provides for travelers , but also gives them a chance to look for stars by using a telescope . The glass roof allows you to look at the stars from your bed . </s> The passage mainly tells us </s> treehouses in france."
---
# bart-distractor-generation
## Model description
This model is a sequence-to-sequence distractor generator which takes an answer, question and context as an input, and generates a distractor as an output. It is based on a pretrained `bart-base` model.
For details, please see https://github.com/voidful/BDG.
## Intended uses & limitations
The model is trained to generate examinations-style multiple choice distractor. The model performs best with full sentence answers.
#### How to use
The model takes the concatenated context, question and answer as an input sequence, and will generate a full distractor sentence as an output sequence. The max sequence length is 1024 tokens. Inputs should be organised into the following format:
```
context </s> question </s> answer
```
The input sequence can then be encoded and passed as the `input_ids` argument in the model's `generate()` method.
For details, please see https://github.com/voidful/BDG.
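A minimal sketch of this flow (the passage and generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("voidful/bart-distractor-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("voidful/bart-distractor-generation")

context = "The moon is Earth's only natural satellite and completes an orbit roughly every 27 days."
question = "How often does the moon orbit the Earth?"
answer = "roughly every 27 days"

text = f"{context} </s> {question} </s> {answer}"
input_ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024).input_ids
output_ids = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```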
#### Limitations and bias
The model is limited to generating distractor in the same style as those found in [RACE](https://www.aclweb.org/anthology/D17-1082/). The generated distractors can potentially be leading or reflect biases that are present in the context. If the context is too short or completely absent, or if the context, question and answer do not match, the generated distractor is likely to be incoherent. |
stanleychu2/system_400M | 954462cbb137d6b684df0a6daa60f135a528799b | 2022-03-07T09:10:43.000Z | [
"pytorch",
"blenderbot",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | stanleychu2 | null | stanleychu2/system_400M | 360 | null | transformers | 2,665 | Entry not found |
Batsy24/DialoGPT-medium-Twilight_BellaBot | da351b3d0906189e60536326b516606cce4210c6 | 2021-11-18T09:15:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Batsy24 | null | Batsy24/DialoGPT-medium-Twilight_BellaBot | 359 | null | transformers | 2,666 | ---
tags:
- conversational
---
# Bella Swan DialoGPT model |
ccdv/lsg-base-4096 | 63ab0e0d4b0b6d2c8d882070c6fa01d360c9b17b | 2022-07-25T05:36:08.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"transformers",
"long context",
"autotrain_compatible"
] | fill-mask | false | ccdv | null | ccdv/lsg-base-4096 | 359 | 1 | transformers | 2,667 | ---
language: en
tags:
- long context
---
# LSG model
**Transformers >= 4.18.0**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
* [Training global tokens](#training-global-tokens)
This model can handle long sequences faster and more efficiently than Longformer or BigBird (from Transformers), relying on Local + Sparse + Global attention (LSG).
The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \
The model is trained starting from a RoBERTa-base checkpoint on 16Gb of data (Wikipedia, Bookcorpus etc...) using the same number of parameters/layers and the same tokenizer.
Encoder-decoder and causal masking are supported, but I didn't test them extensively.\
Implemented in PyTorch.

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/lsg-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-base-4096")
```
## Parameters
You can change various parameters like :
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/lsg-base-4096",
trust_remote_code=True,
num_global_tokens=16,
block_size=64,
sparse_block_size=64,
    attention_probs_dropout_prob=0.0,
sparsity_factor=4,
sparsity_type="none",
mask_first_token=True
)
```
## Sparse selection type
There are 5 different sparse selection patterns. The best type is task dependent. \
Note that for sequences with length < 2*block_size, the type has no effect.
* sparsity_type="norm", select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="pooling", use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="lsh", use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* sparsity_type="stride", use a striding mecanism per head
* Each head will use different tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
* sparsity_type="block_stride", use a striding mecanism per head
* Each head will use block of tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
## Tasks
Fill mask example:
```python
from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer
model = AutoModelForMaskedLM.from_pretrained("ccdv/lsg-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-base-4096")
SENTENCES = ["Paris is the <mask> of France.", "The goal of life is <mask>."]
pipeline = FillMaskPipeline(model, tokenizer)
output = pipeline(SENTENCES, top_k=1)
output = [o[0]["sequence"] for o in output]
> ['Paris is the capital of France.', 'The goal of life is happiness.']
```
Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-base-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-base-4096")
SENTENCE = "This is a test for sequence classification. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
#pad_to_multiple_of=... # Optional
truncation=True
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
## Training global tokens
To train global tokens and the classification head only:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-base-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
num_global_tokens=16
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-base-4096")
for name, param in model.named_parameters():
if "global_embeddings" not in name:
param.requires_grad = False
else:
param.required_grad = True
```
|
google/multiberts-seed_11 | d6aff2e50dfed3e7de6efc38459cb0070901f176 | 2021-11-05T22:28:19.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_11",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_11 | 359 | null | transformers | 2,668 | ---
language: en
tags:
- multiberts
- multiberts-seed_11
license: apache-2.0
---
# MultiBERTs - Seed 11
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #11.
## Model Description
This model is a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_11')
model = TFBertModel.from_pretrained("google/multiberts-seed_11")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_11')
model = BertModel.from_pretrained("google/multiberts-seed_11")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_12 | b6933aed309729c8305d033bbb9dde829a901c04 | 2021-11-05T22:30:05.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_12",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_12 | 359 | null | transformers | 2,669 | ---
language: en
tags:
- multiberts
- multiberts-seed_12
license: apache-2.0
---
# MultiBERTs - Seed 12
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #12.
## Model Description
This model is a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_12')
model = TFBertModel.from_pretrained("google/multiberts-seed_12")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_12')
model = BertModel.from_pretrained("google/multiberts-seed_12")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_8 | 95242f2431926edbd81cac9a7c643ce60193d979 | 2021-11-05T22:21:22.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_8",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_8 | 359 | null | transformers | 2,670 | ---
language: en
tags:
- multiberts
- multiberts-seed_8
license: apache-2.0
---
# MultiBERTs - Seed 8
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #8.
## Model Description
This model is a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_8')
model = TFBertModel.from_pretrained("google/multiberts-seed_8")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_8')
model = BertModel.from_pretrained("google/multiberts-seed_8")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_9 | 225772f575cc16e5e1f27e06a6c4d888dc82ddab | 2021-11-05T22:23:00.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_9",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_9 | 359 | null | transformers | 2,671 | ---
language: en
tags:
- multiberts
- multiberts-seed_9
license: apache-2.0
---
# MultiBERTs - Seed 9
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #9.
## Model Description
This model is a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_9')
model = TFBertModel.from_pretrained("google/multiberts-seed_9")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_9')
model = BertModel.from_pretrained("google/multiberts-seed_9")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
huggingface-course/distilbert-base-uncased-finetuned-imdb | 10abb8b26c1b5163624c5ccc649986aa7ace845b | 2021-11-11T17:42:21.000Z | [
"pytorch",
"tf",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | huggingface-course | null | huggingface-course/distilbert-base-uncased-finetuned-imdb | 359 | null | transformers | 2,672 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4264
## Model description
More information needed
## Intended uses & limitations
More information needed
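As a rough usage sketch (not part of the original card), the checkpoint can be queried as a masked-language model through the `fill-mask` pipeline; the example sentence is illustrative:
```python
from transformers import pipeline

# A hedged sketch: masked-token prediction with the fine-tuned checkpoint.
mask_filler = pipeline(
    "fill-mask", model="huggingface-course/distilbert-base-uncased-finetuned-imdb"
)
print(mask_filler("This movie was a great [MASK]."))
```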
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.708 | 1.0 | 157 | 2.4715 |
| 2.5627 | 2.0 | 314 | 2.4145 |
| 2.5385 | 3.0 | 471 | 2.4451 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
stanleychu2/user_400M | 96801813e5be15b4bc5b85529097e35019f973b5 | 2022-03-07T09:10:11.000Z | [
"pytorch",
"blenderbot",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | stanleychu2 | null | stanleychu2/user_400M | 359 | null | transformers | 2,673 | Entry not found |
jinmang2/kpfbert | e060b6039623d70d4b6df98245376f75a6e3a4e5 | 2022-04-05T16:03:00.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | jinmang2 | null | jinmang2/kpfbert | 359 | null | transformers | 2,674 | # KpfBERT
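As a rough sketch (not part of the original card, and assuming the tokenizer files are included in the repository), the checkpoint can be loaded as a plain BERT encoder for feature extraction; the Korean sentence is a placeholder:
```python
import torch
from transformers import AutoTokenizer, AutoModel

# A hedged sketch: load the checkpoint as a BERT encoder and mean-pool the token features.
tokenizer = AutoTokenizer.from_pretrained("jinmang2/kpfbert")
model = AutoModel.from_pretrained("jinmang2/kpfbert")

inputs = tokenizer("예시 문장입니다.", return_tensors="pt")  # placeholder Korean sentence
with torch.no_grad():
    outputs = model(**inputs)
sentence_embedding = outputs.last_hidden_state.mean(dim=1)  # simple mean pooling
```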
https://github.com/jinmang2/kpfbert |
tscholak/t5.1.1.lm100k.large | 9b42e0fff7709a21b3da746bf3fce8774d5136a2 | 2021-10-09T13:42:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tscholak | null | tscholak/t5.1.1.lm100k.large | 358 | 1 | transformers | 2,675 | Entry not found |
l3cube-pune/hing-bert | fccd12879be703ba59ae4d306f4fb10685559812 | 2022-06-26T15:13:10.000Z | [
"pytorch",
"bert",
"fill-mask",
"hi",
"en",
"dataset:L3Cube-HingCorpus",
"arxiv:2204.08398",
"transformers",
"codemix",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | l3cube-pune | null | l3cube-pune/hing-bert | 358 | 1 | transformers | 2,676 | ---
license: cc-by-4.0
language:
- hi
- en
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---
## HingBERT
HingBERT is a Hindi-English code-mixed BERT model trained on roman text. It is a base BERT model fine-tuned on L3Cube-HingCorpus.
<br>
[dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398)
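As a rough usage sketch (not part of the original card), masked-token prediction on romanized code-mixed text can be run with the `fill-mask` pipeline; the example sentence is illustrative:
```python
from transformers import pipeline

# A hedged sketch: masked-token prediction on romanized Hindi-English (Hinglish) text.
mask_filler = pipeline("fill-mask", model="l3cube-pune/hing-bert")
print(mask_filler("mujhe yeh movie bahut [MASK] lagi"))
```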
```
@InProceedings{nayak-joshi:2022:WILDRE6,
author = {Nayak, Ravindra and Joshi, Raviraj},
title = {L3Cube-HingCorpus and HingBERT: A Code Mixed Hindi-English Dataset and BERT Language Models},
booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {7--12}
}
``` |
valurank/distilroberta-current | ed30e164cbf3eb30df98cb71576bad2098df9529 | 2022-06-08T20:20:10.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-classification | false | valurank | null | valurank/distilroberta-current | 358 | null | transformers | 2,677 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilroberta-current
results: []
---
# distilroberta-current
This model classifies articles as current (covering or discussing current events) or not current (not relating to current events).
The model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on a dataset of articles labeled using a combination of weak supervision and manual labeling.
It achieves the following results on the evaluation set:
- Loss: 0.1745
- Acc: 0.9355
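As a rough usage sketch (not part of the original card), the classifier can be called through the `text-classification` pipeline; the label names returned come from the repository's `config.json`:
```python
from transformers import pipeline

# A hedged sketch: classify an article as covering current events or not.
classifier = pipeline("text-classification", model="valurank/distilroberta-current")
print(classifier("The central bank raised interest rates by half a point this morning."))
```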
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 12345
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 11 | 0.6559 | 0.7097 |
| 0.6762 | 2.0 | 22 | 0.5627 | 0.7097 |
| 0.5432 | 3.0 | 33 | 0.4606 | 0.7097 |
| 0.5432 | 4.0 | 44 | 0.3651 | 0.8065 |
| 0.411 | 5.0 | 55 | 0.2512 | 0.9194 |
| 0.269 | 6.0 | 66 | 0.2774 | 0.9355 |
| 0.269 | 7.0 | 77 | 0.2062 | 0.8710 |
| 0.2294 | 8.0 | 88 | 0.2598 | 0.9355 |
| 0.1761 | 9.0 | 99 | 0.1745 | 0.9355 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
HYPJUDY/layoutlmv3-base-finetuned-funsd | 90be3a172349c0290245ffa81d41c6f5c9f24040 | 2022-07-19T02:25:28.000Z | [
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"arxiv:2204.08387",
"transformers",
"license:mit",
"autotrain_compatible"
] | token-classification | false | HYPJUDY | null | HYPJUDY/layoutlmv3-base-finetuned-funsd | 358 | 3 | transformers | 2,678 | ---
license: mit
---
# layoutlmv3-base-finetuned-funsd
The model [layoutlmv3-base-finetuned-funsd](https://huggingface.co/HYPJUDY/layoutlmv3-base-finetuned-funsd) is fine-tuned on the FUNSD dataset initialized from [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base).
This finetuned model achieves an F1 score of 90.59 on the test split of the FUNSD dataset.
[Paper](https://arxiv.org/pdf/2204.08387.pdf) | [Code](https://aka.ms/layoutlmv3) | [Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)
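As a minimal sketch (not part of the original card), the fine-tuned weights can be paired with the base checkpoint's processor, which applies OCR by default (this requires `pytesseract`); `form.png` is a placeholder document image:
```python
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification

# A hedged sketch: the base processor handles OCR and layout encoding by default.
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base")
model = LayoutLMv3ForTokenClassification.from_pretrained("HYPJUDY/layoutlmv3-base-finetuned-funsd")

image = Image.open("form.png").convert("RGB")  # placeholder scanned form image
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)
predicted_ids = outputs.logits.argmax(-1)  # per-token label ids
```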
If you find LayoutLMv3 helpful, please cite the following paper:
```
@article{huang2022layoutlmv3,
title={LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking},
author={Yupan Huang and Tengchao Lv and Lei Cui and Yutong Lu and Furu Wei},
journal={arXiv preprint arXiv:2204.08387},
year={2022}
}
```
## License
The content of this project itself is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
Portions of the source code are based on the [transformers](https://github.com/huggingface/transformers) project.
[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct)
|
Wi/arxiv-topics-distilbert-base-cased | 20469cc8a5611f69a6b8911188d7dee8b27a493d | 2022-07-12T01:02:14.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers",
"arxiv",
"topic-classification",
"license:apache-2.0"
] | text-classification | false | Wi | null | Wi/arxiv-topics-distilbert-base-cased | 358 | 0 | transformers | 2,679 | ---
language: en
license: apache-2.0
tags:
- arxiv
- topic-classification
- distilbert
widget:
- text: "Title: The Design of Radio Telescope Array Configurations using Multiobjective\n\
\ Optimization: Imaging Performance versus Cable Length\nAbstract: The next generation\
\ of radio telescope interferometric arrays requires\ncareful design of the array\
\ configuration to optimize the performance of the\noverall system. We have developed\
\ a framework, based on a genetic algorithm,\nfor rapid exploration and optimization\
\ of the objective space pertaining to\nmultiple objectives. We have evaluated\
\ a large space of possible designs for\n27-, 60-, 100-, and 160-station arrays.\
\ The 27-station optimizations can be\ncompared to the well-known VLA case, and\
\ the larger array designs apply to\narrays currently under design such as LOFAR,\
\ ATA, and the SKA. In the initial\nimplementation of our framework we evaluate\
\ designs with respect to two\nmetrics, array imaging performance and the length\
\ of cable necessary to connect\nthe stations. Imaging performance is measured\
\ by the degree to which the\nsampling of the uv plane is uniform. For the larger\
\ arrays we find that\nwell-known geometric designs perform well and occupy the\
\ Pareto front of\noptimum solutions. For the 27-element case we find designs,\
\ combining features\nof the well-known designs, that are more optimal as measured\
\ by these two\nmetrics. The results obtained by the multiobjective genetic optimization\
\ are\ncorroborated by simulated annealing, which also reveals the role of entropy\
\ in\narray optimization. Our framework is general, and may be applied to other\n\
design goals and issues, such as particular schemes for sampling the uv plane,\n\
array robustness, and phased deployment of arrays.\nAuthors: Babak E. Cohanim,\
\ Jacqueline N. Hewitt, Olivier de Weck"
- text: "Title: Evidence for a Neutron Star in the non-pulsating massive X-ray binary\n\
\ 4U2206+54\nAbstract: We present an analysis of archival RXTE and BeppoSAX data\
\ of the X-ray source\n4U2206+54 . For the first time, high energy data (> 30\
\ kev) are analyzed for\nthis source. The data are well described by comptonization\
\ models (CompTT and\nBMC) in which seed photons with temperatures between 1.1\
\ kev and 1.5 kev are\ncomptonized by a hot plasma at 50 kev thereby producing\
\ a hard tail which\nextends up to, at least, 100 kev. We offer a new method of\
\ identification of\nneutron star systems using a temperature - luminosity relation.\
\ If a given\nX-ray source is characterized by a low bolometric luminosity and\
\ a relatively\nhigh color blackbody temperature (>1 kev) it has necessarily to\
\ be a neutron\nstar rather than a black hole. From these arguments it is shown\
\ that the area\nof the soft photon source must be small (r ~ 1 km) and that the\
\ accretion disk,\nif present, must be truncated very far from the compact object.\
\ Here we report\non the possible existence of a cyclotron line around 30 kev.\
\ The presence of a\nneutron star in the system is strongly favored by the available\
\ data.\nAuthors: J. M. Torrej\xF3n, I. Kreykenbohm, A. Orr, L. Titarchuk, I.\
\ Negueruela"
- text: "Title: Solving the Schrodinger Equation for a Quantum Well\n with a Non-Uniform\
\ Potential\nAbstract: We present a numerical solution to the Schrodinger equation\
\ for a\nquantum well with a non-uniform potential. The potential is a Gaussian\n\
with a non-uniform distribution of energies. The solution is a solution to the\n\
Schrodinger equation with a non-uniform potential. The solution is a\nnon-uniform\
\ solution to the Schrodinger equation with a non-uniform potential.\nAuthors:\
\ George K. Kostopoulos, John A. Kostopoulos, and John C. Kostopoulos"
- text: "Title: Inverting Black-Scholes Model for Option Pricing\n with a Non-Uniformly\
\ Distributed Risk\nAbstract: We present a numerical solution to the Black-Scholes\
\ model for\noption pricing with a non-uniformly distributed risk. The solution\
\ is a\nnon-uniform solution to the Black-Scholes model with a non-uniformly\n\
distributed risk. The solution is a non-uniform solution to the\nBlack-Scholes\
\ model with a non-uniformly distributed risk.\nAuthors: Z. Starosov, L. Randhawa"
---
# DistilBERT on ArXiv
This model was developed to predict the top-level category of a paper, given the
paper's abstract, title, and list of authors. It was trained over a subset of
data pulled from the ArXiv API.
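As a rough usage sketch (not part of the original card), the classifier can be run through the `text-classification` pipeline on a string formatted like the widget examples above:
```python
from transformers import pipeline

# A hedged sketch: predict the top-level arXiv category for a formatted paper string.
classifier = pipeline("text-classification", model="Wi/arxiv-topics-distilbert-base-cased")
paper = (
    "Title: Evidence for a Neutron Star in the non-pulsating massive X-ray binary 4U2206+54\n"
    "Abstract: We present an analysis of archival RXTE and BeppoSAX data of the X-ray source 4U2206+54 ...\n"
    "Authors: J. M. Torrejon, I. Kreykenbohm, A. Orr, L. Titarchuk, I. Negueruela"
)
print(classifier(paper))
```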
|
mymusise/CPM-Generate-distill | 1611f575f3db84c83fd120ca8fd826953b4afdbf | 2021-05-23T10:40:31.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"zh",
"transformers"
] | text-generation | false | mymusise | null | mymusise/CPM-Generate-distill | 357 | 4 | transformers | 2,680 | ---
language: zh
widget:
- text: "天下熙熙,"
- text: "天气不错,"
---
<h1 align="center">
CPM-Generate-distill
</h1>
CPM (Chinese Pre-Trained Language Model) is a 2.6B-parameter model built by the research team of the Beijing Zhiyuan Institute of Artificial Intelligence and Tsinghua University (@TsinghuaAI).
[repo: CPM-Generate](https://github.com/TsinghuaAI/CPM-Generate)
One thing you need to know: this model was not uploaded by the official team; the conversion script is [here](https://github.com/mymusise/CPM-TF2Transformer/blob/main/transfor_CMP.ipynb).
`CPM-Generate-distill` is a distilled version of `CPM`.
# How to use
How to use this model directly from the 🤗/transformers library:
```python
from transformers import XLNetTokenizer, TFGPT2LMHeadModel
from transformers import TextGenerationPipeline
import jieba
# add special pre-processing (jieba word segmentation + whitespace mapping)
class XLNetTokenizer(XLNetTokenizer):
translator = str.maketrans(" \n", "\u2582\u2583")
def _tokenize(self, text, *args, **kwargs):
text = [x.translate(self.translator) for x in jieba.cut(text, cut_all=False)]
text = " ".join(text)
return super()._tokenize(text, *args, **kwargs)
def _decode(self, *args, **kwargs):
text = super()._decode(*args, **kwargs)
text = text.replace(' ', '').replace('\u2582', ' ').replace('\u2583', '\n')
return text
tokenizer = XLNetTokenizer.from_pretrained('mymusise/CPM-Generate-distill')
model = TFGPT2LMHeadModel.from_pretrained("mymusise/CPM-Generate-distill")
text_generater = TextGenerationPipeline(model, tokenizer)
print(text_generater("天下熙熙,", max_length=15, top_k=1, use_cache=True, prefix=''))
```

|
junnyu/wobert_chinese_plus_base | 298f3eb6ce959a8d191d608f85ba90b3e65740cf | 2021-07-07T01:18:40.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"zh",
"transformers",
"wobert",
"autotrain_compatible"
] | fill-mask | false | junnyu | null | junnyu/wobert_chinese_plus_base | 356 | 1 | transformers | 2,681 | ---
language: zh
tags:
- wobert
inference: False
---
## Introduction
### TensorFlow version
https://github.com/ZhuiyiTechnology/WoBERT
### PyTorch version
https://github.com/JunnYu/WoBERT_pytorch
## Installation (mainly to install WoBertTokenizer)
```bash
pip install git+https://github.com/JunnYu/WoBERT_pytorch.git
```
## Usage
```python
import torch
from transformers import BertForMaskedLM as WoBertForMaskedLM
from wobert import WoBertTokenizer
pretrained_model_or_path_list = [
"junnyu/wobert_chinese_plus_base", "junnyu/wobert_chinese_base"
]
for path in pretrained_model_or_path_list:
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = WoBertTokenizer.from_pretrained(path)
model = WoBertForMaskedLM.from_pretrained(path)
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits[0]
outputs_sentence = ""
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(outputs[i].topk(k=5)[1])
outputs_sentence += "[" + "||".join(tokens) + "]"
else:
outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id],
skip_special_tokens=True))
print(outputs_sentence)
# RoFormer 今天[天气||天||心情||阳光||空气]很好,我[想||要||打算||准备||喜欢]去公园玩。
# PLUS WoBERT 今天[天气||阳光||天||心情||空气]很好,我[想||要||打算||准备||就]去公园玩。
# WoBERT 今天[天气||阳光||天||心情||空气]很好,我[想||要||就||准备||也]去公园玩。
```
## Citation
Bibtex:
```tex
@techreport{zhuiyiwobert,
title={WoBERT: Word-based Chinese BERT model - ZhuiyiAI},
author={Jianlin Su},
year={2020},
url="https://github.com/ZhuiyiTechnology/WoBERT",
}
``` |
Helsinki-NLP/opus-mt-ht-en | c45c603fdf8b878d6002148783a21d85e5ece0b5 | 2021-09-09T22:10:28.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ht",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ht-en | 355 | null | transformers | 2,682 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ht-en
* source languages: ht
* target languages: en
* OPUS readme: [ht-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ht-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ht-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ht.en | 37.5 | 0.542 |
| Tatoeba.ht.en | 57.0 | 0.689 |
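As a usage sketch (not part of the original card), the weights can be loaded with the Marian classes in the transformers library; the Haitian Creole example sentence is illustrative:
```python
from transformers import MarianMTModel, MarianTokenizer

# A hedged sketch: translate Haitian Creole to English with this checkpoint.
model_name = "Helsinki-NLP/opus-mt-ht-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Mwen renmen liv sa a anpil."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```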
|
google/multiberts-seed_13 | db04544bc9fe3a2c40fff2a460003e1a4d31c85f | 2021-11-05T22:31:43.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_13",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_13 | 355 | null | transformers | 2,683 | ---
language: en
tags:
- multiberts
- multiberts-seed_13
license: apache-2.0
---
# MultiBERTs - Seed 13
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #13.
## Model Description
This model is a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is often comparable to that of the original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_13')
model = TFBertModel.from_pretrained("google/multiberts-seed_13")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_13')
model = BertModel.from_pretrained("google/multiberts-seed_13")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
raruidol/ArgumentRelation | eabc480c237123f0a4f9487d0872ec460d3c6b66 | 2022-01-26T15:04:12.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | raruidol | null | raruidol/ArgumentRelation | 355 | 1 | transformers | 2,684 | # Argument Relation Mining
The best-performing model trained in the "Transformer-Based Models for Automatic Detection of Argument Relations: A Cross-Domain Evaluation" paper.
Code is available at https://github.com/raruidol/ArgumentRelationMining
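As a rough sketch (not part of the original card), the checkpoint can be loaded as a RoBERTa sequence classifier; treating the two argumentative units as a sentence pair and relying on the label mapping in the repository's config are assumptions here, so check the linked repository for the exact input format:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# A hedged sketch: score the relation between two argumentative units as a sentence pair.
model_id = "raruidol/ArgumentRelation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

unit_a = "We should invest more in renewable energy."
unit_b = "Solar power has become cheaper than coal in most markets."
inputs = tokenizer(unit_a, unit_b, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```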
Cite:
```
@article{ruiz2021transformer,
title={Transformer-based models for automatic identification of argument relations: A cross-domain evaluation},
author={Ruiz-Dolz, Ramon and Alemany, Jose and Heras, Stella and Garcia-Fornes, Ana},
journal={IEEE Intelligent Systems},
year={2021},
publisher={IEEE}
}
```
|
speechbrain/asr-wav2vec2-commonvoice-fr | 269ef5853401e2aa2e47fe6fd7027b067c630c9b | 2022-06-05T15:38:43.000Z | [
"wav2vec2",
"feature-extraction",
"fr",
"dataset:commonvoice",
"speechbrain",
"CTC",
"pytorch",
"Transformer",
"hf-asr-leaderboard",
"license:apache-2.0",
"automatic-speech-recognition",
"model-index"
] | automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-wav2vec2-commonvoice-fr | 355 | 5 | speechbrain | 2,685 | ---
language:
- fr
thumbnail: null
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- pytorch
- speechbrain
- Transformer
- hf-asr-leaderboard
license: apache-2.0
datasets:
- commonvoice
metrics:
- wer
- cer
model-index:
- name: asr-wav2vec2-commonvoice-fr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: CommonVoice 6.1 (French)
type: mozilla-foundation/common_voice_6_1
config: fr
split: test
args:
language: fr
metrics:
- name: Test WER
type: wer
value: '9.96'
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# wav2vec 2.0 with CTC/Attention trained on CommonVoice French (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on CommonVoice (French Language) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | Test CER | Test WER | GPUs |
|:-------------:|:--------------:|:--------------:| :--------:|
| 24-08-21 | 3.19 | 9.96 | 2xV100 32GB |
## Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions (train.tsv) of CommonVoice (FR).
- Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([LeBenchmark/wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large)) is combined with two DNN layers and finetuned on CommonVoice FR.
The obtained final acoustic representation is given to the CTC greedy decoder.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in French)
```python
from speechbrain.pretrained import EncoderASR
asr_model = EncoderASR.from_hparams(source="speechbrain/asr-wav2vec2-commonvoice-fr", savedir="pretrained_models/asr-wav2vec2-commonvoice-fr")
asr_model.transcribe_file('speechbrain/asr-wav2vec2-commonvoice-fr/example-fr.wav')
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
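For example (a small variant of the snippet above):
```python
from speechbrain.pretrained import EncoderASR

# Same call as above, with the model placed on GPU via run_opts.
asr_model = EncoderASR.from_hparams(
    source="speechbrain/asr-wav2vec2-commonvoice-fr",
    savedir="pretrained_models/asr-wav2vec2-commonvoice-fr",
    run_opts={"device": "cuda"},
)
asr_model.transcribe_file("speechbrain/asr-wav2vec2-commonvoice-fr/example-fr.wav")
```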
### Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/CommonVoice/ASR/CTC/
python train_with_wav2vec.py hparams/train_fr_with_wav2vec.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1T9DfdZwcNI9CURxhLCi8GA5JVz8adiY8?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/speechbrain/speechbrain}},
}
```
#### About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain
|
stanford-crfm/arwen-gpt2-medium-x21 | fc6844fbbbdc91bc63546c672282b8fb1d70b5d3 | 2022-06-20T11:36:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | stanford-crfm | null | stanford-crfm/arwen-gpt2-medium-x21 | 355 | null | transformers | 2,686 | Entry not found |
CAMeL-Lab/bert-base-arabic-camelbert-msa-ner | 54e2905e7c756883b00877cd48ed710a304af0d1 | 2021-10-17T11:07:13.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-msa-ner | 354 | null | transformers | 2,687 | ---
language:
- ar
license: apache-2.0
widget:
- text: "إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع"
---
# CAMeLBERT MSA NER Model
## Model description
**CAMeLBERT MSA NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678).
"* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT MSA NER model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component:
```python
>>> from camel_tools.ner import NERecognizer
>>> from camel_tools.tokenizers.word import simple_word_tokenize
>>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-msa-ner')
>>> sentence = simple_word_tokenize('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع')
>>> ner.predict_sentence(sentence)
>>> ['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
```
You can also use the NER model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-ner')
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
[{'word': 'أبوظبي',
'score': 0.9895730018615723,
'entity': 'B-LOC',
'index': 2,
'start': 6,
'end': 12},
{'word': 'الإمارات',
'score': 0.8156259655952454,
'entity': 'B-LOC',
'index': 8,
'start': 33,
'end': 41},
{'word': 'العربية',
'score': 0.890906810760498,
'entity': 'I-LOC',
'index': 9,
'start': 42,
'end': 49},
{'word': 'المتحدة',
'score': 0.8169114589691162,
'entity': 'I-LOC',
'index': 10,
'start': 50,
'end': 57}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
|
SajjadAyoubi/xlm-roberta-large-fa-qa | 8e071d1d25324e15ebe203a245019e4e4f782e25 | 2021-04-21T07:23:30.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | SajjadAyoubi | null | SajjadAyoubi/xlm-roberta-large-fa-qa | 354 | null | transformers | 2,688 | ### How to use
#### Requirements
This model requires the `transformers` and `sentencepiece` packages, both of which can be
installed using `pip`.
```sh
pip install transformers sentencepiece
```
#### Pipelines 🚀
In case you are not familiar with Transformers, you can use pipelines instead.
Note that, pipelines can't have _no answer_ for the questions.
```python
from transformers import pipeline
model_name = "SajjadAyoubi/lm-roberta-large-fa-qa"
qa_pipeline = pipeline("question-answering", model=model_name, tokenizer=model_name)
text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"]
for question in questions:
print(qa_pipeline({"context": text, "question": question}))
>>> {'score': 0.4839823544025421, 'start': 8, 'end': 18, 'answer': 'سجاد ایوبی'}
>>> {'score': 0.3747948706150055, 'start': 24, 'end': 32, 'answer': '۲۰ سالمه'}
>>> {'score': 0.5945395827293396, 'start': 38, 'end': 55, 'answer': 'پردازش زبان طبیعی'}
```
#### Manual approach 🔥
Using the manual approach, it is possible to get _no answer_ predictions with even better
performance.
- PyTorch
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from src.utils import AnswerPredictor
model_name = "SajjadAyoubi/lm-roberta-large-fa-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"]
# this class is from src/utils.py and you can read more about it
predictor = AnswerPredictor(model, tokenizer, device="cpu", n_best=10)
preds = predictor(questions, [text] * 3, batch_size=3)
for k, v in preds.items():
print(v)
```
Produces an output such below:
```
100%|██████████| 1/1 [00:00<00:00, 3.56it/s]
{'score': 8.040637016296387, 'text': 'سجاد ایوبی'}
{'score': 9.901972770690918, 'text': '۲۰'}
{'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'}
```
- TensorFlow 2.X
```python
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering
from src.utils import TFAnswerPredictor
model_name = "SajjadAyoubi/lm-roberta-large-fa-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_name)
text = "سلام من سجاد ایوبی هستم ۲۰ سالمه و به پردازش زبان طبیعی علاقه دارم"
questions = ["اسمم چیه؟", "چند سالمه؟", "به چی علاقه دارم؟"]
# this class is from src/utils.py, you can read more about it
predictor = TFAnswerPredictor(model, tokenizer, n_best=10)
preds = predictor(questions, [text] * 3, batch_size=3)
for k, v in preds.items():
print(v)
```
Produces an output such below:
```text
100%|██████████| 1/1 [00:00<00:00, 3.56it/s]
{'score': 8.040637016296387, 'text': 'سجاد ایوبی'}
{'score': 9.901972770690918, 'text': '۲۰'}
{'score': 12.117212295532227, 'text': 'پردازش زبان طبیعی'}
```
Or you can access the whole demonstration using [HowToUse iPython Notebook on Google Colab](https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/HowToUse.ipynb)
|
microsoft/beit-large-finetuned-ade-640-640 | db2221bdd42a0f4c934ccd08a0eec10060ebd4d8 | 2022-02-22T09:08:30.000Z | [
"pytorch",
"beit",
"dataset:scene_parse_150",
"arxiv:2106.08254",
"transformers",
"vision",
"image-segmentation",
"license:apache-2.0"
] | image-segmentation | false | microsoft | null | microsoft/beit-large-finetuned-ade-640-640 | 354 | null | transformers | 2,689 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- scene_parse_150
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
example_title: Castle
---
# BEiT (large-sized model, fine-tuned on ADE20k)
BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on [ADE20k](https://huggingface.co/datasets/scene_parse_150) (an important benchmark for semantic segmentation of images) at resolution 640x640. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).
Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches.
Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: for semantic segmentation, one can just add one of the decode heads available in the [mmseg library](https://github.com/open-mmlab/mmsegmentation) for example, and fine-tune the model in a supervised fashion on annotated images. This is what the authors did: they fine-tuned BEiT with an UperHead segmentation decode head, allowing it to obtain SOTA results on important benchmarks such as ADE20k and CityScapes.
## Intended uses & limitations
You can use the raw model for semantic segmentation of images. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model for semantic segmentation:
```python
from transformers import BeitFeatureExtractor, BeitForSemanticSegmentation
from datasets import load_dataset
from PIL import Image
# load ADE20k image
ds = load_dataset("hf-internal-testing/fixtures_ade20k", split="test")
image = Image.open(ds[0]["file"])  # open the first test image
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-finetuned-ade-640-640')
model = BeitForSemanticSegmentation.from_pretrained('microsoft/beit-large-finetuned-ade-640-640')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# logits are of shape (batch_size, num_labels, height/4, width/4)
logits = outputs.logits
```
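To turn the logits into a per-pixel class map, one possible post-processing step (a sketch, not from the original card; it reuses `logits` and `inputs` from the snippet above) is to upsample to the input resolution and take an argmax:

```python
import torch

# A hedged sketch: upsample logits to the preprocessed image size and take a per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=inputs["pixel_values"].shape[-2:], mode="bilinear", align_corners=False
)
segmentation_map = upsampled.argmax(dim=1)[0]  # (height, width) tensor of class indices
```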
Currently, both the feature extractor and model support PyTorch.
## Training data
This BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ADE20k](http://sceneparsing.csail.mit.edu/), a dataset consisting of thousands of annotated images and 150 classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are cropped and padded to the same resolution (640x640) and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254).
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-08254,
author = {Hangbo Bao and
Li Dong and
Furu Wei},
title = {BEiT: {BERT} Pre-Training of Image Transformers},
journal = {CoRR},
volume = {abs/2106.08254},
year = {2021},
url = {https://arxiv.org/abs/2106.08254},
archivePrefix = {arXiv},
eprint = {2106.08254},
timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
norie4/DialoGPT-small-kyutebot | f0c0206ce3e265361c03fd0b84f4a09f1872210f | 2022-01-31T08:32:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | norie4 | null | norie4/DialoGPT-small-kyutebot | 354 | null | transformers | 2,690 | ---
tags:
- conversational
---
# mingbot DialoGPT Model |
Graphcore/gpt2-wikitext-103 | 2024fe13f55566bf24d75919f6880857beb1b537 | 2022-05-25T18:26:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"dataset:wikitext",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | Graphcore | null | Graphcore/gpt2-wikitext-103 | 354 | 1 | transformers | 2,691 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: clm_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Graphcore/gpt2-wikitext-103
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.
## Model description
GPT-2 is a large transformer-based language model built from transformer decoder blocks (BERT, by contrast, uses transformer encoder blocks). It moves layer normalisation to the input of each sub-block, similar to pre-activation residual networks, and adds an additional layer normalisation after the final block.
Paper link : [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)
## Intended uses & limitations
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the [wikitext-103-raw-v1](https://huggingface.co/datasets/wikitext) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9902
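As a rough usage sketch (not part of the original card), the resulting checkpoint is a standard GPT-2 causal language model and can be sampled with the `text-generation` pipeline:
```python
from transformers import pipeline

# A hedged sketch: sample from the fine-tuned checkpoint as a standard GPT-2 causal LM.
generator = pipeline("text-generation", model="Graphcore/gpt2-wikitext-103")
print(generator("The history of natural language processing", max_length=60, do_sample=True)[0]["generated_text"])
```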
## Training and evaluation data
- [HuggingFace/wikitext-103-raw-v1](https://huggingface.co/datasets/wikitext) dataset
## Training procedure
Trained on 16 Graphcore Mk2 IPUs using [optimum-graphcore](https://github.com/huggingface/optimum-graphcore).
Command line:
```
python examples/language-modeling/run_clm.py \
--model_name_or_path gpt2 \
--ipu_config_name Graphcore/gpt2-small-ipu \
--dataset_name wikitext \
--dataset_config_name wikitext-103-raw-v1 \
--do_train \
--do_eval \
--num_train_epochs 10 \
--dataloader_num_workers 64 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 128 \
--output_dir /tmp/clm_output \
--logging_steps 5 \
--learning_rate 1e-5 \
--lr_scheduler_type linear \
--loss_scaling 16384 \
--weight_decay 0.01 \
--warmup_ratio 0.1 \
--ipu_config_overrides="embedding_serialization_factor=4,optimizer_state_offchip=true,inference_device_iterations=5" \
--dataloader_drop_last \
--pod_type pod16
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 1024
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- training precision: Mixed Precision
### Training results
```
***** train metrics *****
"epoch": 10.0,
"train_loss": 3.1787637246621623,
"train_runtime": 4372.4031,
"train_samples": 114248,
"train_samples_per_second": 261.293,
"train_steps_per_second": 0.254
***** eval metrics *****
"eval_loss": 2.990234375,
"eval_samples": 240,
"perplexity": 19.89034374461794
```
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Ahmad/parsT5-base | cfcb398d4d33113e3b8c63f15875e52c6be62077 | 2021-11-03T13:47:07.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Ahmad | null | Ahmad/parsT5-base | 353 | 3 | transformers | 2,692 | A monolingual T5 model for Persian, trained on the OSCAR 21.09 corpus (https://oscar-corpus.com/) with a self-supervised objective. A 35 GB deduplicated version of the Persian data was used for pre-training.
It is similar to the English T5 model, but for Persian only. You may need to fine-tune it on your specific task.
Example code:
```
from transformers import T5ForConditionalGeneration,AutoTokenizer
import torch
model_name = "Ahmad/parsT5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
input_ids = tokenizer.encode('دانش آموزان به <extra_id_0> میروند و <extra_id_1> میخوانند.', return_tensors='pt')
with torch.no_grad():
hypotheses = model.generate(input_ids)
for h in hypotheses:
print(tokenizer.decode(h))
```
Steps: 725000
Accuracy: 0.66
Training More?
========
To train the model further, please refer to its GitHub repository at:
https://github.com/puraminy/parsT5
|
Contrastive-Tension/BERT-Base-Swe-CT-STSb | 7554c0a9f8abb8bc61193eb6cf26a243d586d565 | 2021-05-18T17:51:43.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Contrastive-Tension | null | Contrastive-Tension/BERT-Base-Swe-CT-STSb | 353 | null | transformers | 2,693 | Entry not found |
google/multiberts-seed_15 | eff6de8f489d357cae470131562c20adadde6fb0 | 2021-11-05T22:35:05.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_15",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_15 | 353 | null | transformers | 2,694 | ---
language: en
tags:
- multiberts
- multiberts-seed_15
license: apache-2.0
---
# MultiBERTs - Seed 15
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #15.
## Model Description
This model is a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is often comparable to that of the original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_15')
model = TFBertModel.from_pretrained("google/multiberts-seed_15")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_15')
model = BertModel.from_pretrained("google/multiberts-seed_15")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
HooshvareLab/bert-base-parsbert-armanner-uncased | 70a465658022ef6721ac95374b2bc38340d5fdc5 | 2021-05-18T20:42:28.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"fa",
"arxiv:2005.12515",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | HooshvareLab | null | HooshvareLab/bert-base-parsbert-armanner-uncased | 352 | null | transformers | 2,695 | ---
language: fa
license: apache-2.0
---
## ParsBERT: Transformer-based Model for Persian Language Understanding
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)
All the models (downstream tasks) are uncased and trained with whole word masking. (coming soon, stay tuned)
## Persian NER [ARMAN, PEYMA, ARMAN+PEYMA]
This task aims to extract named entities in the text, such as names, and label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked in `IOB` format. In this format, tokens that are not part of an entity are tagged as `”O”`, the `”B”` tag corresponds to the first word of an entity, and the `”I”` tag corresponds to the rest of the terms of the same entity. Both `”B”` and `”I”` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens when fed a raw text. There are two primary datasets used in Persian NER, `ARMAN` and `PEYMA`. In ParsBERT, we prepared NER data for both datasets as well as a combination of the two.
### ARMAN
The ARMAN dataset holds 7,682 sentences with 250,015 tokens tagged over six different classes.
1. Organization
2. Location
3. Facility
4. Event
5. Product
6. Person
| Label | # |
|:------------:|:-----:|
| Organization | 30108 |
| Location | 12924 |
| Facility | 4458 |
| Event | 7557 |
| Product | 4389 |
| Person | 15645 |
**Download**
You can download the dataset from [here](https://github.com/HaniehP/PersianNER)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|---------|----------|------------|--------------|----------|----------------|------------|
| ARMAN | 93.10* | 89.9 | 84.03 | 86.55 | - | 77.45 |
## How to use :hugs:
| Notebook | Description | |
|:----------|:-------------|------:|
| [How to use Pipelines](https://github.com/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers | [](https://colab.research.google.com/github/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) |
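In addition to the notebook above, here is a minimal sketch (not part of the original card) using the `ner` pipeline; the Persian example sentence is illustrative:
```python
from transformers import pipeline

# A hedged sketch: run the ARMAN-trained tagger through the token-classification pipeline.
model_id = "HooshvareLab/bert-base-parsbert-armanner-uncased"
ner = pipeline("ner", model=model_id, tokenizer=model_id)
print(ner("او در دانشگاه تهران درس می‌خواند."))  # illustrative Persian sentence
```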
## Cite
Please cite the following paper in your publication if you are using [ParsBERT](https://arxiv.org/abs/2005.12515) in your research:
```markdown
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Acknowledgments
We hereby, express our gratitude to the [Tensorflow Research Cloud (TFRC) program](https://tensorflow.org/tfrc) for providing us with the necessary computation resources. We also thank [Hooshvare](https://hooshvare.com) Research Group for facilitating dataset gathering and scraping online text resources.
## Contributors
- Mehrdad Farahani: [Linkedin](https://www.linkedin.com/in/m3hrdadfi/), [Twitter](https://twitter.com/m3hrdadfi), [Github](https://github.com/m3hrdadfi)
- Mohammad Gharachorloo: [Linkedin](https://www.linkedin.com/in/mohammad-gharachorloo/), [Twitter](https://twitter.com/MGharachorloo), [Github](https://github.com/baarsaam)
- Marzieh Farahani: [Linkedin](https://www.linkedin.com/in/marziehphi/), [Twitter](https://twitter.com/marziehphi), [Github](https://github.com/marziehphi)
- Mohammad Manthouri: [Linkedin](https://www.linkedin.com/in/mohammad-manthouri-aka-mansouri-07030766/), [Twitter](https://twitter.com/mmanthouri), [Github](https://github.com/mmanthouri)
- Hooshvare Team: [Official Website](https://hooshvare.com/), [Linkedin](https://www.linkedin.com/company/hooshvare), [Twitter](https://twitter.com/hooshvare), [Github](https://github.com/hooshvare), [Instagram](https://www.instagram.com/hooshvare/)
+ And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: [Linkedin](https://www.linkedin.com/in/sara-tabrizi-64548b79/), [Behance](https://www.behance.net/saratabrizi), [Instagram](https://www.instagram.com/sara_b_tabrizi/)
## Releases
### Release v0.1 (May 29, 2019)
This is the first version of our ParsBERT NER!
|
Laptop/DialoGPT-small-gandalf | 2e91521c0ec43d77db6447f26a9b3511b9f6c9ae | 2021-08-27T21:48:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Laptop | null | Laptop/DialoGPT-small-gandalf | 352 | null | transformers | 2,696 | ---
tags:
- conversational
---
# Gandalf DialoGPT Model |
google/multiberts-seed_14 | 3f10c679bb54bdcd581412d4f47a7fd589e41622 | 2021-11-05T22:33:27.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_14",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_14 | 352 | null | transformers | 2,697 | ---
language: en
tags:
- multiberts
- multiberts-seed_14
license: apache-2.0
---
# MultiBERTs - Seed 14
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #14.
## Model Description
This model is a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, so it is possible that differences from
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is often comparable to that of the original
BERT, but we found significant differences on the SQuAD dev set (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_14')
model = TFBertModel.from_pretrained("google/multiberts-seed_14")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_14')
model = BertModel.from_pretrained("google/multiberts-seed_14")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
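Because the checkpoint was pretrained with the MLM objective, it can also be queried for masked-token predictions. The snippet below is a minimal sketch that assumes the hosted weights still include the pretraining (MLM) head; if that head were missing, it would be re-initialized at random and the predictions would not be meaningful.
```python
from transformers import pipeline

# Fill-mask sketch; assumes the MLM head from pretraining is present in the checkpoint.
fill_mask = pipeline("fill-mask", model="google/multiberts-seed_14")
for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```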
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
huggingtweets/getfiscal | 761c0fe3c38033fe81940399fdd9963dffd48bc5 | 2021-05-22T05:24:29.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/getfiscal | 351 | null | transformers | 2,698 | ---
language: en
thumbnail: https://www.huggingtweets.com/getfiscal/1616662151704/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1164709780885106690/5nqTrvC0_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Don Hughes 🦌 🤖 AI Bot </div>
<div style="font-size: 15px">@getfiscal bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@getfiscal's tweets](https://twitter.com/getfiscal).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3221 |
| Retweets | 1002 |
| Short tweets | 409 |
| Tweets kept | 1810 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/d6p1oytn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @getfiscal's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/28d4ali8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/28d4ali8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/getfiscal')
generator("My dream is", num_return_sequences=5)
```
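The call above uses the pipeline's default decoding settings. Generation parameters can be passed through the same call to trade off coherence and variety; the values below are illustrative assumptions rather than tuned defaults for this checkpoint.
```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/getfiscal')

# Illustrative sampling settings (assumed values, not tuned for this model).
outputs = generator("My dream is",
                    max_length=60,           # keep outputs roughly tweet-length
                    do_sample=True,          # sample instead of greedy decoding
                    top_p=0.95,              # nucleus sampling
                    temperature=0.9,         # slightly soften the distribution
                    num_return_sequences=3)
for out in outputs:
    print(out["generated_text"])
```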
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
puugz/DialoGPT-small-spiderman | 979eca1cebf1ae920bae64439d36aee8b6a62a24 | 2022-02-20T14:01:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | puugz | null | puugz/DialoGPT-small-spiderman | 351 | null | transformers | 2,699 | ---
tags:
- conversational
---
# Spider-Man DialoGPT Model |