modelId (string) | sha (string) | lastModified (string) | tags (sequence) | pipeline_tag (string) | private (bool) | author (string) | config (null) | id (string) | downloads (float64) | likes (float64) | library_name (string) | __index_level_0__ (int64) | readme (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
efederici/it5-efficient-small-lfqa | 9b34753efd25e1e849f0ec9b900aeb5210c14d62 | 2022-05-03T13:33:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"it",
"dataset:custom",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | efederici | null | efederici/it5-efficient-small-lfqa | 25 | null | transformers | 7,700 | ---
license: apache-2.0
language:
- it
datasets:
- custom
---
# it5-efficient-small-lfqa
This is an [IT5](https://huggingface.co/stefan-it/it5-efficient-small-el32) efficient-small T5 model fine-tuned on an Italian long-form question answering (LFQA) dataset.
<p align="center">
<img src="https://www.marcorossiartecontemporanea.net/wp-content/uploads/2021/04/MARCTM0413-9CFBn1gs-scaled.jpg" width="400"> </br>
Mirco Marchelli, Voce in capitolo, 2019
</p>
## Training Data
This model was trained on an LFQA dataset and provides long-form answers to open-domain questions.
## Usage and Performance
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("efederici/it5-efficient-small-lfqa")
model = AutoModelForSeq2SeqLM.from_pretrained("efederici/it5-efficient-small-lfqa")
query = "con chi si era messo in contatto elon musk?"
# concatenated texts/document text
doc = """
La notizia dell’acquisizione da parte di Elon Musk del 9,2 per cento delle azioni di Twitter e del suo successivo ingresso nel consiglio di amministrazione della società hanno attirato grandi attenzioni, non solo da parte degli analisti finanziari, ma anche di chi si occupa di social media e del modo in cui viene impiegata la piattaforma da centinaia di milioni di persone in tutto il mondo. Musk, che ha un grande seguito su Twitter, in passato aveva più volte criticato il social network, accusandolo di non tutelare a sufficienza le libertà di espressione, anche in casi limite come l’assalto al Congresso degli Stati Uniti del 2021.
Alcune settimane fa, Musk si era messo in contatto con Parag Agrawal, CEO di Twitter da fine novembre 2021, e con il suo predecessore e cofondatore della società, Jack Dorsey, annunciando di avere avviato l’acquisizione di alcune quote dell’azienda e di essere disponibile per discutere di soluzioni per migliorarla. Secondo fonti del New York Times, dopo i primi contatti, Agrawal aveva proposto a Musk di avere un ruolo più attivo oltre a quello di azionista, offrendogli la possibilità di entrare nel consiglio di amministrazione.
"""
query_and_docs = f"Domanda: {query} Contesto: {doc}"
model_input = tokenizer(query_and_docs, truncation=True, padding=True, return_tensors="pt")
output = model.generate(input_ids=model_input["input_ids"],
attention_mask=model_input["attention_mask"],
min_length=10,
max_length=256,
do_sample=False,
early_stopping=True,
num_beams=8,
temperature=1.0,
top_k=None,
top_p=None,
no_repeat_ngram_size=3,
num_return_sequences=1)
answer = tokenizer.batch_decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(answer)
```
The model will predict: 'Elon Musk si era messo in contatto con Parag Agrawal, CEO di Twitter da fine novembre 2021 e con il suo predecessore e cofondatore della società, Jack Dorsey, annunciando di avere avviato l’acquisizione di alcune quote dell’azienda e di essere disponibile per discutere soluzioni per migliorarla.' |
leonweber/semantic_relations | f75b57098db886096604641b1892c54728c44418 | 2022-05-14T12:55:31.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | leonweber | null | leonweber/semantic_relations | 25 | null | transformers | 7,701 | Entry not found |
danlupu/sentiment-analysis | 69eaa529b7ffb74ed958ef53551765e7b8a1168c | 2022-05-17T08:55:21.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | danlupu | null | danlupu/sentiment-analysis | 25 | null | transformers | 7,702 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: sentiment-analysis
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8657718120805369
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3124
- Accuracy: 0.8667
- F1: 0.8658
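The card does not include an inference example; the following is a minimal sketch using the standard `pipeline` API (the example sentence is illustrative, and the displayed label names depend on the exported config):
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint for IMDB-style sentiment classification.
classifier = pipeline("text-classification", model="danlupu/sentiment-analysis")

# Returns the predicted label and its score, e.g. [{'label': ..., 'score': ...}].
print(classifier("A beautifully shot film with a script that never quite lands."))
```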
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
thunninoi/wav2vec2-japanese-hiragana-vtuber | aeda6196fa45f4a55827043a80087b158b62059d | 2022-06-02T04:31:41.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | thunninoi | null | thunninoi/wav2vec2-japanese-hiragana-vtuber | 25 | null | transformers | 7,703 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints
This model is a fine-tuned version of [vumichien/wav2vec2-large-xlsr-japanese-hiragana](https://huggingface.co/vumichien/wav2vec2-large-xlsr-japanese-hiragana) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4134
- Wer: 0.1884
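As a usage sketch (not part of the original card), the checkpoint can be driven through the ASR pipeline; the audio path below is a placeholder, and 16 kHz mono input is assumed, as is standard for wav2vec2 models:
```python
from transformers import pipeline

# Minimal sketch: transcribe a Japanese audio clip to hiragana.
asr = pipeline("automatic-speech-recognition", model="thunninoi/wav2vec2-japanese-hiragana-vtuber")

result = asr("sample_16khz.wav")  # placeholder path to a 16 kHz mono WAV file
print(result["text"])
```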
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 3
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 75
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.4299 | 1.0 | 247 | 0.7608 | 0.4853 |
| 0.8045 | 2.0 | 494 | 0.6603 | 0.4449 |
| 0.6061 | 3.0 | 741 | 0.5527 | 0.4233 |
| 0.4372 | 4.0 | 988 | 0.6262 | 0.4029 |
| 0.3226 | 5.0 | 1235 | 0.4528 | 0.3462 |
| 0.2581 | 6.0 | 1482 | 0.4961 | 0.3226 |
| 0.2147 | 7.0 | 1729 | 0.4856 | 0.3075 |
| 0.1736 | 8.0 | 1976 | 0.4372 | 0.3063 |
| 0.1488 | 9.0 | 2223 | 0.3771 | 0.2761 |
| 0.1286 | 10.0 | 2470 | 0.4373 | 0.2590 |
| 0.1118 | 11.0 | 2717 | 0.3840 | 0.2594 |
| 0.1037 | 12.0 | 2964 | 0.4241 | 0.2590 |
| 0.0888 | 13.0 | 3211 | 0.4150 | 0.2410 |
| 0.0923 | 14.0 | 3458 | 0.3811 | 0.2524 |
| 0.0813 | 15.0 | 3705 | 0.4164 | 0.2459 |
| 0.0671 | 16.0 | 3952 | 0.3498 | 0.2288 |
| 0.0669 | 17.0 | 4199 | 0.3697 | 0.2247 |
| 0.0586 | 18.0 | 4446 | 0.3550 | 0.2251 |
| 0.0533 | 19.0 | 4693 | 0.4024 | 0.2231 |
| 0.0542 | 20.0 | 4940 | 0.4130 | 0.2121 |
| 0.0532 | 21.0 | 5187 | 0.3464 | 0.2231 |
| 0.0451 | 22.0 | 5434 | 0.3346 | 0.1966 |
| 0.0413 | 23.0 | 5681 | 0.4599 | 0.2088 |
| 0.0401 | 24.0 | 5928 | 0.4031 | 0.2162 |
| 0.0345 | 25.0 | 6175 | 0.3726 | 0.2084 |
| 0.033 | 26.0 | 6422 | 0.4619 | 0.2076 |
| 0.0366 | 27.0 | 6669 | 0.4071 | 0.2202 |
| 0.0343 | 28.0 | 6916 | 0.4114 | 0.2088 |
| 0.0319 | 29.0 | 7163 | 0.3605 | 0.2015 |
| 0.0304 | 30.0 | 7410 | 0.4097 | 0.2015 |
| 0.0253 | 31.0 | 7657 | 0.4152 | 0.1970 |
| 0.0235 | 32.0 | 7904 | 0.3829 | 0.2043 |
| 0.0255 | 33.0 | 8151 | 0.3976 | 0.2011 |
| 0.0201 | 34.0 | 8398 | 0.4247 | 0.2088 |
| 0.022 | 35.0 | 8645 | 0.3831 | 0.1945 |
| 0.0175 | 36.0 | 8892 | 0.3838 | 0.2007 |
| 0.0201 | 37.0 | 9139 | 0.4377 | 0.1986 |
| 0.0176 | 38.0 | 9386 | 0.4546 | 0.2043 |
| 0.021 | 39.0 | 9633 | 0.4341 | 0.2039 |
| 0.0191 | 40.0 | 9880 | 0.4043 | 0.1937 |
| 0.0159 | 41.0 | 10127 | 0.4098 | 0.2064 |
| 0.0148 | 42.0 | 10374 | 0.4027 | 0.1905 |
| 0.0129 | 43.0 | 10621 | 0.4104 | 0.1933 |
| 0.0123 | 44.0 | 10868 | 0.3738 | 0.1925 |
| 0.0159 | 45.0 | 11115 | 0.3946 | 0.1933 |
| 0.0091 | 46.0 | 11362 | 0.3971 | 0.1880 |
| 0.0082 | 47.0 | 11609 | 0.4042 | 0.1986 |
| 0.0108 | 48.0 | 11856 | 0.4092 | 0.1884 |
| 0.0123 | 49.0 | 12103 | 0.3674 | 0.1941 |
| 0.01 | 50.0 | 12350 | 0.3750 | 0.1876 |
| 0.0094 | 51.0 | 12597 | 0.3781 | 0.1831 |
| 0.008 | 52.0 | 12844 | 0.4051 | 0.1852 |
| 0.0079 | 53.0 | 13091 | 0.3981 | 0.1937 |
| 0.0068 | 54.0 | 13338 | 0.4425 | 0.1929 |
| 0.0061 | 55.0 | 13585 | 0.4183 | 0.1986 |
| 0.0074 | 56.0 | 13832 | 0.3502 | 0.1880 |
| 0.0071 | 57.0 | 14079 | 0.3908 | 0.1892 |
| 0.0079 | 58.0 | 14326 | 0.3908 | 0.1913 |
| 0.0042 | 59.0 | 14573 | 0.3801 | 0.1864 |
| 0.0049 | 60.0 | 14820 | 0.4065 | 0.1839 |
| 0.0063 | 61.0 | 15067 | 0.4170 | 0.1900 |
| 0.0049 | 62.0 | 15314 | 0.3903 | 0.1856 |
| 0.0031 | 63.0 | 15561 | 0.4042 | 0.1896 |
| 0.0054 | 64.0 | 15808 | 0.3890 | 0.1839 |
| 0.0061 | 65.0 | 16055 | 0.3831 | 0.1847 |
| 0.0052 | 66.0 | 16302 | 0.3898 | 0.1847 |
| 0.0032 | 67.0 | 16549 | 0.4230 | 0.1831 |
| 0.0017 | 68.0 | 16796 | 0.4241 | 0.1823 |
| 0.0022 | 69.0 | 17043 | 0.4360 | 0.1856 |
| 0.0026 | 70.0 | 17290 | 0.4233 | 0.1815 |
| 0.0028 | 71.0 | 17537 | 0.4225 | 0.1835 |
| 0.0018 | 72.0 | 17784 | 0.4163 | 0.1856 |
| 0.0034 | 73.0 | 18031 | 0.4120 | 0.1876 |
| 0.0019 | 74.0 | 18278 | 0.4129 | 0.1876 |
| 0.0023 | 75.0 | 18525 | 0.4134 | 0.1884 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
cardiffnlp/tweet-topic-19-single | 1576eb36befe9b52b6709159b51608b68f17e954 | 2022-06-09T10:33:26.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"arxiv:2202.03829",
"transformers"
] | text-classification | false | cardiffnlp | null | cardiffnlp/tweet-topic-19-single | 25 | null | transformers | 7,704 | # tweet-topic-19-single
This is a roBERTa-base model trained on ~90m tweets until the end of 2019 (see [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m)), and finetuned for single-label topic classification on a corpus of 6,997 tweets.
The original roBERTa-base model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.
- Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829).
- Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms).
<b>Labels</b>:
- 0 -> arts_&_culture;
- 1 -> business_&_entrepreneurs;
- 2 -> pop_culture;
- 3 -> daily_life;
- 4 -> sports_&_gaming;
- 5 -> science_&_technology
## Full classification example
```python
from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
MODEL = f"cardiffnlp/tweet-topic-19-single"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
class_mapping = model.config.id2label
text = "Tesla stock is on the rise!"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# TF
#model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
#class_mapping = model.config.id2label
#text = "Tesla stock is on the rise!"
#encoded_input = tokenizer(text, return_tensors='tf')
#output = model(**encoded_input)
#scores = output[0][0]
#scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = class_mapping[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) business_&_entrepreneurs 0.8575
2) science_&_technology 0.0604
3) pop_culture 0.0295
4) daily_life 0.0217
5) sports_&_gaming 0.0154
6) arts_&_culture 0.0154
``` |
KoichiYasuoka/deberta-large-japanese-unidic-luw-upos | 8be2d2238eaf4c162e9488779db68d349a9521cf | 2022-06-26T14:56:51.000Z | [
"pytorch",
"deberta-v2",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/deberta-large-japanese-unidic-luw-upos | 25 | null | transformers | 7,705 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# deberta-large-japanese-unidic-luw-upos
## Model Description
This is a DeBERTa(V2) model pre-trained on 青空文庫 (Aozora Bunko) texts for POS-tagging and dependency-parsing, derived from [deberta-large-japanese-unidic](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-unidic). Every long-unit-word is tagged with [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic-luw-upos")
s="国境の長いトンネルを抜けると雪国であった。"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/deberta-large-japanese-unidic-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
[fugashi](https://pypi.org/project/fugashi), [unidic-lite](https://pypi.org/project/unidic-lite) and [pytokenizations](https://pypi.org/project/pytokenizations) are required.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
gauravnuti/agro_ner | bf1c59dc0735cc9ce2558faab6fcd8e378a02ba8 | 2022-06-20T12:56:22.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | gauravnuti | null | gauravnuti/agro_ner | 25 | null | transformers | 7,706 | Entry not found |
Siddish/autotrain-yes-or-no-classifier-on-circa-1009033469 | 036376acd226d15c3135cdde0c738992fd036066 | 2022-06-20T16:21:09.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:Siddish/autotrain-data-yes-or-no-classifier-on-circa",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | Siddish | null | Siddish/autotrain-yes-or-no-classifier-on-circa-1009033469 | 25 | null | transformers | 7,707 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Siddish/autotrain-data-yes-or-no-classifier-on-circa
co2_eq_emissions: 0.1287915253247826
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1009033469
- CO2 Emissions (in grams): 0.1287915253247826
## Validation Metrics
- Loss: 0.4084862470626831
- Accuracy: 0.8722054859679721
- Macro F1: 0.6340608446004876
- Micro F1: 0.8722054859679722
- Weighted F1: 0.8679846554644491
- Macro Precision: 0.645023001823007
- Micro Precision: 0.8722054859679721
- Weighted Precision: 0.8656545967138464
- Macro Recall: 0.6283763558287574
- Micro Recall: 0.8722054859679721
- Weighted Recall: 0.8722054859679721
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Siddish/autotrain-yes-or-no-classifier-on-circa-1009033469
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Siddish/autotrain-yes-or-no-classifier-on-circa-1009033469", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Siddish/autotrain-yes-or-no-classifier-on-circa-1009033469", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
twieland/MIX3_ja-en_helsinki | a57a9dcca99a2ce6bcdfaae4b855e9ba734c752d | 2022-06-28T11:46:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | twieland | null | twieland/MIX3_ja-en_helsinki | 25 | null | transformers | 7,708 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MIX3_ja-en_helsinki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MIX3_ja-en_helsinki
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4832
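Since the card provides no usage example, here is a minimal sketch of running the fine-tuned Marian checkpoint through the `translation` pipeline (the sample sentence is illustrative only):
```python
from transformers import pipeline

# Minimal sketch: Japanese -> English translation with the fine-tuned Marian model.
translator = pipeline("translation", model="twieland/MIX3_ja-en_helsinki")

print(translator("国境の長いトンネルを抜けると雪国であった。")[0]["translation_text"])
```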
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 2.8699 | 0.01 | 5000 | 2.3465 |
| 2.6168 | 0.02 | 10000 | 2.2205 |
| 2.5083 | 0.03 | 15000 | 2.2382 |
| 2.4359 | 0.04 | 20000 | 2.1670 |
| 2.3821 | 0.06 | 25000 | 2.1122 |
| 2.3358 | 0.07 | 30000 | 2.0902 |
| 2.3045 | 0.08 | 35000 | 2.0461 |
| 2.2782 | 0.09 | 40000 | 2.0290 |
| 2.2481 | 0.1 | 45000 | 1.9910 |
| 2.2267 | 0.11 | 50000 | 2.0059 |
| 2.2056 | 0.12 | 55000 | 1.9858 |
| 2.1903 | 0.13 | 60000 | 1.9725 |
| 2.173 | 0.15 | 65000 | 1.9797 |
| 2.154 | 0.16 | 70000 | 1.9654 |
| 2.1429 | 0.17 | 75000 | 1.9567 |
| 2.1304 | 0.18 | 80000 | 1.9348 |
| 2.1232 | 0.19 | 85000 | 1.9361 |
| 2.116 | 0.2 | 90000 | 1.9277 |
| 2.1016 | 0.21 | 95000 | 1.9193 |
| 2.0984 | 0.22 | 100000 | 1.9064 |
| 2.0797 | 0.24 | 105000 | 1.9177 |
| 2.0767 | 0.25 | 110000 | 1.8975 |
| 2.0642 | 0.26 | 115000 | 1.8782 |
| 2.0595 | 0.27 | 120000 | 1.9012 |
| 2.0533 | 0.28 | 125000 | 1.8977 |
| 2.044 | 0.29 | 130000 | 1.8984 |
| 2.0374 | 0.3 | 135000 | 1.9221 |
| 2.0305 | 0.31 | 140000 | 1.9243 |
| 2.02 | 0.32 | 145000 | 1.8773 |
| 2.0195 | 0.34 | 150000 | 1.8676 |
| 2.0151 | 0.35 | 155000 | 1.8637 |
| 2.0065 | 0.36 | 160000 | 1.8556 |
| 2.0037 | 0.37 | 165000 | 1.8399 |
| 1.9963 | 0.38 | 170000 | 1.8452 |
| 1.9878 | 0.39 | 175000 | 1.8644 |
| 1.9871 | 0.4 | 180000 | 1.8576 |
| 1.9779 | 0.41 | 185000 | 1.8509 |
| 1.9721 | 0.43 | 190000 | 1.8405 |
| 1.9724 | 0.44 | 195000 | 1.8594 |
| 1.9685 | 0.45 | 200000 | 1.8540 |
| 1.9634 | 0.46 | 205000 | 1.8694 |
| 1.9583 | 0.47 | 210000 | 1.8591 |
| 1.9557 | 0.48 | 215000 | 1.8539 |
| 1.9494 | 0.49 | 220000 | 1.8673 |
| 1.9484 | 0.5 | 225000 | 1.8021 |
| 1.9395 | 0.52 | 230000 | 1.8309 |
| 1.9384 | 0.53 | 235000 | 1.7933 |
| 1.937 | 0.54 | 240000 | 1.8199 |
| 1.9315 | 0.55 | 245000 | 1.8065 |
| 1.9276 | 0.56 | 250000 | 1.7857 |
| 1.9248 | 0.57 | 255000 | 1.8207 |
| 1.9195 | 0.58 | 260000 | 1.7898 |
| 1.9187 | 0.59 | 265000 | 1.8097 |
| 1.9138 | 0.6 | 270000 | 1.7909 |
| 1.9094 | 0.62 | 275000 | 1.7995 |
| 1.9098 | 0.63 | 280000 | 1.8165 |
| 1.9038 | 0.64 | 285000 | 1.8132 |
| 1.9034 | 0.65 | 290000 | 1.7951 |
| 1.899 | 0.66 | 295000 | 1.7880 |
| 1.8965 | 0.67 | 300000 | 1.7953 |
| 1.8941 | 0.68 | 305000 | 1.7986 |
| 1.8919 | 0.69 | 310000 | 1.7964 |
| 1.8875 | 0.71 | 315000 | 1.8041 |
| 1.884 | 0.72 | 320000 | 1.7764 |
| 1.8798 | 0.73 | 325000 | 1.8019 |
| 1.8801 | 0.74 | 330000 | 1.7790 |
| 1.8809 | 0.75 | 335000 | 1.7849 |
| 1.8736 | 0.76 | 340000 | 1.7800 |
| 1.8727 | 0.77 | 345000 | 1.7900 |
| 1.8722 | 0.78 | 350000 | 1.7727 |
| 1.8699 | 0.8 | 355000 | 1.7597 |
| 1.8672 | 0.81 | 360000 | 1.7824 |
| 1.8638 | 0.82 | 365000 | 1.7674 |
| 1.8609 | 0.83 | 370000 | 1.7715 |
| 1.8584 | 0.84 | 375000 | 1.7694 |
| 1.8568 | 0.85 | 380000 | 1.7776 |
| 1.8523 | 0.86 | 385000 | 1.7697 |
| 1.8584 | 0.87 | 390000 | 1.7436 |
| 1.8474 | 0.88 | 395000 | 1.7644 |
| 1.8492 | 0.9 | 400000 | 1.7732 |
| 1.8465 | 0.91 | 405000 | 1.7611 |
| 1.846 | 0.92 | 410000 | 1.7717 |
| 1.8431 | 0.93 | 415000 | 1.7514 |
| 1.8402 | 0.94 | 420000 | 1.7353 |
| 1.8398 | 0.95 | 425000 | 1.7720 |
| 1.8314 | 0.96 | 430000 | 1.7728 |
| 1.8322 | 0.97 | 435000 | 1.7491 |
| 1.8284 | 0.99 | 440000 | 1.7561 |
| 1.8301 | 1.0 | 445000 | 1.7499 |
| 1.8182 | 1.01 | 450000 | 1.7514 |
| 1.8111 | 1.02 | 455000 | 1.7596 |
| 1.8116 | 1.03 | 460000 | 1.7455 |
| 1.8098 | 1.04 | 465000 | 1.7495 |
| 1.809 | 1.05 | 470000 | 1.7446 |
| 1.8088 | 1.06 | 475000 | 1.7290 |
| 1.8127 | 1.08 | 480000 | 1.7453 |
| 1.8051 | 1.09 | 485000 | 1.7495 |
| 1.8026 | 1.1 | 490000 | 1.7453 |
| 1.8028 | 1.11 | 495000 | 1.7615 |
| 1.8046 | 1.12 | 500000 | 1.7491 |
| 1.8052 | 1.13 | 505000 | 1.7280 |
| 1.7997 | 1.14 | 510000 | 1.7482 |
| 1.7976 | 1.15 | 515000 | 1.7368 |
| 1.7981 | 1.16 | 520000 | 1.7354 |
| 1.7949 | 1.18 | 525000 | 1.7076 |
| 1.7943 | 1.19 | 530000 | 1.7020 |
| 1.7911 | 1.2 | 535000 | 1.7121 |
| 1.7909 | 1.21 | 540000 | 1.7170 |
| 1.7926 | 1.22 | 545000 | 1.7310 |
| 1.7856 | 1.23 | 550000 | 1.7218 |
| 1.7875 | 1.24 | 555000 | 1.7362 |
| 1.7801 | 1.25 | 560000 | 1.7484 |
| 1.7854 | 1.27 | 565000 | 1.7466 |
| 1.7799 | 1.28 | 570000 | 1.7248 |
| 1.7823 | 1.29 | 575000 | 1.7355 |
| 1.7765 | 1.3 | 580000 | 1.7188 |
| 1.7779 | 1.31 | 585000 | 1.6993 |
| 1.7751 | 1.32 | 590000 | 1.7154 |
| 1.7762 | 1.33 | 595000 | 1.7348 |
| 1.7725 | 1.34 | 600000 | 1.7272 |
| 1.7701 | 1.36 | 605000 | 1.7157 |
| 1.7644 | 1.37 | 610000 | 1.7161 |
| 1.7707 | 1.38 | 615000 | 1.6961 |
| 1.764 | 1.39 | 620000 | 1.6930 |
| 1.7639 | 1.4 | 625000 | 1.6927 |
| 1.7654 | 1.41 | 630000 | 1.6989 |
| 1.7623 | 1.42 | 635000 | 1.6892 |
| 1.7598 | 1.43 | 640000 | 1.6911 |
| 1.7575 | 1.44 | 645000 | 1.7199 |
| 1.7574 | 1.46 | 650000 | 1.6992 |
| 1.7526 | 1.47 | 655000 | 1.6981 |
| 1.7556 | 1.48 | 660000 | 1.6860 |
| 1.7558 | 1.49 | 665000 | 1.7099 |
| 1.7539 | 1.5 | 670000 | 1.6950 |
| 1.7454 | 1.51 | 675000 | 1.6999 |
| 1.748 | 1.52 | 680000 | 1.6871 |
| 1.7476 | 1.53 | 685000 | 1.6884 |
| 1.7493 | 1.55 | 690000 | 1.6984 |
| 1.745 | 1.56 | 695000 | 1.6999 |
| 1.7397 | 1.57 | 700000 | 1.7036 |
| 1.7429 | 1.58 | 705000 | 1.7223 |
| 1.7367 | 1.59 | 710000 | 1.7111 |
| 1.7403 | 1.6 | 715000 | 1.6691 |
| 1.7361 | 1.61 | 720000 | 1.6693 |
| 1.737 | 1.62 | 725000 | 1.6884 |
| 1.7347 | 1.63 | 730000 | 1.6641 |
| 1.7323 | 1.65 | 735000 | 1.6628 |
| 1.7329 | 1.66 | 740000 | 1.6759 |
| 1.7292 | 1.67 | 745000 | 1.6654 |
| 1.7275 | 1.68 | 750000 | 1.6738 |
| 1.7266 | 1.69 | 755000 | 1.6792 |
| 1.7259 | 1.7 | 760000 | 1.6752 |
| 1.7231 | 1.71 | 765000 | 1.6641 |
| 1.7238 | 1.72 | 770000 | 1.6676 |
| 1.7223 | 1.74 | 775000 | 1.6563 |
| 1.722 | 1.75 | 780000 | 1.6541 |
| 1.7195 | 1.76 | 785000 | 1.6560 |
| 1.7171 | 1.77 | 790000 | 1.6786 |
| 1.7187 | 1.78 | 795000 | 1.6434 |
| 1.7186 | 1.79 | 800000 | 1.6538 |
| 1.7115 | 1.8 | 805000 | 1.6535 |
| 1.7119 | 1.81 | 810000 | 1.6738 |
| 1.7106 | 1.83 | 815000 | 1.6597 |
| 1.7088 | 1.84 | 820000 | 1.6486 |
| 1.7079 | 1.85 | 825000 | 1.6576 |
| 1.7062 | 1.86 | 830000 | 1.6676 |
| 1.7084 | 1.87 | 835000 | 1.6449 |
| 1.7059 | 1.88 | 840000 | 1.6515 |
| 1.7057 | 1.89 | 845000 | 1.6609 |
| 1.7021 | 1.9 | 850000 | 1.6482 |
| 1.7005 | 1.91 | 855000 | 1.6653 |
| 1.6988 | 1.93 | 860000 | 1.6801 |
| 1.6964 | 1.94 | 865000 | 1.6830 |
| 1.6954 | 1.95 | 870000 | 1.6589 |
| 1.693 | 1.96 | 875000 | 1.6553 |
| 1.689 | 1.97 | 880000 | 1.6554 |
| 1.69 | 1.98 | 885000 | 1.6424 |
| 1.6893 | 1.99 | 890000 | 1.6628 |
| 1.6772 | 2.0 | 895000 | 1.6709 |
| 1.6703 | 2.02 | 900000 | 1.6627 |
| 1.6726 | 2.03 | 905000 | 1.6612 |
| 1.669 | 2.04 | 910000 | 1.6595 |
| 1.6696 | 2.05 | 915000 | 1.6427 |
| 1.6672 | 2.06 | 920000 | 1.6497 |
| 1.669 | 2.07 | 925000 | 1.6288 |
| 1.6675 | 2.08 | 930000 | 1.6443 |
| 1.6685 | 2.09 | 935000 | 1.6316 |
| 1.6671 | 2.11 | 940000 | 1.6451 |
| 1.6673 | 2.12 | 945000 | 1.6313 |
| 1.6649 | 2.13 | 950000 | 1.6363 |
| 1.6655 | 2.14 | 955000 | 1.6440 |
| 1.6637 | 2.15 | 960000 | 1.6238 |
| 1.6632 | 2.16 | 965000 | 1.6226 |
| 1.6599 | 2.17 | 970000 | 1.6171 |
| 1.6602 | 2.18 | 975000 | 1.6466 |
| 1.658 | 2.19 | 980000 | 1.6341 |
| 1.6571 | 2.21 | 985000 | 1.6500 |
| 1.6572 | 2.22 | 990000 | 1.6225 |
| 1.6572 | 2.23 | 995000 | 1.6296 |
| 1.6552 | 2.24 | 1000000 | 1.6437 |
| 1.6548 | 2.25 | 1005000 | 1.6162 |
| 1.6552 | 2.26 | 1010000 | 1.6223 |
| 1.6544 | 2.27 | 1015000 | 1.6355 |
| 1.6464 | 2.28 | 1020000 | 1.6250 |
| 1.652 | 2.3 | 1025000 | 1.6217 |
| 1.6481 | 2.31 | 1030000 | 1.6079 |
| 1.6466 | 2.32 | 1035000 | 1.6110 |
| 1.6462 | 2.33 | 1040000 | 1.6210 |
| 1.6448 | 2.34 | 1045000 | 1.5993 |
| 1.6461 | 2.35 | 1050000 | 1.6096 |
| 1.6396 | 2.36 | 1055000 | 1.6137 |
| 1.644 | 2.37 | 1060000 | 1.6189 |
| 1.6396 | 2.39 | 1065000 | 1.6211 |
| 1.639 | 2.4 | 1070000 | 1.6149 |
| 1.6358 | 2.41 | 1075000 | 1.6144 |
| 1.6356 | 2.42 | 1080000 | 1.6018 |
| 1.6364 | 2.43 | 1085000 | 1.5999 |
| 1.6352 | 2.44 | 1090000 | 1.6095 |
| 1.634 | 2.45 | 1095000 | 1.6114 |
| 1.6279 | 2.46 | 1100000 | 1.6156 |
| 1.6272 | 2.47 | 1105000 | 1.6124 |
| 1.6319 | 2.49 | 1110000 | 1.6046 |
| 1.6276 | 2.5 | 1115000 | 1.6152 |
| 1.6285 | 2.51 | 1120000 | 1.6129 |
| 1.6242 | 2.52 | 1125000 | 1.5984 |
| 1.6261 | 2.53 | 1130000 | 1.6116 |
| 1.623 | 2.54 | 1135000 | 1.6061 |
| 1.6203 | 2.55 | 1140000 | 1.6182 |
| 1.62 | 2.56 | 1145000 | 1.5887 |
| 1.6177 | 2.58 | 1150000 | 1.5731 |
| 1.6172 | 2.59 | 1155000 | 1.5990 |
| 1.6179 | 2.6 | 1160000 | 1.5965 |
| 1.6206 | 2.61 | 1165000 | 1.6000 |
| 1.6156 | 2.62 | 1170000 | 1.5873 |
| 1.6124 | 2.63 | 1175000 | 1.5899 |
| 1.613 | 2.64 | 1180000 | 1.5910 |
| 1.6134 | 2.65 | 1185000 | 1.6017 |
| 1.609 | 2.67 | 1190000 | 1.5822 |
| 1.6084 | 2.68 | 1195000 | 1.5906 |
| 1.6101 | 2.69 | 1200000 | 1.6218 |
| 1.6077 | 2.7 | 1205000 | 1.6149 |
| 1.6057 | 2.71 | 1210000 | 1.5994 |
| 1.6018 | 2.72 | 1215000 | 1.5839 |
| 1.6049 | 2.73 | 1220000 | 1.5864 |
| 1.6012 | 2.74 | 1225000 | 1.5994 |
| 1.6013 | 2.75 | 1230000 | 1.5821 |
| 1.5957 | 2.77 | 1235000 | 1.5964 |
| 1.5971 | 2.78 | 1240000 | 1.5897 |
| 1.5967 | 2.79 | 1245000 | 1.5774 |
| 1.5927 | 2.8 | 1250000 | 1.5861 |
| 1.5954 | 2.81 | 1255000 | 1.5789 |
| 1.5937 | 2.82 | 1260000 | 1.5739 |
| 1.5895 | 2.83 | 1265000 | 1.5701 |
| 1.5912 | 2.84 | 1270000 | 1.5622 |
| 1.5922 | 2.86 | 1275000 | 1.5730 |
| 1.5883 | 2.87 | 1280000 | 1.5775 |
| 1.5864 | 2.88 | 1285000 | 1.5726 |
| 1.5837 | 2.89 | 1290000 | 1.5679 |
| 1.5824 | 2.9 | 1295000 | 1.5683 |
| 1.5817 | 2.91 | 1300000 | 1.5508 |
| 1.5778 | 2.92 | 1305000 | 1.5620 |
| 1.5822 | 2.93 | 1310000 | 1.5556 |
| 1.5783 | 2.95 | 1315000 | 1.5693 |
| 1.5751 | 2.96 | 1320000 | 1.5781 |
| 1.5716 | 2.97 | 1325000 | 1.5655 |
| 1.5765 | 2.98 | 1330000 | 1.5528 |
| 1.5728 | 2.99 | 1335000 | 1.5748 |
| 1.5672 | 3.0 | 1340000 | 1.5597 |
| 1.5467 | 3.01 | 1345000 | 1.5461 |
| 1.547 | 3.02 | 1350000 | 1.5516 |
| 1.5462 | 3.03 | 1355000 | 1.5519 |
| 1.5464 | 3.05 | 1360000 | 1.5593 |
| 1.5457 | 3.06 | 1365000 | 1.5576 |
| 1.5441 | 3.07 | 1370000 | 1.5653 |
| 1.544 | 3.08 | 1375000 | 1.5662 |
| 1.5467 | 3.09 | 1380000 | 1.5611 |
| 1.5439 | 3.1 | 1385000 | 1.5635 |
| 1.5449 | 3.11 | 1390000 | 1.5467 |
| 1.5417 | 3.12 | 1395000 | 1.5495 |
| 1.5428 | 3.14 | 1400000 | 1.5552 |
| 1.5432 | 3.15 | 1405000 | 1.5347 |
| 1.5401 | 3.16 | 1410000 | 1.5394 |
| 1.5391 | 3.17 | 1415000 | 1.5497 |
| 1.539 | 3.18 | 1420000 | 1.5431 |
| 1.5368 | 3.19 | 1425000 | 1.5479 |
| 1.5365 | 3.2 | 1430000 | 1.5513 |
| 1.5327 | 3.21 | 1435000 | 1.5467 |
| 1.5337 | 3.23 | 1440000 | 1.5477 |
| 1.5317 | 3.24 | 1445000 | 1.5398 |
| 1.5315 | 3.25 | 1450000 | 1.5481 |
| 1.532 | 3.26 | 1455000 | 1.5385 |
| 1.5312 | 3.27 | 1460000 | 1.5520 |
| 1.5328 | 3.28 | 1465000 | 1.5423 |
| 1.5288 | 3.29 | 1470000 | 1.5489 |
| 1.5271 | 3.3 | 1475000 | 1.5395 |
| 1.5273 | 3.31 | 1480000 | 1.5335 |
| 1.5235 | 3.33 | 1485000 | 1.5381 |
| 1.5224 | 3.34 | 1490000 | 1.5289 |
| 1.5206 | 3.35 | 1495000 | 1.5331 |
| 1.5189 | 3.36 | 1500000 | 1.5343 |
| 1.5152 | 3.37 | 1505000 | 1.5246 |
| 1.5225 | 3.38 | 1510000 | 1.5280 |
| 1.5168 | 3.39 | 1515000 | 1.5315 |
| 1.5161 | 3.4 | 1520000 | 1.5284 |
| 1.5111 | 3.42 | 1525000 | 1.5278 |
| 1.5154 | 3.43 | 1530000 | 1.5148 |
| 1.515 | 3.44 | 1535000 | 1.5286 |
| 1.5117 | 3.45 | 1540000 | 1.5291 |
| 1.5099 | 3.46 | 1545000 | 1.5320 |
| 1.5097 | 3.47 | 1550000 | 1.5323 |
| 1.5075 | 3.48 | 1555000 | 1.5157 |
| 1.5059 | 3.49 | 1560000 | 1.5214 |
| 1.5011 | 3.51 | 1565000 | 1.5199 |
| 1.5074 | 3.52 | 1570000 | 1.5114 |
| 1.5033 | 3.53 | 1575000 | 1.5145 |
| 1.5009 | 3.54 | 1580000 | 1.5184 |
| 1.4994 | 3.55 | 1585000 | 1.5125 |
| 1.5041 | 3.56 | 1590000 | 1.5048 |
| 1.5002 | 3.57 | 1595000 | 1.5156 |
| 1.4967 | 3.58 | 1600000 | 1.5176 |
| 1.4923 | 3.59 | 1605000 | 1.5128 |
| 1.495 | 3.61 | 1610000 | 1.5188 |
| 1.4929 | 3.62 | 1615000 | 1.5149 |
| 1.4921 | 3.63 | 1620000 | 1.5097 |
| 1.4916 | 3.64 | 1625000 | 1.5161 |
| 1.4852 | 3.65 | 1630000 | 1.5134 |
| 1.4881 | 3.66 | 1635000 | 1.5101 |
| 1.4873 | 3.67 | 1640000 | 1.5027 |
| 1.4911 | 3.68 | 1645000 | 1.4968 |
| 1.488 | 3.7 | 1650000 | 1.4962 |
| 1.4842 | 3.71 | 1655000 | 1.5030 |
| 1.4829 | 3.72 | 1660000 | 1.5041 |
| 1.4816 | 3.73 | 1665000 | 1.5076 |
| 1.479 | 3.74 | 1670000 | 1.5029 |
| 1.4768 | 3.75 | 1675000 | 1.5053 |
| 1.4769 | 3.76 | 1680000 | 1.5026 |
| 1.4781 | 3.77 | 1685000 | 1.5016 |
| 1.4781 | 3.79 | 1690000 | 1.5034 |
| 1.4777 | 3.8 | 1695000 | 1.4976 |
| 1.4736 | 3.81 | 1700000 | 1.5002 |
| 1.4715 | 3.82 | 1705000 | 1.4995 |
| 1.4716 | 3.83 | 1710000 | 1.4996 |
| 1.4648 | 3.84 | 1715000 | 1.4952 |
| 1.4711 | 3.85 | 1720000 | 1.4934 |
| 1.4682 | 3.86 | 1725000 | 1.4965 |
| 1.4659 | 3.87 | 1730000 | 1.4932 |
| 1.4689 | 3.89 | 1735000 | 1.4920 |
| 1.4656 | 3.9 | 1740000 | 1.4910 |
| 1.4666 | 3.91 | 1745000 | 1.4893 |
| 1.4611 | 3.92 | 1750000 | 1.4888 |
| 1.4623 | 3.93 | 1755000 | 1.4898 |
| 1.4637 | 3.94 | 1760000 | 1.4909 |
| 1.4585 | 3.95 | 1765000 | 1.4858 |
| 1.4586 | 3.96 | 1770000 | 1.4847 |
| 1.4579 | 3.98 | 1775000 | 1.4841 |
| 1.458 | 3.99 | 1780000 | 1.4840 |
| 1.4572 | 4.0 | 1785000 | 1.4832 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
JamesStratford/Pidrow-bot-DialoGPT-Small | a15a37a6c2e665ba4d71a8489cc67350f7ed58b2 | 2022-06-22T10:47:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | JamesStratford | null | JamesStratford/Pidrow-bot-DialoGPT-Small | 25 | null | transformers | 7,709 | ---
tags:
- conversational
---
# Pidrow bot |
dayyass/qaner-conll-bert-base-uncased | 88a17463e8140fab4b14a6f6dba57d6599e293ee | 2022-06-22T14:06:56.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | dayyass | null | dayyass/qaner-conll-bert-base-uncased | 25 | 1 | transformers | 7,710 | Entry not found |
Aalaa/opt-125m-custom-data | 2a7dd73190663f91a6754a49c5ebe37c8a8290e3 | 2022-06-29T09:32:01.000Z | [
"pytorch",
"tensorboard",
"opt",
"text-generation",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-generation | false | Aalaa | null | Aalaa/opt-125m-custom-data | 25 | null | transformers | 7,711 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: opt-125m-custom-data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-125m-custom-data
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9594
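A minimal generation sketch (not from the original card; the prompt and sampling settings are illustrative):
```python
from transformers import pipeline

# Minimal sketch: sample a continuation from the fine-tuned OPT-125m checkpoint.
generator = pipeline("text-generation", model="Aalaa/opt-125m-custom-data")

print(generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.9)[0]["generated_text"])
```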
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 480 | 2.9889 |
| 3.1368 | 2.0 | 960 | 2.9625 |
| 2.8629 | 3.0 | 1440 | 2.9594 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
projecte-aina/roberta-base-ca-v2-cased-ner | 7ace853c8d4ba530bd52e104296c3768930e22aa | 2022-07-25T06:52:13.000Z | [
"pytorch",
"roberta",
"token-classification",
"ca",
"dataset:projecte-aina/ancora-ca-ner",
"arxiv:1907.11692",
"transformers",
"catalan",
"named entity recognition",
"ner",
"CaText",
"Catalan Textual Corpus",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | projecte-aina | null | projecte-aina/roberta-base-ca-v2-cased-ner | 25 | null | transformers | 7,712 | ---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "named entity recognition"
- "ner"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/ancora-ca-ner"
metrics:
- f1
model-index:
- name: roberta-base-ca-v2-cased-ner
results:
- task:
type: token-classification
dataset:
type: projecte-aina/ancora-ca-ner
name: Ancora-ca-NER
metrics:
- name: F1
type: f1
value: 0.8945
widget:
- text: "Em dic Lluïsa i visc a Santa Maria del Camí."
- text: "L'Aina, la Berta i la Norma són molt amigues."
- text: "El Martí llegeix el Cavall Fort."
---
# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Named Entity Recognition.
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
- [Contributions](#contributions)
## Model description
The **roberta-base-ca-v2-cased-ner** is a Named Entity Recognition (NER) model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).
## Intended Uses and Limitations
**roberta-base-ca-v2-cased-ner** model can be used to recognize Named Entities in the provided text. The model is limited by its training dataset and may not generalize well for all use cases.
## How to Use
Here is how to use this model:
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("ner", model="projecte-aina/roberta-base-ca-v2-cased-ner")
example = "Em dic Lluïsa i visc a Santa Maria del Camí."
ner_results = nlp(example)
pprint(ner_results)
```
## Training
### Training data
We used the NER dataset in Catalan called [Ancora-ca-NER](https://huggingface.co/datasets/projecte-aina/ancora-ca-ner) for training and evaluation.
### Training Procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
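For readers who want to reproduce this setup, the stated hyperparameters map roughly onto the following `TrainingArguments` sketch (an assumption-laden illustration, not the official script; the actual fine-tuning code lives in the GitHub repository linked below):
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters described above; everything else is a default or an assumption.
training_args = TrainingArguments(
    output_dir="roberta-base-ca-v2-cased-ner",
    per_device_train_batch_size=16,
    learning_rate=5e-5,
    num_train_epochs=5,
    evaluation_strategy="epoch",   # evaluate on the development set every epoch
    save_strategy="epoch",
    load_best_model_at_end=True,   # keep the best checkpoint by the downstream metric
    metric_for_best_model="f1",
)
```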
## Evaluation
### Variable and Metrics
This model was finetuned maximizing F1 score.
### Evaluation results
We evaluated the _roberta-base-ca-v2-cased-ner_ on the Ancora-ca-ner test set against standard multilingual and monolingual baselines:
| Model | Ancora-ca-ner (F1)|
| ------------|:-------------|
| roberta-base-ca-v2-cased-ner | **89.45** |
| roberta-base-ca-cased-ner | 88.94 |
| mBERT | 87.36 |
| XLM-RoBERTa | 88.07 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
## Contributions
[N/A] |
projecte-aina/roberta-base-ca-v2-cased-sts | 9a9b37ef377ab5947d5c4fd890dfabba0302cec1 | 2022-07-25T06:51:14.000Z | [
"pytorch",
"roberta",
"text-classification",
"ca",
"dataset:projecte-aina/sts-ca",
"arxiv:1907.11692",
"transformers",
"catalan",
"semantic textual similarity",
"sts-ca",
"CaText",
"Catalan Textual Corpus",
"license:apache-2.0",
"model-index"
] | text-classification | false | projecte-aina | null | projecte-aina/roberta-base-ca-v2-cased-sts | 25 | null | transformers | 7,713 | ---
pipeline_tag: text-classification
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "semantic textual similarity"
- "sts-ca"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/sts-ca"
metrics:
- "combined_score"
model-index:
- name: roberta-base-ca-v2-cased-sts
results:
- task:
type: text-classification
dataset:
type: projecte-aina/sts-ca
name: STS-ca
metrics:
- name: Combined score
type: combined_score
value: 0.7907
---
# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Semantic Textual Similarity.
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
- [Contributions](#contributions)
## Model description
The **roberta-base-ca-v2-cased-sts** is a Semantic Textual Similarity (STS) model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).
## Intended Uses and Limitations
**roberta-base-ca-v2-cased-sts** model can be used to assess the similarity between two snippets of text. The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
To get correct<sup>1</sup> prediction scores from the model, with values between 0.0 and 5.0, use the following code:
```python
from transformers import pipeline, AutoTokenizer
from scipy.special import logit
model = 'projecte-aina/roberta-base-ca-v2-cased-sts'
tokenizer = AutoTokenizer.from_pretrained(model)
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer)
def prepare(sentence_pairs):
sentence_pairs_prep = []
for s1, s2 in sentence_pairs:
sentence_pairs_prep.append(f"{tokenizer.cls_token} {s1}{tokenizer.sep_token}{tokenizer.sep_token} {s2}{tokenizer.sep_token}")
return sentence_pairs_prep
sentence_pairs = [("El llibre va caure per la finestra.", "El llibre va sortir volant."),
("M'agrades.", "T'estimo."),
("M'agrada el sol i la calor", "A la Garrotxa plou molt.")]
predictions = pipe(prepare(sentence_pairs), add_special_tokens=False)
# convert back to scores to the original 0 and 5 interval
for prediction in predictions:
prediction['score'] = logit(prediction['score'])
print(predictions)
```
Expected output:
```
[{'label': 'SIMILARITY', 'score': 2.118301674983813},
{'label': 'SIMILARITY', 'score': 2.1799755855125853},
{'label': 'SIMILARITY', 'score': 0.9511617858568939}]
```
<sup>1</sup> _**avoid using the widget** scores since they are normalized and do not reflect the original annotation values._
## Training
### Training data
We used the STS dataset in Catalan called [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca) for training and evaluation.
### Training Procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set, and then evaluated it on the test set.
## Evaluation
### Variable and Metrics
This model was finetuned maximizing the average score between the Pearson and Spearman correlations.
### Evaluation results
We evaluated the _roberta-base-ca-v2-cased-sts_ on the STS-ca test set against standard multilingual and monolingual baselines:
| Model | STS-ca (Combined score) |
| ------------|:-------------|
| roberta-base-ca-v2-cased-sts | 79.07 |
| roberta-base-ca-cased-sts | **80.19** |
| mBERT | 74.26 |
| XLM-RoBERTa | 61.61 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
## Contributions
[N/A] |
amanbawa96/bert-base-uncase-contracts | 53d4cc39541b9b3da626199718bd8c52a45d4f5d | 2022-06-30T23:05:13.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | amanbawa96 | null | amanbawa96/bert-base-uncase-contracts | 25 | null | transformers | 7,714 | Bert Base Uncased Contract model trained on CUAD Dataset
The dataset can be downloaded from [here](https://www.atticusprojectai.org/cuad).
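The card ships without a usage snippet; below is a minimal, hedged sketch of loading the checkpoint with the standard `transformers` API. The sequence-classification head and the example clause are assumptions based on the model's `text-classification` tag, not details confirmed by the author.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the checkpoint exposes a sequence-classification head,
# as suggested by its text-classification pipeline tag.
model_id = "amanbawa96/bert-base-uncase-contracts"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

clause = "The Term of this Agreement shall commence on the Effective Date and continue for two (2) years."
inputs = tokenizer(clause, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])  # label names come from the uploaded config
```
|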
arize-ai/XLM-RoBERTa-xtreme-en | cf0f0e64d4c85fba8c66f2f1702efef628d29e7b | 2022-07-01T01:48:00.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme_en",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | arize-ai | null | arize-ai/XLM-RoBERTa-xtreme-en | 25 | null | transformers | 7,715 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme_en
metrics:
- accuracy
- f1
widget:
- text: "My name is Julia, I study at Imperial College, in London"
example_title: "Example 1"
- text: "My name is Sarah and I live in Paris"
example_title: "Example 2"
- text: "My name is Clara and I live in Berkeley, California"
example_title: "Example 3"
model-index:
- name: XLM-RoBERTa-xtreme-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme_en
type: xtreme_en
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9109484079686702
- name: F1
type: f1
value: 0.7544312444026322
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-RoBERTa-xtreme-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme_en dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2838
- Accuracy: 0.9109
- F1: 0.7544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6502 | 1.0 | 235 | 0.3328 | 0.8995 | 0.7251 |
| 0.3239 | 2.0 | 470 | 0.2897 | 0.9101 | 0.7473 |
| 0.2644 | 3.0 | 705 | 0.2838 | 0.9109 | 0.7544 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
emilylearning/cond_ft_subreddit_on_reddit__prcnt_100__test_run_False__roberta-base | c8b743327d731a80dbd7677951e01e4beb58a643 | 2022-07-01T10:20:47.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/cond_ft_subreddit_on_reddit__prcnt_100__test_run_False__roberta-base | 25 | null | transformers | 7,716 | Entry not found |
djagatiya/ner-albert-base-v2-ontonotesv5-englishv4 | 654ec67cf670a60afef166fcbcc9e91157f813d1 | 2022-07-03T11:28:08.000Z | [
"pytorch",
"albert",
"token-classification",
"dataset:djagatiya/ner-ontonotes-v5-eng-v4",
"transformers",
"autotrain_compatible"
] | token-classification | false | djagatiya | null | djagatiya/ner-albert-base-v2-ontonotesv5-englishv4 | 25 | null | transformers | 7,717 | ---
tags:
- token-classification
datasets:
- djagatiya/ner-ontonotes-v5-eng-v4
widget:
- text: "On September 1st George won 1 dollar while watching Game of Thrones."
---
# (NER) ALBERT-base-v2 : conll2012_ontonotesv5-english-v4
This `ALBERT-base-v2` NER model was fine-tuned on the `conll2012_ontonotesv5` dataset, version `english-v4`. <br>
Check out [NER-System Repository](https://github.com/djagatiya/NER-System) for more information.
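A minimal inference sketch (not part of the original card; `aggregation_strategy="simple"` merges word pieces into whole entity spans):
```python
from transformers import pipeline

# Minimal sketch: run OntoNotes-style NER over a sentence.
ner = pipeline(
    "token-classification",
    model="djagatiya/ner-albert-base-v2-ontonotesv5-englishv4",
    aggregation_strategy="simple",
)

print(ner("On September 1st George won 1 dollar while watching Game of Thrones."))
```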
## Evaluation
- Precision: 86.20
- Recall: 86.18
- F1-Score: 86.19
> check out this [eval.log](eval.log) file for evaluation metrics and classification report.
```
precision recall f1-score support
CARDINAL 0.84 0.83 0.83 935
DATE 0.84 0.87 0.86 1602
EVENT 0.61 0.52 0.56 63
FAC 0.54 0.59 0.56 135
GPE 0.95 0.94 0.95 2240
LANGUAGE 0.85 0.50 0.63 22
LAW 0.56 0.57 0.57 40
LOC 0.61 0.65 0.63 179
MONEY 0.85 0.88 0.86 314
NORP 0.88 0.92 0.90 841
ORDINAL 0.78 0.86 0.81 195
ORG 0.84 0.81 0.82 1795
PERCENT 0.88 0.87 0.88 349
PERSON 0.94 0.92 0.93 1988
PRODUCT 0.57 0.53 0.55 76
QUANTITY 0.77 0.81 0.79 105
TIME 0.59 0.66 0.62 212
WORK_OF_ART 0.60 0.52 0.56 166
micro avg 0.86 0.86 0.86 11257
macro avg 0.75 0.74 0.74 11257
weighted avg 0.86 0.86 0.86 11257
``` |
ccarvajal/beto-emoji | c8ed44514746fbb40d902faf20167173f0be2f47 | 2022-07-08T03:35:39.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"transformers"
] | text-classification | false | ccarvajal | null | ccarvajal/beto-emoji | 25 | null | transformers | 7,718 | ---
language:
- es
---
# beto-emoji
Fine-tuning [BETO](https://github.com/dccuchile/beto) for emoji prediction.
## Repository
Training details and a usage example are shown in [github.com/camilocarvajalreyes/beto-emoji](https://github.com/camilocarvajalreyes/beto-emoji). A deeper analysis of this and other models on the full dataset can be found in [github.com/furrutiav/data-mining-2022](https://github.com/furrutiav/data-mining-2022). We used this model for a project in the [CC5205 Data Mining](https://github.com/dccuchile/CC5205) course.
## Example
Inspired by model card from [cardiffnlp/twitter-roberta-base-emoji](https://huggingface.co/cardiffnlp/twitter-roberta-base-emoji).
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
MODEL = f"ccarvajal/beto-emoji"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/camilocarvajalreyes/beto-emoji/main/es_mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "que viva españa"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output
```python
1) 🇪🇸 0.2508
2) 😍 0.238
3) 👌 0.2225
4) 😂 0.0806
5) ❤ 0.0489
6) 😁 0.0415
7) 😜 0.0232
8) 😎 0.0229
9) 😊 0.0156
10) 😉 0.0119
11) 💜 0.0079
12) 💕 0.0077
13) 💪 0.0066
14) 💘 0.0054
15) 💙 0.0052
16) 💞 0.005
17) 😘 0.0034
18) 🎶 0.0022
19) ✨ 0.0007
```
## Results on the test set
| | precision | recall | f1-score | support |
|:---|:---:|:---:|:---:|:---:|
| ❤ | 0.39 | 0.43 | 0.41 | 2141 |
| 😍 | 0.29 | 0.39 | 0.33 | 1408 |
| 😂 | 0.51 | 0.51 | 0.51 | 1499 |
| 💕 | 0.09 | 0.05 | 0.06 | 352 |
| 😊 | 0.12 | 0.23 | 0.16 | 514 |
| 😘 | 0.24 | 0.23 | 0.24 | 397 |
| 💪 | 0.37 | 0.43 | 0.40 | 307 |
| 😉 | 0.15 | 0.17 | 0.16 | 453 |
| 👌 | 0.09 | 0.16 | 0.11 | 180 |
| 🇪🇸 | 0.46 | 0.46 | 0.46 | 424 |
| 😎 | 0.12 | 0.11 | 0.11 | 339 |
| 💙 | 0.36 | 0.02 | 0.04 | 413 |
| 💜 | 0.00 | 0.00 | 0.00 | 235 |
| 😜 | 0.04 | 0.02 | 0.02 | 274 |
| 💞 | 0.00 | 0.00 | 0.00 | 93 |
| ✨ | 0.26 | 0.12 | 0.17 | 416 |
| 🎶 | 0.25 | 0.24 | 0.24 | 212 |
| 💘 | 0.00 | 0.00 | 0.00 | 134 |
| 😁 | 0.05 | 0.03 | 0.04 | 209 |
| accuracy | | | 0.30 | 10000 |
| macro avg | 0.20 | 0.19 | 0.18 | 10000 |
| weighted avg | 0.29 | 0.30 | 0.29 | 10000 |
[Another example](https://github.com/camilocarvajalreyes/beto-emoji/blob/main/attention_visualisation.ipynb) with a visualisation of the attention modules within this model is carried out using [bertviz](https://github.com/jessevig/bertviz).
## Reproducibility
The Multilingual Emoji Prediction dataset (Barbieri et al., 2018) consists of tweets in English and Spanish that originally contained a single emoji, which is then used as the label. Test and trial sets can be downloaded [here](https://github.com/fvancesco/Semeval2018-Task2-Emoji-Detection/blob/master/dataset/Semeval2018-Task2-EmojiPrediction.zip?raw=true), but the train set needs to be downloaded using a [twitter crawler](https://github.com/fra82/twitter-crawler/blob/master/semeval2018task2TwitterCrawlerHOWTO.md). The goal is to predict the emoji that was originally in the tweet from its text alone (out of a fixed set of possible emojis: 20 for English and 19 for Spanish).
Training parameters:
```python
training_args = TrainingArguments(
output_dir="./results",
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=5,
weight_decay=0.01
)
```
|
ryo0634/luke-base-full-20201201 | 81132447ab2e71523f7d3424f5ca4a081de99a64 | 2022-07-03T16:09:24.000Z | [
"pytorch",
"luke",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ryo0634 | null | ryo0634/luke-base-full-20201201 | 25 | null | transformers | 7,719 | Entry not found |
ClassCat/roberta-base-french | 5bca559ed63b8ee656a4b2d01c18b5c730997bb7 | 2022-07-08T07:34:58.000Z | [
"pytorch",
"roberta",
"fill-mask",
"fr",
"dataset:wikipedia",
"dataset:cc100",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ClassCat | null | ClassCat/roberta-base-french | 25 | 1 | transformers | 7,720 | ---
language: fr
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
widget:
- text: "Je vais à la <mask>."
- text: "J'aime le <mask>."
- text: "J'ai ouvert la <mask>."
- text: "Je m'appelle <mask>."
- text: "J'ai beaucoup d'<mask>."
---
## RoBERTa French base model (Uncased)
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses the RoBERTa base settings, except for the vocabulary size.
### Tokenizer
Uses a BPE tokenizer with a vocabulary size of 50,000.
### Training Data
* [wiki40b/fr](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bfr) (French Wikipedia)
* Subset of [CC-100/fr](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
### Usage
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ClassCat/roberta-base-french')
unmasker("Je vais à la <mask>.")
``` |
KevinChoi/dpr-context_encoder-klue-roberta-base | 236cd1829873679c9f9ac88cfcc79ac16b2c7f45 | 2022-07-06T03:55:31.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | KevinChoi | null | KevinChoi/dpr-context_encoder-klue-roberta-base | 25 | null | transformers | 7,721 | Entry not found |
mikesong724/deberta-wiki-2010 | 2e299beaf61b91e08c79e078380a2395d5526675 | 2022-07-07T03:29:19.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mikesong724 | null | mikesong724/deberta-wiki-2010 | 25 | null | transformers | 7,722 | DeBERTa trained from scratch
continued training from https://huggingface.co/mikesong724/deberta-wiki-2006
Source data: https://dumps.wikimedia.org/archive/2010/
Tools used: https://github.com/mikesong724/Point-in-Time-Language-Model
2010 wiki archive (6.1 GB) trained for 18 epochs = 108 GB, on top of the 2006 run (65 GB)
GLUE benchmark
cola (3e): matthews corr: 0.3640
sst2 (3e): acc: 0.9106
mrpc (5e): F1: 0.8505, acc: 0.7794
stsb (3e): pearson: 0.8339, spearman: 0.8312
qqp (3e): acc: 0.8965, F1: 0.8604
mnli (3e): acc_mm: 0.8023
qnli (3e): acc: 0.8889
rte (3e): acc: 0.5271
wnli (5e): acc: 0.3380 |
bhadresh-savani/bertweet-base-finetuned-emotion | 1662c8787098816246419044f8a2b12a1735aa83 | 2022-07-14T07:00:52.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | bhadresh-savani | null | bhadresh-savani/bertweet-base-finetuned-emotion | 25 | null | transformers | 7,723 | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: bertweet-base-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.929
- name: F1
type: f1
value: 0.9295613935787139
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.925
verified: true
- name: Precision Macro
type: precision
value: 0.8722017563353339
verified: true
- name: Precision Micro
type: precision
value: 0.925
verified: true
- name: Precision Weighted
type: precision
value: 0.9283646705517916
verified: true
- name: Recall Macro
type: recall
value: 0.8982480793145559
verified: true
- name: Recall Micro
type: recall
value: 0.925
verified: true
- name: Recall Weighted
type: recall
value: 0.925
verified: true
- name: F1 Macro
type: f1
value: 0.883488774573809
verified: true
- name: F1 Micro
type: f1
value: 0.925
verified: true
- name: F1 Weighted
type: f1
value: 0.9259820821054494
verified: true
- name: loss
type: loss
value: 0.18158096075057983
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base-finetuned-emotion
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1737
- Accuracy: 0.929
- F1: 0.9296
## Model description
More information needed
## Intended uses & limitations
More information needed
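A minimal inference sketch with the text-classification pipeline (the example input is illustrative; label names come from the emotion dataset):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="bhadresh-savani/bertweet-base-finetuned-emotion")
print(classifier("I can't believe how great this day turned out!"))  # e.g. [{'label': 'joy', 'score': ...}]
```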
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9469 | 1.0 | 250 | 0.3643 | 0.895 | 0.8921 |
| 0.2807 | 2.0 | 500 | 0.2173 | 0.9245 | 0.9252 |
| 0.1749 | 3.0 | 750 | 0.1859 | 0.926 | 0.9266 |
| 0.1355 | 4.0 | 1000 | 0.1737 | 0.929 | 0.9296 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
shaneweisz/DialoGPT-finetuned-gab-multiCONAN | 174212e1a82ecbc11a8fc50230ce1b25c6226cff | 2022-07-12T14:36:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | shaneweisz | null | shaneweisz/DialoGPT-finetuned-gab-multiCONAN | 25 | null | transformers | 7,724 | Entry not found |
Hamzaaa/xlsr-wav2vec-speech-emotion-recognition-finetuned-Savee | a3356a0307bbe29e90e13d61d979c121eb83ab48 | 2022-07-14T08:52:57.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
] | audio-classification | false | Hamzaaa | null | Hamzaaa/xlsr-wav2vec-speech-emotion-recognition-finetuned-Savee | 25 | null | transformers | 7,725 | Entry not found |
Team-PIXEL/pixel-base-finetuned-squadv1 | f517f215cbbc8849db9c4bc8cc4966855468eb71 | 2022-07-14T13:05:00.000Z | [
"pytorch",
"pixel",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Team-PIXEL | null | Team-PIXEL/pixel-base-finetuned-squadv1 | 25 | null | transformers | 7,726 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: pixel-base-finetuned-squadv1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-squad-v1
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 20000
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
haisona3/longformer-base-4096-finetuned-1-epoch-512 | 6151c9818e482287ad47e7993183f13624ba8ead | 2022-07-18T01:01:48.000Z | [
"pytorch",
"tensorboard",
"longformer",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | haisona3 | null | haisona3/longformer-base-4096-finetuned-1-epoch-512 | 25 | null | transformers | 7,727 | ---
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: longformer-base-4096-finetuned-squad2-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-base-4096-finetuned-squad2-finetuned-squad2
This model is a fine-tuned version of [haisona3/longformer-base-4096-finetuned-squad2](https://huggingface.co/haisona3/longformer-base-4096-finetuned-squad2) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
pysentimiento/robertuito-pos | d75b5813e91e8c19c07d64f08b69d60099e95328 | 2022-07-21T11:22:45.000Z | [
"pytorch",
"roberta",
"token-classification",
"es",
"arxiv:2106.09462",
"arxiv:2111.09453",
"transformers",
"twitter",
"pos-tagging",
"autotrain_compatible"
] | token-classification | false | pysentimiento | null | pysentimiento/robertuito-pos | 25 | null | transformers | 7,728 | ---
language:
- es
tags:
- twitter
- pos-tagging
---
# POS Tagging model for Spanish/English
## robertuito-pos
Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/finiteautomata/pysentimiento/)
Model trained with the Spanish/English split of the [LinCE POS corpus](https://ritual.uh.edu/lince/), a code-switched benchmark. The base model is [RoBERTuito](https://github.com/pysentimiento/robertuito), a RoBERTa model trained on Spanish tweets.
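A minimal tagging sketch with the token-classification pipeline (the example sentence is illustrative; the [pysentimiento](https://github.com/pysentimiento/pysentimiento/) library also provides the recommended tweet preprocessing):
```python
from transformers import pipeline

tagger = pipeline("token-classification", model="pysentimiento/robertuito-pos")
print(tagger("jajaja que chistoso mi amigo"))  # one POS tag per (sub)token
```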
## Results
Results are taken from the LinCE leaderboard
| Model | Sentiment | NER | POS |
|:-----------------------|:----------------|:-------------------|:--------|
| RoBERTuito | **60.6** | 68.5 | 97.2 |
| XLM Large | -- | **69.5** | **97.2** |
| XLM Base | -- | 64.9 | 97.0 |
| C2S mBERT | 59.1 | 64.6 | 96.9 |
| mBERT | 56.4 | 64.0 | 97.1 |
| BERT | 58.4 | 61.1 | 96.9 |
| BETO | 56.5 | -- | -- |
## Citation
If you use this model in your research, please cite pysentimiento, RoBERTuito and LinCE papers:
```
@misc{perez2021pysentimiento,
title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
year={2021},
eprint={2106.09462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{perez2021robertuito,
title={RoBERTuito: a pre-trained language model for social media text in Spanish},
author={Juan Manuel Pérez and Damián A. Furman and Laura Alonso Alemany and Franco Luque},
year={2021},
eprint={2111.09453},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{aguilar2020lince,
title={LinCE: A Centralized Benchmark for Linguistic Code-switching Evaluation},
author={Aguilar, Gustavo and Kar, Sudipta and Solorio, Thamar},
booktitle={Proceedings of the 12th Language Resources and Evaluation Conference},
pages={1803--1813},
year={2020}
}
``` |
google/ddpm-cat-256 | 34e20c9840f5865b26b3cd335f6a1bee4bd5f29b | 2022-07-21T15:00:17.000Z | [
"diffusers",
"arxiv:2006.11239",
"pytorch",
"unconditional-image-generation",
"license:apache-2.0"
] | unconditional-image-generation | false | google | null | google/ddpm-cat-256 | 25 | null | diffusers | 7,729 | ---
license: apache-2.0
tags:
- pytorch
- diffusers
- unconditional-image-generation
---
# Denoising Diffusion Probabilistic Models (DDPM)
**Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
**Authors**: Jonathan Ho, Ajay Jain, Pieter Abbeel
**Abstract**:
*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*
## Inference
**DDPM** models can use *discrete noise schedulers* such as:
- [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py)
- [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py)
- [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py)
for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest.
For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead.
See the following code:
```python
# !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = "google/ddpm-cat-256"
# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference
# run pipeline in inference (sample random noise and denoise)
image = ddpm()["sample"]
# save image
image[0].save("ddpm_generated_image.png")
```
For more in-detail information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)
## Training
If you want to train your own model, please have a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
## Samples
1. 
2. 
3. 
4.  |
51la5/QMSUM-keyphrase-gen | 1003893f7c9c2c784bc1e908d3deafc6d9d5b657 | 2022-07-22T10:08:10.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | 51la5 | null | 51la5/QMSUM-keyphrase-gen | 25 | null | transformers | 7,730 | Entry not found |
oliverguhr/wav2vec2-base-german-cv9 | 62829c379e83f02093fe998686c898bfcae2df98 | 2022-07-25T09:34:21.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_9_0",
"transformers",
"mozilla-foundation/common_voice_9_0",
"generated_from_trainer",
"license:mit",
"model-index"
] | automatic-speech-recognition | false | oliverguhr | null | oliverguhr/wav2vec2-base-german-cv9 | 25 | null | transformers | 7,731 | ---
language:
- de
license: mit
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_9_0
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
model-index:
- name: wav2vec2-base-german-cv9
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: common_voice
args: de
metrics:
- name: Test WER
type: wer
value: 10.565782902002716
- name: Test CER
type: cer
value: 2.6226824852959657
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: common_voice
args: de
metrics:
- name: Test WER (+LM)
type: wer
value: 7.996088831362508
- name: Test CER (+LM)
type: cer
value: 2.1515717711623326
---
# wav2vec2-base-german-cv9
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the MOZILLA-FOUNDATION/COMMON_VOICE_9_0 - DE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1742
- Wer: 0.1209
## Model description
More information needed
## Intended uses & limitations
More information needed
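A minimal transcription sketch with the automatic-speech-recognition pipeline (the audio path is illustrative; the pipeline decodes the file with ffmpeg and resamples it to 16 kHz):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="oliverguhr/wav2vec2-base-german-cv9")
print(asr("some_german_recording.wav"))  # {'text': '...'}
```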
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 0.6827 | 1.0 | 3557 | 0.6695 | 0.6247 |
| 0.3992 | 2.0 | 7114 | 0.3738 | 0.3936 |
| 0.2611 | 3.0 | 10671 | 0.3011 | 0.3177 |
| 0.2536 | 4.0 | 14228 | 0.2672 | 0.2749 |
| 0.1943 | 5.0 | 17785 | 0.2487 | 0.2480 |
| 0.2004 | 6.0 | 21342 | 0.2246 | 0.2268 |
| 0.1605 | 7.0 | 24899 | 0.2176 | 0.2120 |
| 0.1579 | 8.0 | 28456 | 0.2046 | 0.2024 |
| 0.1668 | 9.0 | 32013 | 0.2027 | 0.1944 |
| 0.1338 | 10.0 | 35570 | 0.1968 | 0.1854 |
| 0.1478 | 11.0 | 39127 | 0.1963 | 0.1823 |
| 0.1177 | 12.0 | 42684 | 0.1956 | 0.1800 |
| 0.1245 | 13.0 | 46241 | 0.1889 | 0.1732 |
| 0.1124 | 14.0 | 49798 | 0.1868 | 0.1714 |
| 0.1112 | 15.0 | 53355 | 0.1805 | 0.1650 |
| 0.1209 | 16.0 | 56912 | 0.1860 | 0.1614 |
| 0.1002 | 17.0 | 60469 | 0.1828 | 0.1604 |
| 0.118 | 18.0 | 64026 | 0.1832 | 0.1580 |
| 0.0974 | 19.0 | 67583 | 0.1771 | 0.1555 |
| 0.1007 | 20.0 | 71140 | 0.1812 | 0.1532 |
| 0.0866 | 21.0 | 74697 | 0.1752 | 0.1504 |
| 0.0901 | 22.0 | 78254 | 0.1690 | 0.1477 |
| 0.0964 | 23.0 | 81811 | 0.1773 | 0.1489 |
| 0.085 | 24.0 | 85368 | 0.1776 | 0.1456 |
| 0.0945 | 25.0 | 88925 | 0.1786 | 0.1428 |
| 0.0804 | 26.0 | 92482 | 0.1737 | 0.1429 |
| 0.0832 | 27.0 | 96039 | 0.1789 | 0.1394 |
| 0.0683 | 28.0 | 99596 | 0.1741 | 0.1390 |
| 0.0761 | 29.0 | 103153 | 0.1688 | 0.1379 |
| 0.0833 | 30.0 | 106710 | 0.1726 | 0.1370 |
| 0.0753 | 31.0 | 110267 | 0.1774 | 0.1353 |
| 0.08 | 32.0 | 113824 | 0.1734 | 0.1344 |
| 0.0644 | 33.0 | 117381 | 0.1737 | 0.1334 |
| 0.0745 | 34.0 | 120938 | 0.1763 | 0.1335 |
| 0.0629 | 35.0 | 124495 | 0.1761 | 0.1311 |
| 0.0654 | 36.0 | 128052 | 0.1718 | 0.1302 |
| 0.0656 | 37.0 | 131609 | 0.1697 | 0.1301 |
| 0.0643 | 38.0 | 135166 | 0.1716 | 0.1279 |
| 0.0683 | 39.0 | 138723 | 0.1777 | 0.1279 |
| 0.0587 | 40.0 | 142280 | 0.1735 | 0.1271 |
| 0.0693 | 41.0 | 145837 | 0.1780 | 0.1260 |
| 0.0532 | 42.0 | 149394 | 0.1724 | 0.1245 |
| 0.0594 | 43.0 | 152951 | 0.1736 | 0.1250 |
| 0.0544 | 44.0 | 156508 | 0.1744 | 0.1238 |
| 0.0559 | 45.0 | 160065 | 0.1770 | 0.1232 |
| 0.0557 | 46.0 | 163622 | 0.1766 | 0.1231 |
| 0.0521 | 47.0 | 167179 | 0.1751 | 0.1220 |
| 0.0591 | 48.0 | 170736 | 0.1724 | 0.1217 |
| 0.0507 | 49.0 | 174293 | 0.1753 | 0.1212 |
| 0.0577 | 50.0 | 177850 | 0.1742 | 0.1209 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ai4bharat/IndicBERTv2-alpha-TyDiQA | a0880f65a2c3d24240046f4b84257bb600c7443f | 2022-07-27T11:22:47.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ai4bharat | null | ai4bharat/IndicBERTv2-alpha-TyDiQA | 25 | null | transformers | 7,732 | Entry not found |
spicard/small-10 | 2fddb9184d8cd2312da25ce20e30b3b1439d65ba | 2022-07-26T16:40:18.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | spicard | null | spicard/small-10 | 25 | null | transformers | 7,733 | Entry not found |
AbidHasan95/movieHunt2 | b6fe88e4e4494fac296d48005d56ef4ba7063188 | 2022-02-10T19:57:57.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | AbidHasan95 | null | AbidHasan95/movieHunt2 | 24 | null | transformers | 7,734 | Entry not found |
BigSalmon/GPT2HardArticleEasyArticle | dab38cda75421cfdccd8a21d14ec533d6b39e322 | 2021-05-21T09:31:52.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/GPT2HardArticleEasyArticle | 24 | null | transformers | 7,735 | Entry not found |
Ching/negation_detector | b45f4e2e4ec707564027da0861a86c4d9855ef05 | 2021-10-18T10:32:43.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Ching | null | Ching/negation_detector | 24 | null | transformers | 7,736 | This question answering model was fine tuned to detect negation expressions
How to use:
question: negation
context: That is not safe!
Answer: not
question: negation
context: Weren't we going to go to the moon?
Answer: Weren't
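A minimal sketch of this usage pattern with the question-answering pipeline:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Ching/negation_detector")
result = qa(question="negation", context="Weren't we going to go to the moon?")
print(result["answer"])  # expected: "Weren't"
```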
|
ChristopherA08/IndoELECTRA | ccebcb76014a75179ba37840782832a72004aa8f | 2021-02-04T06:23:59.000Z | [
"pytorch",
"electra",
"pretraining",
"id",
"dataset:oscar",
"transformers"
] | null | false | ChristopherA08 | null | ChristopherA08/IndoELECTRA | 24 | null | transformers | 7,737 | ---
language: id
datasets:
- oscar
---
# IndoELECTRA (Indonesian ELECTRA Model)
## Model description
ELECTRA is a method for self-supervised language representation learning. This repository contains a pre-trained ELECTRA base model (TensorFlow 1.15.0) trained on a large Indonesian corpus (~16GB of raw text, ~2B Indonesian words).
IndoELECTRA is a pre-trained language model based on the ELECTRA architecture for the Indonesian language.
This is the base version, which uses the electra-base config.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ChristopherA08/IndoELECTRA")
model = AutoModel.from_pretrained("ChristopherA08/IndoELECTRA")
tokenizer.encode("hai aku mau makan.")
[2, 8078, 1785, 2318, 1946, 18, 4]
```
## Training procedure
The model was trained using Google's original TensorFlow code on an eight-core Google Cloud TPU v2.
We used a Google Cloud Storage bucket for persistent storage of training data and models.
|
DingleyMaillotUrgell/homer-bot | bbd1433f89817e468ccea770cb9dadd9a535e280 | 2022-03-24T21:13:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"conversational"
] | conversational | false | DingleyMaillotUrgell | null | DingleyMaillotUrgell/homer-bot | 24 | 0 | transformers | 7,738 | ---
tags:
- conversational
language:
- en
---
# HomerBot: A conversational chatbot imitating Homer Simpson
This model is a fine-tuned version of [DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium) (medium) trained on Simpsons [scripts](https://www.kaggle.com/datasets/pierremegret/dialogue-lines-of-the-simpsons).
More specifically, we fine-tune DialoGPT-medium for 3 epochs on 10K **(character utterance, Homer's response)** pairs.
For more details, check out our git [repo](https://github.com/jesseDingley/HomerBot) containing all the code.
### How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("DingleyMaillotUrgell/homer-bot")
model = AutoModelForCausalLM.from_pretrained("DingleyMaillotUrgell/homer-bot")
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids,
        max_length=1000,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )
    # print the bot's latest output tokens
    print("Homer: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
EthanChen0418/few-shot-model-five-classes | da60c05fb3328e9a41275b31db9fe73f45d1523c | 2021-08-04T13:04:58.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | false | EthanChen0418 | null | EthanChen0418/few-shot-model-five-classes | 24 | null | transformers | 7,739 | Entry not found |
Ghana-NLP/distilabena-base-akuapem-twi-cased | f1d586ce2848b67894bbcabf7fce4b63825103c2 | 2020-10-22T06:04:27.000Z | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Ghana-NLP | null | Ghana-NLP/distilabena-base-akuapem-twi-cased | 24 | null | transformers | 7,740 | Entry not found |
Harveenchadha/indictrans | 637f125f737760febd79d096cb47393e175ebd5c | 2021-12-17T18:10:03.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Harveenchadha | null | Harveenchadha/indictrans | 24 | null | transformers | 7,741 | **Work in progress** |
Helsinki-NLP/opus-mt-alv-en | db2e7d8fa1edda0c395e03b813b45d91f6144d5b | 2021-01-18T07:46:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sn",
"rw",
"wo",
"ig",
"sg",
"ee",
"zu",
"lg",
"ts",
"ln",
"ny",
"yo",
"rn",
"xh",
"alv",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-alv-en | 24 | null | transformers | 7,742 | ---
language:
- sn
- rw
- wo
- ig
- sg
- ee
- zu
- lg
- ts
- ln
- ny
- yo
- rn
- xh
- alv
- en
tags:
- translation
license: apache-2.0
---
### alv-eng
* source group: Atlantic-Congo languages
* target group: English
* OPUS readme: [alv-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/alv-eng/README.md)
* model: transformer
* source language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ewe-eng.ewe.eng | 6.3 | 0.328 |
| Tatoeba-test.ful-eng.ful.eng | 0.4 | 0.108 |
| Tatoeba-test.ibo-eng.ibo.eng | 4.5 | 0.196 |
| Tatoeba-test.kin-eng.kin.eng | 30.7 | 0.511 |
| Tatoeba-test.lin-eng.lin.eng | 2.8 | 0.213 |
| Tatoeba-test.lug-eng.lug.eng | 3.4 | 0.140 |
| Tatoeba-test.multi.eng | 20.9 | 0.376 |
| Tatoeba-test.nya-eng.nya.eng | 38.7 | 0.492 |
| Tatoeba-test.run-eng.run.eng | 24.5 | 0.417 |
| Tatoeba-test.sag-eng.sag.eng | 5.5 | 0.177 |
| Tatoeba-test.sna-eng.sna.eng | 26.9 | 0.412 |
| Tatoeba-test.swa-eng.swa.eng | 4.9 | 0.196 |
| Tatoeba-test.toi-eng.toi.eng | 3.9 | 0.147 |
| Tatoeba-test.tso-eng.tso.eng | 76.7 | 0.957 |
| Tatoeba-test.umb-eng.umb.eng | 4.0 | 0.195 |
| Tatoeba-test.wol-eng.wol.eng | 3.7 | 0.170 |
| Tatoeba-test.xho-eng.xho.eng | 38.9 | 0.556 |
| Tatoeba-test.yor-eng.yor.eng | 25.1 | 0.412 |
| Tatoeba-test.zul-eng.zul.eng | 46.1 | 0.623 |
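A minimal translation sketch with the MarianMT checkpoint (the Swahili example sentence is illustrative; no target-language token is needed since the target is always English):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-alv-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Ninakupenda."], return_tensors="pt", padding=True)  # Swahili: "I love you."
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```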
### System Info:
- hf_name: alv-eng
- source_languages: alv
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/alv-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv', 'en']
- src_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.test.txt
- src_alpha3: alv
- tgt_alpha3: eng
- short_pair: alv-en
- chrF2_score: 0.376
- bleu: 20.9
- brevity_penalty: 1.0
- ref_len: 15208.0
- src_name: Atlantic-Congo languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: alv
- tgt_alpha2: en
- prefer_old: False
- long_pair: alv-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-cel-en | 5de5d6405a061244be33449468458a8af5343934 | 2021-01-18T07:54:08.000Z | [
"pytorch",
"marian",
"text2text-generation",
"gd",
"ga",
"br",
"kw",
"gv",
"cy",
"cel",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-cel-en | 24 | null | transformers | 7,743 | ---
language:
- gd
- ga
- br
- kw
- gv
- cy
- cel
- en
tags:
- translation
license: apache-2.0
---
### cel-eng
* source group: Celtic languages
* target group: English
* OPUS readme: [cel-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cel-eng/README.md)
* model: transformer
* source language(s): bre cor cym gla gle glv
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bre-eng.bre.eng | 17.2 | 0.385 |
| Tatoeba-test.cor-eng.cor.eng | 3.0 | 0.172 |
| Tatoeba-test.cym-eng.cym.eng | 41.5 | 0.582 |
| Tatoeba-test.gla-eng.gla.eng | 15.4 | 0.330 |
| Tatoeba-test.gle-eng.gle.eng | 50.8 | 0.668 |
| Tatoeba-test.glv-eng.glv.eng | 11.0 | 0.297 |
| Tatoeba-test.multi.eng | 22.8 | 0.398 |
### System Info:
- hf_name: cel-eng
- source_languages: cel
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cel-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gd', 'ga', 'br', 'kw', 'gv', 'cy', 'cel', 'en']
- src_constituents: {'gla', 'gle', 'bre', 'cor', 'glv', 'cym'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cel
- tgt_alpha3: eng
- short_pair: cel-en
- chrF2_score: 0.39799999999999996
- bleu: 22.8
- brevity_penalty: 1.0
- ref_len: 42097.0
- src_name: Celtic languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cel
- tgt_alpha2: en
- prefer_old: False
- long_pair: cel-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-gl | b72cd3b6bef693f9bf4a024e1db18b88d7a4f9d5 | 2021-09-09T21:35:35.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"gl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-gl | 24 | null | transformers | 7,744 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-gl
* source languages: en
* target languages: gl
* OPUS readme: [en-gl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gl/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gl/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gl/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.gl | 36.4 | 0.572 |
|
Helsinki-NLP/opus-mt-en-lg | a0f5fff204854b2832969499a61ee05164cbfa2c | 2021-09-09T21:36:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"lg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-lg | 24 | 1 | transformers | 7,745 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-lg
* source languages: en
* target languages: lg
* OPUS readme: [en-lg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lg | 30.4 | 0.543 |
| Tatoeba.en.lg | 5.7 | 0.386 |
|
Helsinki-NLP/opus-mt-es-el | 62475171998f80c7e466f33e0321650dd9aa7438 | 2021-09-09T21:42:04.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"el",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-el | 24 | null | transformers | 7,746 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-el
* source languages: es
* target languages: el
* OPUS readme: [es-el](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-el/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-29.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-el/opus-2020-01-29.zip)
* test set translations: [opus-2020-01-29.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-el/opus-2020-01-29.test.txt)
* test set scores: [opus-2020-01-29.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-el/opus-2020-01-29.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.el | 48.6 | 0.661 |
|
Helsinki-NLP/opus-mt-hi-ur | 30f8d77a8003744072305a44e2e6d07aa3ba11e4 | 2020-08-21T14:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"hi",
"ur",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-hi-ur | 24 | null | transformers | 7,747 | ---
language:
- hi
- ur
tags:
- translation
license: apache-2.0
---
### hin-urd
* source group: Hindi
* target group: Urdu
* OPUS readme: [hin-urd](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hin-urd/README.md)
* model: transformer-align
* source language(s): hin
* target language(s): urd
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.hin.urd | 12.4 | 0.393 |
### System Info:
- hf_name: hin-urd
- source_languages: hin
- target_languages: urd
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hin-urd/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['hi', 'ur']
- src_constituents: {'hin'}
- tgt_constituents: {'urd'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.test.txt
- src_alpha3: hin
- tgt_alpha3: urd
- short_pair: hi-ur
- chrF2_score: 0.39299999999999996
- bleu: 12.4
- brevity_penalty: 1.0
- ref_len: 1618.0
- src_name: Hindi
- tgt_name: Urdu
- train_date: 2020-06-16
- src_alpha2: hi
- tgt_alpha2: ur
- prefer_old: False
- long_pair: hin-urd
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-mfe-en | 0df7b162d732a66544619408f94c9ca1e4b1d7bf | 2021-09-10T13:57:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"mfe",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-mfe-en | 24 | null | transformers | 7,748 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-mfe-en
* source languages: mfe
* target languages: en
* OPUS readme: [mfe-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mfe-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/mfe-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfe-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfe-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mfe.en | 39.9 | 0.552 |
|
KBLab/bert-base-swedish-cased-neriob | e9faae17dbe01f726df3fb2e03cb45a74909a7ac | 2021-05-18T21:20:00.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | KBLab | null | KBLab/bert-base-swedish-cased-neriob | 24 | null | transformers | 7,749 | Entry not found |
LegolasTheElf/Wav2Vec2_XLSR_Bengali_1b | 825776a02eb76a560e28bb2ddd4d0b545172f997 | 2022-01-27T02:23:29.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | LegolasTheElf | null | LegolasTheElf/Wav2Vec2_XLSR_Bengali_1b | 24 | null | transformers | 7,750 | Entry not found |
Luciano/bertimbau-base-lener_br | 20c96be10d975181d1fce2e91a321a559e0eadc5 | 2022-06-28T12:01:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"pt",
"dataset:lener_br",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | Luciano | null | Luciano/bertimbau-base-lener_br | 24 | 2 | transformers | 7,751 | ---
language:
- pt
license: mit
tags:
- generated_from_trainer
datasets:
- lener_br
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: bertimbau-base-lener_br
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: lener_br
type: lener_br
args: lener_br
metric:
name: Accuracy
type: accuracy
value: 0.9692504609383333
model-index:
- name: Luciano/bertimbau-base-lener_br
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: lener_br
type: lener_br
config: lener_br
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9824282794418222
verified: true
- name: Precision
type: precision
value: 0.9877557596262284
verified: true
- name: Recall
type: recall
value: 0.9870401674313772
verified: true
- name: F1
type: f1
value: 0.9873978338768773
verified: true
- name: loss
type: loss
value: 0.11542011797428131
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertimbau-base-lener_br
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the lener_br dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2298
- Precision: 0.8501
- Recall: 0.9138
- F1: 0.8808
- Accuracy: 0.9693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0686 | 1.0 | 1957 | 0.1399 | 0.7759 | 0.8669 | 0.8189 | 0.9641 |
| 0.0437 | 2.0 | 3914 | 0.1457 | 0.7997 | 0.8938 | 0.8441 | 0.9623 |
| 0.0313 | 3.0 | 5871 | 0.1675 | 0.8466 | 0.8744 | 0.8603 | 0.9651 |
| 0.0201 | 4.0 | 7828 | 0.1621 | 0.8713 | 0.8839 | 0.8775 | 0.9718 |
| 0.0137 | 5.0 | 9785 | 0.1811 | 0.7783 | 0.9159 | 0.8415 | 0.9645 |
| 0.0105 | 6.0 | 11742 | 0.1836 | 0.8568 | 0.9009 | 0.8783 | 0.9692 |
| 0.0105 | 7.0 | 13699 | 0.1649 | 0.8339 | 0.9125 | 0.8714 | 0.9725 |
| 0.0059 | 8.0 | 15656 | 0.2298 | 0.8501 | 0.9138 | 0.8808 | 0.9693 |
| 0.0051 | 9.0 | 17613 | 0.2210 | 0.8437 | 0.9045 | 0.8731 | 0.9693 |
| 0.0061 | 10.0 | 19570 | 0.2499 | 0.8627 | 0.8946 | 0.8784 | 0.9681 |
| 0.0041 | 11.0 | 21527 | 0.1985 | 0.8560 | 0.9052 | 0.8799 | 0.9720 |
| 0.003 | 12.0 | 23484 | 0.2204 | 0.8498 | 0.9065 | 0.8772 | 0.9699 |
| 0.0014 | 13.0 | 25441 | 0.2152 | 0.8425 | 0.9067 | 0.8734 | 0.9709 |
| 0.0005 | 14.0 | 27398 | 0.2317 | 0.8553 | 0.8987 | 0.8765 | 0.9705 |
| 0.0015 | 15.0 | 29355 | 0.2436 | 0.8543 | 0.8989 | 0.8760 | 0.9700 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
|
Luciano/gpt2-small-portuguese-finetuned-tcu-acordaos | 204addc7ee9d526586292c781bde10aeed33614b | 2022-02-18T10:22:01.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"pt",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | Luciano | null | Luciano/gpt2-small-portuguese-finetuned-tcu-acordaos | 24 | null | transformers | 7,752 | ---
language:
- pt
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-small-portuguese-finetuned-tcu-acordaos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-small-portuguese-finetuned-tcu-acordaos
This model is a fine-tuned version of [pierreguillou/gpt2-small-portuguese](https://huggingface.co/pierreguillou/gpt2-small-portuguese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3435 | 1.0 | 658 | 1.8346 |
| 1.8668 | 2.0 | 1316 | 1.7141 |
| 1.7573 | 3.0 | 1974 | 1.6841 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
RecordedFuture/Swedish-NER | 436f9d59ada004b5bbae5f351005c4fd9bd43bbb | 2021-05-24T12:03:54.000Z | [
"pytorch",
"bert",
"token-classification",
"sv",
"transformers",
"license:mit",
"autotrain_compatible"
] | token-classification | false | RecordedFuture | null | RecordedFuture/Swedish-NER | 24 | null | transformers | 7,753 | ---
language: sv
license: mit
---
## Swedish BERT model for Named Entity Recognition (NER)
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases a Named Entity Recognition (NER) model for entity detection in Swedish. The model is based on [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) and fine-tuned on data collected from various internet sources and forums.
The model has been trained on Swedish data and only supports inference on Swedish input texts. The model's inference metrics for all non-Swedish inputs are not defined; these inputs are considered out-of-domain data.
The current models are supported at Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.
### Available tags
* Location
* Organization
* Person
* Religion
* Title
### Evaluation metrics
The model had the following metrics when evaluated on test data originating from the same domain as the training data.
#### F1-score
| Loc | Org | Per | Nat | Rel | Tit | Total |
|------|------|------|------|------|------|-------|
| 0.91 | 0.88 | 0.96 | 0.95 | 0.91 | 0.84 | 0.92 |
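A minimal tagging sketch with the token-classification pipeline (the example sentence is illustrative; remember that only Swedish input is supported):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="RecordedFuture/Swedish-NER")
print(ner("Stefan Löfven besökte Volvo i Göteborg."))
```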
|
StivenLancheros/bert-base-spanish-wwm-cased-finetuned-ner-false | 385bf0087febad5d0c408fa3897b2d6a4a1e64bc | 2021-11-23T10:27:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2002",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | StivenLancheros | null | StivenLancheros/bert-base-spanish-wwm-cased-finetuned-ner-false | 24 | null | transformers | 7,754 | ---
tags:
- generated_from_trainer
datasets:
- conll2002
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-ner-false
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2002
type: conll2002
args: es
metrics:
- name: Precision
type: precision
value: 0.8527941844616084
- name: Recall
type: recall
value: 0.8625919117647058
- name: F1
type: f1
value: 0.8576650673977612
- name: Accuracy
type: accuracy
value: 0.9780246773614496
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-ner-false
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the conll2002 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1154
- Precision: 0.8528
- Recall: 0.8626
- F1: 0.8577
- Accuracy: 0.9780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1072 | 1.0 | 833 | 0.0905 | 0.8432 | 0.8451 | 0.8442 | 0.9779 |
| 0.0347 | 2.0 | 1666 | 0.0934 | 0.8592 | 0.8612 | 0.8602 | 0.9782 |
| 0.0218 | 3.0 | 2499 | 0.1078 | 0.8537 | 0.8568 | 0.8553 | 0.9776 |
| 0.0106 | 4.0 | 3332 | 0.1154 | 0.8528 | 0.8626 | 0.8577 | 0.9780 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
TODBERT/TOD-BERT-MLM-V1 | 34178a6c57ace7efbf9423aae288804eb163f326 | 2021-05-19T11:32:32.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"transformers"
] | null | false | TODBERT | null | TODBERT/TOD-BERT-MLM-V1 | 24 | null | transformers | 7,755 | Entry not found |
Tymoteusz/distilbert-base-uncased-kaggle-readability | a3629dbe7697aaf4c1667b45af001a9d3ce7098f | 2021-08-10T21:09:07.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | Tymoteusz | null | Tymoteusz/distilbert-base-uncased-kaggle-readability | 24 | 1 | transformers | 7,756 | Entry not found |
af-ai-center/bert-large-swedish-uncased | 0b4d7e18946709ef6303d597fd6020457ae42701 | 2021-05-18T23:14:05.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | af-ai-center | null | af-ai-center/bert-large-swedish-uncased | 24 | null | transformers | 7,757 | Entry not found |
airKlizz/mt5-base-wikinewssum-english-1000 | 965abb6d7793e41d2987461f8c0e9c8dfbe4bb7e | 2021-12-31T12:29:07.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | airKlizz | null | airKlizz/mt5-base-wikinewssum-english-1000 | 24 | 1 | transformers | 7,758 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-english-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-english-1000
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4724
- Rouge1: 7.7389
- Rouge2: 3.1606
- Rougel: 6.3317
- Rougelsum: 7.2487
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 125 | 2.6981 | 7.1504 | 2.6253 | 5.8261 | 6.7427 |
| No log | 2.0 | 250 | 2.5597 | 7.4666 | 2.9362 | 6.0965 | 6.9699 |
| No log | 3.0 | 375 | 2.5145 | 7.4599 | 2.9449 | 6.0941 | 6.9734 |
| No log | 4.0 | 500 | 2.4904 | 7.5063 | 2.975 | 6.137 | 7.0027 |
| No log | 5.0 | 625 | 2.4904 | 7.6027 | 3.0582 | 6.2161 | 7.0832 |
| No log | 6.0 | 750 | 2.4801 | 7.7601 | 3.1916 | 6.3689 | 7.2686 |
| No log | 7.0 | 875 | 2.4737 | 7.7162 | 3.1332 | 6.3113 | 7.2283 |
| No log | 8.0 | 1000 | 2.4724 | 7.7389 | 3.1606 | 6.3317 | 7.2487 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
alaggung/bart-rl | 09ef92f05c2fa9e0e9fb9ea7805947053e8aeb11 | 2022-01-13T17:18:17.000Z | [
"pytorch",
"tf",
"bart",
"text2text-generation",
"ko",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | alaggung | null | alaggung/bart-rl | 24 | null | transformers | 7,759 | ---
language:
- ko
tags:
- summarization
widget:
- text: "[BOS]밥 ㄱ?[SEP]고고고고 뭐 먹을까?[SEP]어제 김치찌개 먹어서 한식말고 딴 거[SEP]그럼 돈까스 어때?[SEP]오 좋다 1시 학관 앞으로 오셈[SEP]ㅇㅋ[EOS]"
inference:
parameters:
max_length: 64
top_k: 5
---
# BART RL
We share the dialogue-summarization sample model of team 알라꿍달라꿍 from the dialogue summarization track of the 2021 Hunminjeongeum Korean Speech & Natural Language AI Competition.
This model was trained on the dialogue summarization task by applying the RL technique from the [2021-dialogue-summary-competition](https://github.com/cosmoquester/2021-dialogue-summary-competition) repository to the [bart-r3f](https://huggingface.co/alaggung/bart-r3f) model.
Training used the [AIHub Korean dialogue summarization](https://aihub.or.kr/aidata/30714) dataset.
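A minimal summarization sketch following the input format shown in the widget above (dialogue turns are joined with [SEP] and wrapped in [BOS]/[EOS]; the generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "alaggung/bart-rl"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

turns = ["밥 ㄱ?", "고고고고 뭐 먹을까?", "어제 김치찌개 먹어서 한식말고 딴 거", "그럼 돈까스 어때?", "오 좋다 1시 학관 앞으로 오셈", "ㅇㅋ"]
text = "[BOS]" + "[SEP]".join(turns) + "[EOS]"

inputs = tokenizer(text, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```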
|
albertvillanova/autonlp-wikiann-entity_extraction-1e67664-1301123 | a41d734d99baf52cc5e0db6c8d38f72b17b0f534 | 2021-05-22T04:14:37.000Z | [
"pytorch",
"albert",
"token-classification",
"bn",
"dataset:albertvillanova/autonlp-data-wikiann-entity_extraction-1e67664",
"transformers",
"autonlp",
"autotrain_compatible"
] | token-classification | false | albertvillanova | null | albertvillanova/autonlp-wikiann-entity_extraction-1e67664-1301123 | 24 | 2 | transformers | 7,760 | ---
tags: autonlp
language: bn
widget:
- text: "I love AutoNLP 🤗"
datasets:
- albertvillanova/autonlp-data-wikiann-entity_extraction-1e67664
---
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 1301123
## Validation Metrics
- Loss: 0.14097803831100464
- Accuracy: 0.9740097463451206
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/albertvillanova/autonlp-wikiann-entity_extraction-1e67664-1301123
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("albertvillanova/autonlp-wikiann-entity_extraction-1e67664-1301123", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("albertvillanova/autonlp-wikiann-entity_extraction-1e67664-1301123", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
allenai/longformer-scico | 30022f11e6d9c4231b64d1495f1ff11b973a4c10 | 2021-09-30T10:04:33.000Z | [
"pytorch",
"longformer",
"text-classification",
"en",
"dataset:allenai/scico",
"transformers",
"longformer-scico",
"license:apache-2.0"
] | text-classification | false | allenai | null | allenai/longformer-scico | 24 | 1 | transformers | 7,761 | ---
language: en
tags:
- longformer
- longformer-scico
license: apache-2.0
datasets:
- allenai/scico
inference: false
---
# Longformer for SciCo
This model is the `unified` model discussed in the paper [SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts (AKBC 2021)](https://openreview.net/forum?id=OFLbgUP04nC) that formulates the task of hierarchical cross-document coreference resolution (H-CDCR) as a multiclass problem. The model takes as input two mentions `m1` and `m2` with their corresponding context and outputs 4 scores:
* 0: not related
* 1: `m1` and `m2` corefer
* 2: `m1` is a parent of `m2`
* 3: `m1` is a child of `m2`.
We provide the following code as an example to set the global attention on the special tokens: `<s>`, `<m>` and `</m>`.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained('allenai/longformer-scico')
model = AutoModelForSequenceClassification.from_pretrained('allenai/longformer-scico')
start_token = tokenizer.convert_tokens_to_ids("<m>")
end_token = tokenizer.convert_tokens_to_ids("</m>")
def get_global_attention(input_ids):
    global_attention_mask = torch.zeros(input_ids.shape)
    global_attention_mask[:, 0] = 1  # global attention to the CLS token
    start = torch.nonzero(input_ids == start_token)  # global attention to the <m> token
    end = torch.nonzero(input_ids == end_token)  # global attention to the </m> token
    globs = torch.cat((start, end))
    value = torch.ones(globs.shape[0])
    global_attention_mask.index_put_(tuple(globs.t()), value)
    return global_attention_mask
m1 = "In this paper we present the results of an experiment in <m> automatic concept and definition extraction </m> from written sources of law using relatively simple natural methods."
m2 = "This task is important since many natural language processing (NLP) problems, such as <m> information extraction </m>, summarization and dialogue."
inputs = m1 + " </s></s> " + m2
tokens = tokenizer(inputs, return_tensors='pt')
global_attention_mask = get_global_attention(tokens['input_ids'])
with torch.no_grad():
    output = model(tokens['input_ids'], tokens['attention_mask'], global_attention_mask)
scores = torch.softmax(output.logits, dim=-1)
# tensor([[0.0818, 0.0023, 0.0019, 0.9139]]) -- m1 is a child of m2
```
**Note:** There is a slight difference between this model and the original model presented in the [paper](https://openreview.net/forum?id=OFLbgUP04nC). The original model includes a single linear layer on top of the `<s>` token (equivalent to `[CLS]`) while this model includes a two-layers MLP to be in line with `LongformerForSequenceClassification`. The original repository can be found [here](https://github.com/ariecattan/scico).
# Citation
```python
@inproceedings{
cattan2021scico,
title={SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts},
author={Arie Cattan and Sophie Johnson and Daniel S Weld and Ido Dagan and Iz Beltagy and Doug Downey and Tom Hope},
booktitle={3rd Conference on Automated Knowledge Base Construction},
year={2021},
url={https://openreview.net/forum?id=OFLbgUP04nC}
}
```
|
bhavikardeshna/xlm-roberta-base-arabic | 155506a3f20d9c89857dce72140d6c8f7e655016 | 2021-12-21T11:41:04.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"arxiv:2112.09866",
"transformers",
"autotrain_compatible"
] | question-answering | false | bhavikardeshna | null | bhavikardeshna/xlm-roberta-base-arabic | 24 | 1 | transformers | 7,762 | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
cambridgeltl/simctg_english_wikipedia | 8976b60a827627d10ec618291a0e935eeca14903 | 2022-06-25T19:45:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"arxiv:2202.06417",
"transformers"
] | text-generation | false | cambridgeltl | null | cambridgeltl/simctg_english_wikipedia | 24 | null | transformers | 7,763 | This model provides a GPT-2 language model trained with SimCTG on the English Wikipedia based on our paper [_A Contrastive Framework for Neural Text Generation_](https://arxiv.org/abs/2202.06417).
We provide a detailed tutorial on how to apply SimCTG and Contrastive Search in our [project repo](https://github.com/yxuansu/SimCTG#4-huggingface-style-tutorials-back-to-top). In the following, we illustrate a brief tutorial on how to use our approach to perform text generation.
## 1. Installation of SimCTG:
```yaml
pip install simctg --upgrade
```
## 2. Initialize SimCTG Model:
```python
import torch
# load SimCTG language model
from simctg.simctggpt import SimCTGGPT
model_name = r'cambridgeltl/simctg_english_wikipedia'
model = SimCTGGPT(model_name)
model.eval()
tokenizer = model.tokenizer
```
## 3. Prepare the Text Prefix:
```python
prefix_text = r"Insect farming is the practice of raising and breeding insects as livestock, also referred to as minilivestock or micro stock. Insects may be farmed for the commodities"
print ('Prefix is: {}'.format(prefix_text))
tokens = tokenizer.tokenize(prefix_text)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
input_ids = torch.LongTensor(input_ids).view(1,-1)
```
## 4. Generate Text with Contrastive Search:
```python
beam_width, alpha, decoding_len = 5, 0.6, 128
output = model.fast_contrastive_search(input_ids=input_ids, beam_width=beam_width,
alpha=alpha, decoding_len=decoding_len)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(output))
'''
Prefix is: Insect farming is the practice of raising and breeding insects as livestock, also referred to as minilivestock or
micro stock. Insects may be farmed for the commodities
Output:
----------------------------------------------------------------------------------------------------
Insect farming is the practice of raising and breeding insects as livestock, also referred to as minilivestock or micro stock.
Insects may be farmed for the commodities they produce, such as honey, corn, sorghum, and other crops. In some cases, the
production of insects is a way to increase income for the owner or his family. This type of farming has been described as "an
economic system that benefits all people regardless of race, sex, or social status" (p. 9). A large number of farmers in North
America, Europe, and South America have used the method of farming for food production in order to feed their families and livestock.
The most common method of farming is by hand-cropping, which consists of cutting a hole in the ground and using a saw
'''
```
For more details of our work, please refer to our main [project repo](https://github.com/yxuansu/SimCTG).
## 5. Citation:
If you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!
```bibtex
@article{su2022contrastive,
title={A Contrastive Framework for Neural Text Generation},
author={Su, Yixuan and Lan, Tian and Wang, Yan and Yogatama, Dani and Kong, Lingpeng and Collier, Nigel},
journal={arXiv preprint arXiv:2202.06417},
year={2022}
}
```
|
camembert/camembert-base-ccnet-4gb | 940db5c122b766bb82b5e2e6290c6d82c04bb515 | 2020-12-11T21:35:11.000Z | [
"pytorch",
"camembert",
"fr",
"arxiv:1911.03894",
"transformers"
] | null | false | camembert | null | camembert/camembert-base-ccnet-4gb | 24 | null | transformers | 7,764 | ---
language: fr
---
# CamemBERT: a Tasty French Language Model
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data and pretraining data source domains.
For further information or requests, please go to [Camembert Website](https://camembert-model.fr/)
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer :
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-ccnet-4gb")
camembert = CamembertModel.from_pretrained("camembert/camembert-base-ccnet-4gb")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-ccnet-4gb", tokenizer="camembert/camembert-base-ccnet-4gb")
results = camembert_fill_mask("Le camembert est-il <mask> ?")
# results
#[{'sequence': '<s> Le camembert est-il sain?</s>', 'score': 0.07001790404319763, 'token': 10286},
#{'sequence': '<s> Le camembert est-il français?</s>', 'score': 0.057594332844018936, 'token': 384},
#{'sequence': '<s> Le camembert est-il bon?</s>', 'score': 0.04098724573850632, 'token': 305},
#{'sequence': '<s> Le camembert est-il périmé?</s>', 'score': 0.03486393392086029, 'token': 30862},
#{'sequence': '<s> Le camembert est-il cher?</s>', 'score': 0.021535946056246758, 'token': 1604}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']
# Encode to token ids and add special start and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 133, 22, 1250, 16, 12034, 14324, 81, 76, 6]
# NB: Can be done in one step : tokenizer.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# embeddings.size torch.Size([1, 10, 768])
#tensor([[[ 0.0331, 0.0095, -0.2776, ..., 0.2875, -0.0827, -0.2467],
# [-0.1348, 0.0478, -0.5409, ..., 0.8330, 0.0467, 0.0662],
# [ 0.0920, -0.0264, 0.0177, ..., 0.1112, 0.0108, -0.1123],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert/camembert-base-ccnet-4gb", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-base-ccnet-4gb", config=config)
embeddings, _, all_layer_embeddings = camembert(encoded_sentence)
# all_layer_embeddings list of len(all_layer_embeddings) == 13 (input embedding layer + 12 self attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 768])
#tensor([[[-0.0144, 0.1855, 0.4895, ..., -0.1537, 0.0107, -0.2293],
# [-0.6664, -0.0880, -0.1539, ..., 0.3635, 0.4047, 0.1258],
# [ 0.0511, 0.0540, 0.2545, ..., 0.0709, -0.0288, -0.0779],
# ...,
```
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
cardiffnlp/bertweet-base-stance-feminist | 09bcb891e6443cea7d7aa85a84510d9485880b94 | 2021-05-20T14:57:14.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | cardiffnlp | null | cardiffnlp/bertweet-base-stance-feminist | 24 | null | transformers | 7,765 | |
congcongwang/bart-base-en-zh | 5374572a6370dee233695aa209cf75a5917ff658 | 2020-10-04T21:16:04.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | congcongwang | null | congcongwang/bart-base-en-zh | 24 | null | transformers | 7,766 | Entry not found |
congcongwang/distilgpt2_fine_tuned_coder | 04eff431ef11d99e25142edb2e5aeee4ee0e36ad | 2021-05-21T15:04:51.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | congcongwang | null | congcongwang/distilgpt2_fine_tuned_coder | 24 | 1 | transformers | 7,767 | Entry not found |
dbmdz/flair-clef-hipe-german-base | 1bd0a25e12823de125082e5bc70ff5c818f237d3 | 2021-04-09T13:00:18.000Z | [
"pytorch",
"de",
"arxiv:2011.06993",
"arxiv:2010.10392",
"flair",
"token-classification",
"sequence-tagger-model",
"license:mit"
] | token-classification | false | dbmdz | null | dbmdz/flair-clef-hipe-german-base | 24 | null | flair | 7,768 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
widget:
- text: "Herr Oberst Brunner ist nämlich Hauptagent für den Kanton Zürich."
license: mit
---
# Triple E - Effective Ensembling of Embeddings and Language Models for NER of Historical German
Based on [our paper](http://ceur-ws.org/Vol-2696/paper_173.pdf) we release a new baseline model for the German
[CLEF-HIPE shared task](https://impresso.github.io/CLEF-HIPE-2020/).
In contrast to the models used in the paper, we manually sentence-segmented the data, normalized hyphenations and
trained a NER model using the German Europeana BERT model.
Additionally, we perform experiments with different context sizes. This approach is described in
more detail in [this paper](https://arxiv.org/abs/2011.06993).
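The card does not include a usage snippet; a minimal sketch with Flair, assuming a recent Flair version that can load the tagger directly from the model hub:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned CLEF-HIPE tagger from the model hub
tagger = SequenceTagger.load("dbmdz/flair-clef-hipe-german-base")

# example sentence taken from the widget above
sentence = Sentence("Herr Oberst Brunner ist nämlich Hauptagent für den Kanton Zürich.")

# predict tags and print the annotated sentence
tagger.predict(sentence)
print(sentence.to_tagged_string())
```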
# Results
The results with different context sizes can be seen in the following table:
| Model | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg.
| -------------------------- | --------------- | --------------- | --------------- | ------------------- | --------------- | ---------------
| German Europeana BERT | (81.45) / 76.92 | (**81.53**) / 77.03 | (80.49) / 77.83 | (80.88) / 77.19 | (81.39) / 77.00 | (81.15 ± 0.45) / 77.19 ± 0.34
| German Europeana BERT (16) | (**82.56**) / 77.38 | (81.19) / 77.76 | (80.99) / 76.34 | (81.27) / 77.70 | (81.28) / 77.22 | (81.46 ± 0.63) / 77.28 ± 0.57
| German Europeana BERT (32) | (**82.04**) / 78.50 | (81.14) / 76.56 | (81.81) / 78.28 | (81.50) / 76.90 | (81.64) / 77.94 | (81.63 ± 0.34) / 77.64 ± 0.86
| German Europeana BERT (64) | (81.21) / 78.39 | (81.27) / 75.98 | (**81.88**) / 78.40 | (81.66) / 77.35 | (81.29) / 76.70 | (81.46 ± 0.29) / 77.36 ± 1.06
| German Europeana BERT (80) | (82.13) / 77.77 | (81.31) / 76.81 | (82.09) / 78.69 | (**82.30**) / 76.79 | (80.65) / 77.10 | (81.70 ± 0.70) / 77.43 ± 0.81
For the model upload, we chose the model with the best development score: 82.56 with a context length of 16.
## Comparisons
The following figure shows the results with different context sized (on development dataset):

We perform "Almost Stochastic Order" tests as proposed in the
["Deep Dominance - How to Properly Compare Deep Neural Models"](https://www.aclweb.org/anthology/P19-1266/) paper.
The heatmap figure is heavily inspired by the ["CharacterBERT"](https://arxiv.org/abs/2010.10392) paper.

|
emrecan/bert-base-multilingual-cased-allnli_tr | 25f8e4b3271467c419a73e0e293d3299db775534 | 2021-12-03T20:46:47.000Z | [
"pytorch",
"bert",
"text-classification",
"tr",
"dataset:nli_tr",
"transformers",
"zero-shot-classification",
"nli",
"license:mit"
] | zero-shot-classification | false | emrecan | null | emrecan/bert-base-multilingual-cased-allnli_tr | 24 | null | transformers | 7,769 | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: mit
datasets:
- nli_tr
metrics:
- accuracy
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased_allnli_tr
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the Turkish NLI (`nli_tr`) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6144
- Accuracy: 0.7662
## Model description
More information needed
## Intended uses & limitations
More information needed
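In the meantime, the checkpoint can already be exercised through the zero-shot classification pipeline, mirroring the widget examples above. A minimal sketch, assuming the NLI labels exposed by the checkpoint's config are usable by the pipeline:
```python
from transformers import pipeline

# zero-shot classification backed by the Turkish NLI head (a sketch)
classifier = pipeline(
    "zero-shot-classification",
    model="emrecan/bert-base-multilingual-cased-allnli_tr",
)

print(classifier(
    "Dolar yükselmeye devam ediyor.",
    candidate_labels=["ekonomi", "siyaset", "spor"],
))
```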
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8623 | 0.03 | 1000 | 0.9076 | 0.5917 |
| 0.7528 | 0.07 | 2000 | 0.8587 | 0.6119 |
| 0.7074 | 0.1 | 3000 | 0.7867 | 0.6647 |
| 0.6949 | 0.14 | 4000 | 0.7474 | 0.6772 |
| 0.6681 | 0.17 | 5000 | 0.7661 | 0.6814 |
| 0.6597 | 0.2 | 6000 | 0.7264 | 0.6943 |
| 0.6495 | 0.24 | 7000 | 0.7841 | 0.6781 |
| 0.6323 | 0.27 | 8000 | 0.7256 | 0.6952 |
| 0.6308 | 0.31 | 9000 | 0.7319 | 0.6958 |
| 0.6254 | 0.34 | 10000 | 0.7054 | 0.7004 |
| 0.6233 | 0.37 | 11000 | 0.7069 | 0.7085 |
| 0.6165 | 0.41 | 12000 | 0.6880 | 0.7181 |
| 0.6033 | 0.44 | 13000 | 0.6844 | 0.7197 |
| 0.6014 | 0.48 | 14000 | 0.6753 | 0.7129 |
| 0.5947 | 0.51 | 15000 | 0.7000 | 0.7039 |
| 0.5965 | 0.54 | 16000 | 0.6708 | 0.7263 |
| 0.5979 | 0.58 | 17000 | 0.6562 | 0.7285 |
| 0.5787 | 0.61 | 18000 | 0.6554 | 0.7297 |
| 0.58 | 0.65 | 19000 | 0.6544 | 0.7315 |
| 0.574 | 0.68 | 20000 | 0.6549 | 0.7339 |
| 0.5751 | 0.71 | 21000 | 0.6545 | 0.7289 |
| 0.5659 | 0.75 | 22000 | 0.6467 | 0.7371 |
| 0.5732 | 0.78 | 23000 | 0.6448 | 0.7362 |
| 0.5637 | 0.82 | 24000 | 0.6520 | 0.7355 |
| 0.5648 | 0.85 | 25000 | 0.6412 | 0.7345 |
| 0.5622 | 0.88 | 26000 | 0.6350 | 0.7358 |
| 0.5579 | 0.92 | 27000 | 0.6347 | 0.7393 |
| 0.5518 | 0.95 | 28000 | 0.6417 | 0.7392 |
| 0.5547 | 0.99 | 29000 | 0.6321 | 0.7437 |
| 0.524 | 1.02 | 30000 | 0.6430 | 0.7412 |
| 0.4982 | 1.05 | 31000 | 0.6253 | 0.7458 |
| 0.5002 | 1.09 | 32000 | 0.6316 | 0.7418 |
| 0.4993 | 1.12 | 33000 | 0.6197 | 0.7487 |
| 0.4963 | 1.15 | 34000 | 0.6307 | 0.7462 |
| 0.504 | 1.19 | 35000 | 0.6272 | 0.7480 |
| 0.4922 | 1.22 | 36000 | 0.6410 | 0.7433 |
| 0.5016 | 1.26 | 37000 | 0.6295 | 0.7461 |
| 0.4957 | 1.29 | 38000 | 0.6183 | 0.7506 |
| 0.4883 | 1.32 | 39000 | 0.6261 | 0.7502 |
| 0.4985 | 1.36 | 40000 | 0.6315 | 0.7496 |
| 0.4885 | 1.39 | 41000 | 0.6189 | 0.7529 |
| 0.4909 | 1.43 | 42000 | 0.6189 | 0.7473 |
| 0.4894 | 1.46 | 43000 | 0.6314 | 0.7433 |
| 0.4912 | 1.49 | 44000 | 0.6184 | 0.7446 |
| 0.4851 | 1.53 | 45000 | 0.6258 | 0.7461 |
| 0.4879 | 1.56 | 46000 | 0.6286 | 0.7480 |
| 0.4907 | 1.6 | 47000 | 0.6196 | 0.7512 |
| 0.4884 | 1.63 | 48000 | 0.6157 | 0.7526 |
| 0.4755 | 1.66 | 49000 | 0.6056 | 0.7591 |
| 0.4811 | 1.7 | 50000 | 0.5977 | 0.7582 |
| 0.4787 | 1.73 | 51000 | 0.5915 | 0.7621 |
| 0.4779 | 1.77 | 52000 | 0.6014 | 0.7583 |
| 0.4767 | 1.8 | 53000 | 0.6041 | 0.7623 |
| 0.4737 | 1.83 | 54000 | 0.6093 | 0.7563 |
| 0.4836 | 1.87 | 55000 | 0.6001 | 0.7568 |
| 0.4765 | 1.9 | 56000 | 0.6109 | 0.7601 |
| 0.4776 | 1.94 | 57000 | 0.6046 | 0.7599 |
| 0.4769 | 1.97 | 58000 | 0.5970 | 0.7568 |
| 0.4654 | 2.0 | 59000 | 0.6147 | 0.7614 |
| 0.4144 | 2.04 | 60000 | 0.6439 | 0.7566 |
| 0.4101 | 2.07 | 61000 | 0.6373 | 0.7527 |
| 0.4192 | 2.11 | 62000 | 0.6136 | 0.7575 |
| 0.4128 | 2.14 | 63000 | 0.6283 | 0.7560 |
| 0.4204 | 2.17 | 64000 | 0.6187 | 0.7625 |
| 0.4114 | 2.21 | 65000 | 0.6127 | 0.7621 |
| 0.4097 | 2.24 | 66000 | 0.6188 | 0.7626 |
| 0.4129 | 2.28 | 67000 | 0.6156 | 0.7639 |
| 0.4085 | 2.31 | 68000 | 0.6232 | 0.7616 |
| 0.4074 | 2.34 | 69000 | 0.6240 | 0.7605 |
| 0.409 | 2.38 | 70000 | 0.6153 | 0.7591 |
| 0.4046 | 2.41 | 71000 | 0.6375 | 0.7587 |
| 0.4117 | 2.45 | 72000 | 0.6145 | 0.7629 |
| 0.4002 | 2.48 | 73000 | 0.6279 | 0.7610 |
| 0.4042 | 2.51 | 74000 | 0.6176 | 0.7646 |
| 0.4055 | 2.55 | 75000 | 0.6277 | 0.7643 |
| 0.4021 | 2.58 | 76000 | 0.6196 | 0.7642 |
| 0.4081 | 2.62 | 77000 | 0.6127 | 0.7659 |
| 0.408 | 2.65 | 78000 | 0.6237 | 0.7638 |
| 0.3997 | 2.68 | 79000 | 0.6190 | 0.7636 |
| 0.4093 | 2.72 | 80000 | 0.6152 | 0.7648 |
| 0.4095 | 2.75 | 81000 | 0.6155 | 0.7627 |
| 0.4088 | 2.79 | 82000 | 0.6130 | 0.7641 |
| 0.4063 | 2.82 | 83000 | 0.6072 | 0.7646 |
| 0.3978 | 2.85 | 84000 | 0.6128 | 0.7662 |
| 0.4034 | 2.89 | 85000 | 0.6157 | 0.7627 |
| 0.4044 | 2.92 | 86000 | 0.6127 | 0.7661 |
| 0.403 | 2.96 | 87000 | 0.6126 | 0.7664 |
| 0.4033 | 2.99 | 88000 | 0.6144 | 0.7662 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ericzhou/DialoGPT-Medium-Rick_v2 | 61a36944dbe5d692bf8640d4b39997b78bc28980 | 2022-01-20T05:06:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ericzhou | null | ericzhou/DialoGPT-Medium-Rick_v2 | 24 | 1 | transformers | 7,770 | ---
tags:
- conversational
---
# rick |
facebook/s2t-wav2vec2-large-en-ca | 12c8a3c9ccae1f0ae603e758d42b9caa31390a6b | 2021-11-14T20:39:29.000Z | [
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"en",
"ca",
"dataset:covost2",
"dataset:librispeech_asr",
"arxiv:2104.06678",
"transformers",
"audio",
"speech-translation",
"speech2text2",
"license:mit"
] | automatic-speech-recognition | false | facebook | null | facebook/s2t-wav2vec2-large-en-ca | 24 | 2 | transformers | 7,771 | ---
language:
- en
- ca
datasets:
- covost2
- librispeech_asr
tags:
- audio
- speech-translation
- automatic-speech-recognition
- speech2text2
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Common Voice 1
src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_18301577.mp3
- example_title: Common Voice 2
src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_99989.mp3
- example_title: Common Voice 3
src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_9999.mp3
---
# S2T2-Wav2Vec2-CoVoST2-EN-CA-ST
`s2t-wav2vec2-large-en-ca` is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST).
The S2T2 model was proposed in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/pdf/2104.06678.pdf) and officially released in
[Fairseq](https://github.com/pytorch/fairseq/blob/6f847c8654d56b4d1b1fbacec027f47419426ddb/fairseq/models/wav2vec/wav2vec2_asr.py#L266).
## Model description
S2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a pretrained [Wav2Vec2](https://huggingface.co/transformers/model_doc/wav2vec2.html) as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Catalan text translation.
See the [model hub](https://huggingface.co/models?filter=speech2text2) to look for other S2T2 checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
You can use the model directly via the ASR pipeline
```python
from datasets import load_dataset
from transformers import pipeline
librispeech_en = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
asr = pipeline("automatic-speech-recognition", model="facebook/s2t-wav2vec2-large-en-ca", feature_extractor="facebook/s2t-wav2vec2-large-en-ca")
translation = asr(librispeech_en[0]["file"])
```
or step-by-step as follows:
```python
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
import soundfile as sf
model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-ca")
processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-ca")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)
```
## Evaluation results
CoVoST-V2 test results for en-ca (BLEU score): **34.1**
For more information, please have a look at the [official paper](https://arxiv.org/pdf/2104.06678.pdf) - especially row 10 of Table 2.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2104-06678,
author = {Changhan Wang and
Anne Wu and
Juan Miguel Pino and
Alexei Baevski and
Michael Auli and
Alexis Conneau},
title = {Large-Scale Self- and Semi-Supervised Learning for Speech Translation},
journal = {CoRR},
volume = {abs/2104.06678},
year = {2021},
url = {https://arxiv.org/abs/2104.06678},
archivePrefix = {arXiv},
eprint = {2104.06678},
timestamp = {Thu, 12 Aug 2021 15:37:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-06678.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
fenrhjen/camembert_aux_amandes | 539ebd04389c079ea44818f0334f99e1fb255ccb | 2020-12-20T18:22:33.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | fenrhjen | null | fenrhjen/camembert_aux_amandes | 24 | null | transformers | 7,772 | |
flair/frame-english-fast | c9f6e94c9a7f077645d348c7c4985d0ee992b7eb | 2021-03-02T22:01:45.000Z | [
"pytorch",
"en",
"dataset:ontonotes",
"flair",
"token-classification",
"sequence-tagger-model"
] | token-classification | false | flair | null | flair/frame-english-fast | 24 | null | flair | 7,773 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- ontonotes
widget:
- text: "George returned to Berlin to return his hat."
---
## English Verb Disambiguation in Flair (fast model)
This is the fast verb disambiguation model for English that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **88,27** (Ontonotes) - predicts [Proposition Bank verb frames](http://verbs.colorado.edu/propbank/framesets-english-aliases/).
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/frame-english-fast")
# make example sentence
sentence = Sentence("George returned to Berlin to return his hat.")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following frame tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('frame'):
print(entity)
```
This yields the following output:
```
Span [2]: "returned" [− Labels: return.01 (0.9867)]
Span [6]: "return" [− Labels: return.02 (0.4741)]
```
So, the word "*returned*" is labeled as **return.01** (as in *go back somewhere*) while "*return*" is labeled as **return.02** (as in *give back something*) in the sentence "*George returned to Berlin to return his hat*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import ColumnCorpus
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings, BytePairEmbeddings
# 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself)
corpus = ColumnCorpus(
"resources/tasks/srl", column_format={1: "text", 11: "frame"}
)
# 2. what tag do we want to predict?
tag_type = 'frame'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
BytePairEmbeddings("en"),
FlairEmbeddings("news-forward-fast"),
FlairEmbeddings("news-backward-fast"),
]
# embedding stack consists of byte-pair and Flair embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/frame-english-fast',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik2019flair,
title={FLAIR: An easy-to-use framework for state-of-the-art NLP},
author={Akbik, Alan and Bergmann, Tanja and Blythe, Duncan and Rasul, Kashif and Schweter, Stefan and Vollgraf, Roland},
booktitle={{NAACL} 2019, 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)},
pages={54--59},
year={2019}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
glob-asr/wav2vec2-large-xls-r-300m-guarani-small | e601b22ad2f2d7b24031b3ac127878acad1e3fb6 | 2022-03-24T11:52:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gn",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | glob-asr | null | glob-asr/wav2vec2-large-xls-r-300m-guarani-small | 24 | null | transformers | 7,774 | ---
language:
- gn
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
- gn
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-guarani-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-guarani-small
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4964
- Wer: 0.5957
## Model description
More information needed
## Intended uses & limitations
More information needed
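No usage snippet is included above; a minimal transcription sketch with the ASR pipeline — the audio path is a placeholder, and the clip should be sampled at 16 kHz like the Common Voice data used for fine-tuning:
```python
from transformers import pipeline

# load the fine-tuned Guarani checkpoint into the ASR pipeline (a sketch)
asr = pipeline(
    "automatic-speech-recognition",
    model="glob-asr/wav2vec2-large-xls-r-300m-guarani-small",
)

# "guarani_sample.wav" is a placeholder for a 16 kHz mono recording
print(asr("guarani_sample.wav"))
```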
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 6.65 | 100 | 1.1326 | 1.0 |
| 1.6569 | 13.32 | 200 | 0.5264 | 0.6478 |
| 1.6569 | 19.97 | 300 | 0.5370 | 0.6261 |
| 0.2293 | 26.65 | 400 | 0.4964 | 0.5957 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
harshit345/xlsr_wav2vec_english | a34f5311c459b1b6ba67c65bab537856fecca2c5 | 2021-12-11T21:22:37.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | harshit345 | null | harshit345/xlsr_wav2vec_english | 24 | null | transformers | 7,775 | ---
language: en
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 English by Jonatas Grosman
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice en
type: common_voice
args: en
metrics:
- name: Test WER
type: wer
value: 21.53
- name: Test CER
type: cer
value: 9.66
---
# Wav2vec2-Large-English
Fine-tuned [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on English using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows...
Using the [ASRecognition](https://github.com/jonatasgrosman/asrecognition) library:
```python
from asrecognition import ASREngine
asr = ASREngine("en", model_path="jonatasgrosman/wav2vec2-large-english")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = asr.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "en"
MODEL_ID = "jonatasgrosman/wav2vec2-large-english"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| "SHE'LL BE ALL RIGHT." | SHELL BE ALL RIGHT |
| SIX | SIX |
| "ALL'S WELL THAT ENDS WELL." | ALLAS WELL THAT ENDS WELL |
| DO YOU MEAN IT? | W MEAN IT |
| THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE, BUT STILL CAUSES REGRESSIONS. | THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE BUT STILL CAUSES REGRESTION |
| HOW IS MOZILLA GOING TO HANDLE AMBIGUITIES LIKE QUEUE AND CUE? | HOW IS MOSILLA GOING TO BANDL AND BE WHIT IS LIKE QU AND QU |
| "I GUESS YOU MUST THINK I'M KINDA BATTY." | RUSTION AS HAME AK AN THE POT |
| NO ONE NEAR THE REMOTE MACHINE YOU COULD RING? | NO ONE NEAR THE REMOTE MACHINE YOU COULD RING |
| SAUCE FOR THE GOOSE IS SAUCE FOR THE GANDER. | SAUCE FOR THE GUCE IS SAUCE FOR THE GONDER |
| GROVES STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD. | GRAFS STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD |
## Evaluation
The model can be evaluated as follows on the English (en) test data of Common Voice.
```python
import torch
import re
import warnings
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "en"
MODEL_ID = "jonatasgrosman/wav2vec2-large-english"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well. Note that the table below may show different results from those already reported; this may be due to differences in the evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| wav2vec2-large-xlsr-53-english | **18.98%** | **8.29%** |
| wav2vec2-large-xlsr-53-greek | 18.99% | 10.60% |
| wav2vec2-large-xlsr-53-hindi | 20.01% | 9.66% |
| wav2vec2-large-960h-lv60-english | 22.03% | 10.39% |
| wav2vec2-base-100h-lv60-english | 24.97% | 11.14% |
|
|
henryk/bert-base-multilingual-cased-finetuned-polish-squad1 | 515774f2646efcb7fb7f7016ce0045db9069c8e6 | 2021-05-19T19:04:09.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"pl",
"transformers",
"autotrain_compatible"
] | question-answering | false | henryk | null | henryk/bert-base-multilingual-cased-finetuned-polish-squad1 | 24 | null | transformers | 7,776 | ---
language: pl
---
# Multilingual + Polish SQuAD1.1
This model is the multilingual model provided by the Google research team, fine-tuned on a Polish Q&A downstream task.
## Details of the language model
Language model ([**bert-base-multilingual-cased**](https://github.com/google-research/bert/blob/master/multilingual.md)):
12-layer, 768-hidden, 12-heads, 110M parameters.
Trained on cased text in the top 104 languages with the largest Wikipedias.
## Details of the downstream task
Using the `mtranslate` Python module, [**SQuAD1.1**](https://rajpurkar.github.io/SQuAD-explorer/) was machine-translated. In order to find the start tokens, the direct translations of the answers were searched in the corresponding paragraphs. Due to the different translations depending on the context (missing context in the pure answer), the answer could not always be found in the text, and thus a loss of question-answer examples occurred. This is a potential problem where errors can occur in the data set.
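A rough sketch of the translate-and-align procedure described above — the exact `mtranslate` call signature is an assumption, and the helper name is illustrative only:
```python
# A sketch of the translate-and-align idea described above (not the authors' original script).
from mtranslate import translate

def translate_example(context_en, question_en, answer_en):
    # machine-translate context, question and answer independently (signature assumed)
    context_pl = translate(context_en, "pl", "en")
    question_pl = translate(question_en, "pl", "en")
    answer_pl = translate(answer_en, "pl", "en")
    # search the directly translated answer inside the translated paragraph
    start = context_pl.find(answer_pl)
    if start == -1:
        # the context-dependent translation differs -> example is dropped,
        # which explains the loss of question-answer pairs mentioned above
        return None
    return {
        "context": context_pl,
        "question": question_pl,
        "answers": {"text": [answer_pl], "answer_start": [start]},
    }
```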
| Dataset | # Q&A |
| ---------------------- | ----- |
| SQuAD1.1 Train | 87.7 K |
| Polish SQuAD1.1 Train | 39.5 K |
| SQuAD1.1 Dev | 10.6 K |
| Polish SQuAD1.1 Dev | 2.6 K |
## Model benchmark
| Model | EM | F1 |
| ---------------------- | ----- | ----- |
| [SlavicBERT](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) | **60.89** | 71.68 |
| [polBERT](https://huggingface.co/dkleczek/bert-base-polish-uncased-v1) | 57.46 | 68.87 |
| [multiBERT](https://huggingface.co/bert-base-multilingual-cased) | 60.67 | **71.89** |
| [xlm](https://huggingface.co/xlm-mlm-100-1280) | 47.98 | 59.42 |
## Model training
The model was trained on a **Tesla V100** GPU with the following command:
```python
export SQUAD_DIR=path/to/pl_squad
python run_squad.py \
--model_type bert \
--model_name_or_path bert-base-multilingual-cased \
--do_train \
--do_eval \
--train_file $SQUAD_DIR/pl_squadv1_train_clean.json \
--predict_file $SQUAD_DIR/pl_squadv1_dev_clean.json \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps=8000 \
--output_dir ../../output \
--overwrite_cache \
--overwrite_output_dir
```
**Results**:
{'exact': 60.670731707317074, 'f1': 71.8952193697293, 'total': 2624, 'HasAns_exact': 60.670731707317074, 'HasAns_f1': 71.8952193697293,
'HasAns_total': 2624, 'best_exact': 60.670731707317074, 'best_exact_thresh': 0.0, 'best_f1': 71.8952193697293, 'best_f1_thresh': 0.0}
## Model in action
Fast usage with **pipelines**:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="henryk/bert-base-multilingual-cased-finetuned-polish-squad1",
tokenizer="henryk/bert-base-multilingual-cased-finetuned-polish-squad1"
)
qa_pipeline({
'context': "Warszawa jest największym miastem w Polsce pod względem liczby ludności i powierzchni",
'question': "Jakie jest największe miasto w Polsce?"})
```
# Output:
```json
{
"score": 0.9988,
"start": 0,
"end": 8,
"answer": "Warszawa"
}
```
## Contact
Please do not hesitate to contact me via [LinkedIn](https://www.linkedin.com/in/henryk-borzymowski-0755a2167/) if you want to discuss or get access to the Polish version of SQuAD. |
hfl/cino-base-v2 | 4e4eb5114da7a9eef6e3bcdeb997c20090afb4e8 | 2022-01-24T10:34:45.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"zh",
"bo",
"kk",
"ko",
"mn",
"ug",
"yue",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | hfl | null | hfl/cino-base-v2 | 24 | 2 | transformers | 7,777 | ---
language:
- zh
- bo
- kk
- ko
- mn
- ug
- yue
license: "apache-2.0"
---
## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型)
Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual abilities for language understanding.
We have seen rapid progress in building multilingual PLMs in recent years.
However, there is a lack of work on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems.
To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training on Chinese minority-language corpora, such as:
- Chinese,中文(zh)
- Tibetan,藏语(bo)
- Mongolian (Uighur form),蒙语(mn)
- Uyghur,维吾尔语(ug)
- Kazakh (Arabic form),哈萨克语(kk)
- Korean,朝鲜语(ko)
- Zhuang,壮语
- Cantonese,粤语(yue)
Please read our GitHub repository for more details (Chinese): https://github.com/ymcui/Chinese-Minority-PLM
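As a quick sanity check, the checkpoint can be used with the fill-mask pipeline. A minimal sketch — the example sentence is illustrative, and `<mask>` is the XLM-R mask token:
```python
from transformers import pipeline

# CINO is XLM-R based, so the mask token is "<mask>" (a sketch)
fill_mask = pipeline("fill-mask", model="hfl/cino-base-v2")

for prediction in fill_mask("中国的首都是<mask>。"):
    print(prediction["token_str"], prediction["score"])
```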
You may also be interested in:
Chinese MacBERT: https://github.com/ymcui/MacBERT
Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
|
huggingtweets/bts_bighit | 0c1065599374b90b4c1a8511cb5750d3b3dbf04b | 2021-05-21T21:16:26.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/bts_bighit | 24 | null | transformers | 7,778 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1318205976110010371/hvlZiocy_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">BTS_official 🤖 AI Bot </div>
<div style="font-size: 15px">@bts_bighit bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@bts_bighit's tweets](https://twitter.com/bts_bighit).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 807 |
| Short tweets | 17 |
| Tweets kept | 2424 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/346cr95o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bts_bighit's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/qrtx438c) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/qrtx438c/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bts_bighit')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/marsneedsmilfs | 4d127bd9d236bf511240d2a81b3a3d283fe3d299 | 2021-05-22T13:32:30.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/marsneedsmilfs | 24 | null | transformers | 7,779 | ---
language: en
thumbnail: https://www.huggingtweets.com/marsneedsmilfs/1614121336301/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1358993374590750724/2DLIr0yk_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">nial 🤖 AI Bot </div>
<div style="font-size: 15px">@marsneedsmilfs bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@marsneedsmilfs's tweets](https://twitter.com/marsneedsmilfs).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3159 |
| Retweets | 1127 |
| Short tweets | 633 |
| Tweets kept | 1399 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/utrzu0cc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @marsneedsmilfs's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1avwfygo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1avwfygo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/marsneedsmilfs')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/murderlinart | bd04e04172eed23e8fc6525eaf5c91553be360b5 | 2021-05-22T15:30:57.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/murderlinart | 24 | null | transformers | 7,780 | ---
language: en
thumbnail: https://www.huggingtweets.com/murderlinart/1617904433043/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1378075236109811712/6wkJc-3m_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">AJ 🍀 🤖 AI Bot </div>
<div style="font-size: 15px">@murderlinart bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@murderlinart's tweets](https://twitter.com/murderlinart).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3230 |
| Retweets | 1141 |
| Short tweets | 544 |
| Tweets kept | 1545 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/b0hhcnrk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @murderlinart's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3a7qsqyy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3a7qsqyy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/murderlinart')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/qtsheepgirl | b7d4d93e4e20e5eb736ac4b93ee38fb7bc5a6ef7 | 2021-05-22T20:00:14.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/qtsheepgirl | 24 | null | transformers | 7,781 | ---
language: en
thumbnail: https://www.huggingtweets.com/qtsheepgirl/1614111306823/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1357323547606188040/0l2qcUWr_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Ashleigh🎄💜💛💜💛💜💛💜💛💜 🤖 AI Bot </div>
<div style="font-size: 15px">@qtsheepgirl bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@qtsheepgirl's tweets](https://twitter.com/qtsheepgirl).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1338 |
| Retweets | 233 |
| Short tweets | 407 |
| Tweets kept | 698 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21akccjl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @qtsheepgirl's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1f5eimxf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1f5eimxf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/qtsheepgirl')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/stefrappeneau | 94b7d3d0fe718dc49b9c1749a546a566e77a7cc7 | 2021-05-23T00:00:31.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/stefrappeneau | 24 | null | transformers | 7,782 | ---
language: en
thumbnail: https://www.huggingtweets.com/stefrappeneau/1609353045656/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1202294057281740800/SnPHZMvt_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Stephane Rappeneau 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@stefrappeneau bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@stefrappeneau's tweets](https://twitter.com/stefrappeneau).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3208</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>297</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>86</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2825</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/qa7ycwy3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @stefrappeneau's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/b1exumr4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/b1exumr4/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/stefrappeneau'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/thatonequeen | 004f7bf25e2ba212625fed2ed5b9fd6097fdce73 | 2021-05-23T01:12:51.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/thatonequeen | 24 | null | transformers | 7,783 | ---
language: en
thumbnail: https://www.huggingtweets.com/thatonequeen/1612629006703/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1357903571333701634/pqawe_iI_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Black Lives Still Matter 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@thatonequeen bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@thatonequeen's tweets](https://twitter.com/thatonequeen).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3183</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>449</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>511</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2223</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/h37t2gnh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thatonequeen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2bs8r2sf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2bs8r2sf/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/thatonequeen'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
iarfmoose/roberta-small-bulgarian-ner | 80a25716df202d7b2295d0c0a5dea0b615125565 | 2021-05-20T16:51:18.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | iarfmoose | null | iarfmoose/roberta-small-bulgarian-ner | 24 | null | transformers | 7,784 | Entry not found |
izumi-lab/electra-small-paper-japanese-discriminator | caba5a2f83aa6fe6686317a12b46d5436d3089de | 2022-03-19T09:40:02.000Z | [
"pytorch",
"electra",
"pretraining",
"ja",
"dataset:wikipedia",
"arxiv:2003.10555",
"transformers",
"license:cc-by-sa-4.0"
] | null | false | izumi-lab | null | izumi-lab/electra-small-paper-japanese-discriminator | 24 | 1 | transformers | 7,785 | ---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東京大学で[MASK]の研究をしています。
---
# ELECTRA small Japanese discriminator
This is a [ELECTRA](https://github.com/google-research/electra) model pretrained on texts in the Japanese language.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 12 layers, 256 dimensions of hidden states, and 4 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia.
The training corpus is generated from the Japanese version of Wikipedia, using the Wikipedia dump file as of June 1, 2021.
The corpus file is 2.9GB, consisting of approximately 20M sentences.
## Tokenization
The texts are first tokenized by MeCab with the IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
## Training
The models are trained with the same configuration as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 128 tokens per instance, 128 instances per batch, and 1M training steps.
The size of the generator is 1/4 of the size of the discriminator.
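## Usage
The card does not ship a usage snippet; the sketch below is a hedged example of scoring replaced tokens with the discriminator. It assumes the checkpoint resolves through `AutoTokenizer`/`ElectraForPreTraining` and that the Japanese tokenizer dependencies (e.g. fugashi and ipadic for MeCab) are installed.
```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

name = "izumi-lab/electra-small-paper-japanese-discriminator"
tokenizer = AutoTokenizer.from_pretrained(name)  # MeCab + WordPiece, per the card
model = ElectraForPreTraining.from_pretrained(name)

inputs = tokenizer("東京大学で自然言語処理の研究をしています。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one logit per token; positive values flag "replaced" tokens

print(torch.sigmoid(logits).round())
```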
## Citation
**There will be another paper for this pretrained model. Be sure to check here again when you cite.**
```
@inproceedings{suzuki2021fin-bert-electra,
title={金融文書を用いた事前学習言語モデルの構築と検証},
% title={Construction and Validation of a Pre-Trained Language Model Using Financial Documents},
author={鈴木 雅弘 and 坂地 泰紀 and 平野 正徳 and 和泉 潔},
% author={Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
booktitle={人工知能学会第27回金融情報学研究会(SIG-FIN)},
% booktitle={Proceedings of JSAI Special Interest Group on Financial Infomatics (SIG-FIN) 27},
pages={5-10},
year={2021}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
|
jamarju/roberta-large-bne-squad-2.0-es | 93ad0184d0ed7a0771388a046e662b8f30917f01 | 2021-08-05T14:59:41.000Z | [
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:squad_es",
"transformers",
"autotrain_compatible"
] | question-answering | false | jamarju | null | jamarju/roberta-large-bne-squad-2.0-es | 24 | null | transformers | 7,786 | ---
language:
- es
datasets:
- squad_es
widget:
- text: "¿Quién era el duque en la batalla de Hastings?"
context: "La dinastía normanda tuvo un gran impacto político, cultural y militar en la Europa medieval e incluso en el Cercano Oriente. Los normandos eran famosos por su espíritu marcial y, finalmente, por su piedad cristiana, convirtiéndose en exponentes de la ortodoxia católica en la que se asimilaron. Adoptaron la lengua galorromance de la tierra franca que establecieron, siendo su dialecto conocido como francés normando, normando o normando, una lengua literaria importante. El ducado de Normandía, que formaron por tratado con la corona francesa, fue un gran feudo de la Francia medieval, y bajo Ricardo I de Normandía se forjó en un principado cohesionado y formidable en la tenencia feudal. Los normandos se caracterizan tanto por su cultura, como por su singular arquitectura románica y sus tradiciones musicales, y por sus importantes logros e innovaciones militares. Aventureros normandos fundaron el Reino de Sicilia bajo Roger II después de conquistar el sur de Italia con los sarracenos y bizantinos, y una expedición en nombre de su duque, Guillermo el Conquistador, condujo a la conquista normanda de Inglaterra. La influencia cultural y militar normanda se extendió desde estos nuevos centros europeos a los estados cruzados del Cercano Oriente, donde su príncipe Bohemundo I fundó el Principado de Antioquía en el Levante mediterráneo, a Escocia y Gales en Gran Bretaña."
---
This is the [BSC-TeMU/roberta-large-bne](https://huggingface.co/BSC-TeMU/roberta-large-bne) model ([source](https://github.com/PlanTL-SANIDAD/lm-spanish)) trained on the [squad_es v2.0.0](https://huggingface.co/datasets/squad_es) dataset ([source](https://github.com/ccasimiro88/TranslateAlignRetrieve)).
Current achievement: em=60.21, f1=68.61
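For inference, a hedged sketch with the `question-answering` pipeline (the question and context below are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="jamarju/roberta-large-bne-squad-2.0-es")

context = ("El ducado de Normandía, que formaron por tratado con la corona francesa, "
           "fue un gran feudo de la Francia medieval.")
print(qa(question="¿Qué fue el ducado de Normandía?", context=context))
# e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': '...'}
```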
Results:
```
{
"epoch": 4.0,
"eval_HasAns_exact": 48.44804318488529,
"eval_HasAns_f1": 65.24520506718169,
"eval_HasAns_total": 5928,
"eval_NoAns_exact": 71.97301854974705,
"eval_NoAns_f1": 71.97301854974705,
"eval_NoAns_total": 5930,
"eval_best_exact": 60.22094788328555,
"eval_best_exact_thresh": 0.0,
"eval_best_f1": 68.6181122987237,
"eval_best_f1_thresh": 0.0,
"eval_exact": 60.2125147579693,
"eval_f1": 68.60967917340695,
"eval_samples": 12203,
"eval_total": 11858
}
```
Training script:
```
python -m torch.distributed.launch --nproc_per_node=3 ./run_qa.py \
--model_name_or_path BSC-TeMU/roberta-large-bne \
--dataset_name squad_es \
--dataset_config_name v2.0.0 \
--do_train \
--do_eval \
--learning_rate 3e-5 \
--num_train_epochs 4 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./models/roberta-large-bne-finetuned-squad-es/ \
--per_device_eval_batch_size=24 \
--per_device_train_batch_size=12 \
--version_2_with_negative \
--ddp_find_unused_parameters=False \
```
|
jeniya/BERTOverflow_stackoverflow_github | 9fb82ec57c8b7573cf340bad5f629c5c3fe484a1 | 2021-05-19T20:48:44.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | jeniya | null | jeniya/BERTOverflow_stackoverflow_github | 24 | 1 | transformers | 7,787 |
# BERTOverflow
## Model description
We pre-trained a BERT-base model on 152 million sentences from StackOverflow's 10-year archive. More details of this model can be found in our ACL 2020 paper: [Code and Named Entity Recognition in StackOverflow](https://www.aclweb.org/anthology/2020.acl-main.443/). We would like to thank [Wuwei Lan](https://lanwuwei.github.io/) for helping us in training this model.
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("jeniya/BERTOverflow")
model = AutoModelForTokenClassification.from_pretrained("jeniya/BERTOverflow")
```
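A hedged follow-up sketch that applies the loaded token classifier through the `ner` pipeline (the example sentence is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("jeniya/BERTOverflow")
model = AutoModelForTokenClassification.from_pretrained("jeniya/BERTOverflow")

# Software-domain NER over a StackOverflow-style sentence
ner = pipeline("ner", model=model, tokenizer=tokenizer)
print(ner("How do I sort a list in Python using the sorted() function?"))
```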
### BibTeX entry and citation info
```bibtex
@inproceedings{tabassum2020code,
title={Code and Named Entity Recognition in StackOverflow},
author={Tabassum, Jeniya and Maddela, Mounica and Xu, Wei and Ritter, Alan },
booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)},
url = {https://www.aclweb.org/anthology/2020.acl-main.443/},
year = {2020},
}
``` |
joaoalvarenga/wav2vec2-large-xlsr-portuguese-a | a33b4944859db06f54bc71c51ffedf27f4c8a3ec | 2021-07-06T09:23:08.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"apache-2.0",
"portuguese-speech-corpus",
"xlsr-fine-tuning-week",
"PyTorch",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | joaoalvarenga | null | joaoalvarenga/wav2vec2-large-xlsr-portuguese-a | 24 | null | transformers | 7,788 | ---
language: pt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- apache-2.0
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- PyTorch
license: apache-2.0
model-index:
- name: JoaoAlvarenga XLSR Wav2Vec2 Large 53 Portuguese A
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pt
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 15.037146%
---
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference over the test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: 15.037146%
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py
|
k-partha/decision_bert_bio | e8862263aac91883c0f6f668ef95481ccfc05b18 | 2022-01-29T03:36:59.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2109.06402",
"transformers"
] | text-classification | false | k-partha | null | k-partha/decision_bert_bio | 24 | null | transformers | 7,789 | Rates Twitter biographies on decision-making preference: Thinking or Feeling. Roughly corresponds to [agreeableness.](https://en.wikipedia.org/wiki/Agreeableness)
Go to your Twitter profile, copy your biography, paste it into the inference widget, remove any URLs, and run it!
Trained on self-described personality labels. Interpret the output as a continuous score, not as a discrete label. Remember that models employ pure statistical reasoning (and may consequently make no sense sometimes).
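If you prefer code to the widget, a hedged sketch with the `text-classification` pipeline (the biography is made up, and the label names come from the model's own config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="k-partha/decision_bert_bio")

bio = "Data scientist. I love debating ideas and optimizing everything."  # illustrative biography
print(classifier(bio))  # treat the returned score as continuous, not as a hard label
```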
Have fun!
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402). |
manandey/wav2vec2-large-xlsr-mongolian | 31defd0d8a9e8429a5abe60421a99ccb373566b9 | 2021-07-06T11:37:29.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"mn",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | manandey | null | manandey/wav2vec2-large-xlsr-mongolian | 24 | null | transformers | 7,790 | ---
language: mn
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Mongolian by Manan Dey
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mn
type: common_voice
args: mn
metrics:
- name: Test WER
type: wer
value: 43.08
---
# Wav2Vec2-Large-XLSR-53-Mongolian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "mn", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-mongolian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "mn", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-mongolian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference over the test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 43.08%
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
michaelrglass/albert-base-rci-tabmcq-row | 39e2686999689e8ae7a7ec946c3a8402cd43d379 | 2021-06-16T16:09:19.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | michaelrglass | null | michaelrglass/albert-base-rci-tabmcq-row | 24 | null | transformers | 7,791 | Entry not found |
nightingal3/bert-finetuned-wsc | 50045b9df90e0729a5555d15fb36549887c25d12 | 2021-10-19T16:09:06.000Z | [
"pytorch",
"bert",
"multiple-choice",
"transformers"
] | multiple-choice | false | nightingal3 | null | nightingal3/bert-finetuned-wsc | 24 | null | transformers | 7,792 | Entry not found |
patrickvonplaten/wav2vec2-2-bart-large | b8a94de6a635a54503433156e91e98a04982cf21 | 2021-12-29T15:49:52.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"transformers",
"librispeech_asr",
"generated_from_trainer",
"asr_seq2esq",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-2-bart-large | 24 | 5 | transformers | 7,793 | ---
tags:
- automatic-speech-recognition
- librispeech_asr
- generated_from_trainer
- asr_seq2esq
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
- example_title: Common Voice sample
src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_18301577.mp3
model-index:
- name: wav2vec2-2-bart-large
results: []
---
To rerun this experiment, please clone this directory and run:
```bash
python create_model.py
```
followed by
```bash
./run_librispeech.sh
```
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-2-bart-large
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) and [bart-large](https://huggingface.co/facebook/bart-large) on the librispeech_asr - clean dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3204
- Wer: 0.0486
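For inference, a hedged sketch using the `automatic-speech-recognition` pipeline (this assumes the speech encoder-decoder checkpoint loads through the pipeline and that the input audio is a 16 kHz recording; the file path is illustrative):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/wav2vec2-2-bart-large")
print(asr("sample1.flac"))  # path to a local 16 kHz recording; returns e.g. {'text': '...'}
```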
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- gradient_accumulation_steps: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
See Training Metrics Tab.
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3 |
piotr-rybak/poleval2021-task4-plt5-base-qa | a33abe7bfca7ccb3a33a4442e2e86cdddf2606cd | 2021-09-23T17:39:11.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | piotr-rybak | null | piotr-rybak/poleval2021-task4-plt5-base-qa | 24 | null | transformers | 7,794 | Entry not found |
r3dhummingbird/DialoGPT-medium-neku | ea721e0c3619d5e8e5ef115ec1f7548471b1bacd | 2021-06-08T02:57:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | r3dhummingbird | null | r3dhummingbird/DialoGPT-medium-neku | 24 | 3 | transformers | 7,795 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Neku Sakuraba from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-neku")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-neku")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last output tokens from bot
print("NekuBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
rsvp-ai/bertserini-bert-large-squad | 9e1d057d6c6cd5f2bec4b2e564e16506d49e43e0 | 2021-05-19T00:44:05.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rsvp-ai | null | rsvp-ai/bertserini-bert-large-squad | 24 | null | transformers | 7,796 | Entry not found |
s3h/gec-token-classification-arabert | 05608103287647fcdc3dfdc965a4bb0a7e81a4ec | 2022-01-04T18:55:01.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | s3h | null | s3h/gec-token-classification-arabert | 24 | null | transformers | 7,797 | Entry not found |
shreeshaaithal/whatsapp-medium-bot-2 | 3fed305673c563c9f6ce02c1c46f48050d1d506b | 2021-07-07T06:28:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | shreeshaaithal | null | shreeshaaithal/whatsapp-medium-bot-2 | 24 | null | transformers | 7,798 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on WhatsApp chats
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on WhatsApp chats; alternatively, you can train the same model on [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
Feel free to ask questions on the [Discord server](https://discord.gg/Gqhje8Z7DX).
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("harrydonni/whatsapp-medium-bot-2")
model = AutoModelWithLMHead.from_pretrained("harrydonni/whatsapp-medium-bot-2")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last output tokens from bot
print("Messi: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
This model was built by Shreesha. Thank you! |
sismetanin/xlm_roberta_large-ru-sentiment-rureviews | 66882479d2401daf57c7b0960583d7d995296e76 | 2021-02-25T23:52:40.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"ru",
"transformers",
"sentiment analysis",
"Russian"
] | text-classification | false | sismetanin | null | sismetanin/xlm_roberta_large-ru-sentiment-rureviews | 24 | null | transformers | 7,799 | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## XLM-RoBERTa-Large-ru-sentiment-RuReviews
XLM-RoBERTa-Large-ru-sentiment-RuReviews is a [XLM-RoBERTa-Large](https://huggingface.co/xlm-roberta-large) model fine-tuned on [RuReviews dataset](https://github.com/sismetanin/rureviews) of Russian-language reviews from the ”Women’s Clothes and Accessories” product category on the primary e-commerce site in Russia.
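A hedged usage sketch (assuming the checkpoint works with the standard `text-classification` pipeline; the review text is illustrative and the label names follow the model's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sismetanin/xlm_roberta_large-ru-sentiment-rureviews")
print(classifier("Отличное качество, буду заказывать ещё!"))  # "Great quality, will order again!"
```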
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model’s position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@INPROCEEDINGS{Smetanin2019Sentiment,
author={Sergey Smetanin and Michail Komarov},
booktitle={2019 IEEE 21st Conference on Business Informatics (CBI)},
title={Sentiment Analysis of Product Reviews in Russian using Convolutional Neural Networks},
year={2019},
volume={01},
pages={482-486},
doi={10.1109/CBI.2019.00062},
ISSN={2378-1963},
month={July}
}
``` |