modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
KBLab/bert-base-swedish-cased-reallysimple-ner | a71306793996bdb6d9462d6b1c9aea5a49ff33c7 | 2022-03-02T17:42:50.000Z | [
"pytorch",
"megatron-bert",
"token-classification",
"sv",
"dataset:KBLab/sucx3_ner",
"transformers",
"sequence-tagger-model",
"bert",
"autotrain_compatible"
]
| token-classification | false | KBLab | null | KBLab/bert-base-swedish-cased-reallysimple-ner | 5 | null | transformers | 16,100 | ---
tags:
- token-classification
- sequence-tagger-model
- bert
language: sv
datasets:
- KBLab/sucx3_ner
widget:
- text: "Emil bor i Lönneberga"
---
# KB-BERT for NER
## Cased data
This model is based on [KB-BERT](https://huggingface.co/KB/bert-base-swedish-cased) and was fine-tuned on the [SUCX 3.0 - NER](https://huggingface.co/datasets/KBLab/sucx3_ner) corpus, using the _simple_ tags and cased data.
For this model we used a variation of the data that did **not** use BIO encoding to differentiate between the beginnings (B) and insides (I) of named-entity tags.
The model was trained on the training data only, with the best model chosen by its performance on the validation data.
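As a minimal usage sketch, the model can be loaded through the standard 🤗 Transformers token-classification pipeline (the aggregation strategy below is just one reasonable choice, not prescribed by the card):
```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned NER model on a Swedish sentence
# and merge word pieces into whole entities.
ner = pipeline(
    "token-classification",
    model="KBLab/bert-base-swedish-cased-reallysimple-ner",
    aggregation_strategy="simple",
)
print(ner("Emil bor i Lönneberga"))
```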
You can find more information about the model and its performance on our blog: https://kb-labb.github.io/posts/2022-02-07-sucx3_ner |
KBLab/wav2vec2-large-voxpopuli-sv-swedish | 2d233c7186184f6f52458a7adfc7a2016dc1c5f3 | 2021-09-14T21:25:49.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:common_voice",
"dataset:NST Swedish ASR Database",
"transformers",
"audio",
"speech",
"voxpopuli",
"license:cc-by-nc-4.0",
"model-index"
]
| automatic-speech-recognition | false | KBLab | null | KBLab/wav2vec2-large-voxpopuli-sv-swedish | 5 | null | transformers | 16,101 | ---
language: sv-SE
datasets:
- common_voice
- NST Swedish ASR Database
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- voxpopuli
license: cc-by-nc-4.0
model-index:
- name: Wav2vec 2.0 large VoxPopuli-sv swedish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 10.994764
- name: Test CER
type: cer
value: 3.946846
---
# Wav2vec 2.0 large-voxpopuli-sv-swedish
**PLEASE NOTE that [this](https://huggingface.co/KBLab/wav2vec2-large-voxrex-swedish) model performs better and has a less restrictive license.**
Additionally pretrained and finetuned version of Facebook's [VoxPopuli-sv large](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) model, using Swedish radio broadcasts, NST and Common Voice data. Evaluation without a language model gives the following: WER for the NST + Common Voice test set (2% of total sentences) is **3.95%**. WER for the Common Voice test set is **10.99%** directly and **7.82%** with a 4-gram language model.
When using this model, make sure that your speech input is sampled at 16kHz.
## Training
This model was additionally pretrained on 1000h of Swedish local radio broadcasts, fine-tuned for 120000 updates on NST + CommonVoice and then for an additional 20000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST+CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed].
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-voxpopuli-sv-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-voxpopuli-sv-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
``` |
KETI-AIR/ke-t5-large-newslike | 6f3e7518f9d9285760508033edb69b111da888fa | 2021-06-23T03:00:11.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | KETI-AIR | null | KETI-AIR/ke-t5-large-newslike | 5 | 1 | transformers | 16,102 | Entry not found |
Katsiaryna/distilbert-base-uncased-finetuned_9th | 422bb763e0ac58076b62b22ecab7c23a1de40701 | 2021-12-09T13:46:21.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Katsiaryna | null | Katsiaryna/distilbert-base-uncased-finetuned_9th | 5 | null | transformers | 16,103 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned_9th
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned_9th
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2826
- Accuracy: 0.4462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
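As a rough illustration, the hyperparameters listed above map onto 🤗 `TrainingArguments` roughly as follows (a sketch only; the output directory is a placeholder and the Adam settings are the library defaults):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration listed above.
training_args = TrainingArguments(
    output_dir="output/",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```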
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2357 | 1.0 | 569 | 0.2277 | 0.3474 |
| 0.2237 | 2.0 | 1138 | 0.2316 | 0.3474 |
| 0.1847 | 3.0 | 1707 | 0.2456 | 0.3712 |
| 0.1302 | 4.0 | 2276 | 0.2763 | 0.4602 |
| 0.0863 | 5.0 | 2845 | 0.2826 | 0.4462 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-normal | 9865d65d8126b4a2ca08c10ada90cbd21abac6ba | 2021-12-15T23:23:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Katsiaryna | null | Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-normal | 5 | null | transformers | 16,104 | Entry not found |
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-top3_op1 | 63269ab4a1dc6e999b4c1fc2e929c0d0e4542fe6 | 2021-12-16T11:57:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Katsiaryna | null | Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-top3_op1 | 5 | null | transformers | 16,105 | Entry not found |
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-top3_op2 | 5890225de34fddf9e1fe98a3cbd74cf8d87eb5c2 | 2021-12-16T00:25:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Katsiaryna | null | Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-top3_op2 | 5 | null | transformers | 16,106 | Entry not found |
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-top3_op3 | 0e84efa15421ab8eb84a7cae07ec8805416f424d | 2021-12-15T23:45:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Katsiaryna | null | Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-top3_op3 | 5 | null | transformers | 16,107 | Entry not found |
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_40000-top3-BCE | b0ae4cc6e911cebb7758a4f52c2c6e829b982b1c | 2021-12-16T21:22:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Katsiaryna | null | Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_40000-top3-BCE | 5 | null | transformers | 16,108 | Entry not found |
Kceilord/autonlp-tc-13522454 | 64b67f37c630a4a257a4670eab4c76e6aa77c195 | 2021-09-28T10:46:23.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:Kceilord/autonlp-data-tc",
"transformers",
"autonlp"
]
| text-classification | false | Kceilord | null | Kceilord/autonlp-tc-13522454 | 5 | null | transformers | 16,109 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Kceilord/autonlp-data-tc
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 13522454
## Validation Metrics
- Loss: 0.31450966000556946
- Accuracy: 0.8461538461538461
- Precision: 0.8181818181818182
- Recall: 0.782608695652174
- AUC: 0.9369259032455604
- F1: 0.8
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Kceilord/autonlp-tc-13522454
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Kceilord/autonlp-tc-13522454", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Kceilord/autonlp-tc-13522454", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
Ketzu/koelectra-sts-v0.4 | 1fe96b748c088ab521f9f3bf6fac89bc7f4a6031 | 2021-12-29T23:31:59.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | Ketzu | null | Ketzu/koelectra-sts-v0.4 | 5 | null | transformers | 16,110 | ---
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: koelectra-sts-v0.4
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Spearmanr
type: spearmanr
value: 0.9286505242442783
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# koelectra-sts-v0.4
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3368
- Pearson: 0.9303
- Spearmanr: 0.9287
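A minimal inference sketch (it assumes the classification head outputs a single regression score, as is typical for STS fine-tuning; the Korean sentence pair is illustrative only):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal sketch: score the semantic similarity of a Korean sentence pair.
tokenizer = AutoTokenizer.from_pretrained("Ketzu/koelectra-sts-v0.4")
model = AutoModelForSequenceClassification.from_pretrained("Ketzu/koelectra-sts-v0.4")

inputs = tokenizer("오늘 날씨가 좋다.", "오늘은 날씨가 맑다.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # assumes a single regression logit
print(score)
```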
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 0.0345 | 1.0 | 730 | 0.3368 | 0.9303 | 0.9287 |
| 0.0343 | 2.0 | 1460 | 0.3368 | 0.9303 | 0.9287 |
| 0.0337 | 3.0 | 2190 | 0.3368 | 0.9303 | 0.9287 |
| 0.0345 | 4.0 | 2920 | 0.3368 | 0.9303 | 0.9287 |
| 0.0347 | 5.0 | 3650 | 0.3368 | 0.9303 | 0.9287 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
KoichiYasuoka/bert-large-japanese-unidic-luw-upos | d80617afbff30f3ae5420f75acac3840c145a2b5 | 2022-05-23T16:54:08.000Z | [
"pytorch",
"bert",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"pos",
"wikipedia",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| token-classification | false | KoichiYasuoka | null | KoichiYasuoka/bert-large-japanese-unidic-luw-upos | 5 | null | transformers | 16,111 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "wikipedia"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# bert-large-japanese-unidic-luw-upos
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese](https://huggingface.co/cl-tohoku/bert-large-japanese). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-unidic-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-unidic-luw-upos")
s="国境の長いトンネルを抜けると雪国であった。"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-large-japanese-unidic-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
[fugashi](https://pypi.org/project/fugashi), [unidic-lite](https://pypi.org/project/unidic-lite) and [pytokenizations](https://pypi.org/project/pytokenizations) are required.
## Reference
安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
KoichiYasuoka/roberta-small-japanese-char-luw-upos | 006555d984864b0a088aca28215f5d61792a82e9 | 2022-05-24T06:26:53.000Z | [
"pytorch",
"roberta",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| token-classification | false | KoichiYasuoka | null | KoichiYasuoka/roberta-small-japanese-char-luw-upos | 5 | null | transformers | 16,112 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# roberta-small-japanese-char-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 (Aozora Bunko) texts for POS-tagging and dependency-parsing, derived from [roberta-small-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-aozora-char). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-char-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-small-japanese-char-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-small-japanese-char-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
Kyoungmin/beauty-base-KLCP2 | 629c877adefd5fdaf70156cb09add14d5e45d8bf | 2021-08-22T19:24:49.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Kyoungmin | null | Kyoungmin/beauty-base-KLCP2 | 5 | null | transformers | 16,113 | **Second** BertForMaskedLM model pretrained on the **Korean beauty** domain.
About 120,000 reviews were used.
It was trained starting from _beomi/kcbert-base_.
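A minimal usage sketch with the fill-mask pipeline (the masked review sentence is illustrative only):
```python
from transformers import pipeline

# Minimal sketch: query the Korean beauty-domain MLM with a masked review sentence.
fill_mask = pipeline("fill-mask", model="Kyoungmin/beauty-base-KLCP2")
print(fill_mask("이 제품 [MASK] 너무 좋아요."))
```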
Check out _Kyoungmin/beauty-base-KLCP_ for a smaller model! |
LegolasTheElf/Wav2Vec2_xls_r_300m_hi_cv7 | f7711c4fa59c39e03dc687fbf587e63fcec54c44 | 2022-02-07T19:16:59.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | LegolasTheElf | null | LegolasTheElf/Wav2Vec2_xls_r_300m_hi_cv7 | 5 | null | transformers | 16,114 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: Wav2Vec2_xls_r_300m_hi_cv7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_300m_hi_cv7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6567
- Wer: 0.6273
- Cer: 0.2093
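A minimal inference sketch (the audio path is a placeholder; input should be sampled at 16 kHz):
```python
from transformers import pipeline

# Minimal sketch: transcribe a 16 kHz Hindi audio file with the ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="LegolasTheElf/Wav2Vec2_xls_r_300m_hi_cv7")
print(asr("sample_hi.wav"))  # "sample_hi.wav" is a placeholder path
```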
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 5.6969 | 9.52 | 400 | 3.3092 | 1.0 | 0.9800 |
| 1.7721 | 19.05 | 800 | 0.7769 | 0.7045 | 0.2367 |
| 0.6384 | 28.57 | 1200 | 0.6567 | 0.6273 | 0.2093 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
LeoCordoba/beto2beto-mlsum | 8f769c4fd35c0609b1bfb11046b14bde18b878ba | 2021-09-22T18:52:43.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | LeoCordoba | null | LeoCordoba/beto2beto-mlsum | 5 | null | transformers | 16,115 | ---
language: es
tags:
- summarization
- spanish
- encoder-decoder
- beto
license: apache-2.0
datasets:
- mlsum - es
model-index:
- name: beto2beto-mlsum
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: "MLSUM: MultiLingual SUMmarization dataset (Spanish)"
type: mlsum - es
metrics:
    - name: Validation ROUGE-1
      type: rouge-1
      value: 26.1256
    - name: Validation ROUGE-2
      type: rouge-2
      value: 9.2552
    - name: Validation ROUGE-L
      type: rouge-l
      value: 21.4899
    - name: Validation ROUGE-Lsum
      type: rouge-lsum
      value: 21.8194
    - name: Test ROUGE-1
      type: rouge-1
      value: 25.8639
    - name: Test ROUGE-2
      type: rouge-2
      value: 8.911
    - name: Test ROUGE-L
      type: rouge-l
      value: 21.2426
    - name: Test ROUGE-Lsum
      type: rouge-lsum
      value: 21.5859
widget:
- text: |
La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno", los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña.
---
## beto2beto-mlsum
This model was trained on the Spanish section of MLSum: https://paperswithcode.com/sota/abstractive-text-summarization-on-mlsum.
## Hyperparameters
{
"dataset_config": "es",
"dataset_name": "mlsum",
"do_eval": true,
"do_predict": true,
"do_train": true,
"fp16": true,
"max_target_length": 64,
"num_train_epochs": 10,
"per_device_eval_batch_size": 4,
"per_device_train_batch_size": 4,
"predict_with_generate": true,
"sagemaker_container_log_level": 20,
"sagemaker_program": "run_summarization.py",
"seed": 7,
"summary_column": "summary",
"text_column": "text"
}
## Usage
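A minimal inference sketch (the article is truncated; the generation settings are assumptions, with `max_length=64` mirroring the `max_target_length` hyperparameter above):
```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Minimal sketch: summarize a Spanish news article with the BETO2BETO encoder-decoder.
tokenizer = AutoTokenizer.from_pretrained("LeoCordoba/beto2beto-mlsum")
model = EncoderDecoderModel.from_pretrained("LeoCordoba/beto2beto-mlsum")

article = "La chocotorta, el tradicional y práctico antojo dulce de los argentinos..."  # truncated example
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```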
## Results
| metric | score |
| --- | ----- |
| validation_loss | 2.5021677017211914 |
| validation_rouge1 | 26.1256 |
| validation_rouge2 | 9.2552 |
| validation_rougeL | 21.4899 |
| validation_rougeLsum | 21.8194 |
| test_loss | 2.57672381401062 |
| test_rouge1 | 25.8639 |
| test_rouge2 | 8.911 |
| test_rougeL | 21.2426 |
| test_rougeLsum | 21.5859 |
|
LilaBoualili/electra-sim-doc | 211a34032cd2d838565bed80764e101f4c632772 | 2021-05-18T14:35:56.000Z | [
"pytorch",
"tf",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | LilaBoualili | null | LilaBoualili/electra-sim-doc | 5 | null | transformers | 16,116 | Entry not found |
LucasS/distilBertABSA | 2bff5b55eabb9c155afe9f8cf52f7e771c1284a5 | 2021-09-02T16:05:51.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | LucasS | null | LucasS/distilBertABSA | 5 | null | transformers | 16,117 | Entry not found |
Lumos/imdb3 | 668c89646f6004e2b3afcf269529514ffaa8c996 | 2021-12-13T11:50:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Lumos | null | Lumos/imdb3 | 5 | null | transformers | 16,118 | Entry not found |
M-CLIP/Swedish-2M | bc7435bc1529e37246e0f5745c33ba2a62b5dbd9 | 2021-05-18T21:35:44.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | M-CLIP | null | M-CLIP/Swedish-2M | 5 | null | transformers | 16,119 | <br />
<p align="center">
<h1 align="center">Swe-CLIP 2M</h1>
<p align="center">
<a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/Swe-CLIP%202M">Github Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP).
Once this is done, you can load and use the model with the following code
```python
from src import multilingual_clip
model = multilingual_clip.load_model('Swe-CLIP-500k')
embeddings = model(['Älgen är skogens konung!', 'Alla isbjörnar är vänsterhänta'])
print(embeddings.shape)
# Yields: torch.Size([2, 640])
```
<!-- ABOUT THE PROJECT -->
## About
A [KB/Bert-Swedish-Cased](https://huggingface.co/KB/bert-base-swedish-cased) model tuned to match the embedding space of the CLIP text encoder that accompanies the Res50x4 vision encoder. <br>
Training data pairs were generated by sampling 2 million sentences from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/) and translating them into Swedish.
All translation was done using the [Huggingface Opus Model](https://huggingface.co/Helsinki-NLP/opus-mt-en-sv), which seemingly produces higher-quality translations than relying on the [AWS translate service](https://aws.amazon.com/translate/).
|
M-FAC/bert-tiny-finetuned-stsb | a5b6d60efef0c876ed5fd623a73205b85ede8bc9 | 2021-12-13T08:12:04.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"transformers"
]
| text-classification | false | M-FAC | null | M-FAC/bert-tiny-finetuned-stsb | 5 | null | transformers | 16,120 | # BERT-tiny model finetuned with M-FAC
This model is finetuned on STS-B dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For a fair comparison against the default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap the Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 512
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on STS-B validation set:
```bash
pearson = 80.66
spearman = 81.13
```
Mean and standard deviation for 5 runs on STS-B validation set:
| | Pearson | Spearman |
|:----:|:-----------:|:----------:|
| Adam | 64.39 ± 5.02 | 66.52 ± 5.67 |
| M-FAC | 80.15 ± 0.52 | 80.62 ± 0.43 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 7 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name stsb \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 512, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
|
MINYOUNG/distilbert-base-uncased-finetuned-cola | e2fefe56fd9b0ef775e69dd30f96146acc83fd38 | 2021-10-21T09:42:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | MINYOUNG | null | MINYOUNG/distilbert-base-uncased-finetuned-cola | 5 | null | transformers | 16,121 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5494735380761103
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8540
- Matthews Correlation: 0.5495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5219 | 1.0 | 535 | 0.5314 | 0.4095 |
| 0.346 | 2.0 | 1070 | 0.5141 | 0.5054 |
| 0.2294 | 3.0 | 1605 | 0.6351 | 0.5200 |
| 0.1646 | 4.0 | 2140 | 0.7575 | 0.5459 |
| 0.1235 | 5.0 | 2675 | 0.8540 | 0.5495 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
MM98/mt5-small-finetuned-pnsum | 9b947f7c01455cdf19e1ffa8780bc7fb24dfa8de | 2022-01-29T16:15:27.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | MM98 | null | MM98/mt5-small-finetuned-pnsum | 5 | null | transformers | 16,122 | Entry not found |
KeLiu/Title-Gen | 800b5b6b7387d15583d9db9e81315948e323e6bf | 2021-10-13T03:48:22.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | KeLiu | null | KeLiu/Title-Gen | 5 | 1 | transformers | 16,123 | Entry not found |
MS366/DialoGPT-small-vision | 239a5312f4a0e29bc2d1cc944a5a5b8048d259c9 | 2021-11-17T10:52:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | MS366 | null | MS366/DialoGPT-small-vision | 5 | null | transformers | 16,124 | ---
tags:
- conversational
---
# Vision DialoGPT Model |
Maelstrom77/bert-base-uncased-MRPC | 8fc3680f7ab8afc7254119fce511e24c76e94546 | 2021-09-21T11:35:16.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Maelstrom77 | null | Maelstrom77/bert-base-uncased-MRPC | 5 | null | transformers | 16,125 | Entry not found |
Maelstrom77/bert-base-uncased-QQP | 1e1d19ab560a2b7b8434f09f6a83d6ffb5ecff2b | 2021-09-21T11:51:18.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Maelstrom77 | null | Maelstrom77/bert-base-uncased-QQP | 5 | null | transformers | 16,126 | Entry not found |
Maelstrom77/bert-base-uncased-mnli | 09954999cd46f74628f5dd99a6c37b7bf1b9beca | 2021-10-04T13:30:13.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Maelstrom77 | null | Maelstrom77/bert-base-uncased-mnli | 5 | null | transformers | 16,127 | ```
# Remap predicted label ids (presumably to align the model's label order with a different labelling convention).
for i in range(len(predictions)):
if predictions[i] == 0:
predictions[i] = 2
elif predictions[i] == 1:
predictions[i] = 0
elif predictions[i] == 2:
predictions[i] = 1
``` |
Maelstrom77/bert-base-uncased-snli | f3347ed362d1cee2945f32f709657caaef14e39f | 2021-10-04T13:20:31.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Maelstrom77 | null | Maelstrom77/bert-base-uncased-snli | 5 | null | transformers | 16,128 | ```
# Remap predicted label ids (presumably to align the model's label order with a different labelling convention).
for i in range(len(predictions)):
if predictions[i] == 0:
predictions[i] = 2
elif predictions[i] == 1:
predictions[i] = 0
elif predictions[i] == 2:
predictions[i] = 1
``` |
Maelstrom77/roblclass | f6b0d0652aaa1144187852c8fc0fb12fe66b9bfd | 2021-11-10T13:42:52.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Maelstrom77 | null | Maelstrom77/roblclass | 5 | null | transformers | 16,129 | Entry not found |
Maelstrom77/vibert | e0f0b75f9bfb5947d13708b4df72730d8d17a6ce | 2021-11-08T17:25:17.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Maelstrom77 | null | Maelstrom77/vibert | 5 | null | transformers | 16,130 | Entry not found |
Maha/OGBV-gender-indicbert-ta-eacl_finals | d49ce969f6b247feb855c586d78dd1dd2613d991 | 2022-02-24T06:09:59.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | Maha | null | Maha/OGBV-gender-indicbert-ta-eacl_finals | 5 | null | transformers | 16,131 | Entry not found |
Maha/OGBV-gender-indicbert-ta-hasoc21_codemix | 24e8ae0069ce77539f34aa2751dd6573eb485fe1 | 2022-02-20T05:19:18.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | Maha | null | Maha/OGBV-gender-indicbert-ta-hasoc21_codemix | 5 | 1 | transformers | 16,132 | Entry not found |
MahsaShahidi/Persian-Image-Captioning | 30b2eb05bbffa85da50316527c3b29b78d17abfd | 2022-02-22T10:49:24.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers",
"generated_from_trainer",
"model-index"
]
| null | false | MahsaShahidi | null | MahsaShahidi/Persian-Image-Captioning | 5 | null | transformers | 16,133 | ---
tags:
- generated_from_trainer
model-index:
name: Persian-Image-Captioning
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Persian-Image-Captioning
This model is a fine-tuned version of [Vision Encoder Decoder](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder) on coco-flickr-farsi.
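A minimal captioning sketch (it assumes the repository ships a ViT feature extractor and a decoder tokenizer alongside the weights; the image path and generation length are placeholders):
```python
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer

# Minimal sketch: generate a Persian caption for a single image.
model = VisionEncoderDecoderModel.from_pretrained("MahsaShahidi/Persian-Image-Captioning")
feature_extractor = ViTFeatureExtractor.from_pretrained("MahsaShahidi/Persian-Image-Captioning")
tokenizer = AutoTokenizer.from_pretrained("MahsaShahidi/Persian-Image-Captioning")

image = Image.open("sample.jpg").convert("RGB")  # placeholder image path
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
caption_ids = model.generate(pixel_values, max_length=32)
print(tokenizer.decode(caption_ids[0], skip_special_tokens=True))
```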
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Maltehb/roberta-base-scandinavian | 5622426d0f48adf505f1764c6d8a7dfa80a04bc7 | 2021-07-12T10:26:41.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | Maltehb | null | Maltehb/roberta-base-scandinavian | 5 | null | transformers | 16,134 | Entry not found |
MariamD/distilbert-base-uncased-finetuned-legal_data | 5dceb8de76f2c268a0d4cf9aba5a71be3a312f3b | 2021-10-07T17:25:14.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | MariamD | null | MariamD/distilbert-base-uncased-finetuned-legal_data | 5 | null | transformers | 16,135 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-legal_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-legal_data
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.9101
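A minimal usage sketch with the question-answering pipeline (the question/context pair is an illustrative placeholder):
```python
from transformers import pipeline

# Minimal sketch: extractive QA with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="MariamD/distilbert-base-uncased-finetuned-legal_data")
result = qa(
    question="Who is bound by the agreement?",
    context="This agreement is binding on both the licensor and the licensee.",
)
print(result["answer"], result["score"])
```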
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 26 | 5.3529 |
| No log | 2.0 | 52 | 5.4226 |
| No log | 3.0 | 78 | 5.2550 |
| No log | 4.0 | 104 | 5.1011 |
| No log | 5.0 | 130 | 5.1857 |
| No log | 6.0 | 156 | 5.5119 |
| No log | 7.0 | 182 | 5.4480 |
| No log | 8.0 | 208 | 5.6993 |
| No log | 9.0 | 234 | 5.9614 |
| No log | 10.0 | 260 | 5.6987 |
| No log | 11.0 | 286 | 5.6679 |
| No log | 12.0 | 312 | 5.9850 |
| No log | 13.0 | 338 | 5.6065 |
| No log | 14.0 | 364 | 5.3162 |
| No log | 15.0 | 390 | 5.7856 |
| No log | 16.0 | 416 | 5.5786 |
| No log | 17.0 | 442 | 5.6028 |
| No log | 18.0 | 468 | 5.7649 |
| No log | 19.0 | 494 | 5.5382 |
| 1.8345 | 20.0 | 520 | 6.3654 |
| 1.8345 | 21.0 | 546 | 5.3575 |
| 1.8345 | 22.0 | 572 | 5.3808 |
| 1.8345 | 23.0 | 598 | 5.9340 |
| 1.8345 | 24.0 | 624 | 6.1475 |
| 1.8345 | 25.0 | 650 | 6.2188 |
| 1.8345 | 26.0 | 676 | 5.7651 |
| 1.8345 | 27.0 | 702 | 6.2629 |
| 1.8345 | 28.0 | 728 | 6.1356 |
| 1.8345 | 29.0 | 754 | 5.9255 |
| 1.8345 | 30.0 | 780 | 6.4252 |
| 1.8345 | 31.0 | 806 | 5.6967 |
| 1.8345 | 32.0 | 832 | 6.4324 |
| 1.8345 | 33.0 | 858 | 6.5087 |
| 1.8345 | 34.0 | 884 | 6.1113 |
| 1.8345 | 35.0 | 910 | 6.7443 |
| 1.8345 | 36.0 | 936 | 6.6970 |
| 1.8345 | 37.0 | 962 | 6.5578 |
| 1.8345 | 38.0 | 988 | 6.1963 |
| 0.2251 | 39.0 | 1014 | 6.4893 |
| 0.2251 | 40.0 | 1040 | 6.6347 |
| 0.2251 | 41.0 | 1066 | 6.7106 |
| 0.2251 | 42.0 | 1092 | 6.8129 |
| 0.2251 | 43.0 | 1118 | 6.6386 |
| 0.2251 | 44.0 | 1144 | 6.4134 |
| 0.2251 | 45.0 | 1170 | 6.6883 |
| 0.2251 | 46.0 | 1196 | 6.6406 |
| 0.2251 | 47.0 | 1222 | 6.3065 |
| 0.2251 | 48.0 | 1248 | 7.0281 |
| 0.2251 | 49.0 | 1274 | 7.3646 |
| 0.2251 | 50.0 | 1300 | 7.1086 |
| 0.2251 | 51.0 | 1326 | 6.4749 |
| 0.2251 | 52.0 | 1352 | 6.3303 |
| 0.2251 | 53.0 | 1378 | 6.2919 |
| 0.2251 | 54.0 | 1404 | 6.3855 |
| 0.2251 | 55.0 | 1430 | 6.9501 |
| 0.2251 | 56.0 | 1456 | 6.8714 |
| 0.2251 | 57.0 | 1482 | 6.9856 |
| 0.0891 | 58.0 | 1508 | 6.9910 |
| 0.0891 | 59.0 | 1534 | 6.9293 |
| 0.0891 | 60.0 | 1560 | 7.3493 |
| 0.0891 | 61.0 | 1586 | 7.1834 |
| 0.0891 | 62.0 | 1612 | 7.0479 |
| 0.0891 | 63.0 | 1638 | 6.7674 |
| 0.0891 | 64.0 | 1664 | 6.7553 |
| 0.0891 | 65.0 | 1690 | 7.3074 |
| 0.0891 | 66.0 | 1716 | 6.8071 |
| 0.0891 | 67.0 | 1742 | 7.6622 |
| 0.0891 | 68.0 | 1768 | 6.9555 |
| 0.0891 | 69.0 | 1794 | 7.0153 |
| 0.0891 | 70.0 | 1820 | 7.2085 |
| 0.0891 | 71.0 | 1846 | 6.7582 |
| 0.0891 | 72.0 | 1872 | 6.7989 |
| 0.0891 | 73.0 | 1898 | 6.7012 |
| 0.0891 | 74.0 | 1924 | 7.0088 |
| 0.0891 | 75.0 | 1950 | 7.1024 |
| 0.0891 | 76.0 | 1976 | 6.6968 |
| 0.058 | 77.0 | 2002 | 7.5249 |
| 0.058 | 78.0 | 2028 | 6.9199 |
| 0.058 | 79.0 | 2054 | 7.1995 |
| 0.058 | 80.0 | 2080 | 6.9349 |
| 0.058 | 81.0 | 2106 | 7.4025 |
| 0.058 | 82.0 | 2132 | 7.4199 |
| 0.058 | 83.0 | 2158 | 6.8081 |
| 0.058 | 84.0 | 2184 | 7.4777 |
| 0.058 | 85.0 | 2210 | 7.1990 |
| 0.058 | 86.0 | 2236 | 7.0062 |
| 0.058 | 87.0 | 2262 | 7.5724 |
| 0.058 | 88.0 | 2288 | 6.9362 |
| 0.058 | 89.0 | 2314 | 7.1368 |
| 0.058 | 90.0 | 2340 | 7.2183 |
| 0.058 | 91.0 | 2366 | 6.8684 |
| 0.058 | 92.0 | 2392 | 7.1433 |
| 0.058 | 93.0 | 2418 | 7.2161 |
| 0.058 | 94.0 | 2444 | 7.1442 |
| 0.058 | 95.0 | 2470 | 7.3098 |
| 0.058 | 96.0 | 2496 | 7.1264 |
| 0.0512 | 97.0 | 2522 | 6.9424 |
| 0.0512 | 98.0 | 2548 | 6.9155 |
| 0.0512 | 99.0 | 2574 | 6.9038 |
| 0.0512 | 100.0 | 2600 | 6.9101 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
MarshallCharles/bartlargemnli | 23867d0cb92ba6c7d3dc4964ad9eef173d050316 | 2021-07-26T21:51:10.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
]
| text-classification | false | MarshallCharles | null | MarshallCharles/bartlargemnli | 5 | null | transformers | 16,136 | Entry not found |
Mary222/GPT2_standard | 13ce9de68a89673d983c42a06e983174c10027c7 | 2021-11-03T16:54:29.000Z | [
"pytorch",
"gpt2",
"feature-extraction",
"ru",
"transformers",
"text-generation"
]
| feature-extraction | false | Mary222 | null | Mary222/GPT2_standard | 5 | 1 | transformers | 16,137 | ---
language: ru
tags:
- text-generation
---
# GPT2 - RUS |
Mary222/Models_testing_ai | c4b4a3b2f0dbbb2ce32fd81c864de06b4543c81e | 2021-12-21T11:22:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Mary222 | null | Mary222/Models_testing_ai | 5 | null | transformers | 16,138 | Entry not found |
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech | a79b9753892ac3512dc5a04b656375ecfc320fd0 | 2021-07-05T15:42:45.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"cs",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | MehdiHosseiniMoghadam | null | MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech | 5 | null | transformers | 16,139 | ---
language: cs
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-large-xlsr-53-Czech by Mehdi Hosseini Moghadam
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice cs
type: common_voice
args: cs
metrics:
- name: Test WER
type: wer
value: 27.047806
---
# wav2vec2-large-xlsr-53-Czech
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cs", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Czech test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "cs", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 27.047806 %
## Training
The Common Voice `train`, `validation` datasets were used for training. |
MichelBartels/tinybert-6l-768d-squad2-large-teacher-dummy | 082ffed64297fa9423a6cf6a87950a0882e83824 | 2022-01-31T15:19:21.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | MichelBartels | null | MichelBartels/tinybert-6l-768d-squad2-large-teacher-dummy | 5 | null | transformers | 16,140 | Entry not found |
MickyMike/0-GPT2SP-mulestudio | 9e8bbce69a5080ac367ccde25527a45387d866ab | 2021-08-19T02:02:06.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-mulestudio | 5 | null | transformers | 16,141 | Entry not found |
MickyMike/0-GPT2SP-talenddataquality | 6dd35ba2fab5987843d01facff71cac797654fcf | 2021-08-19T02:02:31.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-talenddataquality | 5 | null | transformers | 16,142 | Entry not found |
MickyMike/0-GPT2SP-talendesb | c89cbd47d1c1b359c41dbd0dd12e1ada418b7364 | 2021-08-19T02:02:44.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-talendesb | 5 | null | transformers | 16,143 | Entry not found |
MickyMike/0-GPT2SP-usergrid | 2035612dd1c546e08286d144296b0b07872430d7 | 2021-08-19T02:03:09.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-usergrid | 5 | null | transformers | 16,144 | Entry not found |
MickyMike/00-GPT2SP-mule-mulestudio | 61fbbf8018e35901fb7bbe7e5dc1546c4bb1d692 | 2021-08-15T07:38:12.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/00-GPT2SP-mule-mulestudio | 5 | null | transformers | 16,145 | Entry not found |
MickyMike/00-GPT2SP-mulestudio-mule | 586fabc07a4c5172f51c1cb28bfcaa30bf13ce54 | 2021-08-15T07:49:38.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/00-GPT2SP-mulestudio-mule | 5 | null | transformers | 16,146 | Entry not found |
MickyMike/000-GPT2SP-clover-usergrid | 9cef071eecd55fe3e6592e812a996a9bef8823c2 | 2021-08-15T10:50:45.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/000-GPT2SP-clover-usergrid | 5 | null | transformers | 16,147 | Entry not found |
MickyMike/000-GPT2SP-mule-titanium | 13bfe441008da49228157105da123cd204451e53 | 2021-08-15T11:47:12.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/000-GPT2SP-mule-titanium | 5 | null | transformers | 16,148 | Entry not found |
MickyMike/1-GPT2SP-datamanagement | 99b27dd9246aa80dcf054b3872d1bfb929b25626 | 2021-08-15T13:15:14.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-datamanagement | 5 | null | transformers | 16,149 | Entry not found |
MickyMike/1-GPT2SP-duracloud | 1350f48f4908377b589272ca5fb6c61295849c8b | 2021-08-15T13:21:22.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-duracloud | 5 | null | transformers | 16,150 | Entry not found |
MickyMike/1-GPT2SP-moodle | 36f366bfc16e85951a5650875d163c7df357245c | 2021-08-15T13:40:01.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-moodle | 5 | null | transformers | 16,151 | Entry not found |
MickyMike/1-GPT2SP-springxd | 7a0f1a6eceba6e83ea63aa37041bcd3407324edb | 2021-08-15T13:57:53.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/1-GPT2SP-springxd | 5 | null | transformers | 16,152 | Entry not found |
MickyMike/11-GPT2SP-mesos-usergrid | cc9b26fab7519e82c84dd187e2fbfe3122019a87 | 2021-08-15T23:27:18.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/11-GPT2SP-mesos-usergrid | 5 | null | transformers | 16,153 | Entry not found |
MickyMike/111-GPT2SP-appceleratorstudio-mulestudio | c2d12aaf284087dd579b7367e03cbcd7bb947111 | 2021-08-16T00:53:42.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/111-GPT2SP-appceleratorstudio-mulestudio | 5 | null | transformers | 16,154 | Entry not found |
MickyMike/111-GPT2SP-clover-usergrid | b9cdbe71b3261c1b4467592d9d8844683ec41045 | 2021-08-16T00:16:59.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/111-GPT2SP-clover-usergrid | 5 | null | transformers | 16,155 | Entry not found |
MickyMike/111-GPT2SP-mulestudio-titanium | 480c9610abf6f04871f471dbc0324057f52038db | 2021-08-16T00:48:04.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/111-GPT2SP-mulestudio-titanium | 5 | null | transformers | 16,156 | Entry not found |
MickyMike/2-GPT2SP-bamboo | 9ff6a5711ed26a525f3cee60ca87f03242d0cd8f | 2021-08-29T20:30:55.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/2-GPT2SP-bamboo | 5 | null | transformers | 16,157 | Entry not found |
MickyMike/2-GPT2SP-clover | 848abe18f406aee30817e2a80ef7d257c9b419a0 | 2021-08-29T20:38:53.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/2-GPT2SP-clover | 5 | null | transformers | 16,158 | Entry not found |
MickyMike/2-GPT2SP-mesos | 2f0ad3122b912b3e467b8142406268b7ea6db4aa | 2021-08-29T21:08:59.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/2-GPT2SP-mesos | 5 | null | transformers | 16,159 | Entry not found |
MickyMike/2-GPT2SP-mule | 2e62eae9cbc8122aca86a7a2b19aa6f0e9d46e84 | 2021-08-29T21:24:10.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/2-GPT2SP-mule | 5 | null | transformers | 16,160 | Entry not found |
MickyMike/2-GPT2SP-mulestudio | 7ff48fd14fef2b3afd8f0b4b00f5a52ae3051034 | 2021-08-29T21:33:20.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/2-GPT2SP-mulestudio | 5 | null | transformers | 16,161 | Entry not found |
MickyMike/2-GPT2SP-titanium | ebc17968fe022fb31138660564b1254614e7e9c9 | 2021-08-29T22:06:51.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/2-GPT2SP-titanium | 5 | null | transformers | 16,162 | Entry not found |
MickyMike/2-GPT2SP-usergrid | a59d7df5a7c70cacd8cd271529dea7d33e08cd22 | 2021-08-29T22:14:11.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/2-GPT2SP-usergrid | 5 | null | transformers | 16,163 | Entry not found |
MickyMike/22-GPT2SP-mesos-usergrid | df39e1f594a8716928ba257e2e7aa717effc631a | 2021-08-29T22:20:42.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/22-GPT2SP-mesos-usergrid | 5 | null | transformers | 16,164 | Entry not found |
MickyMike/222-GPT2SP-clover-usergrid | 63acab3a0a048c6f1714c04bf5cdd6d3e6ff7cf3 | 2021-08-29T23:21:44.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/222-GPT2SP-clover-usergrid | 5 | null | transformers | 16,165 | Entry not found |
MickyMike/222-GPT2SP-talenddataquality-appceleratorstudio | 49fa9695c305d3e1bd0e91f3cd9c2742ad477388 | 2021-08-29T23:49:00.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/222-GPT2SP-talenddataquality-appceleratorstudio | 5 | null | transformers | 16,166 | Entry not found |
MickyMike/222-GPT2SP-talendesb-mesos | 51adabe1c403ac22fd7f9272b0f287ca41f886f8 | 2021-08-29T23:28:33.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/222-GPT2SP-talendesb-mesos | 5 | null | transformers | 16,167 | Entry not found |
MickyMike/6-GPT2SP-bamboo | 197f0a949c0b4a0010456ccd08e3dbbdc328c903 | 2021-08-30T01:55:21.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/6-GPT2SP-bamboo | 5 | null | transformers | 16,168 | Entry not found |
MickyMike/6-GPT2SP-clover | 8e364c949ebb89333881c3979ab0a7de30ab3f4a | 2021-08-30T02:03:43.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/6-GPT2SP-clover | 5 | null | transformers | 16,169 | Entry not found |
MickyMike/6-GPT2SP-mule | e6b1958e7f761e1f93ac6a00ac21e4bd61404bc3 | 2021-08-30T02:57:20.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/6-GPT2SP-mule | 5 | null | transformers | 16,170 | Entry not found |
MickyMike/6-GPT2SP-usergrid | b026d9961677c28f8cf68c1113bea266f0c7d133 | 2021-08-30T03:51:40.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/6-GPT2SP-usergrid | 5 | null | transformers | 16,171 | Entry not found |
MickyMike/66-GPT2SP-appceleratorstudio-titanium | fc2befb19bd9913cc0bdebe8db42f2da08943ed7 | 2021-08-30T04:25:40.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/66-GPT2SP-appceleratorstudio-titanium | 5 | null | transformers | 16,172 | Entry not found |
MickyMike/66-GPT2SP-mesos-usergrid | 381be01f46a9375ca925642d407e8da0753210f7 | 2021-08-30T03:58:38.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/66-GPT2SP-mesos-usergrid | 5 | null | transformers | 16,173 | Entry not found |
MickyMike/66-GPT2SP-mulestudio-mule | ac15cddce01af01d4faeb3fa283b44885c0ef0c7 | 2021-08-30T04:57:43.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/66-GPT2SP-mulestudio-mule | 5 | null | transformers | 16,174 | Entry not found |
MickyMike/666-GPT2SP-talenddataquality-appceleratorstudio | e320ddf7017680b2b20b3cbfd2da666da72a3efb | 2021-08-30T05:41:53.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/666-GPT2SP-talenddataquality-appceleratorstudio | 5 | null | transformers | 16,175 | Entry not found |
MickyMike/7-GPT2SP-aptanastudio | d6b28e4964caca0c08ec8419673d3096ffe9b761 | 2021-08-30T17:39:07.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/7-GPT2SP-aptanastudio | 5 | null | transformers | 16,176 | Entry not found |
MickyMike/7-GPT2SP-duracloud | e6f53fcd4c9dfb2d08199c4df07df828f73c01fe | 2021-08-30T18:18:35.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/7-GPT2SP-duracloud | 5 | null | transformers | 16,177 | Entry not found |
MickyMike/7-GPT2SP-mesos | 0634d392ed69b5aaa205dfc7a48549c3a0c12a7e | 2021-08-30T18:38:35.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/7-GPT2SP-mesos | 5 | null | transformers | 16,178 | Entry not found |
MickyMike/77-GPT2SP-mesos-usergrid | 0c37f02bb0cb277043624a5d33705c8b3359484b | 2021-08-30T19:52:43.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/77-GPT2SP-mesos-usergrid | 5 | null | transformers | 16,179 | Entry not found |
MickyMike/77-GPT2SP-usergrid-mesos | 26a3835947b02690f46e0c8bff6fe87bb55a96d3 | 2021-08-30T19:59:57.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/77-GPT2SP-usergrid-mesos | 5 | null | transformers | 16,180 | Entry not found |
MickyMike/777-GPT2SP-clover-usergrid | 3fe785abbeeaebb930fd50f9ee09dc31ac65ac7a | 2021-08-30T21:02:13.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/777-GPT2SP-clover-usergrid | 5 | null | transformers | 16,181 | Entry not found |
MickyMike/777-GPT2SP-talendesb-mesos | cfa0e583966e9e5d398aa0333393ac42539c5331 | 2021-08-30T21:10:15.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/777-GPT2SP-talendesb-mesos | 5 | null | transformers | 16,182 | Entry not found |
Minowa/distilbert-base-uncased-finetuned-ner | 53000ff6c7f45312d7b0fe910a1f8fd5c3741354 | 2022-02-16T07:09:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Minowa | null | Minowa/distilbert-base-uncased-finetuned-ner | 5 | null | transformers | 16,183 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9239501818582607
- name: Recall
type: recall
value: 0.9378006488421524
- name: F1
type: f1
value: 0.9308238951809905
- name: Accuracy
type: accuracy
value: 0.9837800054013695
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0596
- Precision: 0.9240
- Recall: 0.9378
- F1: 0.9308
- Accuracy: 0.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
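A minimal inference sketch (not author-provided): it assumes the checkpoint is loadable from the Hub under the id shown in this card and that the entity labels follow the conll2003 fine-tuning.
```python
from transformers import pipeline

# Token-classification pipeline; "simple" aggregation merges word pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="Minowa/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```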
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
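For readers who want to set up a comparable run, a rough `TrainingArguments` sketch of the values listed above (the output directory is a placeholder; any option not listed keeps its Trainer default):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; everything else stays at the Trainer defaults.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```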
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2381 | 1.0 | 878 | 0.0707 | 0.9100 | 0.9240 | 0.9170 | 0.9805 |
| 0.0563 | 2.0 | 1756 | 0.0583 | 0.9246 | 0.9382 | 0.9314 | 0.9835 |
| 0.03 | 3.0 | 2634 | 0.0596 | 0.9240 | 0.9378 | 0.9308 | 0.9838 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
MohammadABH/bertweet-finetuned-rbam | bd2459d52fcc89c3fe0c8d17b9238f03c31218d9 | 2022-02-19T22:23:05.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | MohammadABH | null | MohammadABH/bertweet-finetuned-rbam | 5 | null | transformers | 16,184 | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bertweet-finetuned-rbam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-finetuned-rbam
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3971
- F1: 0.6620
## Model description
More information needed
## Intended uses & limitations
More information needed
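A minimal inference sketch (an assumption of this card, not author-provided); the label names returned depend entirely on the fine-tuning setup, which is not documented here.
```python
from transformers import pipeline

# Sequence-classification pipeline; tokenizer and label mapping are loaded from the repo.
classifier = pipeline(
    "text-classification",
    model="MohammadABH/bertweet-finetuned-rbam",
)
print(classifier("I completely agree with the previous comment."))
```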
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7138 | 1.0 | 1632 | 0.7529 | 0.6814 |
| 0.5692 | 2.0 | 3264 | 0.8473 | 0.6803 |
| 0.4126 | 3.0 | 4896 | 1.0029 | 0.6617 |
| 0.2854 | 4.0 | 6528 | 1.2167 | 0.6635 |
| 0.2007 | 5.0 | 8160 | 1.3971 | 0.6620 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
MohammadABH/twitter-roberta-base-dec2021_rbam_fine_tuned | 906fbb1d4d51f03a1adb5bfc44e0283c18815b7f | 2022-03-24T17:42:58.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | MohammadABH | null | MohammadABH/twitter-roberta-base-dec2021_rbam_fine_tuned | 5 | null | transformers | 16,185 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: twitter-roberta-base-dec2021_rbam_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-dec2021_rbam_fine_tuned
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-dec2021](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8295
- Accuracy: 0.6777
- Precision: 0.6743
- Recall: 0.6777
- F1: 0.6753
## Model description
More information needed
## Intended uses & limitations
More information needed
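A minimal inference sketch using the lower-level API (assumptions: the checkpoint loads with the Auto classes, and `id2label` in its config carries whatever label names were used for fine-tuning):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "MohammadABH/twitter-roberta-base-dec2021_rbam_fine_tuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Strongly disagree, the evidence points the other way.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))  # label names come from the fine-tuning config
```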
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.8455 | 1.0 | 3264 | 0.7663 | 0.6661 | 0.6802 | 0.6661 | 0.6693 |
| 0.6421 | 2.0 | 6528 | 0.8295 | 0.6777 | 0.6743 | 0.6777 | 0.6753 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Motahar/distilbert-sst2-mahtab | 8a9af50b72ce299ba94ebd7c4faabfec0e8b3d0a | 2021-12-30T15:18:13.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Motahar | null | Motahar/distilbert-sst2-mahtab | 5 | null | transformers | 16,186 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: distilbert-sst2-mahtab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-sst2-mahtab
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the glue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4982
- eval_accuracy: 0.8830
- eval_runtime: 2.3447
- eval_samples_per_second: 371.91
- eval_steps_per_second: 46.489
- epoch: 1.0
- step: 8419
## Model description
More information needed
## Intended uses & limitations
More information needed
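A minimal inference sketch, assuming the positive/negative label mapping is inherited from the SST-2 base checkpoint:
```python
from transformers import pipeline

# Binary sentiment classification in the SST-2 style of the base model.
sentiment = pipeline(
    "text-classification",
    model="Motahar/distilbert-sst2-mahtab",
)
print(sentiment("A touching and beautifully shot film."))
```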
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
MultiBertGunjanPatrick/multiberts-seed-0-100k | e256bbd5e1d984d185d957a1916676495da15cd4 | 2021-10-04T04:55:27.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-0",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-0-100k | 5 | null | transformers | 16,187 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-0
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 Checkpoint 100k (uncased)
Seed 0 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-100k')
model = BertModel.from_pretrained("multiberts-seed-0-100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (an illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
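For illustration only, a small Python sketch of these masking rules (the real pretraining code works on WordPiece token ids rather than strings, and is not reproduced here):
```python
import random

def mask_tokens(tokens, mask_token="[MASK]", vocab=None, rng=random):
    """Apply the masking rules above to a list of token strings."""
    out, labels = list(tokens), [None] * len(tokens)   # None = position not used in the MLM loss
    for i, tok in enumerate(tokens):
        if rng.random() < 0.15:                        # 15% of tokens are selected
            labels[i] = tok                            # the original token is the prediction target
            r = rng.random()
            if r < 0.8:                                # 80% of selected: replace with [MASK]
                out[i] = mask_token
            elif r < 0.9:                              # 10% of selected: replace with a random token
                out[i] = rng.choice(vocab) if vocab else tok
            # remaining 10% of selected: leave the token unchanged
    return out, labels
```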
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-0-400k | 18534b54ba7b20e39e641d61e482c63da2724caa | 2021-10-04T04:56:23.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-0",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-0-400k | 5 | null | transformers | 16,188 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-0
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 Checkpoint 400k (uncased)
Seed 0 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-400k')
model = BertModel.from_pretrained("multiberts-seed-0-400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-0-500k | 797c1ca959e6fefa78102117a1989735650bcdc5 | 2021-10-04T04:56:30.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-0",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-0-500k | 5 | null | transformers | 16,189 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-0
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 0 Checkpoint 500k (uncased)
Seed 0 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-500k')
model = BertModel.from_pretrained("multiberts-seed-0-500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-1-140k | 0a106a4493558b63280cbfff329db9a30cb38daa | 2021-10-04T04:59:23.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-1",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-1-140k | 5 | null | transformers | 16,190 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-1
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 1 Checkpoint 140k (uncased)
Seed 1 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-140k')
model = BertModel.from_pretrained("multiberts-seed-1-140k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-1-1500k | 665183deb828eb4587210548f1214d6bcc5375ab | 2021-10-04T05:01:24.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-1",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-1-1500k | 5 | null | transformers | 16,191 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-1
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 1 Checkpoint 1500k (uncased)
Seed 1 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1500k')
model = BertModel.from_pretrained("multiberts-seed-1-1500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-1-300k | 055effaff9d1eb2e4f78e1666e14018d0502ed57 | 2021-10-04T04:59:58.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-1",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-1-300k | 5 | null | transformers | 16,192 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-1
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 1 Checkpoint 300k (uncased)
Seed 1 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-300k')
model = BertModel.from_pretrained("multiberts-seed-1-300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-2-0k | b73b6a5ed6cdf2f9a2af4945d475e3590c920ebd | 2021-10-04T05:02:07.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-2",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-2-0k | 5 | null | transformers | 16,193 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-2
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 2 Checkpoint 0k (uncased)
Seed 2 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-0k')
model = BertModel.from_pretrained("multiberts-seed-2-0k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-2-1000k | 2704a2c8137e4fa3b513bb7b6e080eda2275f048 | 2021-10-04T05:04:25.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-2",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-2-1000k | 5 | null | transformers | 16,194 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-2
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 2 Checkpoint 1000k (uncased)
Seed 2 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1000k')
model = BertModel.from_pretrained("multiberts-seed-2-1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-2-400k | 14837e3bb1d46054fd7ade99de6b371d2cadc138 | 2021-10-04T05:03:41.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-2",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-2-400k | 5 | null | transformers | 16,195 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-2
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 2 Checkpoint 400k (uncased)
Seed 2 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-400k')
model = BertModel.from_pretrained("multiberts-seed-2-400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
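As a concrete, hedged illustration of that kind of probe, adapted to this checkpoint (the prompts are made up, and it assumes the pretraining weights load cleanly into the `fill-mask` pipeline):
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='multiberts-seed-2-400k')
for prompt in ["The man worked as a [MASK].", "The woman worked as a [MASK]."]:
    print(prompt)
    for pred in unmasker(prompt, top_k=5):
        # each prediction carries the filled-in token and its probability
        print(f"  {pred['token_str']:>12}  {pred['score']:.3f}")
```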
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-4-120k | 484908aa1abd3a6ce9e7cd13335dcebd9a94da5a | 2021-10-04T05:10:11.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"multiberts-seed-4",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-4-120k | 5 | null | transformers | 16,196 | ---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 4 Checkpoint 120k (uncased)
Seed 4 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multiberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-120k')
model = BertModel.from_pretrained("multiberts-seed-4-120k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
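A hedged PyTorch sketch of that optimization setup, using the stated values (peak learning rate 1e-4, weight decay 0.01, 10,000 warmup steps, linear decay over the two million steps); the scheduler helper from `transformers` is a convenience choice, and the TPU and data-pipeline details are omitted:
```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("multiberts-seed-4-120k")
# Adam with decoupled weight decay, matching the hyperparameters listed above
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
# per training step: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```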
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
MultiBertGunjanPatrick/multiberts-seed-7 | 9b29cbb1149cf6c382b3504e5528ffc6be384d14 | 2021-10-04T05:41:49.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"transformers",
"exbert",
"multiberts",
"license:apache-2.0"
]
| null | false | MultiBertGunjanPatrick | null | MultiBertGunjanPatrick/multiberts-seed-7 | 5 | null | transformers | 16,197 | ---
language: en
tags:
- exbert
- multiberts
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 7 (uncased)
Seed 7 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
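A hedged sketch of that feature-extraction use: the sentences, labels, the choice of the final-layer `[CLS]` vector, and the scikit-learn classifier are all illustrative assumptions.
```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-7')
model = BertModel.from_pretrained('multiberts-seed-7')

sentences = ["I loved this film.", "This was a waste of time."]  # made-up labeled data
labels = [1, 0]

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, return_tensors='pt')
    # final-layer [CLS] vector of each sentence, used as a fixed feature
    features = model(**enc).last_hidden_state[:, 0, :].numpy()

clf = LogisticRegression().fit(features, labels)
print(clf.predict(features))
```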
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-7')
model = BertModel.from_pretrained("multiberts-seed-7")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
NYTK/translation-mt5-small-128-en-hu | f2601a7d56221e06008028038eea28ffeb3aec15 | 2022-02-14T13:30:49.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"en",
"hu",
"transformers",
"translation",
"license:gpl",
"autotrain_compatible"
]
| translation | false | NYTK | null | NYTK/translation-mt5-small-128-en-hu | 5 | null | transformers | 16,198 | ---
language:
- en
- hu
tags:
- translation
license: gpl
metrics:
- sacrebleu
- chrf
widget:
- text: "This may not make much sense to you, sir, but I'd like to ask your permission to date your daughter."
---
# mT5 Translation model
For further models, scripts and details, see [our repository](https://github.com/nytud/machine-translation) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- Source language: English
- Target language: Hungarian
- Pretrained model used: mT5-small
- Finetuned on subcorpora from OPUS
- Segments: 56,837,602
## Limitations
- the input text must be pre-tokenized (tokenizer: [HuSpaCy](https://huggingface.co/huspacy)); see the usage sketch below
- max_source_length = 128
- max_target_length = 128
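A minimal, hedged usage sketch under these limits; the generation settings are assumptions (the card does not document inference code), and the English input is shown in the pre-tokenized, space-joined form the model expects:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("NYTK/translation-mt5-small-128-en-hu")
model = AutoModelForSeq2SeqLM.from_pretrained("NYTK/translation-mt5-small-128-en-hu")

# Input tokenized beforehand (e.g. with HuSpaCy) and re-joined with spaces
text = "This may not make much sense to you , sir , but I 'd like to ask your permission to date your daughter ."
inputs = tokenizer(text, max_length=128, truncation=True, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```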
## Results
| Model | BLEU | chrF-3 | chrF-6 |
| ------------- | ------------- | ------------- | ------------- |
| Google en-hu | 25.30 | 54.08 | 49.06 |
| BART | 36.89 | 60.77 | 56.4 |
| **mT5** | **27.69** | **53.73** | **48.57** |
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {laki-yang-mt,
title = {{Jobban fordítunk magyarra, mint a Google!}},
booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year = {2022},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {Laki, László and Yang, Zijian Győző},
pages = {357--372}
}
``` |
NahedAbdelgaber/evaluating-student-writing-distibert-ner-with-metric | f75a859c0e2c41371438ba5845023b502169173e | 2022-01-09T06:45:10.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | NahedAbdelgaber | null | NahedAbdelgaber/evaluating-student-writing-distibert-ner-with-metric | 5 | null | transformers | 16,199 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: evaluating-student-writing-distibert-ner-with-metric
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# evaluating-student-writing-distibert-ner-with-metric
This model is a fine-tuned version of [NahedAbdelgaber/evaluating-student-writing-distibert-ner](https://huggingface.co/NahedAbdelgaber/evaluating-student-writing-distibert-ner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7535
- Precision: 0.0614
- Recall: 0.2590
- F1: 0.0993
- Accuracy: 0.6188
## Model description
More information needed
## Intended uses & limitations
More information needed
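Usage is not documented yet; a minimal, hedged inference sketch for this token-classification checkpoint (the example sentence and the aggregation setting are assumptions):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="NahedAbdelgaber/evaluating-student-writing-distibert-ner-with-metric",
    aggregation_strategy="simple",  # merge word pieces into whole spans
)
print(ner("The school board should listen to students before changing the schedule."))
```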
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (restated as a `TrainingArguments` sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
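Restated as a hedged `TrainingArguments` sketch (the `output_dir` and any settings not listed above are assumptions left at library defaults):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="evaluating-student-writing-distibert-ner-with-metric",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    # the Adam betas/epsilon listed above are the library defaults (0.9, 0.999, 1e-8)
)
```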
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7145 | 1.0 | 1755 | 0.7683 | 0.0546 | 0.2194 | 0.0875 | 0.6191 |
| 0.6608 | 2.0 | 3510 | 0.7504 | 0.0570 | 0.2583 | 0.0934 | 0.6136 |
| 0.5912 | 3.0 | 5265 | 0.7535 | 0.0614 | 0.2590 | 0.0993 | 0.6188 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|