modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
igorcadelima/distilbert-base-uncased-finetuned-emotion | 669e596fdf2da6a9396b3f5119fca9c7b721e328 | 2022-07-25T16:55:33.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | igorcadelima | null | igorcadelima/distilbert-base-uncased-finetuned-emotion | 8 | null | transformers | 13,700 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.927005317669938
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2147
- Accuracy: 0.927
- F1: 0.9270
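A minimal inference sketch with the Transformers pipeline (illustrative only; the returned label names depend on the checkpoint's id2label mapping):
```python
from transformers import pipeline

# Sketch: load the fine-tuned checkpoint from the Hub and classify a sentence.
classifier = pipeline(
    "text-classification",
    model="igorcadelima/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy to see you again!"))
# Returns a list like [{'label': ..., 'score': ...}]; labels may appear as
# LABEL_0 ... LABEL_5 if the config does not map them to emotion names.
```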
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8181 | 1.0 | 250 | 0.3036 | 0.9085 | 0.9064 |
| 0.2443 | 2.0 | 500 | 0.2147 | 0.927 | 0.9270 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-8_female-2_s755 | a4172a377aa4ae5c8fb35ce4e1980f9bed406604 | 2022-07-25T22:40:46.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
]
| automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-8_female-2_s755 | 8 | null | transformers | 13,701 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-8_female-2_s755
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
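A hedged transcription sketch with the Transformers ASR pipeline (the audio path below is a placeholder; the input must be 16 kHz audio):
```python
from transformers import pipeline

# Sketch: transcribe a local 16 kHz French audio file with the fine-tuned checkpoint.
# "example_fr.wav" is a placeholder path, not a file shipped with this model.
asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-8_female-2_s755",
)
print(asr("example_fr.wav")["text"])
```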
|
phjhk/hklegal-xlm-r-base | 77583fd19d9f41ee598ac7a0b1026b2cbfe198da | 2022-07-29T14:52:30.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:1911.02116",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | phjhk | null | phjhk/hklegal-xlm-r-base | 8 | null | transformers | 13,702 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
---
# Model Description
The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) fine-tuned with the [conll2003](https://huggingface.co/datasets/conll2003) dataset in English.
- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116)
- **Model type:** Multi-lingual language model
- **Language(s) (NLP) or Countries (images):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for full list; model is fine-tuned on a dataset in English
- **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm)
- **Parent Model:** [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large)
The Hong Kong Legal Information Institute ([HKLII](https://www.hklii.hk/eng/)) is a free, independent, non-profit document database providing the public with legal information relating to Hong Kong. We fine-tune XLM-RoBERTa on HKLII data, which consists of legal documents from this database.
# Uses
The model is a pretrained, fine-tuned language model. It can be used for tasks such as document classification and Named Entity Recognition (NER), especially in the legal domain.
```python
>>> from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained("phjhk/hklegal-xlm-r-base")
>>> model = AutoModelForTokenClassification.from_pretrained("phjhk/hklegal-xlm-r-base")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Alya told Jasmine that Andrew could pay with cash.")
```
# Citation
**BibTeX:**
```bibtex
@article{conneau2019unsupervised,
title={Unsupervised Cross-lingual Representation Learning at Scale},
author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1911.02116},
year={2019}
}
``` |
Yehor/wav2vec2-xls-r-300m-uk-with-small-lm-noisy | 388c7a8789a85c428941abd53ab67efc39e1f0a5 | 2022-07-30T07:00:00.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"uk",
"dataset:mozilla-foundation/common_voice_10_0",
"transformers",
"license:apache-2.0"
]
| automatic-speech-recognition | false | Yehor | null | Yehor/wav2vec2-xls-r-300m-uk-with-small-lm-noisy | 8 | 1 | transformers | 13,703 | ---
language:
- uk
license: "apache-2.0"
datasets:
- mozilla-foundation/common_voice_10_0
---
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
This model has been trained on noisy data in order to make the acoustic model robust to noisy audio.
The model's transcriptions keep apostrophes and hyphens.
The language model is trained on the texts of the Common Voice dataset, which is also used for acoustic training.
Special thanks to **Dmytro Chaplynsky**, https://lang.org.ua, for the noised data.
Noisy dataset:
- Transcriptions: https://www.dropbox.com/s/ohj3y2cq8f4207a/transcriptions.zip?dl=0
- Audio files: https://www.dropbox.com/s/v8crgclt9opbrv1/data.zip?dl=0
Metrics:
| Dataset | CER | WER |
|-|-|-|
| CV10 (no LM) | 0.0515 | 0.2617 |
| CV10 (with LM) | 0.0148 | 0.0524 |
Metrics on noisy data with [standard model](https://huggingface.co/Yehor/wav2vec2-xls-r-300m-uk-with-small-lm):
| Dataset | CER | WER |
|-|-|-|
| CV10 (no LM) | 0.1064 | 0.3926 |
| CV10 (with LM) | 0.0497 | 0.1265 |
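CER and WER figures such as the ones above can be computed with an external scorer; the snippet below is an illustrative sketch using the jiwer package, which is an assumption and not something this card prescribes:
```python
# Illustrative sketch only: jiwer is not mentioned in this card.
from jiwer import cer, wer

reference = "приклад еталонного речення"   # ground-truth transcription
hypothesis = "приклад еталоного речення"   # model output
print("WER:", wer(reference, hypothesis))  # word error rate
print("CER:", cer(reference, hypothesis))  # character error rate
```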
More:
- The same model, but trained on raw Common Voice data: https://huggingface.co/Yehor/wav2vec2-xls-r-300m-uk-with-small-lm
|
ccdv/lsg-albert-base-v2-4096 | 2d1da305f37863beee22f0c001bcea85ca8d51fd | 2022-07-26T20:13:53.000Z | [
"pytorch",
"albert",
"fill-mask",
"en",
"arxiv:1909.11942",
"transformers",
"long context",
"autotrain_compatible"
]
| fill-mask | false | ccdv | null | ccdv/lsg-albert-base-v2-4096 | 8 | null | transformers | 13,704 | ---
tags:
- albert
- long context
language:
- en
pipeline_tag: fill-mask
---
# LSG model
**Transformers >= 4.18.0**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
This model is adapted from [AlBERT-base-v2](https://huggingface.co/albert-base-v2) without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.
This model can handle long sequences faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub, and relies on Local + Sparse + Global attention (LSG).
The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended to let the tokenizer truncate the inputs (truncation=True) and optionally pad them to a multiple of the block size (pad_to_multiple_of=...). \
Implemented in PyTorch.
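As a quick illustration of that recommendation (a sketch only; 128 is the default block size listed below):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-albert-base-v2-4096")
# Truncate long inputs and pad them so the sequence length is a multiple of the block size.
inputs = tokenizer(
    "A very long document ... " * 500,
    truncation=True,
    padding=True,
    pad_to_multiple_of=128,
    return_tensors="pt",
)
```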

## Usage
The model relies on a custom modeling file, so you need to add trust_remote_code=True to use it.
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/lsg-albert-base-v2-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-albert-base-v2-4096")
```
## Parameters
You can change various parameters like:
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/lsg-albert-base-v2-4096",
trust_remote_code=True,
num_global_tokens=16,
block_size=64,
sparse_block_size=64,
attention_probs_dropout_prob=0.0,
sparsity_factor=4,
sparsity_type="none",
mask_first_token=True
)
```
## Sparse selection type
There are 5 different sparse selection patterns. The best type is task dependent. \
Note that for sequences with length < 2*block_size, the type has no effect.
* sparsity_type="norm", select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="pooling", use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="lsh", use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* sparsity_type="stride", use a striding mecanism per head
* Each head will use different tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
* sparsity_type="block_stride", use a striding mecanism per head
* Each head will use block of tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
## Tasks
Seq2Seq example for summarization:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-albert-base-v2-4096",
trust_remote_code=True,
pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-albert-base-v2-4096")
SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
padding="max_length", # Optional but recommended
truncation=True # Optional but recommended
)
output = model(**token_ids)
```
Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-albert-base-v2-4096",
trust_remote_code=True,
pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-albert-base-v2-4096")
SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
#pad_to_multiple_of=... # Optional
truncation=True
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
**AlBERT**
```
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
mrgiraffe/results | e687abf5a2e5d3fd97a20b0db4b7ea13539ca978 | 2022-07-27T00:55:40.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | mrgiraffe | null | mrgiraffe/results | 8 | null | transformers | 13,705 | ---
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
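Since the parent checkpoint is a Pegasus summarization model, a hedged usage sketch (the target task is an assumption, as the training data is unknown) could be:
```python
from transformers import pipeline

# Sketch only: the card does not state the target task, so summarization is assumed
# from the google/pegasus-large parent model.
summarizer = pipeline("summarization", model="mrgiraffe/results")
article = "Long input document to be summarized ..."
print(summarizer(article, max_length=64, min_length=10)[0]["summary_text"])
```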
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1400
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
rwang5688/distilbert-base-uncased-finetuned-cola | 4c383487e26980f258b046ad9db04732b8d8a921 | 2022-07-29T06:50:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | rwang5688 | null | rwang5688/distilbert-base-uncased-finetuned-cola | 8 | 1 | transformers | 13,706 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5518326707011334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7528
- Matthews Correlation: 0.5518
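The Matthews correlation above can be reproduced on the GLUE CoLA validation split; the following is an illustrative sketch (the use of the datasets and scikit-learn packages is an assumption, not part of the original training setup):
```python
from datasets import load_dataset
from sklearn.metrics import matthews_corrcoef
from transformers import pipeline

# Sketch: score the fine-tuned model on the CoLA validation split.
clf = pipeline("text-classification", model="rwang5688/distilbert-base-uncased-finetuned-cola")
cola = load_dataset("glue", "cola", split="validation")

# Assumes default LABEL_0 / LABEL_1 output names; map them back to integer ids.
preds = [int(p["label"].split("_")[-1]) for p in clf(cola["sentence"])]
print("Matthews correlation:", matthews_corrcoef(cola["label"], preds))
```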
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5248 | 1.0 | 535 | 0.5325 | 0.3973 |
| 0.3479 | 2.0 | 1070 | 0.5064 | 0.5235 |
| 0.2337 | 3.0 | 1605 | 0.6480 | 0.5022 |
| 0.1721 | 4.0 | 2140 | 0.7528 | 0.5518 |
| 0.1292 | 5.0 | 2675 | 0.8475 | 0.5485 |
### Framework versions
- Transformers 4.12.0
- Pytorch 1.8.1+cpu
- Datasets 2.4.0
- Tokenizers 0.10.3
|
tk648/distilbert-base-uncased-finelytuned-emotion | 7b252d299689db94f8a2402625216bbd1a3580b6 | 2022-07-27T08:22:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | tk648 | null | tk648/distilbert-base-uncased-finelytuned-emotion | 8 | null | transformers | 13,707 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finelytuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9227682344612014
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finelytuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2262
- Accuracy: 0.923
- F1: 0.9228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8397 | 1.0 | 250 | 0.3350 | 0.8975 | 0.8941 |
| 0.2555 | 2.0 | 500 | 0.2262 | 0.923 | 0.9228 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Ahmed007/t5-base-ibn-Shaddad-v6 | c5142c65afae61b7ead4dd44d5f047369cd32807 | 2022-07-27T12:57:13.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"Poet",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Ahmed007 | null | Ahmed007/t5-base-ibn-Shaddad-v6 | 8 | null | transformers | 13,708 | ---
license: apache-2.0
tags:
- Poet
- generated_from_trainer
model-index:
- name: t5-base-ibn-Shaddad-v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-ibn-Shaddad-v6
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2957
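Given the Poet tag and the T5 backbone, a hedged generation sketch with the generate API (the prompt format is a guess; the expected input is not documented):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch only: the expected input/output format is undocumented, so the prompt is illustrative.
tokenizer = AutoTokenizer.from_pretrained("Ahmed007/t5-base-ibn-Shaddad-v6")
model = AutoModelForSeq2SeqLM.from_pretrained("Ahmed007/t5-base-ibn-Shaddad-v6")

inputs = tokenizer("اكتب بيت شعر عن الصحراء", return_tensors="pt")  # "write a verse about the desert"
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```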
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.9444 | 1.0 | 1067 | 4.4333 |
| 4.5154 | 2.0 | 2134 | 4.3345 |
| 4.4462 | 3.0 | 3201 | 4.2957 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
asparius/mixed-sa2 | 8fae9e7a8bc5a7a64c013b6484b100853c66eb9a | 2022-07-27T13:39:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | asparius | null | asparius/mixed-sa2 | 8 | null | transformers | 13,709 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: mixed-sa2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mixed-sa2
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1111
- Accuracy: 0.9726
- F1: 0.9721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Someman/xlm-roberta-base-finetuned-wikiann-hi | 226a36250873a61c43322e70343211b246812206 | 2022-07-27T14:33:43.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Someman | null | Someman/xlm-roberta-base-finetuned-wikiann-hi | 8 | null | transformers | 13,710 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-wikiann-hi
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: hi
metrics:
- name: F1
type: f1
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-wikiann-hi
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3097
- F1: 1.0
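A token-classification inference sketch (the Hindi example sentence is illustrative only):
```python
from transformers import pipeline

# Sketch: run the fine-tuned checkpoint as an NER tagger with word-level aggregation.
ner = pipeline(
    "token-classification",
    model="Someman/xlm-roberta-base-finetuned-wikiann-hi",
    aggregation_strategy="simple",
)
print(ner("नरेन्द्र मोदी भारत के प्रधानमंत्री हैं"))  # "Narendra Modi is the Prime Minister of India"
```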
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 0.5689 | 1.0 | 209 | 0.3179 | 1.0 |
| 0.2718 | 2.0 | 418 | 0.2733 | 1.0 |
| 0.19 | 3.0 | 627 | 0.2560 | 1.0 |
| 0.142 | 4.0 | 836 | 0.2736 | 1.0 |
| 0.0967 | 5.0 | 1045 | 0.2686 | 1.0 |
| 0.0668 | 6.0 | 1254 | 0.2966 | 1.0 |
| 0.052 | 7.0 | 1463 | 0.3194 | 1.0 |
| 0.0369 | 8.0 | 1672 | 0.3034 | 1.0 |
| 0.0236 | 9.0 | 1881 | 0.3174 | 1.0 |
| 0.0135 | 10.0 | 2090 | 0.3097 | 1.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Yank2901/DialoGPT-small-Harry | bc4d3e74273c96f70f9b3b5677ded99f7864e83a | 2022-07-27T20:51:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Yank2901 | null | Yank2901/DialoGPT-small-Harry | 8 | null | transformers | 13,711 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
muhtasham/Bert-Tiny-finetuned-finer-139-full-intel-cpu | 5aba937726e4f7ca578f6ae979ed714ea58c22de | 2022-07-28T01:48:11.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | muhtasham | null | muhtasham/Bert-Tiny-finetuned-finer-139-full-intel-cpu | 8 | null | transformers | 13,712 | Entry not found |
bongsoo/sentencebert_v1.1 | 31396a9b7979f90f2f9b8003b7e72bd0d03e3389 | 2022-07-28T03:05:22.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"ko",
"en"
]
| sentence-similarity | false | bongsoo | null | bongsoo/sentencebert_v1.1 | 8 | 1 | sentence-transformers | 13,713 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- ko
- en
---
# sentencebert_v1.1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
- This model was built from **distilbert-base-multilingual-cased**: Korean vocabulary was first added by continued training on the **kowiki_20220620** corpus,
<br>the result was turned into a SentenceBERT model, and it was then further trained with NLI/STS teacher-student distillation.
- For details on how the model was built, see [here](https://github.com/kobongsoo/BERT/tree/master).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["오늘은 비가 올것 같다", "내일은 춥고 눈이 올거다"]
model = SentenceTransformer('bongsoo/sentencebert_v1.1')
embeddings = model.encode(sentences)
print(embeddings)
```
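Since the model targets clustering and semantic search, a natural follow-up (not part of the original card) is to compare the two embeddings above with cosine similarity:
```python
from sentence_transformers import util

# Cosine similarity between the two example sentences, reusing `embeddings` from the snippet above.
# Illustrative sketch, not an example from the original card.
score = util.cos_sim(embeddings[0], embeddings[1])
print(score)  # tensor of shape (1, 1) with the similarity value
```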
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bongsoo/sentencebert_v1.1')
model = AutoModel.from_pretrained('bongsoo/sentencebert_v1.1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
- Performance was measured on the **korsts** (1,379 sentence pairs) and **klue-sts** (519 sentence pairs) corpora.
|Model |korsts|klue-sts|korsts+klue-sts|
|:--------|------:|--------:|--------------:|
|bongsoo/sentencebert_v1.0|0.743|0.799|0.638|
|bongsoo/sentencebert_v1.1|0.806|0.749|0.633|
|distiluse-base-multilingual-cased-v2|0.747|0.785|0.644|
|paraphrase-multilingual-mpnet-base-v2|0.820|0.799|0.721|
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 18432 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 80,
"evaluation_steps": 147454,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 3e-05
},
"scheduler": "warmupconstant",
"steps_per_epoch": null,
"warmup_steps": 147454,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
bongsoo |
alfredcs/swin-cifar10 | 87d7ddb38b66a30eb59357036a3ce04810d3c26b | 2022-07-28T22:42:23.000Z | [
"pytorch",
"swin",
"image-classification",
"transformers",
"license:apache-2.0"
]
| image-classification | false | alfredcs | null | alfredcs/swin-cifar10 | 8 | null | transformers | 13,714 | ---
license: apache-2.0
---
|
noob123/augmented_bert | e4f7df4722195f2729963c50ee3fb218e27bd1c4 | 2022-07-28T11:38:06.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | noob123 | null | noob123/augmented_bert | 8 | null | transformers | 13,715 | Entry not found |
AnonymousSub/recipes-roberta-base-tokenwise-token-and-step-losses_with_pos_with_ingr | ba948f715a28fd76b4ba62d570747e299e1ccea8 | 2022-07-28T14:29:11.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | AnonymousSub | null | AnonymousSub/recipes-roberta-base-tokenwise-token-and-step-losses_with_pos_with_ingr | 8 | null | transformers | 13,716 | Entry not found |
yanaiela/roberta-base-epoch_7 | b8c02ae9517e4c79fa8d5b953d7b6ea852d42029 | 2022-07-29T22:43:03.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_7",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_7 | 8 | null | transformers | 13,717 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_7
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 7
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_7.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_7', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_8 | 3858deaac0b880781bd4081e7d58948adb2ce518 | 2022-07-29T22:43:21.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_8",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_8 | 8 | null | transformers | 13,718 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_8
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 8
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_8.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_8', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_9 | dc630f5c1339f85d023aaa4e064b897f1d641f1f | 2022-07-29T22:43:40.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_9",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_9 | 8 | null | transformers | 13,719 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_9
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 9
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_9.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_9', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_12 | d4d3bec39c89269528866c2130f2e8c14fcce7f3 | 2022-07-29T22:44:35.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_12",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_12 | 8 | null | transformers | 13,720 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_12
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 12
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_12.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_12', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_13 | 6ce2d478c75145bd1aba91ed420acdbf5f5c44a7 | 2022-07-29T22:44:53.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_13",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_13 | 8 | null | transformers | 13,721 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_13
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 13
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_13.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_13', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_21 | d3165ab69e03a80a912ed30ccbb14c7f60b7ba75 | 2022-07-29T22:47:23.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_21",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_21 | 8 | null | transformers | 13,722 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_21
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 21
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_21.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_21', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_22 | 19758c14715a05091af3f6301440ca4afc4f71f7 | 2022-07-29T22:47:41.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_22",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_22 | 8 | null | transformers | 13,723 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_22
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 22
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_22.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_22', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_23 | 1d6f7dfbe2fce4c5a39cdc6abab455b09e56cf36 | 2022-07-29T22:48:00.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_23",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_23 | 8 | null | transformers | 13,724 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_23
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 23
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_23.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_23', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_25 | 5698c745594bc421488c3d4f23aedadef966a303 | 2022-07-29T22:48:37.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_25",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_25 | 8 | null | transformers | 13,725 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_25
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 25
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_25.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_25', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_26 | 82f02aa1fbad842fba224dfa6926e7500d136dd8 | 2022-07-29T22:48:56.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_26",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_26 | 8 | null | transformers | 13,726 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_26
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 26
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_26.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_26', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_29 | 548b6ba809c009626fd716d33fc1933a9e83901a | 2022-07-29T22:49:52.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_29",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_29 | 8 | null | transformers | 13,727 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_29
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 29
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_29.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_29', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_31 | 3d5096d7c91b741f4b180d42609c1fc063c37303 | 2022-07-29T22:50:29.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_31",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_31 | 8 | null | transformers | 13,728 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_31
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 31
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_31.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_31', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_32 | 159f49c8a016e6dc049cef54148da0d9733d7898 | 2022-07-29T22:50:47.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_32",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_32 | 8 | null | transformers | 13,729 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_32
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 32
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_32.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_32', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
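Since every epoch of this run is released, a natural use is to contrast an intermediate checkpoint with the final one. The sketch below is illustrative (the prompt is arbitrary; epoch 83 is the last released checkpoint of this series) and compares the top fillers predicted at epoch 32 and at the end of training:
```python
from transformers import pipeline

prompt = "Hello, I'm the <mask> RoBERTa-base language model"

# One fill-mask pipeline for this checkpoint, one for the final checkpoint.
checkpoints = ["yanaiela/roberta-base-epoch_32", "yanaiela/roberta-base-epoch_83"]
for name in checkpoints:
    fill = pipeline("fill-mask", model=name, device=-1, top_k=5)
    top = [pred["token_str"].strip() for pred in fill(prompt)]
    print(f"{name}: {top}")
```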
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_34 | 6e2b1ee5c944e6f59e242b5086ff19b1ce3165f5 | 2022-07-29T22:51:23.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_34",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_34 | 8 | null | transformers | 13,730 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_34
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 34
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_34.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_34', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_35 | 9e3f02aa07d75c59f8ed215fee4602342ce6e1fa | 2022-07-29T22:51:43.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_35",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_35 | 8 | null | transformers | 13,731 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_35
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 35
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_35.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_35', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_38 | dabf28671d89a757e944ab7c912bd2c473698541 | 2022-07-29T22:52:44.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_38",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_38 | 8 | null | transformers | 13,732 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_38
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 38
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_38.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_38', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_39 | a71b1353a1a017c9909bd4afe1ce4ad9a0eaefad | 2022-07-29T22:53:02.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_39",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_39 | 8 | null | transformers | 13,733 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_39
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 39
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_39.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_39', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_41 | b9f8a471190142f3e3f538ba6e3fb800533699d7 | 2022-07-29T22:53:49.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_41",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_41 | 8 | null | transformers | 13,734 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_41
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 41
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_41.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_41', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
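Beyond mask filling, intermediate checkpoints are often probed through their hidden states. The following is a small sketch (the sentence and the mean-pooling choice are illustrative assumptions) of extracting a fixed-size representation from this checkpoint:
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yanaiela/roberta-base-epoch_41")
encoder = AutoModel.from_pretrained("yanaiela/roberta-base-epoch_41")
encoder.eval()

inputs = tokenizer("A sentence to encode with the epoch-41 checkpoint.", return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # (batch, sequence, hidden)

# Mean-pool over the token dimension as a simple sentence embedding for probing.
embedding = hidden.mean(dim=1).squeeze(0)
print(embedding.shape)  # torch.Size([768]) for a base-size model
```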
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_42 | 606ea1ac0d74fd94208ed15df9b101ccdb5124d3 | 2022-07-29T22:54:14.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_42",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_42 | 8 | null | transformers | 13,735 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_42
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 42
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_42.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_42', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_43 | 52426560248c86b8f70b82289d813786b6786401 | 2022-07-29T22:54:43.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_43",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_43 | 8 | null | transformers | 13,736 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_43
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 43
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_43.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_43', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_44 | 1a7224a2e2f51c94775f55b8b881be0fefa6280a | 2022-07-29T22:55:07.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_44",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_44 | 8 | null | transformers | 13,737 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_44
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 44
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_44.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_44', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_45 | 4448c358ac6f9ff46c452eee4367f610658546f9 | 2022-07-29T22:55:32.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_45",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_45 | 8 | null | transformers | 13,738 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_45
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 45
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_45.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_45', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_47 | 90803af953c805b5e29e04d469c359cb0d34e4c3 | 2022-07-29T22:56:19.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_47",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_47 | 8 | null | transformers | 13,739 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_47
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 47
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_47.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_47', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_48 | cd29142925a358e3f843ee82d89173bc8fdc5434 | 2022-07-29T22:56:44.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_48",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_48 | 8 | null | transformers | 13,740 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_48
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 48
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_48.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_48', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_49 | 4e40651e2af76a7a2b26935f2ed7db8b320c15d7 | 2022-07-29T22:57:07.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_49",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_49 | 8 | null | transformers | 13,741 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_49
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 49
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_49.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_49', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
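The fill-mask pipeline can also score a fixed set of candidate fillers through its `targets` argument, which is convenient when tracking how the probability of particular words evolves over training. A small sketch (the candidate words are chosen purely for illustration; with a byte-level BPE tokenizer such as RoBERTa's, each target may be matched to its first sub-token):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="yanaiela/roberta-base-epoch_49", device=-1)

# Score only these hand-picked candidates for the masked position.
candidates = ["best", "new", "only", "first"]
for pred in fill("Hello, I'm the <mask> RoBERTa-base language model", targets=candidates):
    print(f"{pred['token_str']!r}: {pred['score']:.4f}")
```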
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_50 | 3d4ea171b3b83cfacc70ea9183daf643878bc5f4 | 2022-07-29T22:57:31.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_50",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_50 | 8 | null | transformers | 13,742 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_50
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 50
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_50.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_50', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
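The pipeline also accepts a batch of prompts, which keeps scripts that sweep over many probe sentences short. A brief sketch (both prompts are illustrative):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="yanaiela/roberta-base-epoch_50", device=-1, top_k=3)

prompts = [
    "Hello, I'm the <mask> RoBERTa-base language model",
    "Paris is the <mask> of France",
]
# A list input returns one list of predictions per prompt.
for prompt, preds in zip(prompts, fill(prompts)):
    print(prompt, "->", [p["token_str"].strip() for p in preds])
```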
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_51 | 13c7790739af893294996df4eb0277fe3568a4a5 | 2022-07-29T22:57:57.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_51",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_51 | 8 | null | transformers | 13,743 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_51
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 51
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_51.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_51', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_53 | 3ab0ef0c7629e9a401f84b9d1de6a9838d10572d | 2022-07-29T22:58:46.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_53",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_53 | 8 | null | transformers | 13,744 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_53
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 53
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_53.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_53', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_55 | 45cc9589eca76c0502ce3322e7e1c1bcebec31b6 | 2022-07-29T22:59:33.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_55",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_55 | 8 | null | transformers | 13,745 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_55
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 55
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_55.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_55', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_56 | 3a163de35010a06cd14994f5f4040cc929fbb170 | 2022-07-29T22:59:56.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_56",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_56 | 8 | null | transformers | 13,746 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_56
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 56
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_56.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_56', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_57 | 4238ba306123edffa591a984c9dff3c09fc1aa48 | 2022-07-29T23:00:18.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_57",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_57 | 8 | null | transformers | 13,747 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_57
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 57
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, as well as other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_57.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_57', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_58 | 6c7edd6da39cece041657f1c8a314354e15f87fa | 2022-07-29T23:00:40.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_58",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_58 | 8 | null | transformers | 13,748 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_58
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 58
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, and other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_58.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_58', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_60 | 9f8ef1a93450e66daba56e2e0013f4077b5a037b | 2022-07-29T23:01:22.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_60",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_60 | 8 | null | transformers | 13,749 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_60
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 60
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, and other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_60.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_60', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_61 | f9495462a420c0281048673c906e63956acfcaa4 | 2022-07-29T23:01:44.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_61",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_61 | 8 | null | transformers | 13,750 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_61
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 61
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, and other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_61.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_61', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_62 | 9428053a86008fd75a5334a4bec4dadf9d203f4d | 2022-07-29T23:02:06.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_62",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_62 | 8 | null | transformers | 13,751 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_62
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 62
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, and other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_62.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_62', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_63 | fb9119de5fac7affb990a785fb1cf8899e8ea0c3 | 2022-07-29T23:02:25.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_63",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_63 | 8 | null | transformers | 13,752 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_63
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 63
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, and other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_63.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_63', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_64 | 1e07ebbbd7849a6fd505a14779164a301b57fda8 | 2022-07-29T23:02:45.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_64",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_64 | 8 | null | transformers | 13,753 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_64
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 64
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, and other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_64.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_64', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_66 | dd1039bcc464564afeb26db0054ef1cfc0d91bdf | 2022-07-29T23:03:37.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_66",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_66 | 8 | null | transformers | 13,754 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_66
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 66
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, and other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_66.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_66', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_67 | 39053968c0db3050ce9cbccea0d3457fc6cecddc | 2022-07-29T23:04:02.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_67",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_67 | 8 | null | transformers | 13,755 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_67
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 67
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, and other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_67.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_67', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_68 | adfc1925cdff27d09a90a906ef3638033695b1aa | 2022-07-29T23:04:30.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_68",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_68 | 8 | null | transformers | 13,756 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_68
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 68
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, and other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_68.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_68', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_69 | 699058954f32af883b0f7e246896b8d4ca7e0b44 | 2022-07-29T23:04:53.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_69",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_69 | 8 | null | transformers | 13,757 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_69
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 69
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, and other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_69.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_69', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_70 | a9ce1461a01c8d869f9495d32d9f435054010498 | 2022-07-29T23:05:14.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_70",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_70 | 8 | null | transformers | 13,758 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_70
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 70
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, and other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_70.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_70', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_73 | 644516254b8b2f33f76985cda2dfc9fd61d3aad4 | 2022-07-29T23:06:21.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_73",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_73 | 8 | null | transformers | 13,759 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_73
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 73
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, and other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_73.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_73', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_74 | 67daf182905e9dbf318e19840425550d85413b0c | 2022-07-29T23:06:45.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_74",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_74 | 8 | null | transformers | 13,760 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_74
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 74
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, and other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_74.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_74', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_77 | 9b141849d517a56e5c653e4e5316337d61f851fd | 2022-07-29T23:07:53.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_77",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_77 | 8 | null | transformers | 13,761 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_77
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 77
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, and other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_77.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_77', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_78 | 58910ba1d5899437fa9d4ffa5e06e45be0629764 | 2022-07-29T23:08:15.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_78",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_78 | 8 | null | transformers | 13,762 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_78
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 78
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, and other possible use cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_78.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_78', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
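
Since all checkpoints in this collection share the same tokenizer, they can be compared on the same prompt to probe training dynamics. The sketch below is an illustration, not part of the original card:

```python
from transformers import pipeline

prompt = "Hello, I'm the <mask> RoBERTa-base language model"

# Compare an earlier and a later intermediate checkpoint on the same prompt.
for checkpoint in ["yanaiela/roberta-base-epoch_57", "yanaiela/roberta-base-epoch_78"]:
    fill = pipeline("fill-mask", model=checkpoint, device=-1, top_k=3)
    print(checkpoint, [pred["token_str"] for pred in fill(prompt)])
```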
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
BitnaKeum/experiment_14-1 | bbccf872bbc6c6590e6ea20aaa6fc58f50d95e01 | 2022-07-29T08:48:24.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | BitnaKeum | null | BitnaKeum/experiment_14-1 | 8 | null | transformers | 13,763 | Entry not found |
123abhiALFLKFO/distilbert-base-uncased-finetuned-cola | 22e60ac571915fa1fa5e79c8f8804565cc07fd69 | 2021-08-05T08:57:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | false | 123abhiALFLKFO | null | 123abhiALFLKFO/distilbert-base-uncased-finetuned-cola | 7 | null | transformers | 13,764 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model_index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metric:
name: Matthews Correlation
type: matthews_correlation
value: 0.5331291095663535
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8628
- Matthews Correlation: 0.5331
## Model description
More information needed
## Intended uses & limitations
More information needed
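
Not part of the original card: a minimal usage sketch for this checkpoint via the `text-classification` pipeline. Unless the config maps them, the CoLA classes surface as generic `LABEL_0`/`LABEL_1` (unacceptable/acceptable in GLUE's convention):

```python
from transformers import pipeline

# Score the grammatical acceptability of an English sentence (CoLA task).
classifier = pipeline(
    "text-classification",
    model="123abhiALFLKFO/distilbert-base-uncased-finetuned-cola",
)
print(classifier("The book was written by the author."))
```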
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5253 | 1.0 | 535 | 0.5214 | 0.3943 |
| 0.3459 | 2.0 | 1070 | 0.5551 | 0.4693 |
| 0.2326 | 3.0 | 1605 | 0.6371 | 0.5059 |
| 0.1718 | 4.0 | 2140 | 0.7851 | 0.5111 |
| 0.1262 | 5.0 | 2675 | 0.8628 | 0.5331 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
Aleksandar1932/gpt2-pop | 42611e2ae71642fcf7d87009974c0745c209b591 | 2022-03-18T22:53:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Aleksandar1932 | null | Aleksandar1932/gpt2-pop | 7 | null | transformers | 13,765 | Entry not found |
AlexN/xls-r-300m-fr-0 | 744e07eb619322dfff199360acd1825af178f1f6 | 2022-03-24T11:54:27.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | AlexN | null | AlexN/xls-r-300m-fr-0 | 7 | null | transformers | 13,766 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: xls-r-300m-fr
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0 fr
type: mozilla-foundation/common_voice_8_0
args: fr
metrics:
- name: Test WER
type: wer
value: 36.81
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: fr
metrics:
- name: Test WER
type: wer
value: 35.55
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: fr
metrics:
- name: Test WER
type: wer
value: 39.94
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-fr-0
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2388
- Wer: 0.3681
## Model description
More information needed
## Intended uses & limitations
More information needed
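
Not part of the original card: a minimal transcription sketch. It assumes 16 kHz mono audio and the standard `automatic-speech-recognition` pipeline; `audio.wav` is a placeholder path, and decoding a local file requires ffmpeg:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="AlexN/xls-r-300m-fr-0")

# Transcribe a French recording (placeholder path, 16 kHz mono expected).
print(asr("audio.wav")["text"])
```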
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.3748 | 0.07 | 500 | 3.8784 | 1.0 |
| 2.8068 | 0.14 | 1000 | 2.8289 | 0.9826 |
| 1.6698 | 0.22 | 1500 | 0.8811 | 0.7127 |
| 1.3488 | 0.29 | 2000 | 0.5166 | 0.5369 |
| 1.2239 | 0.36 | 2500 | 0.4105 | 0.4741 |
| 1.1537 | 0.43 | 3000 | 0.3585 | 0.4448 |
| 1.1184 | 0.51 | 3500 | 0.3336 | 0.4292 |
| 1.0968 | 0.58 | 4000 | 0.3195 | 0.4180 |
| 1.0737 | 0.65 | 4500 | 0.3075 | 0.4141 |
| 1.0677 | 0.72 | 5000 | 0.3015 | 0.4089 |
| 1.0462 | 0.8 | 5500 | 0.2971 | 0.4077 |
| 1.0392 | 0.87 | 6000 | 0.2870 | 0.3997 |
| 1.0178 | 0.94 | 6500 | 0.2805 | 0.3963 |
| 0.992 | 1.01 | 7000 | 0.2748 | 0.3935 |
| 1.0197 | 1.09 | 7500 | 0.2691 | 0.3884 |
| 1.0056 | 1.16 | 8000 | 0.2682 | 0.3889 |
| 0.9826 | 1.23 | 8500 | 0.2647 | 0.3868 |
| 0.9815 | 1.3 | 9000 | 0.2603 | 0.3832 |
| 0.9717 | 1.37 | 9500 | 0.2561 | 0.3807 |
| 0.9605 | 1.45 | 10000 | 0.2523 | 0.3783 |
| 0.96 | 1.52 | 10500 | 0.2494 | 0.3788 |
| 0.9442 | 1.59 | 11000 | 0.2478 | 0.3760 |
| 0.9564 | 1.66 | 11500 | 0.2454 | 0.3733 |
| 0.9436 | 1.74 | 12000 | 0.2439 | 0.3747 |
| 0.938 | 1.81 | 12500 | 0.2411 | 0.3716 |
| 0.9353 | 1.88 | 13000 | 0.2397 | 0.3698 |
| 0.9271 | 1.95 | 13500 | 0.2388 | 0.3681 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
AlexN/xls-r-300m-fr | a7e64048a441cd015b3168e0ace83c8d6ecfa049 | 2022-03-23T18:32:43.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"model-index"
]
| automatic-speech-recognition | false | AlexN | null | AlexN/xls-r-300m-fr | 7 | 1 | transformers | 13,767 | ---
language:
- fr
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: xls-r-300m-fr
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0 fr
type: mozilla-foundation/common_voice_8_0
args: fr
metrics:
- name: Test WER
type: wer
value: 21.58
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: fr
metrics:
- name: Test WER
type: wer
value: 36.03
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: fr
metrics:
- name: Test WER
type: wer
value: 38.86
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-fr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
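
Not part of the original card: a sketch of loading the checkpoint directly with the Wav2Vec2 classes, assuming the repository ships a processor and that `speech` is a 1-D float array sampled at 16 kHz:

```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("AlexN/xls-r-300m-fr")
model = Wav2Vec2ForCTC.from_pretrained("AlexN/xls-r-300m-fr")

def transcribe(speech):
    # `speech` is assumed to be a 1-D float array sampled at 16 kHz.
    inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(ids)[0]
```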
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2700
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Alexander-Learn/bert-finetuned-ner-accelerate | 6ee59b8a7c2d378653f972793eb895be41f217ae | 2022-01-28T09:54:04.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Alexander-Learn | null | Alexander-Learn/bert-finetuned-ner-accelerate | 7 | null | transformers | 13,768 | Entry not found |
Alireza1044/albert-base-v2-mrpc | a12ece8e4cf73d08d4242e016e3ca4685555aff4 | 2021-07-26T11:46:48.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | false | Alireza1044 | null | Alireza1044/albert-base-v2-mrpc | 7 | null | transformers | 13,769 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model_index:
- name: mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metric:
name: F1
type: f1
value: 0.901060070671378
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mrpc
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4171
- Accuracy: 0.8627
- F1: 0.9011
- Combined Score: 0.8819
## Model description
More information needed
## Intended uses & limitations
More information needed
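
Not part of the original card: a paraphrase-scoring sketch using the tokenizer's sentence-pair encoding. Treating index 1 as the "equivalent" class follows the usual GLUE MRPC convention and is an assumption here:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Alireza1044/albert-base-v2-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("Alireza1044/albert-base-v2-mrpc")

s1 = "The company reported strong quarterly earnings."
s2 = "Quarterly earnings at the company were strong."

inputs = tokenizer(s1, s2, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# Probability that the two sentences are paraphrases (class index 1 assumed).
print(f"paraphrase probability: {probs[0, 1].item():.3f}")
```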
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
Andranik/TestPytorchClassification | 73b294c609be1d24afd9d9c22cf7532416b616a7 | 2022-02-17T12:49:56.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | Andranik | null | Andranik/TestPytorchClassification | 7 | null | transformers | 13,770 | Entry not found |
Andrija/SRoBERTa-XL | 3d6a747ba0c4ee5777facf60a0124ce0fdd8584f | 2021-09-26T17:09:55.000Z | [
"pytorch",
"roberta",
"fill-mask",
"hr",
"sr",
"dataset:oscar",
"dataset:srwac",
"dataset:leipzig",
"dataset:cc100",
"dataset:hrwac",
"transformers",
"masked-lm",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Andrija | null | Andrija/SRoBERTa-XL | 7 | null | transformers | 13,771 | ---
datasets:
- oscar
- srwac
- leipzig
- cc100
- hrwac
language:
- hr
- sr
tags:
- masked-lm
widget:
- text: "Ovo je početak <mask>."
license: apache-2.0
---
# Transformer language model for Croatian and Serbian
Trained for one epoch (3 million steps) on 28 GB of data containing Croatian and Serbian text,
drawn from the Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr datasets.
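A minimal fill-mask sketch (not part of the original card) using the widget prompt above; the model comparison table follows below.

```python
from transformers import pipeline

# Complete the Croatian/Serbian prompt from the widget example.
fill = pipeline("fill-mask", model="Andrija/SRoBERTa-XL", top_k=5)
print(fill("Ovo je početak <mask>."))
```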
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `Andrija/SRoBERTa-XL` | 80M | Fourth | Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr (28 GB of text) | |
Ann2020/distilbert-base-uncased-finetuned-ner | 3282a339ef223c6d92a19a99bbf89034ac0d2ff2 | 2021-08-29T21:13:47.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | Ann2020 | null | Ann2020/distilbert-base-uncased-finetuned-ner | 7 | null | transformers | 13,772 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.984018301110458
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0609
- Precision: 0.9275
- Recall: 0.9365
- F1: 0.9320
- Accuracy: 0.9840
## Model description
More information needed
## Intended uses & limitations
More information needed
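
Not part of the original card: a minimal tagging sketch with the `token-classification` pipeline; `aggregation_strategy` assumes a reasonably recent Transformers release:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Ann2020/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```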
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2527 | 1.0 | 878 | 0.0706 | 0.9120 | 0.9181 | 0.9150 | 0.9803 |
| 0.0517 | 2.0 | 1756 | 0.0603 | 0.9174 | 0.9349 | 0.9261 | 0.9830 |
| 0.031 | 3.0 | 2634 | 0.0609 | 0.9275 | 0.9365 | 0.9320 | 0.9840 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
Anonymous/ReasonBERT-BERT | 75b5ad9f763b3ac650642a25c1004d145185c3c4 | 2021-05-23T02:33:35.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | Anonymous | null | Anonymous/ReasonBERT-BERT | 7 | null | transformers | 13,773 | Pre-trained to have better reasoning ability, try this if you are working with task like QA. For more details please see https://openreview.net/forum?id=cGB7CMFtrSx
This is based on the bert-base-uncased model and pre-trained for text input |
AnonymousSub/declutr-emanuals-s10-AR | 42aaecc6a813750cd5f07c28803b07dd6616fc70 | 2021-10-03T02:01:27.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/declutr-emanuals-s10-AR | 7 | null | transformers | 13,774 | Entry not found |
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1_wikiqa | 898dffa69a731f6dcf264fc5bf720b01bb106c1c | 2022-01-23T04:34:57.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1_wikiqa | 7 | null | transformers | 13,775 | Entry not found |
AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_1_wikiqa | 9ef82d233f44e4dee79d2565d7694e9a6e015847 | 2022-01-23T07:45:47.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_1_wikiqa | 7 | null | transformers | 13,776 | Entry not found |
Aruden/DialoGPT-medium-harrypotterall | e947bfe50a640a85de4e64f9f4dee58d8f3addf5 | 2021-09-21T13:19:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Aruden | null | Aruden/DialoGPT-medium-harrypotterall | 7 | null | transformers | 13,777 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
BertChristiaens/EmojiPredictor | 17a60bb1307e71da66acb77307c24b8ac18b58be | 2021-10-14T12:23:17.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | BertChristiaens | null | BertChristiaens/EmojiPredictor | 7 | null | transformers | 13,778 | Entry not found |
CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry | 2b68c93a6744f54e527fa3d98ef0af2678f45efb | 2021-10-17T12:10:17.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry | 7 | null | transformers | 13,779 | ---
language:
- ar
license: apache-2.0
widget:
- text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم'
---
# CAMeLBERT-Mix Poetry Classification Model
## Model description
**CAMeLBERT-Mix Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9937475919723511},
{'label': 'الكامل', 'score': 0.971284031867981}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy | 4a1c9d9bd73a3ce4a304987845f983efa51d66d0 | 2021-10-18T10:17:22.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy | 7 | null | transformers | 13,780 | ---
language:
- ar
license: apache-2.0
widget:
- text: 'عامل ايه ؟'
---
# CAMeLBERT-MSA POS-EGY Model
## Model description
**CAMeLBERT-MSA POS-EGY Model** is an Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA POS-EGY model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy')
>>> text = 'عامل ايه ؟'
>>> pos(text)
[{'entity': 'adj', 'score': 0.99979395, 'index': 1, 'word': 'عامل', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.998192, 'index': 2, 'word': 'ايه', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.99929804, 'index': 3, 'word': '؟', 'start': 9, 'end': 10}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAUKiel/JavaBERT-uncased | 316cba1749f5764c6689d82bec7bb41d7640d8fa | 2021-09-14T08:35:53.000Z | [
"pytorch",
"bert",
"fill-mask",
"java",
"code",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | CAUKiel | null | CAUKiel/JavaBERT-uncased | 7 | null | transformers | 13,781 | ---
language:
- java
- code
license: apache-2.0
widget:
- text: 'public [MASK] isOdd(Integer num){if (num % 2 == 0) {return "even";} else {return "odd";}}'
---
## JavaBERT
A BERT-like model pretrained on Java software code.
### Training Data
The model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A ```bert-base-uncased``` tokenizer is used by this model.
### Training Objective
An MLM (Masked Language Model) objective was used to train this model.
### Usage
```python
from transformers import pipeline
pipe = pipeline('fill-mask', model='CAUKiel/JavaBERT')
output = pipe(CODE) # Replace with Java code; Use '[MASK]' to mask tokens/words in the code.
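
# As a concrete illustration, reusing the masked snippet from the widget above:
example = 'public [MASK] isOdd(Integer num){if (num % 2 == 0) {return "even";} else {return "odd";}}'
print(pipe(example))  # prints the top fill-mask candidates for the masked return type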
``` |
CLTL/icf-levels-att | b39d79785a02773c015174fabf73a4f33ab0932f | 2021-11-08T10:24:29.000Z | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | false | CLTL | null | CLTL/icf-levels-att | 7 | 1 | transformers | 13,782 | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Attention Functioning Levels (ICF b140)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing attention functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about attention functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with concentrating / directing / holding / dividing attention.
3 | Slight problem with concentrating / directing / holding / dividing attention for a longer period of time or for complex tasks.
2 | Can concentrate / direct / hold / divide attention only for a short time.
1 | Can barely concentrate / direct / hold / divide attention.
0 | Unable to concentrate / direct / hold / divide attention.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np  # needed for np.squeeze below

from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-att',
use_cuda=False,
)
example = 'Snel afgeleid, moeite aandacht te behouden.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
2.89
```
The raw outputs look like this:
```
[[2.89226103]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.99 | 1.03
mean squared error | 1.35 | 1.47
root mean squared error | 1.16 | 1.21
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
Capreolus/birch-bert-large-msmarco_mb | e4395c625ce2cb0ff56c68f9df8000bd513c39c5 | 2021-05-18T17:43:33.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"next-sentence-prediction",
"transformers"
]
| null | false | Capreolus | null | Capreolus/birch-bert-large-msmarco_mb | 7 | null | transformers | 13,783 | Entry not found |
Chakita/gpt2_mwp | 9611727df9189fdb028398fd7d790ec4b435ce28 | 2022-01-26T13:27:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Chakita | null | Chakita/gpt2_mwp | 7 | null | transformers | 13,784 | Entry not found |
Cloudy/DialoGPT-CJ-large | 479ccee9d5e059ad3d778a85b3ac0057477db343 | 2021-09-03T18:45:18.000Z | [
"pytorch",
"conversational"
]
| conversational | false | Cloudy | null | Cloudy/DialoGPT-CJ-large | 7 | null | null | 13,785 | ---
tags:
- conversational
--- |
CouchCat/ma_mlc_v7_distil | bf9ab34a52ad40681b99433685d08680890962b4 | 2021-02-17T08:17:07.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers",
"multi-label",
"license:mit"
]
| text-classification | false | CouchCat | null | CouchCat/ma_mlc_v7_distil | 7 | null | transformers | 13,786 | ---
language: en
license: mit
tags:
- multi-label
widget:
- text: "I would like to return these pants and shoes"
---
### Description
A multi-label text classification model trained on customer feedback data using DistilBERT.
Possible labels are:
- Delivery (delivery status, time of arrival, etc.)
- Return (return confirmation, return label requests, etc.)
- Product (quality, complaint, etc.)
- Monetary (pending transactions, refund, etc.)
### Usage
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_mlc_v7_distil")
model = AutoModelForSequenceClassification.from_pretrained("CouchCat/ma_mlc_v7_distil")
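
# Hedged inference sketch (not part of the original card): the label names come from
# model.config.id2label, and sigmoid is assumed because the task is multi-label.
import torch

inputs = tokenizer("I would like to return these pants and shoes", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})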
``` |
Crasher222/kaggle-comp-test | 8a1b24c9ea55ca4aefda6da2a2e5b2a4bbec687e | 2021-10-24T11:40:04.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:Crasher222/autonlp-data-kaggle-test",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | Crasher222 | null | Crasher222/kaggle-comp-test | 7 | null | transformers | 13,787 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Crasher222/autonlp-data-kaggle-test
co2_eq_emissions: 60.744727079482495
---
# Model Fine-tuned from BERT-base
- Problem type: Multi-class Classification
- Model ID: 25805800
## Validation Metrics
- Loss: 0.4422711133956909
- Accuracy: 0.8615328555811976
- Macro F1: 0.8642434650461513
- Micro F1: 0.8615328555811976
- Weighted F1: 0.8617743626671308
- Macro Precision: 0.8649112225076049
- Micro Precision: 0.8615328555811976
- Weighted Precision: 0.8625407179375096
- Macro Recall: 0.8640777539828228
- Micro Recall: 0.8615328555811976
- Weighted Recall: 0.8615328555811976
## Usage
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Crasher222/kaggle-comp-test")
tokenizer = AutoTokenizer.from_pretrained("Crasher222/kaggle-comp-test")
inputs = tokenizer("I am in love with you", return_tensors="pt")
outputs = model(**inputs)
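
# Hedged addition (not part of the original card): turn the logits into a predicted
# label; the label names are whatever the AutoNLP job assigned (see model.config.id2label).
import torch

predicted_class = torch.softmax(outputs.logits, dim=-1).argmax(dim=-1).item()
print(model.config.id2label[predicted_class])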
``` |
DeadBeast/mbert-base-cased-finetuned-bengali-fakenews | ddac3dee4e27c3ee4c08e528c7bbcb3345c90ca4 | 2021-08-15T14:36:05.000Z | [
"pytorch",
"bert",
"text-classification",
"bengali",
"dataset:BanFakeNews",
"transformers",
"license:apache-2.0"
]
| text-classification | false | DeadBeast | null | DeadBeast/mbert-base-cased-finetuned-bengali-fakenews | 7 | 1 | transformers | 13,788 | ---
language: bengali
license: apache-2.0
datasets:
- BanFakeNews
---
# **mBERT-base-cased-finetuned-bengali-fakenews**
This model is a fine-tuned checkpoint of mBERT-base-cased, trained on the **[Bengali-fake-news Dataset](https://www.kaggle.com/cryptexcode/banfakenews)** for text classification. It reaches an accuracy of 96.3 with an F1-score of 79.1 on the dev set.
### **How to use?**
**Task**: binary-classification
- LABEL_1: Authentic (*Authentic means news is authentic*)
- LABEL_0: Fake (*Fake means news is fake*)
```
from transformers import pipeline
print(pipeline("sentiment-analysis",model="DeadBeast/mbert-base-cased-finetuned-bengali-fakenews",tokenizer="DeadBeast/mbert-base-cased-finetuned-bengali-fakenews")("অভিনেতা আফজাল শরীফকে ২০ লাখ টাকার অনুদান অসুস্থ অভিনেতা আফজাল শরীফকে চিকিৎসার জন্য ২০ লাখ টাকা অনুদান দিয়েছেন প্রধানমন্ত্রী শেখ হাসিনা।"))
``` |
DeepPavlov/marianmt-tatoeba-enru | d4fad4dde6c6512e84a9b80fc03af5b30957a3ad | 2021-12-20T09:42:55.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | DeepPavlov | null | DeepPavlov/marianmt-tatoeba-enru | 7 | null | transformers | 13,789 | Entry not found |
Doogie/wav2vec2-base-timit-demo-colab | 6322c09705b7b5bf115956d2be94cad3a063568c | 2021-11-16T00:41:40.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Doogie | null | Doogie/wav2vec2-base-timit-demo-colab | 7 | null | transformers | 13,790 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4180
- Wer: 0.3392
## Model description
More information needed
## Intended uses & limitations
More information needed
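The card does not include inference code; below is a hedged sketch of the standard Wav2Vec2 CTC decoding pattern (the audio file name and the 16 kHz mono assumption are ours):

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("Doogie/wav2vec2-base-timit-demo-colab")
model = Wav2Vec2ForCTC.from_pretrained("Doogie/wav2vec2-base-timit-demo-colab")

# Hypothetical file; the model expects 16 kHz mono speech.
speech, sample_rate = sf.read("example.wav")
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```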
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.656 | 4.0 | 500 | 1.8973 | 1.0130 |
| 0.8647 | 8.0 | 1000 | 0.4667 | 0.4705 |
| 0.2968 | 12.0 | 1500 | 0.4211 | 0.4035 |
| 0.1719 | 16.0 | 2000 | 0.4725 | 0.3739 |
| 0.1272 | 20.0 | 2500 | 0.4586 | 0.3543 |
| 0.1079 | 24.0 | 3000 | 0.4356 | 0.3484 |
| 0.0808 | 28.0 | 3500 | 0.4180 | 0.3392 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Duugu/jakebot3000 | 2471bf7c0107cb6362b6fdee18a40810dd649f02 | 2021-09-08T23:13:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Duugu | null | Duugu/jakebot3000 | 7 | null | transformers | 13,791 | ---
tags:
- conversational
---
# My Awesome Model |
EColi/sponsorblock-base-v1 | e1436971b0ad41920ad61198bb7315e3f9d97fc3 | 2022-01-24T17:23:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | EColi | null | EColi/sponsorblock-base-v1 | 7 | 1 | transformers | 13,792 | ---
tags:
- generated_from_trainer
model-index:
- name: out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# out
This model is a fine-tuned version of [/1TB_SSD/SB_AI/out_epoch1/out/checkpoint-1115000/](https://huggingface.co//1TB_SSD/SB_AI/out_epoch1/out/checkpoint-1115000/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0645
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 2518227880
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 0.0867 | 0.07 | 75000 | 0.0742 |
| 0.0783 | 0.13 | 150000 | 0.0695 |
| 0.0719 | 0.2 | 225000 | 0.0732 |
| 0.0743 | 0.27 | 300000 | 0.0663 |
| 0.0659 | 0.34 | 375000 | 0.0686 |
| 0.0664 | 0.4 | 450000 | 0.0683 |
| 0.0637 | 0.47 | 525000 | 0.0680 |
| 0.0655 | 0.54 | 600000 | 0.0641 |
| 0.0676 | 0.6 | 675000 | 0.0644 |
| 0.0704 | 0.67 | 750000 | 0.0645 |
| 0.0687 | 0.74 | 825000 | 0.0610 |
| 0.059 | 0.81 | 900000 | 0.0652 |
| 0.0666 | 0.87 | 975000 | 0.0619 |
| 0.0624 | 0.94 | 1050000 | 0.0619 |
| 0.0625 | 1.01 | 1125000 | 0.0667 |
| 0.0614 | 1.03 | 1150000 | 0.0658 |
| 0.0597 | 1.05 | 1175000 | 0.0683 |
| 0.0629 | 1.07 | 1200000 | 0.0691 |
| 0.0603 | 1.1 | 1225000 | 0.0678 |
| 0.0601 | 1.12 | 1250000 | 0.0746 |
| 0.0606 | 1.14 | 1275000 | 0.0691 |
| 0.0671 | 1.16 | 1300000 | 0.0702 |
| 0.0625 | 1.19 | 1325000 | 0.0661 |
| 0.0617 | 1.21 | 1350000 | 0.0688 |
| 0.0579 | 1.23 | 1375000 | 0.0679 |
| 0.0663 | 1.25 | 1400000 | 0.0634 |
| 0.0583 | 1.28 | 1425000 | 0.0638 |
| 0.0623 | 1.3 | 1450000 | 0.0681 |
| 0.0615 | 1.32 | 1475000 | 0.0670 |
| 0.0592 | 1.34 | 1500000 | 0.0666 |
| 0.0626 | 1.37 | 1525000 | 0.0666 |
| 0.063 | 1.39 | 1550000 | 0.0647 |
| 0.0648 | 1.41 | 1575000 | 0.0653 |
| 0.0611 | 1.43 | 1600000 | 0.0700 |
| 0.0622 | 1.46 | 1625000 | 0.0634 |
| 0.0617 | 1.48 | 1650000 | 0.0651 |
| 0.0613 | 1.5 | 1675000 | 0.0634 |
| 0.0639 | 1.52 | 1700000 | 0.0661 |
| 0.0615 | 1.54 | 1725000 | 0.0644 |
| 0.0605 | 1.57 | 1750000 | 0.0662 |
| 0.0622 | 1.59 | 1775000 | 0.0656 |
| 0.0585 | 1.61 | 1800000 | 0.0633 |
| 0.0628 | 1.63 | 1825000 | 0.0625 |
| 0.0638 | 1.66 | 1850000 | 0.0662 |
| 0.0599 | 1.68 | 1875000 | 0.0664 |
| 0.0583 | 1.7 | 1900000 | 0.0668 |
| 0.0543 | 1.72 | 1925000 | 0.0631 |
| 0.06 | 1.75 | 1950000 | 0.0629 |
| 0.0615 | 1.77 | 1975000 | 0.0644 |
| 0.0587 | 1.79 | 2000000 | 0.0663 |
| 0.0647 | 1.81 | 2025000 | 0.0654 |
| 0.0604 | 1.84 | 2050000 | 0.0639 |
| 0.0641 | 1.86 | 2075000 | 0.0636 |
| 0.0604 | 1.88 | 2100000 | 0.0636 |
| 0.0654 | 1.9 | 2125000 | 0.0652 |
| 0.0588 | 1.93 | 2150000 | 0.0638 |
| 0.0616 | 1.95 | 2175000 | 0.0657 |
| 0.0598 | 1.97 | 2200000 | 0.0646 |
| 0.0633 | 1.99 | 2225000 | 0.0645 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Emirhan/51k-finetuned-bert-model | 388f85944f1b1e14e27236ea44b590324b3fb8ff | 2021-06-04T17:35:07.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Emirhan | null | Emirhan/51k-finetuned-bert-model | 7 | null | transformers | 13,793 | Entry not found |
EthanChen0418/seven-classed-domain-cls | 19cdf88e8cbbab1e5a4876ba27b5f2636b603903 | 2021-08-26T07:05:04.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
]
| text-classification | false | EthanChen0418 | null | EthanChen0418/seven-classed-domain-cls | 7 | null | transformers | 13,794 | Entry not found |
EthanChen0418/six-classed-domain-cls | ded28cac5201fa5c6206134b4ba9141110792d37 | 2021-08-21T17:25:56.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
]
| text-classification | false | EthanChen0418 | null | EthanChen0418/six-classed-domain-cls | 7 | null | transformers | 13,795 | Entry not found |
Fauzan/autonlp-judulberita-32517788 | 93beb34a46b5cdee79e82440fa936500cc58271c | 2021-11-13T15:12:57.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:Fauzan/autonlp-data-judulberita",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | Fauzan | null | Fauzan/autonlp-judulberita-32517788 | 7 | null | transformers | 13,796 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Fauzan/autonlp-data-judulberita
co2_eq_emissions: 0.9413042739759596
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 32517788
- CO2 Emissions (in grams): 0.9413042739759596
## Validation Metrics
- Loss: 0.32112351059913635
- Accuracy: 0.8641304347826086
- Precision: 0.8055555555555556
- Recall: 0.8405797101449275
- AUC: 0.9493383742911153
- F1: 0.8226950354609929
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Fauzan/autonlp-judulberita-32517788
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Fauzan/autonlp-judulberita-32517788", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Fauzan/autonlp-judulberita-32517788", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
Fiddi/distilbert-base-uncased-finetuned-ner | e7cdeec3384018959d1468961a46ebedc4228290 | 2021-10-10T20:08:19.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Fiddi | null | Fiddi/distilbert-base-uncased-finetuned-ner | 7 | null | transformers | 13,797 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9290544285555925
- name: Recall
type: recall
value: 0.9375769101689228
- name: F1
type: f1
value: 0.9332962138084633
- name: Accuracy
type: accuracy
value: 0.9841136193940935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0604
- Precision: 0.9291
- Recall: 0.9376
- F1: 0.9333
- Accuracy: 0.9841
## Model description
More information needed
## Intended uses & limitations
More information needed
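No usage snippet is included in the card; as a hedged sketch, the model should work with the standard token-classification pipeline (the example sentence is ours):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Fiddi/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```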
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2412 | 1.0 | 878 | 0.0688 | 0.9178 | 0.9246 | 0.9212 | 0.9815 |
| 0.0514 | 2.0 | 1756 | 0.0608 | 0.9251 | 0.9344 | 0.9298 | 0.9832 |
| 0.0304 | 3.0 | 2634 | 0.0604 | 0.9291 | 0.9376 | 0.9333 | 0.9841 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
FitoDS/wav2vec2-large-xls-r-300m-spanish-large | ed2e99172ac80d3939795e8570f920e465b20161 | 2022-02-05T14:06:34.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | FitoDS | null | FitoDS/wav2vec2-large-xls-r-300m-spanish-large | 7 | null | transformers | 13,798 | Entry not found |
GPL/fiqa-tsdae-msmarco-distilbert-margin-mse | 64a1401e55ff49a6cd3d9bb311f33ef141220e33 | 2022-04-19T16:47:51.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | GPL | null | GPL/fiqa-tsdae-msmarco-distilbert-margin-mse | 7 | null | transformers | 13,799 | Entry not found |