modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8-Tr-med-CommonVoice8 | 2fe4b18a4420f00dbac392d31cd969d24f76b1af | 2022-02-10T22:57:52.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | emre | null | emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8-Tr-med-CommonVoice8 | 2 | null | transformers | 23,900 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Tr-med-CommonVoice8-Tr-med-CommonVoice8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Tr-med-CommonVoice8-Tr-med-CommonVoice8
This model is a fine-tuned version of [emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8](https://huggingface.co/emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2708
- Wer: 0.5010
## Model description
More information needed
## Intended uses & limitations
More information needed
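Although usage is not documented here yet, the tags indicate a wav2vec2 CTC checkpoint that should work with the standard `transformers` ASR pipeline. The sketch below is an assumption based on those tags, not code from the original card; `sample.wav` is a placeholder for any 16 kHz Turkish recording.
```python
from transformers import pipeline

# Hypothetical usage sketch: assumes the checkpoint behaves like other wav2vec2
# CTC fine-tunes; "sample.wav" is a placeholder for a 16 kHz Turkish audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8-Tr-med-CommonVoice8",
)
print(asr("sample.wav")["text"])
```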
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after this list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 5
- mixed_precision_training: Native AMP
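The hyperparameters above map roughly onto the `TrainingArguments` shown below. This is a hedged sketch rather than the exact training script: the output directory is a placeholder, and the total train batch size of 32 is simply 16 × 2 gradient-accumulation steps.
```python
from transformers import TrainingArguments

# Approximate, assumed mapping of the listed hyperparameters; not the original script.
training_args = TrainingArguments(
    output_dir="./wav2vec2-xls-r-300m-tr",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # 16 x 2 = effective train batch size of 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=300,
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed precision
)
```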
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.0402 | 0.67 | 500 | 0.3354 | 0.5681 |
| 0.7265 | 1.33 | 1000 | 0.3181 | 0.5444 |
| 0.6858 | 2.0 | 1500 | 0.3044 | 0.5322 |
| 0.6537 | 2.66 | 2000 | 0.2911 | 0.5217 |
| 0.6337 | 3.33 | 2500 | 0.2874 | 0.5164 |
| 0.6111 | 3.99 | 3000 | 0.2758 | 0.5059 |
| 0.5815 | 4.66 | 3500 | 0.2708 | 0.5010 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
emre/wav2vec2-xls-r-300m-Turkish-Tr-small-CommonVoice8 | 226f4f7f816329ebc8a082395a49d9e311c4cfd7 | 2022-02-10T22:57:23.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | emre | null | emre/wav2vec2-xls-r-300m-Turkish-Tr-small-CommonVoice8 | 2 | null | transformers | 23,901 | ---
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Turkish-Tr-small-CommonVoice8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Turkish-Tr-small-CommonVoice8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4813
- Wer: 0.7207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2 | 0.53 | 400 | 3.1949 | 0.9964 |
| 2.9387 | 1.07 | 800 | 2.5015 | 1.0337 |
| 1.5975 | 1.6 | 1200 | 1.0928 | 0.9945 |
| 1.0688 | 2.13 | 1600 | 0.8388 | 0.9390 |
| 0.8977 | 2.66 | 2000 | 0.7106 | 0.8889 |
| 0.789 | 3.2 | 2400 | 0.6051 | 0.8273 |
| 0.7116 | 3.73 | 2800 | 0.5580 | 0.7855 |
| 0.6576 | 4.26 | 3200 | 0.5033 | 0.7433 |
| 0.6002 | 4.79 | 3600 | 0.4813 | 0.7207 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
emre/wav2vec2-xls-r-300m-ab-CV8 | 4c7cbd20e2181f648d683579462a24870b859a01 | 2022-03-23T18:34:41.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | emre | null | emre/wav2vec2-xls-r-300m-ab-CV8 | 2 | null | transformers | 23,902 | ---
license: apache-2.0
language: ab
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-ab-CV8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ab
metrics:
- name: Test WER
type: wer
value: 44.9
---
# wav2vec2-xls-r-300m-ab-CV8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2105
- Wer: 0.5474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
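While the training and evaluation data are not documented here, the reported WER values could in principle be recomputed with the `evaluate` library once transcriptions and references for the Common Voice `ab` test split are available. A minimal, illustrative sketch (the strings are placeholders):
```python
import evaluate

# Illustrative WER computation; predictions/references are placeholder strings.
wer_metric = evaluate.load("wer")
predictions = ["a transcription produced by the model"]
references = ["the corresponding reference sentence"]
print(wer_metric.compute(predictions=predictions, references=references))
```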
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.7729 | 0.63 | 500 | 3.0624 | 1.0021 |
| 2.7348 | 1.26 | 1000 | 1.0460 | 0.9815 |
| 1.2756 | 1.9 | 1500 | 0.4618 | 0.8309 |
| 1.0419 | 2.53 | 2000 | 0.3725 | 0.7449 |
| 0.9491 | 3.16 | 2500 | 0.3368 | 0.7345 |
| 0.9006 | 3.79 | 3000 | 0.3014 | 0.6936 |
| 0.8519 | 4.42 | 3500 | 0.2852 | 0.6767 |
| 0.8243 | 5.06 | 4000 | 0.2701 | 0.6504 |
| 0.7902 | 5.69 | 4500 | 0.2641 | 0.6221 |
| 0.7767 | 6.32 | 5000 | 0.2549 | 0.6192 |
| 0.7516 | 6.95 | 5500 | 0.2515 | 0.6179 |
| 0.737 | 7.59 | 6000 | 0.2408 | 0.5963 |
| 0.7217 | 8.22 | 6500 | 0.2429 | 0.6261 |
| 0.7101 | 8.85 | 7000 | 0.2366 | 0.5687 |
| 0.6922 | 9.48 | 7500 | 0.2277 | 0.5680 |
| 0.6866 | 10.11 | 8000 | 0.2242 | 0.5847 |
| 0.6703 | 10.75 | 8500 | 0.2222 | 0.5803 |
| 0.6649 | 11.38 | 9000 | 0.2247 | 0.5765 |
| 0.6513 | 12.01 | 9500 | 0.2182 | 0.5644 |
| 0.6369 | 12.64 | 10000 | 0.2128 | 0.5508 |
| 0.6425 | 13.27 | 10500 | 0.2132 | 0.5514 |
| 0.6399 | 13.91 | 11000 | 0.2116 | 0.5495 |
| 0.6208 | 14.54 | 11500 | 0.2105 | 0.5474 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
enelpi/electra-tr-enelpi-squad-qa | e63f255e35a31e9171f5c6aa6f37bda0994c5e8d | 2020-07-31T15:19:18.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | enelpi | null | enelpi/electra-tr-enelpi-squad-qa | 2 | null | transformers | 23,903 | Entry not found |
enelpi/med-electra-small-30k-discriminator | f222cbda2c257676ffe09bd56500d23e540bfeba | 2021-01-16T01:17:53.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | enelpi | null | enelpi/med-electra-small-30k-discriminator | 2 | null | transformers | 23,904 | Entry not found |
entelecheia/ekonbert-base | 2171551e2a5adb70878767c42f607f8b1ee7370a | 2021-05-19T16:30:25.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | null | false | entelecheia | null | entelecheia/ekonbert-base | 2 | null | transformers | 23,905 | Entry not found |
entelecheia/ekonelectra-base-generator | cd2e933a7d061e459c6b6a0f72246d0e99a7481f | 2021-05-04T09:08:34.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | entelecheia | null | entelecheia/ekonelectra-base-generator | 2 | null | transformers | 23,906 | Entry not found |
entelecheia/ekonelectra-small-discriminator | 3b1f2f36e39e80fcd3df579c02dc731d816220d3 | 2021-05-04T09:09:59.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | entelecheia | null | entelecheia/ekonelectra-small-discriminator | 2 | null | transformers | 23,907 | Entry not found |
epsil/bhagvad_gita | 74e18378997d6dff4843e002a236097503194561 | 2022-01-07T05:30:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | epsil | null | epsil/bhagvad_gita | 2 | 1 | transformers | 23,908 | This model is fine-tuned on the Bhagvad Gita and generates text based on prompts.
Example of usage:
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("epsil/bhagvad_gita")
model = AutoModelForCausalLM.from_pretrained("epsil/bhagvad_gita")
```
Input
```
from transformers import pipeline
generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
result = generator('Krishna show me the right path')[0]['generated_text']
print(result)
```
Output
```
Krishna show me the right path, and I also to remember the lessons, and to remember them right.
Sama! in His Day, and by Thy own Eternal Grace.
A man like that who shall come to us
```
> Created by [Saurabh Mishra](https://www.linkedin.com/in/saurabh-mishra-12b5a1216/)
> Made with <span style="color: #e25555;">♥</span> in India
|
erayyildiz/electra-turkish-cased | f1e6d764fd0e21fbb117bb41557dce00af2ccf8f | 2020-05-30T16:37:16.000Z | [
"pytorch",
"electra",
"feature-extraction",
"transformers"
] | feature-extraction | false | erayyildiz | null | erayyildiz/electra-turkish-cased | 2 | null | transformers | 23,909 | Entry not found |
ericRosello/bert-frozen-v1 | f9a4a3cff6b71ae52ad65dbc956c4a488712af1c | 2021-12-31T20:41:19.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | ericRosello | null | ericRosello/bert-frozen-v1 | 2 | null | transformers | 23,910 | Entry not found |
ericRosello/distilbert-frozen-v1 | 9d5cf85e5a6c8878264ca969d58e3cfeef018465 | 2021-12-30T16:40:46.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | ericRosello | null | ericRosello/distilbert-frozen-v1 | 2 | null | transformers | 23,911 | Entry not found |
ericRosello/distilbert-frozen-v2 | a4321567ba770ff04b499dd729cc4a4c62f35ab4 | 2022-01-03T12:56:29.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | ericRosello | null | ericRosello/distilbert-frozen-v2 | 2 | null | transformers | 23,912 | Entry not found |
ericRosello/results | 472e225d58f5e933566b1067c1c1db2b4b0f03c3 | 2022-01-04T11:27:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ericRosello | null | ericRosello/results | 2 | null | transformers | 23,913 | Entry not found |
erica/kcbase400 | 86dbb603999c7985501947dde27b94502af6a77a | 2021-05-19T16:35:38.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | erica | null | erica/kcbase400 | 2 | null | transformers | 23,914 | Entry not found |
ericzhou/DialoGPT-medium-elon | 53705af3a91ae77632cf9404047eb002ed6e4f87 | 2022-01-20T03:29:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ericzhou | null | ericzhou/DialoGPT-medium-elon | 2 | null | transformers | 23,915 | ---
tags:
- conversational
---
# elon |
erwanlc/t5-coktails_recipe-base | 1e1c35635b0014cc011006e207a4be720aa5637e | 2022-01-14T13:53:27.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | erwanlc | null | erwanlc/t5-coktails_recipe-base | 2 | null | transformers | 23,916 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-coktails_recipe-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-coktails_recipe-base
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
f00d4tehg0dz/Yoda | f7c366727a20b0c191ee89a7e4c736a02937c5de | 2021-08-30T02:35:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | f00d4tehg0dz | null | f00d4tehg0dz/Yoda | 2 | null | transformers | 23,917 | ---
tags:
- conversational
---
# Yoda chat bot |
facebook/data2vec-audio-base-100h | 350c51850f318f7ac28ee4c135c6c39981b51c6a | 2022-04-18T16:19:17.000Z | [
"pytorch",
"data2vec-audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2202.03555",
"transformers",
"speech",
"license:apache-2.0"
] | automatic-speech-recognition | false | facebook | null | facebook/data2vec-audio-base-100h | 2 | null | transformers | 23,918 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Data2Vec-Audio-Base-100h
[Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/)
The base model pretrained and fine-tuned on 100 hours of Librispeech 16kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2202.03555)
Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli
**Abstract**
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec .
# Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-100h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-100h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
|
facebook/s2t-small-covost2-en-ca-st | 72a7f6a9db91e0975caebc5c5ba3f35363d1301c | 2022-02-07T15:28:37.000Z | [
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"en",
"ca",
"dataset:covost2",
"arxiv:2010.05171",
"arxiv:1912.06670",
"arxiv:1904.08779",
"transformers",
"audio",
"speech-translation",
"license:mit"
] | automatic-speech-recognition | false | facebook | null | facebook/s2t-small-covost2-en-ca-st | 2 | null | transformers | 23,919 | ---
language:
- en
- ca
datasets:
- covost2
tags:
- audio
- speech-translation
- automatic-speech-recognition
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
---
# S2T-SMALL-COVOST2-EN-CA-ST
`s2t-small-covost2-en-ca-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Catalan text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-en-ca-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-en-ca-st")
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

ds = load_dataset(
    "patrickvonplaten/librispeech_asr_dummy",
    "clean",
    split="validation"
)
ds = ds.map(map_to_array)

inputs = processor(
    ds["speech"][0],
    sampling_rate=48_000,
    return_tensors="pt"
)
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-covost2-en-ca-st is trained on the English-Catalan subset of [CoVoST2](https://github.com/facebookresearch/covost).
CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster
ST research with the largest ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
CoVoST2 test results for en-ca (BLEU score): 21.68
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
facebook/wav2vec2-base-10k-voxpopuli-ft-ro | 880604b9e51acfcb156240d734000fe8627305d9 | 2021-07-06T01:52:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ro",
"arxiv:2101.00390",
"transformers",
"audio",
"voxpopuli",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-10k-voxpopuli-ft-ro | 2 | null | transformers | 23,920 | ---
language: ro
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in ro (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# resample audio
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-ro")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-ro")
# load dataset
ds = load_dataset("common_voice", "ro", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    speech = resampler(speech)
    batch["speech"] = speech[0]
    return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, axis=-1)
print(processor.batch_decode(predicted_ids))
```
|
facebook/wav2vec2-large-nl-voxpopuli | 2b83b407468420689ed8a35d904ff14e64e01dd0 | 2021-07-06T02:26:49.000Z | [
"pytorch",
"jax",
"wav2vec2",
"pretraining",
"nl",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-nl-voxpopuli | 2 | null | transformers | 23,921 | ---
language: nl
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Large-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained on the nl unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
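As a concrete starting point, the model-loading step from that blog might look like the sketch below with this checkpoint swapped in. This is an assumption, not the blog's exact code: the dropout/CTC settings mirror the blog's suggested values and `vocab_size=32` is a placeholder for the size of the tokenizer you build for your target language.
```python
from transformers import Wav2Vec2ForCTC

# Hypothetical fine-tuning setup: this checkpoint replaces "facebook/wav2vec2-large-xlsr-53".
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-nl-voxpopuli",
    attention_dropout=0.1,
    hidden_dropout=0.1,
    feat_proj_dropout=0.0,
    mask_time_prob=0.05,
    layerdrop=0.1,
    ctc_loss_reduction="mean",
    vocab_size=32,  # placeholder: use the size of your own CTC tokenizer
)
model.freeze_feature_extractor()  # keep the convolutional feature extractor frozen during fine-tuning
```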
|
facebook/wav2vec2-large-sv-voxpopuli | 0e543a4da9d1e17c7aa947ec3d7e66d565e1afe3 | 2021-07-06T02:30:55.000Z | [
"pytorch",
"jax",
"wav2vec2",
"pretraining",
"sv",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-sv-voxpopuli | 2 | null | transformers | 23,922 | ---
language: sv
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Large-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained on the sv unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
|
facebook/wav2vec2-xlsr-53-phon-cv-ft | 1c2ce0df451c63934f4bad5b65e8e87acf1bf15f | 2021-11-10T11:59:01.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-xlsr-53-phon-cv-ft | 2 | 1 | transformers | 23,923 | Entry not found |
fadhilarkan/distilbert-base-uncased-finetuned-squad | 67d74f7c431f5dd035b469c9dbb2803f99060e31 | 2021-08-16T06:45:37.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | fadhilarkan | null | fadhilarkan/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 23,924 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model_index:
- name: distilbert-base-uncased-finetuned-squad
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: squad
type: squad
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1523
## Model description
More information needed
## Intended uses & limitations
More information needed
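That said, the checkpoint is tagged for question answering, so a minimal (assumed) usage sketch with the `transformers` pipeline would be:
```python
from transformers import pipeline

# Hypothetical usage sketch for the SQuAD fine-tune; the question and context are toy examples.
qa = pipeline("question-answering", model="fadhilarkan/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```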
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2171 | 1.0 | 5533 | 1.1511 |
| 0.952 | 2.0 | 11066 | 1.1180 |
| 0.7707 | 3.0 | 16599 | 1.1523 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
fadhilarkan/t5-small-finetuned-xsum-2 | 9f67c06543f2feae6ee3fc2a13e56797376af8c3 | 2021-08-21T13:51:05.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | fadhilarkan | null | fadhilarkan/t5-small-finetuned-xsum-2 | 2 | null | transformers | 23,925 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
metrics:
- rouge
model_index:
- name: t5-small-finetuned-xsum-2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: squad
type: squad
args: plain_text
metric:
name: Rouge1
type: rouge
value: 28.8137
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9536
- Rouge1: 28.8137
- Rouge2: 9.1265
- Rougel: 26.0238
- Rougelsum: 26.0217
- Gen Len: 13.854
## Model description
More information needed
## Intended uses & limitations
More information needed
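The intended task is not stated explicitly; given the `text2text-generation` tag, a hedged usage sketch (the prompt is purely illustrative) would be:
```python
from transformers import pipeline

# Assumed usage through the generic text2text-generation pipeline; the prompt is a toy example.
generator = pipeline("text2text-generation", model="fadhilarkan/t5-small-finetuned-xsum-2")
print(generator("summarize: The quick brown fox jumps over the lazy dog.", max_length=20))
```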
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.2142 | 1.0 | 8760 | 1.9994 | 29.007 | 9.2583 | 26.2377 | 26.2356 | 13.4546 |
| 2.1372 | 2.0 | 17520 | 1.9622 | 29.1077 | 9.445 | 26.3734 | 26.3687 | 13.6995 |
| 2.0755 | 3.0 | 26280 | 1.9536 | 28.8137 | 9.1265 | 26.0238 | 26.0217 | 13.854 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
fadhilarkan/test-summarization | b144c0ffe012a499e3a44f34ded52ff081c21798 | 2021-08-17T15:20:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | fadhilarkan | null | fadhilarkan/test-summarization | 2 | null | transformers | 23,926 | ---
metrics:
- rouge
model-index:
- name: test-summarization
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-summarization
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4740
- Rouge1: 28.3487
- Rouge2: 7.7836
- Rougel: 22.3307
- Rougelsum: 22.3357
- Gen Len: 18.8307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 14
- eval_batch_size: 14
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7042 | 1.0 | 14575 | 2.4740 | 28.3487 | 7.7836 | 22.3307 | 22.3357 | 18.8307 |
### Framework versions
- Transformers 4.6.1
- Pytorch 1.7.0
- Datasets 1.11.0
- Tokenizers 0.10.3
|
faust/broken_t5_squad2 | 0f3e4b5b55e7a63de5d4ba4874a099f8cf848b2f | 2020-07-04T08:42:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | faust | null | faust/broken_t5_squad2 | 2 | null | transformers | 23,927 | Entry not found |
fav-kky/FERNET-CC_sk | 1a1d87d4ab1f64643de472334e690777d9de9cbc | 2021-09-09T13:42:54.000Z | [
"pytorch",
"tf",
"bert",
"fill-mask",
"sk",
"arxiv:2107.10042",
"transformers",
"Slovak",
"KKY",
"FAV",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | fav-kky | null | fav-kky/FERNET-CC_sk | 2 | null | transformers | 23,928 | ---
language: "sk"
tags:
- Slovak
- KKY
- FAV
license: "cc-by-nc-sa-4.0"
---
# FERNET-CC_sk
FERNET-CC_sk is a monolingual Slovak BERT-base model pre-trained on 29GB of filtered Slovak Common Crawl data.
It is a Slovak version of our Czech [FERNET-C5](https://huggingface.co/fav-kky/FERNET-C5) model.
Preprint of our paper is available at https://arxiv.org/abs/2107.10042.
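A minimal usage sketch, assuming the checkpoint is used through the standard fill-mask pipeline (the Slovak example sentence is illustrative only):
```python
from transformers import pipeline

# Hypothetical fill-mask usage; the example sentence is only an illustration.
fill_mask = pipeline("fill-mask", model="fav-kky/FERNET-CC_sk")
print(fill_mask("Bratislava je hlavné [MASK] Slovenska."))
```
|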
felixbmuller/bert-base-uncased-finetuned-copa | 59bfdb7734ce937ddd698b868add485eb2a71956 | 2022-02-16T07:21:56.000Z | [
"pytorch",
"bert",
"multiple-choice",
"transformers"
] | multiple-choice | false | felixbmuller | null | felixbmuller/bert-base-uncased-finetuned-copa | 2 | null | transformers | 23,929 | Entry not found |
ffsouza/t5-tiny-random-length-96-learning_rate-1e-05-weight_decay-0.01-finetuned-en-to-ro | 25b44d36dda787b0948668880b5477ffc73fbcdf | 2021-12-03T23:24:45.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ffsouza | null | ffsouza/t5-tiny-random-length-96-learning_rate-1e-05-weight_decay-0.01-finetuned-en-to-ro | 2 | null | transformers | 23,930 | Entry not found |
finiteautomata/bert-contextualized-hate-category-es | 7065e95458da9d3eedf8625b7b200d6a4f0785ea | 2021-05-19T16:50:33.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | finiteautomata | null | finiteautomata/bert-contextualized-hate-category-es | 2 | null | transformers | 23,931 | Entry not found |
finiteautomata/robertuitonews-cased-tweetcontext | 0d29304f9a893cc24d6f33e5783550cd75fd342c | 2021-11-16T12:51:50.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | finiteautomata | null | finiteautomata/robertuitonews-cased-tweetcontext | 2 | null | transformers | 23,932 | Entry not found |
flax-community/bert-base-uncased-swahili | 646f76bd3dc14008c7b901f45b984de0c682863d | 2021-07-25T16:19:20.000Z | [
"pytorch",
"jax",
"tensorboard",
"bert",
"fill-mask",
"sw",
"dataset:flax-community/swahili-safi",
"transformers",
"autotrain_compatible"
] | fill-mask | false | flax-community | null | flax-community/bert-base-uncased-swahili | 2 | null | transformers | 23,933 | ---
language: sw
widget:
- text: "Si kila mwenye makucha [MASK] simba."
datasets:
- flax-community/swahili-safi
---
## BERT base-uncased for Swahili
This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("flax-community/bert-base-uncased-swahili")
model = AutoModelForMaskedLM.from_pretrained("flax-community/bert-base-uncased-swahili")
print(round(model.num_parameters() / (1000 * 1000)), "Million Parameters")
# 110 Million Parameters
```
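As a quick sanity check, the widget sentence from the metadata above can be run through the fill-mask pipeline (a usage sketch, assuming the default `[MASK]` token):
```python
from transformers import pipeline

# Fill-mask sketch using the card's widget sentence.
unmasker = pipeline("fill-mask", model="flax-community/bert-base-uncased-swahili")
print(unmasker("Si kila mwenye makucha [MASK] simba."))
```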
#### **Training Data**:
This model was trained on [Swahili Safi](https://huggingface.co/datasets/flax-community/swahili-safi)
#### **More Details**:
For more details and Demo please check [HF Swahili Space](https://huggingface.co/spaces/flax-community/Swahili)
|
flax-community/code-mt5-base-batch-mix | 2dfa92ae84e54d60a7d71a5527dc26421b65a0e4 | 2021-07-17T09:52:16.000Z | [
"pytorch",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | flax-community | null | flax-community/code-mt5-base-batch-mix | 2 | null | transformers | 23,934 | Entry not found |
flax-community/mr-indicnlp-classifier | 4df6b086dfcc260848a4c47a43a0de35441147c2 | 2021-07-19T12:53:33.000Z | [
"pytorch"
] | null | false | flax-community | null | flax-community/mr-indicnlp-classifier | 2 | 1 | null | 23,935 | # IndicNLP Marathi News Classifier
This model was fine-tuned using [Marathi RoBERTa](https://huggingface.co/flax-community/roberta-base-mr) on [IndicNLP Marathi News Dataset](https://github.com/AI4Bharat/indicnlp_corpus#indicnlp-news-article-classification-dataset)
## Dataset
The IndicNLP Marathi news dataset consists of 3 classes - `['lifestyle', 'entertainment', 'sports']` - with the following document distribution per class:
| train | eval | test |
| ----- | ---- | ---- |
| 9672 | 477 | 478 |
💯 Our **`mr-indicnlp-classifier`** model, fine-tuned from the pretrained Marathi RoBERTa model **roberta-base-mr**, outperformed both classifiers mentioned in [Arora, G. (2020). iNLTK](https://www.semanticscholar.org/paper/iNLTK%3A-Natural-Language-Toolkit-for-Indic-Languages-Arora/5039ed9e100d3a1cbbc25a02c82f6ee181609e83/figure/3) and [Kunchukuttan, Anoop et al. AI4Bharat-IndicNLP.](https://www.semanticscholar.org/paper/AI4Bharat-IndicNLP-Corpus%3A-Monolingual-Corpora-and-Kunchukuttan-Kakwani/7997d432925aff0ba05497d2893c09918298ca55/figure/4)
| Dataset | FT-W | FT-WC | INLP | iNLTK | **roberta-base-mr 🏆** |
| --------------- | ----- | ----- | ----- | ----- | --------------------- |
| iNLTK Headlines | 83.06 | 81.65 | 89.92 | 92.4 | **97.48** |
|
flax-community/mr-inltk-classifier | 48e544ec711aa7067cb04fd2c7884ad4e6ca8e19 | 2021-07-17T09:43:36.000Z | [
"pytorch"
] | null | false | flax-community | null | flax-community/mr-inltk-classifier | 2 | 1 | null | 23,936 | Entry not found |
flax-community/norsk-gpt-wiki | 4c8c2f1d557a9687a87613fdd70df31952207b43 | 2021-07-17T07:47:05.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"no",
"transformers"
] | text-generation | false | flax-community | null | flax-community/norsk-gpt-wiki | 2 | null | transformers | 23,937 | ---
language: no
widget:
- text: "Det er flott"
---
# GPT2-norsk-wikipedia
A Norwegian GPT2-style model trained using the Flax CLM pipeline on the Norwegian
part of the wiki40b dataset.
https://huggingface.co/datasets/wiki40b
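A short usage sketch (assumed, not from the original card), using the widget prompt from the metadata above:
```python
from transformers import pipeline

# Hypothetical text-generation usage; "Det er flott" is the card's widget prompt.
generator = pipeline("text-generation", model="flax-community/norsk-gpt-wiki")
print(generator("Det er flott", max_length=30, do_sample=True)[0]["generated_text"])
```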
## Model series
This model is part of a series of models training on TPU with Flax Jax during Huggingface Flax/Jax challenge.
## Gpt models
## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/
## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki
# Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
## Roberta models
## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi
## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish
## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
## Data cleaning and preprocessing
The data was cleaned and preprocessed using the following script. Make sure to install the dependencies for beam_runner to make the dataset work.
```python
from datasets import load_dataset
def load_and_clean_wiki():
    dataset = load_dataset('wiki40b', 'no', beam_runner='DirectRunner', split="train")
    # dataset = load_dataset('wiki40b', 'sv', beam_runner='DirectRunner')
    dataset = dataset.remove_columns(['wikidata_id', 'version_id'])
    filtered_dataset = dataset.map(filter_wikipedia)
    # filtered_dataset[:3]
    # print(filtered_dataset[:3])
    return filtered_dataset

def filter_wikipedia(batch):
    batch["text"] = " ".join(batch["text"].split("\n_START_SECTION_\n"))
    batch["text"] = " ".join(batch["text"].split("\n_START_ARTICLE_\n"))
    batch["text"] = " ".join(batch["text"].split("\n_START_ARTICLE_\n"))
    batch["text"] = " ".join(batch["text"].split("\n_START_PARAGRAPH_\n"))
    batch["text"] = " ".join(batch["text"].split("_NEWLINE_"))
    batch["text"] = " ".join(batch["text"].split("\xa0"))
    return batch
```
## Training script
The following training script was used to train the model.
```bash
./run_clm_flax.py --output_dir="${MODEL_DIR}" --model_type="gpt2" --config_name="${MODEL_DIR}" --tokenizer_name="${MODEL_DIR}" --dataset_name="wiki40b" --dataset_config_name="no" --do_train --do_eval --block_size="512" --per_device_train_batch_size="64" --per_device_eval_batch_size="64" --learning_rate="5e-3" --warmup_steps="1000" --adam_beta1="0.9" --adam_beta2="0.98" --weight_decay="0.01" --overwrite_output_dir --num_train_epochs="20" --logging_steps="500" --save_steps="1000" --eval_steps="2500" --push_to_hub
```
|
flax-sentence-embeddings/multi-qa_v1-MiniLM-L6-cls_dot | a5317238b02aea495bdcce75a51756dd2a3cccd4 | 2021-07-26T01:33:36.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2102.07033",
"arxiv:2104.08727",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | flax-sentence-embeddings | null | flax-sentence-embeddings/multi-qa_v1-MiniLM-L6-cls_dot | 2 | null | sentence-transformers | 23,938 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# multi-qa_v1-MiniLM-L6-cls_dot
## Model Description
SentenceTransformers is a set of models and frameworks that enable training and generating sentence embeddings from given data. The generated sentence embeddings can be utilized for Clustering, Semantic Search and other tasks. We used a pretrained [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and trained it using a Siamese Network setup and a contrastive learning objective. Question and answer pairs from StackExchange were used as training data to make the model robust to Question / Answer embedding similarity. For this model, the cls output was used instead of mean pooling as the sentence embedding. Dot product was used to calculate similarity for the learning objective.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well
as assistance from Google’s Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector which captures
the sentence semantic information. The sentence vector may be used for semantic-search, clustering or sentence similarity tasks.
## How to use
Here is how to use this model to get the features of a given text using [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('flax-sentence-embeddings/multi-qa_v1-MiniLM-L6-cls_dot')
text = "Replace me by any question / answer you'd like."
text_embbedding = model.encode(text)
# array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106,
# -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...],
# dtype=float32)
```
# Training procedure
## Pre-training
We use the pretrained [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased). Please refer to the model
card for more detailed information about the pre-training procedure.
## Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch.
We then apply the cross entropy loss by comparing with true pairs.
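To make the objective concrete, the sketch below gives a minimal PyTorch version of this in-batch contrastive loss (illustrative only, not the project's training code): every sentence is scored against every other embedding in the batch, and the true pair on the diagonal is treated as the correct class for cross entropy.
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # Cosine similarity between every anchor and every candidate in the batch.
    anchor_emb = F.normalize(anchor_emb, dim=-1)
    positive_emb = F.normalize(positive_emb, dim=-1)
    scores = anchor_emb @ positive_emb.T * scale
    # The matching pair sits on the diagonal, so index i is the "true class" for row i.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```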
### Hyper parameters
We trained on model on a TPU v3-8. We train the model during 80k steps using a batch size of 1024 (128 per TPU core).
We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository.
### Training data
We used the concatenation from multiple Stackexchange Question-Answer datasets to fine-tune our model. MSMARCO, NQ & other question-answer datasets were also used.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| [Stack Exchange QA - Title & Answer](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl) | - | 4,750,619 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 |
| [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| SearchQA | - | 582,261 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | |
flax-sentence-embeddings/multi-qa_v1-distilbert-cls_dot | 73f1476fb53442fa13f5fcb0b6c9b584bc1ece03 | 2021-07-26T01:34:32.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"arxiv:2102.07033",
"arxiv:2104.08727",
"sentence-transformers",
"feature-extraction",
"sentence-similarity"
] | sentence-similarity | false | flax-sentence-embeddings | null | flax-sentence-embeddings/multi-qa_v1-distilbert-cls_dot | 2 | null | sentence-transformers | 23,939 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# multi-qa_v1-distilbert-cls_dot
## Model Description
SentenceTransformers is a set of models and frameworks that enable training and generating sentence embeddings from given data. The generated sentence embeddings can be utilized for Clustering, Semantic Search and other tasks. We used a pretrained [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model and trained it using a Siamese Network setup and a contrastive learning objective. Question and answer pairs from StackExchange were used as training data to make the model robust to Question / Answer embedding similarity. For this model, the cls output was used instead of mean pooling as the sentence embedding. Dot product was used to calculate similarity for the learning objective.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well
as assistance from Google’s Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector which captures
the sentence semantic information. The sentence vector may be used for semantic-search, clustering or sentence similarity tasks.
## How to use
Here is how to use this model to get the features of a given text using [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('flax-sentence-embeddings/multi-qa_v1-distilbert-cls_dot')
text = "Replace me by any question / answer you'd like."
text_embbedding = model.encode(text)
# array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106,
# -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...],
# dtype=float32)
```
# Training procedure
## Pre-training
We use the pretrained [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased). Please refer to the model
card for more detailed information about the pre-training procedure.
## Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch.
We then apply the cross entropy loss by comparing with true pairs.
### Hyper parameters
We trained on model on a TPU v3-8. We train the model during 80k steps using a batch size of 1024 (128 per TPU core).
We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository.
### Training data
We used the concatenation from multiple Stackexchange Question-Answer datasets to fine-tune our model. MSMARCO, NQ & other question-answer datasets were also used.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| [Stack Exchange QA - Title & Answer](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl) | - | 4,750,619 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 |
| [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| SearchQA | - | 582,261 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | |
flax-sentence-embeddings/multi-qa_v1-mpnet-mean_cos | 47dcad4dfcebaa6b40990082556fae91510653cb | 2021-07-26T01:35:19.000Z | [
"pytorch",
"mpnet",
"fill-mask",
"arxiv:2102.07033",
"arxiv:2104.08727",
"sentence-transformers",
"feature-extraction",
"sentence-similarity"
] | sentence-similarity | false | flax-sentence-embeddings | null | flax-sentence-embeddings/multi-qa_v1-mpnet-mean_cos | 2 | null | sentence-transformers | 23,940 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# multi-qa_v1-mpnet-mean_cos
## Model Description
SentenceTransformers is a set of models and frameworks that enable training and generating sentence embeddings from given data. The generated sentence embeddings can be utilized for Clustering, Semantic Search and other tasks. We used a pretrained [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) model and trained it using a Siamese Network setup and a contrastive learning objective. Question and answer pairs from StackExchange were used as training data to make the model robust to Question / Answer embedding similarity. For this model, mean pooling of hidden states was used as the sentence embedding.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well
as assistance from Google’s Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector which captures
the sentence semantic information. The sentence vector may be used for semantic-search, clustering or sentence similarity tasks.
## How to use
Here is how to use this model to get the features of a given text using [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('flax-sentence-embeddings/multi-qa_v1-mpnet-mean_cos')
text = "Replace me by any question / answer you'd like."
text_embbedding = model.encode(text)
# array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106,
# -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...],
# dtype=float32)
```
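For a semantic-search style use, you can encode a query together with candidate answers and rank the candidates by cosine similarity. A minimal sketch (the example texts are made up; `util.cos_sim` is called `util.pytorch_cos_sim` in older sentence-transformers releases):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('flax-sentence-embeddings/multi-qa_v1-mpnet-mean_cos')

query = "How do I install Python on Windows?"
candidates = [
    "Download the installer from python.org and run it.",
    "Paris is the capital of France.",
]

# Encode, then score each candidate against the query with cosine similarity
query_emb = model.encode(query, convert_to_tensor=True)
cand_emb = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(query_emb, cand_emb)[0]

for text, score in zip(candidates, scores):
    print(f"{score:.3f}  {text}")
```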
# Training procedure
## Pre-training
We use the pretrained [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base). Please refer to the model
card for more detailed information about the pre-training procedure.
## Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch.
We then apply the cross entropy loss by comparing with true pairs.
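Concretely, for a batch of (question, answer) pairs, every other answer in the batch serves as an in-batch negative. A minimal sketch of such a loss (the scale factor is a common choice for cosine-similarity training, not necessarily the exact value used here):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(question_emb, answer_emb, scale=20.0):
    # Cosine similarity between every question and every answer in the batch
    q = F.normalize(question_emb, dim=-1)
    a = F.normalize(answer_emb, dim=-1)
    scores = q @ a.T * scale                 # (batch, batch)
    # The true answer of question i sits on the diagonal
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```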
### Hyperparameters
We trained our model on a TPU v3-8. We trained the model for 80k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository.
### Training data
We used the concatenation of multiple Stack Exchange question-answer datasets to fine-tune our model. MS MARCO, NQ and other question-answer datasets were also used.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| [Stack Exchange QA - Title & Answer](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl) | - | 4,750,619 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 |
| [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| SearchQA | - | 582,261 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
|
flboehm/youtube-bert_10 | 725d2cd99bac36f39c658bb6a8f65a3188d03800 | 2022-01-10T11:39:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | flboehm | null | flboehm/youtube-bert_10 | 2 | null | transformers | 23,941 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: youtube-bert_10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# youtube-bert_10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4456
- Perplexity: 11.54
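The reported perplexity is presumably just the exponential of the evaluation loss (as in the standard masked-language-modelling example scripts):
```python
import math

eval_loss = 2.4456
print(round(math.exp(eval_loss), 2))  # 11.54
```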
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6799 | 1.0 | 1899 | 2.5135 |
| 2.5736 | 2.0 | 3798 | 2.4612 |
| 2.5172 | 3.0 | 5697 | 2.4363 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
formermagic/codet5-large | 0d33ed3d83fcb639df1ecb35b55d0315cc8364a2 | 2021-09-28T13:25:29.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | formermagic | null | formermagic/codet5-large | 2 | 3 | transformers | 23,942 | Entry not found |
fredwang08/bert-test | 439b3edf84151124934deb67d921e9f0908001f5 | 2022-02-02T21:00:16.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | fredwang08 | null | fredwang08/bert-test | 2 | null | transformers | 23,943 | Entry not found |
frgfm/cspdarknet53 | bca840ee1983c9d5485dfe0ecbe9b9ff3a500792 | 2022-07-20T00:57:40.000Z | [
"pytorch",
"dataset:frgfm/imagenette",
"arxiv:1911.11929",
"transformers",
"image-classification",
"license:apache-2.0"
] | image-classification | false | frgfm | null | frgfm/cspdarknet53 | 2 | null | transformers | 23,944 | ---
license: apache-2.0
tags:
- image-classification
- pytorch
datasets:
- frgfm/imagenette
---
# CSP-Darknet-53 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The CSP-Darknet-53 architecture was introduced in [this paper](https://arxiv.org/pdf/1911.11929.pdf).
## Model description
The core idea of the author is to change the convolutional stage by adding cross stage partial blocks in the architecture.
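Roughly, a cross stage partial block splits the feature map along the channel axis, runs only one half through the stage's blocks, and concatenates the untouched half back before a transition convolution. A toy PyTorch sketch of that idea (a simplification, not the actual Holocron implementation):
```python
import torch
import torch.nn as nn

class CSPStage(nn.Module):
    def __init__(self, channels, blocks):
        super().__init__()
        self.blocks = blocks  # any stack of blocks that preserves channels // 2
        self.transition = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        part1, part2 = x.chunk(2, dim=1)   # split the channels in two halves
        part2 = self.blocks(part2)         # only one half goes through the stage
        out = torch.cat([part1, part2], dim=1)
        return self.transition(out)
```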
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/cspdarknet53").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-1911-11929,
author = {Chien{-}Yao Wang and
Hong{-}Yuan Mark Liao and
I{-}Hau Yeh and
Yueh{-}Hua Wu and
Ping{-}Yang Chen and
Jun{-}Wei Hsieh},
title = {CSPNet: {A} New Backbone that can Enhance Learning Capability of {CNN}},
journal = {CoRR},
volume = {abs/1911.11929},
year = {2019},
url = {http://arxiv.org/abs/1911.11929},
eprinttype = {arXiv},
eprint = {1911.11929},
timestamp = {Tue, 03 Dec 2019 20:41:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1911-11929.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
frgfm/darknet53 | af0ca08d933be798e34326018b551dc57f4ca925 | 2022-07-20T00:57:28.000Z | [
"pytorch",
"dataset:frgfm/imagenette",
"arxiv:1804.02767",
"transformers",
"image-classification",
"license:apache-2.0"
] | image-classification | false | frgfm | null | frgfm/darknet53 | 2 | null | transformers | 23,945 | ---
license: apache-2.0
tags:
- image-classification
- pytorch
datasets:
- frgfm/imagenette
---
# Darknet-53 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The Darknet-53 architecture was introduced in [this paper](https://pjreddie.com/media/files/papers/YOLOv3.pdf).
## Model description
The core idea of the author is to increase the depth of the Darknet-19 architecture and to add shortcut connections to ease the gradient propagation.
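The repeated unit is essentially a 1x1 channel-reduction convolution followed by a 3x3 convolution, with the input added back. A toy sketch of that unit (a simplification, not the actual Holocron implementation):
```python
import torch.nn as nn

class DarknetResidual(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels // 2),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels // 2, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.1, inplace=True),
        )

    def forward(self, x):
        return x + self.block(x)  # the shortcut connection eases gradient flow
```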
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/darknet53").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-1804-02767,
author = {Joseph Redmon and
Ali Farhadi},
title = {YOLOv3: An Incremental Improvement},
journal = {CoRR},
volume = {abs/1804.02767},
year = {2018},
url = {http://arxiv.org/abs/1804.02767},
eprinttype = {arXiv},
eprint = {1804.02767},
timestamp = {Mon, 13 Aug 2018 16:48:24 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1804-02767.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
frgfm/repvgg_a2 | 3187bd0598bd995b41c69a64d4c4ff19bff03ce6 | 2022-07-20T00:56:20.000Z | [
"pytorch",
"onnx",
"dataset:frgfm/imagenette",
"arxiv:2101.03697",
"transformers",
"image-classification",
"license:apache-2.0"
] | image-classification | false | frgfm | null | frgfm/repvgg_a2 | 2 | null | transformers | 23,946 | ---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- frgfm/imagenette
---
# RepVGG-A2 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The RepVGG architecture was introduced in [this paper](https://arxiv.org/pdf/2101.03697.pdf).
## Model description
The core idea of the author is to distinguish the training architecture (with shortcut connections) from the inference one (a pure highway network). By designing the residual block appropriately, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations.
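Since the 3x3 branch, the 1x1 branch and the identity branch are all linear, they can be folded into a single 3x3 convolution at inference time. A toy sketch of that fusion, ignoring batch norm (which the real reparametrization also folds in) and assuming in_channels == out_channels:
```python
import torch
import torch.nn.functional as F

def fuse_repvgg_branches(w3x3, b3x3, w1x1, b1x1):
    # w3x3: (C, C, 3, 3) and w1x1: (C, C, 1, 1) convolution weights
    channels = w3x3.size(0)
    # Pad the 1x1 kernel to 3x3 so it can be summed with the 3x3 kernel
    w1x1_as_3x3 = F.pad(w1x1, [1, 1, 1, 1])
    # The identity branch is a 3x3 kernel with a single 1 at the centre of its own channel
    w_id = torch.zeros_like(w3x3)
    for c in range(channels):
        w_id[c, c, 1, 1] = 1.0
    fused_weight = w3x3 + w1x1_as_3x3 + w_id
    fused_bias = b3x3 + b1x1
    return fused_weight, fused_bias  # parameters of a single equivalent 3x3 conv
```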
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/repvgg_a2").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2101-03697,
author = {Xiaohan Ding and
Xiangyu Zhang and
Ningning Ma and
Jungong Han and
Guiguang Ding and
Jian Sun},
title = {RepVGG: Making VGG-style ConvNets Great Again},
journal = {CoRR},
volume = {abs/2101.03697},
year = {2021},
url = {https://arxiv.org/abs/2101.03697},
eprinttype = {arXiv},
eprint = {2101.03697},
timestamp = {Tue, 09 Feb 2021 15:29:34 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-03697.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
frgfm/resnet18 | d2706aa1e1bd312ac3bf0d40b623c5bbe1122dfd | 2022-07-20T00:56:53.000Z | [
"pytorch",
"onnx",
"dataset:frgfm/imagenette",
"arxiv:1512.03385",
"transformers",
"image-classification",
"license:apache-2.0"
] | image-classification | false | frgfm | null | frgfm/resnet18 | 2 | null | transformers | 23,947 | ---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- frgfm/imagenette
---
# ResNet-18 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ResNet architecture was introduced in [this paper](https://arxiv.org/pdf/1512.03385.pdf).
## Model description
The core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection.
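A minimal sketch of the basic residual block behind that idea (identity-shortcut case only, not the actual Holocron implementation):
```python
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # the skip connection lets gradients flow through
```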
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/resnet18").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/HeZRS15,
author = {Kaiming He and
Xiangyu Zhang and
Shaoqing Ren and
Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {CoRR},
volume = {abs/1512.03385},
year = {2015},
url = {http://arxiv.org/abs/1512.03385},
eprinttype = {arXiv},
eprint = {1512.03385},
timestamp = {Wed, 17 Apr 2019 17:23:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
frgfm/resnet34 | 64b1f1eaee68c1c2e435acc813e25bc5ed170b17 | 2022-07-20T00:57:04.000Z | [
"pytorch",
"onnx",
"dataset:frgfm/imagenette",
"arxiv:1512.03385",
"transformers",
"image-classification",
"license:apache-2.0"
] | image-classification | false | frgfm | null | frgfm/resnet34 | 2 | null | transformers | 23,948 | ---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- frgfm/imagenette
---
# ResNet-34 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ResNet architecture was introduced in [this paper](https://arxiv.org/pdf/1512.03385.pdf).
## Model description
The core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/resnet34").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/HeZRS15,
author = {Kaiming He and
Xiangyu Zhang and
Shaoqing Ren and
Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {CoRR},
volume = {abs/1512.03385},
year = {2015},
url = {http://arxiv.org/abs/1512.03385},
eprinttype = {arXiv},
eprint = {1512.03385},
timestamp = {Wed, 17 Apr 2019 17:23:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
fspanda/Electra-Medical-v1.5-generator | edbbb88bd7c110ce5dda9dce087f9c3f278ec392 | 2020-11-04T14:58:52.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | fspanda | null | fspanda/Electra-Medical-v1.5-generator | 2 | null | transformers | 23,949 | Entry not found |
fspanda/electra-medical-discriminator | 4fd3b8478ec888480666dc7dc02f1526a7d6032b | 2020-10-28T11:33:37.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | fspanda | null | fspanda/electra-medical-discriminator | 2 | null | transformers | 23,950 | Entry not found |
furyhawk/t5-base-finetuned-bbc | 40bcfef54e58cd4e72f325a95c2530d14ab8327b | 2021-11-06T13:51:15.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | furyhawk | null | furyhawk/t5-base-finetuned-bbc | 2 | null | transformers | 23,951 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-finetuned-bbc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-bbc
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 334 | 0.1500 | 24.5024 | 21.4979 | 24.0227 | 24.0303 | 19.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
furyhawk/t5-small-finetuned-xsum | 45dbf23c6f3c4b6539f99e8d1f698b983c3a3d1a | 2021-10-22T05:06:57.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | furyhawk | null | furyhawk/t5-small-finetuned-xsum | 2 | null | transformers | 23,952 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 128 | 2.9003 | 19.4784 | 2.8529 | 14.7786 | 15.0614 | 18.9825 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gagan3012/xls-r-300m-hi | f689a18edc4f8b0e86c8f7ddfcb5d8b3fee0d4dc | 2022-01-30T20:39:40.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gagan3012 | null | gagan3012/xls-r-300m-hi | 2 | null | transformers | 23,953 | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: xls-r-300m-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-hi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7522
- Wer: 1.0091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0417 | 2.59 | 500 | 5.1484 | 1.0 |
| 3.3722 | 5.18 | 1000 | 3.3380 | 1.0001 |
| 1.9752 | 7.77 | 1500 | 1.3910 | 1.0074 |
| 1.5868 | 10.36 | 2000 | 1.0298 | 1.0084 |
| 1.4413 | 12.95 | 2500 | 0.9313 | 1.0175 |
| 1.3296 | 15.54 | 3000 | 0.8966 | 1.0194 |
| 1.2746 | 18.13 | 3500 | 0.8875 | 1.0097 |
| 1.2147 | 20.73 | 4000 | 0.8746 | 1.0089 |
| 1.1774 | 23.32 | 4500 | 0.8383 | 1.0198 |
| 1.129 | 25.91 | 5000 | 0.7848 | 1.0167 |
| 1.0995 | 28.5 | 5500 | 0.7992 | 1.0210 |
| 1.0665 | 31.09 | 6000 | 0.7878 | 1.0107 |
| 1.0321 | 33.68 | 6500 | 0.7653 | 1.0082 |
| 1.0068 | 36.27 | 7000 | 0.7635 | 1.0065 |
| 0.9916 | 38.86 | 7500 | 0.7728 | 1.0090 |
| 0.9735 | 41.45 | 8000 | 0.7688 | 1.0070 |
| 0.9745 | 44.04 | 8500 | 0.7455 | 1.0097 |
| 0.9677 | 46.63 | 9000 | 0.7605 | 1.0099 |
| 0.9313 | 49.22 | 9500 | 0.7527 | 1.0097 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
gaotianyu1350/unsup-simcse-roberta-base | 5a17c5e7ee2bd039562736564cda1a694aed3418 | 2021-05-20T16:26:43.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | gaotianyu1350 | null | gaotianyu1350/unsup-simcse-roberta-base | 2 | null | transformers | 23,954 | Entry not found |
gborn/autonlp-news-summarization-483413089 | 10365751c656996353f2eaa6fe52517024728f34 | 2022-01-07T23:10:47.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"dataset:gborn/autonlp-data-news-summarization",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | gborn | null | gborn/autonlp-news-summarization-483413089 | 2 | 1 | transformers | 23,955 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- gborn/autonlp-data-news-summarization
co2_eq_emissions: 210.6348731063569
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 483413089
- CO2 Emissions (in grams): 210.6348731063569
## Validation Metrics
- Loss: 1.8478657007217407
- Rouge1: 50.5981
- Rouge2: 26.2167
- RougeL: 46.0513
- RougeLsum: 46.061
- Gen Len: 13.5987
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/gborn/autonlp-news-summarization-483413089
``` |
gchhablani/fnet-base-finetuned-rte | d7a09abdd3500117ae94b53bc956213578c6a629 | 2021-09-20T09:08:50.000Z | [
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"en",
"dataset:glue",
"arxiv:2105.03824",
"transformers",
"generated_from_trainer",
"fnet-bert-base-comparison",
"license:apache-2.0",
"model-index"
] | text-classification | false | gchhablani | null | gchhablani/fnet-base-finetuned-rte | 2 | null | transformers | 23,956 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-base-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.628158844765343
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-rte
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6978
- Accuracy: 0.6282
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name rte \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-rte \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6829 | 1.0 | 156 | 0.6657 | 0.5704 |
| 0.6174 | 2.0 | 312 | 0.6784 | 0.6101 |
| 0.5141 | 3.0 | 468 | 0.6978 | 0.6282 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gchhablani/fnet-large-finetuned-qqp | 01d3c029f22573a329a5262d733d6fae609f2fd7 | 2021-10-09T08:56:52.000Z | [
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | gchhablani | null | gchhablani/fnet-large-finetuned-qqp | 2 | null | transformers | 23,957 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: fnet-large-finetuned-qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8943111550828593
- name: F1
type: f1
value: 0.8556565212985171
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-qqp
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5515
- Accuracy: 0.8943
- F1: 0.8557
- Combined Score: 0.8750
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:--------------:|
| 0.4574 | 1.0 | 90962 | 0.4946 | 0.8694 | 0.8297 | 0.8496 |
| 0.3387 | 2.0 | 181924 | 0.4745 | 0.8874 | 0.8437 | 0.8655 |
| 0.2029 | 3.0 | 272886 | 0.5515 | 0.8943 | 0.8557 | 0.8750 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gchhablani/fnet-large-finetuned-rte | 0b407cfe8067b380edff0c257dcf00266557e2bf | 2021-09-24T11:27:19.000Z | [
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | gchhablani | null | gchhablani/fnet-large-finetuned-rte | 2 | null | transformers | 23,958 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-large-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6425992779783394
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-rte
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7528
- Accuracy: 0.6426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7105 | 1.0 | 623 | 0.6887 | 0.5740 |
| 0.6714 | 2.0 | 1246 | 0.6742 | 0.6209 |
| 0.509 | 3.0 | 1869 | 0.7528 | 0.6426 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gchhablani/fnet-large-finetuned-sst2 | 31092f0b1e37e1d616f5b661af7437128ebf5250 | 2021-10-07T16:48:43.000Z | [
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | gchhablani | null | gchhablani/fnet-large-finetuned-sst2 | 2 | null | transformers | 23,959 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-large-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9048165137614679
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-sst2
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5240
- Accuracy: 0.9048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.394 | 1.0 | 16838 | 0.3896 | 0.8968 |
| 0.2076 | 2.0 | 33676 | 0.5100 | 0.8956 |
| 0.1148 | 3.0 | 50514 | 0.5240 | 0.9048 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gchhablani/fnet-large-finetuned-wnli | 910ed18a5dacd9aa36867311f9003a575b8c0445 | 2021-09-23T05:39:44.000Z | [
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | gchhablani | null | gchhablani/fnet-large-finetuned-wnli | 2 | null | transformers | 23,960 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-large-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.38028169014084506
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-wnli
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6953
- Accuracy: 0.3803
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7217 | 1.0 | 159 | 0.6864 | 0.5634 |
| 0.7056 | 2.0 | 318 | 0.6869 | 0.5634 |
| 0.706 | 3.0 | 477 | 0.6875 | 0.5634 |
| 0.7032 | 4.0 | 636 | 0.6931 | 0.5634 |
| 0.7025 | 5.0 | 795 | 0.6953 | 0.3803 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gchhablani/wav2vec2-large-xlsr-eo | 55c9e3941c3675b06fa9f71981d991a700a3717b | 2021-07-06T04:31:31.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"eo",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gchhablani | null | gchhablani/wav2vec2-large-xlsr-eo | 2 | null | transformers | 23,961 | ---
language: eo
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 Large 53 Esperanto by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice eo
type: common_voice
args: eo
metrics:
- name: Test WER
type: wer
value: 10.13
---
# Wav2Vec2-Large-XLSR-53-Esperanto
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Esperanto using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "eo", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained('gchhablani/wav2vec2-large-xlsr-eo')
model = Wav2Vec2ForCTC.from_pretrained('gchhablani/wav2vec2-large-xlsr-eo')
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Esperanto test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import jiwer
def chunked_wer(targets, predictions, chunk_size=None):
if chunk_size is None: return jiwer.wer(targets, predictions)
start = 0
end = chunk_size
H, S, D, I = 0, 0, 0, 0
while start < len(targets):
chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
H = H + chunk_metrics["hits"]
S = S + chunk_metrics["substitutions"]
D = D + chunk_metrics["deletions"]
I = I + chunk_metrics["insertions"]
start += chunk_size
end += chunk_size
return float(S + D + I) / float(H + S + D)
test_dataset = load_dataset("common_voice", "eo", split="test") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained('gchhablani/wav2vec2-large-xlsr-eo')
model = Wav2Vec2ForCTC.from_pretrained('gchhablani/wav2vec2-large-xlsr-eo')
model.to("cuda")
chars_to_ignore_regex = """[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\„\\«\\(\\»\\)\\’\\']"""
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace('—',' ').replace('–',' ')
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * chunked_wer(predictions=result["pred_strings"], targets=result["sentence"],chunk_size=5000)))
```
**Test Result**: 10.13 %
## Training
The Common Voice `train` and `validation` datasets were used for training. The code can be found [here](https://github.com/gchhablani/wav2vec2-week/blob/main/fine-tune-xlsr-wav2vec2-on-esperanto-asr-with-transformers-final.ipynb). |
gchhablani/wav2vec2-large-xlsr-it | c4017d49e89eb5d425e69e07faee8c9d2b60bf4e | 2021-07-06T04:55:11.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"it",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | gchhablani | null | gchhablani/wav2vec2-large-xlsr-it | 2 | null | transformers | 23,962 | ---
language: it
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 Large 53 Italian by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice it
type: common_voice
args: it
metrics:
- name: Test WER
type: wer
value: 11.49
---
# Wav2Vec2-Large-XLSR-53-Italian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Italian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "it", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained('gchhablani/wav2vec2-large-xlsr-it')
model = Wav2Vec2ForCTC.from_pretrained('gchhablani/wav2vec2-large-xlsr-it')
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Italian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import unicodedata
import jiwer
def chunked_wer(targets, predictions, chunk_size=None):
if chunk_size is None: return jiwer.wer(targets, predictions)
start = 0
end = chunk_size
H, S, D, I = 0, 0, 0, 0
while start < len(targets):
chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
H = H + chunk_metrics["hits"]
S = S + chunk_metrics["substitutions"]
D = D + chunk_metrics["deletions"]
I = I + chunk_metrics["insertions"]
start += chunk_size
end += chunk_size
return float(S + D + I) / float(H + S + D)
allowed_characters = [
" ",
"'",
'a',
'b',
'c',
'd',
'e',
'f',
'g',
'h',
'i',
'j',
'k',
'l',
'm',
'n',
'o',
'p',
'q',
'r',
's',
't',
'u',
'v',
'w',
'x',
'y',
'z',
'à',
'á',
'è',
'é',
'ì',
'í',
'ò',
'ó',
'ù',
'ú',
]
def remove_accents(input_str):
if input_str in allowed_characters:
return input_str
if input_str == 'ø':
return 'o'
elif input_str=='ß' or input_str =='ß':
return 'b'
elif input_str=='ё':
return 'e'
elif input_str=='đ':
return 'd'
nfkd_form = unicodedata.normalize('NFKD', input_str)
only_ascii = nfkd_form.encode('ASCII', 'ignore').decode()
if only_ascii is None or only_ascii=='':
return input_str
else:
return only_ascii
def fix_accents(sentence):
new_sentence=''
for char in sentence:
new_sentence+=remove_accents(char)
return new_sentence
test_dataset = load_dataset("common_voice", "it", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained('gchhablani/wav2vec2-large-xlsr-it')
model = Wav2Vec2ForCTC.from_pretrained('gchhablani/wav2vec2-large-xlsr-it')
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_remove= [",", "?", ".", "!", "-", ";", ":", '""', "%", '"', "�",'ʿ','“','”','(','=','`','_','+','«','<','>','~','…','«','»','–','\[','\]','°','̇','´','ʾ','„','̇','̇','̇','¡'] # All extra characters
chars_to_remove_regex = f'[{"".join(chars_to_remove)}]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_remove_regex, '', batch["sentence"]).lower().replace('‘',"'").replace('ʻ',"'").replace('ʼ',"'").replace('’',"'").replace('ʹ',"''").replace('̇','')
batch["sentence"] = fix_accents(batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * chunked_wer(predictions=result["pred_strings"], targets=result["sentence"],chunk_size=5000)))
```
**Test Result**: 11.49 %
## Training
The Common Voice `train` and `validation` datasets were used for training. The code can be found [here](https://github.com/gchhablani/wav2vec2-week/blob/main/fine-tune-xlsr-wav2vec2-on-italian-asr-with-transformers_final.ipynb). |
ghadeermobasher/BC5CDR-Chemical-Disease_ImbalancedBioM-ELECTRA-Base-Discriminator | 9193e56ea8f676662b5b58df9fb1e6b92c71f98f | 2022-01-22T23:39:05.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chemical-Disease_ImbalancedBioM-ELECTRA-Base-Discriminator | 2 | null | transformers | 23,963 | Entry not found |
ghadeermobasher/BC5CDR-Chemical_ImbalancedBioM-ELECTRA-Base-Discriminator | 8355c7168186fd9897a4b14064d742f925140e03 | 2022-01-23T00:52:03.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chemical_ImbalancedBioM-ELECTRA-Base-Discriminator | 2 | null | transformers | 23,964 | Entry not found |
ghadeermobasher/BioNLP13CG-Chem_ImbalancedBioM-ELECTRA-Base-Discriminator | 95b29846fcefe91e2657554868ba8f57dbc50a8f | 2022-01-22T23:21:19.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BioNLP13CG-Chem_ImbalancedBioM-ELECTRA-Base-Discriminator | 2 | null | transformers | 23,965 | Entry not found |
ghazikhanihamed/A-TCDB-BERT-C | 261940af2f6b2c45aaa1ba3f4d5878844575972e | 2022-02-19T19:09:07.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ghazikhanihamed | null | ghazikhanihamed/A-TCDB-BERT-C | 2 | null | transformers | 23,966 | Entry not found |
ghazikhanihamed/TCDB-BERT-C | 7b6fc1375700a9156a33b03b81517dc20cbad967 | 2022-02-18T20:37:14.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ghazikhanihamed | null | ghazikhanihamed/TCDB-BERT-C | 2 | null | transformers | 23,967 | Entry not found |
ghazikhanihamed/IonchannelBERT | a48c19fff5aa01a35688f3b3ae6319a3783edaf9 | 2022-02-19T18:58:49.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ghazikhanihamed | null | ghazikhanihamed/IonchannelBERT | 2 | null | transformers | 23,968 | Entry not found |
ghazikhanihamed/TransporterBERT | 34ab3b72d02ea59289aceb20720c63edc45aed1e | 2022-02-19T18:49:26.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ghazikhanihamed | null | ghazikhanihamed/TransporterBERT | 2 | null | transformers | 23,969 | Entry not found |
ghomasHudson/style_change_detection | 40fa1f8666203ac8b42d38ab07f463e20c55ad48 | 2022-01-01T17:44:55.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ghomasHudson | null | ghomasHudson/style_change_detection | 2 | null | transformers | 23,970 | Entry not found |
ghosh-r/bn-poets | 7c28c05f5508b503ec1a9873cd8915ee921eb46a | 2021-07-20T18:25:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ghosh-r | null | ghosh-r/bn-poets | 2 | null | transformers | 23,971 | Entry not found |
ghosh-r/robi-kobi | dad41b5a2ebb651053fb9d756d1304b04ba128cd | 2021-07-20T15:25:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"bn",
"transformers"
] | text-generation | false | ghosh-r | null | ghosh-r/robi-kobi | 2 | 1 | transformers | 23,972 | ---
language: bn
tags:
- text-generation
widget:
- text: তোমাকে দেখেছি আমার হৃদয় মাঝে
---
# Robi Kobi
### Created by [Ritobrata Ghosh](https://ghosh-r.github.io)
A model that writes Bengali poems in the style of Nobel Laureate poet Rabindranath Tagore.
This model is fine-tuned on 1,400+ poems written by Rabindranath Tagore. This model leverages the [Bangla GPT-2](https://huggingface.co/ghosh-r/bangla-gpt2) pretrained model, trained on mc4-Bengali dataset. |
giacomomiolo/electramed_base_scivocab_500k | 17838316f1084bd64a95bb2d1bb116c5291f7fda | 2020-09-28T07:33:46.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
] | null | false | giacomomiolo | null | giacomomiolo/electramed_base_scivocab_500k | 2 | null | transformers | 23,973 | Entry not found |
giganticode/bert-base-StackOverflow-comments_1M | 96482783ebdcf46c7c705c7ea6671197e3de9e6c | 2021-10-25T13:01:00.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | giganticode | null | giganticode/bert-base-StackOverflow-comments_1M | 2 | null | transformers | 23,974 | Entry not found |
giganticode/bert-base-ar_miner | b2e7e3cf835afbccce17126e8a57b68135d8dd81 | 2021-10-25T13:00:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | giganticode | null | giganticode/bert-base-ar_miner | 2 | null | transformers | 23,975 | Entry not found |
giganticode/roberta-base-ar_miner | a5e9cf2b235912558c68265229798204fb8c2fde | 2021-10-25T13:00:45.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | giganticode | null | giganticode/roberta-base-ar_miner | 2 | null | transformers | 23,976 | Entry not found |
glasses/cse_resnet50 | 6e60ce1e455ef0c0696d2188c7ff483ff014c5d4 | 2021-04-24T10:50:58.000Z | [
"pytorch",
"arxiv:1512.03385",
"arxiv:1812.01187",
"transformers"
] | null | false | glasses | null | glasses/cse_resnet50 | 2 | null | transformers | 23,977 | # cse_resnet50
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in "Bag of Tricks for Image Classification with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change the shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/densenet161 | 154287b54f54c2f0367f57572ddba73176a0128a | 2021-12-01T07:50:20.000Z | [
"pytorch",
"arxiv:1608.06993",
"transformers"
] | null | false | glasses | null | glasses/densenet161 | 2 | null | transformers | 23,978 | # densenet161
Implementation of DenseNet proposed in [Densely Connected Convolutional
Networks](https://arxiv.org/abs/1608.06993)
Create the default models:
``` python
DenseNet.densenet121()
DenseNet.densenet161()
DenseNet.densenet169()
DenseNet.densenet201()
```
Examples:
``` python
# change activation
DenseNet.densenet121(activation = nn.SELU)
# change number of classes (default is 1000 )
DenseNet.densenet121(n_classes=100)
# pass a different block
DenseNet.densenet121(block=...)
# change the initial convolution
model = DenseNet.densenet121()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = DenseNet.densenet121()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7]), torch.Size([1, 1024, 7, 7])]
```
|
glasses/regnetx_006 | ce3cfb5d69d89a3ee3d79769fb6d6c3f0b69bf39 | 2021-11-30T20:26:24.000Z | [
"pytorch",
"arxiv:2003.13678",
"transformers"
] | null | false | glasses | null | glasses/regnetx_006 | 2 | null | transformers | 23,979 | # regnetx_006
Implementation of RegNet proposed in [Designing Network Design
Spaces](https://arxiv.org/abs/2003.13678)
The main idea is to start with a high-dimensional search space and
iteratively reduce it by empirically applying constraints
based on the best-performing models sampled from the current search
space.
The resulting models are light, accurate, and faster than
EfficientNets (up to 5x faster!).
For example, to go from $AnyNet_A$ to $AnyNet_B$, they fixed the
bottleneck ratio $b_i$ for all stages $i$. The following table shows
all the restrictions applied from one search space to the next one.

The paper is really well written and very interesting; I highly
recommend reading it.
``` python
RegNet.regnetx_002()
RegNet.regnetx_004()
RegNet.regnetx_006()
RegNet.regnetx_008()
RegNet.regnetx_016()
RegNet.regnetx_040()
RegNet.regnetx_064()
RegNet.regnetx_080()
RegNet.regnetx_120()
RegNet.regnetx_160()
RegNet.regnetx_320()
# Y variants (with SE)
RegNet.regnety_002()
# ...
RegNet.regnety_320()
```
You can easily customize your model.
Examples:
``` python
# change activation
RegNet.regnetx_004(activation = nn.SELU)
# change number of classes (default is 1000 )
RegNet.regnetx_004(n_classes=100)
# pass a different block
RegNet.regnetx_004(block=RegNetYBotteneckBlock)
# change the stem
model = RegNet.regnetx_004(stem=ResNetStemC)
# change the shortcut
model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = RegNet.regnetx_004()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])]
```
|
glasses/regnety_040 | d32e05788849c68950174e2a89c849cf54ecd7ac | 2021-12-01T07:47:20.000Z | [
"pytorch",
"transformers"
] | null | false | glasses | null | glasses/regnety_040 | 2 | null | transformers | 23,980 | Entry not found |
glasses/resnet34d | b53c002f7bd81002fac17b1775e726fe6f3f6073 | 2021-11-30T20:08:51.000Z | [
"pytorch",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"transformers",
"image-classification",
"license:apache-2.0"
] | image-classification | false | glasses | null | glasses/resnet34d | 2 | null | transformers | 23,981 | ---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# resnet34d
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in `Bag of Tricks for Image Classification with Convolutional Neural Networks <https://arxiv.org/pdf/1812.01187.pdf>`_
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change the shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
glasses/vgg11_bn | 7b029f3935847f14a1ebf6bc397dc46f7685a3b6 | 2021-12-01T07:58:18.000Z | [
"pytorch",
"transformers"
] | null | false | glasses | null | glasses/vgg11_bn | 2 | null | transformers | 23,982 | # vgg11_bn
Implementation of VGG proposed in [Very Deep Convolutional Networks For
Large-Scale Image Recognition](https://arxiv.org/pdf/1409.1556.pdf)
``` python
VGG.vgg11()
VGG.vgg13()
VGG.vgg16()
VGG.vgg19()
VGG.vgg11_bn()
VGG.vgg13_bn()
VGG.vgg16_bn()
VGG.vgg19_bn()
```
Please be aware that the `bn` models use BatchNorm, but
they are very old and people back then didn't know that the bias is
superfluous in a conv followed by a batchnorm.
Examples:
``` python
# change activation
VGG.vgg11(activation = nn.SELU)
# change number of classes (default is 1000 )
VGG.vgg11(n_classes=100)
# pass a different block
from nn.models.classification.senet import SENetBasicBlock
VGG.vgg11(block=SENetBasicBlock)
# store the features tensor after every block
```
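The last comment in the block above is left dangling; the sketch below shows one plausible way to do it, reusing the `model.encoder.features` hook pattern from the other glasses model cards in this collection (the import path and the exact feature shapes are assumptions, not verified here):

``` python
# Sketch (assumptions noted above): collect the feature tensors after every block.
import torch
from glasses.models import VGG  # assumed import path

model = VGG.vgg11_bn()
# calling .features activates the forward hooks that store intermediate outputs
model.encoder.features
model(torch.randn((1, 3, 224, 224)))
features = model.encoder.features
print([x.shape for x in features])
```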
|
glasses/vgg19_bn | dd9bded2e8a1c4119ba2692a06ffe9bdfaa50ed6 | 2021-12-01T08:06:23.000Z | [
"pytorch",
"transformers"
] | null | false | glasses | null | glasses/vgg19_bn | 2 | null | transformers | 23,983 | # vgg19_bn
Implementation of VGG proposed in [Very Deep Convolutional Networks For
Large-Scale Image Recognition](https://arxiv.org/pdf/1409.1556.pdf)
``` python
VGG.vgg11()
VGG.vgg13()
VGG.vgg16()
VGG.vgg19()
VGG.vgg11_bn()
VGG.vgg13_bn()
VGG.vgg16_bn()
VGG.vgg19_bn()
```
Please be aware that the `bn` models use BatchNorm, but
they are very old and people back then didn't know that the bias is
superfluous in a conv followed by a batchnorm.
Examples:
``` python
# change activation
VGG.vgg11(activation = nn.SELU)
# change number of classes (default is 1000 )
VGG.vgg11(n_classes=100)
# pass a different block
from nn.models.classification.senet import SENetBasicBlock
VGG.vgg11(block=SENetBasicBlock)
# store the features tensor after every block
```
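The final comment in the block above is also left dangling; a sketch of what it likely intends, following the same `model.encoder.features` hook pattern used elsewhere in this collection (import path and shapes are assumptions):

``` python
# Sketch: store the feature tensors after every block of vgg19_bn.
import torch
from glasses.models import VGG  # assumed import path

model = VGG.vgg19_bn()
model.encoder.features            # activate the forward hooks
model(torch.randn((1, 3, 224, 224)))
print([x.shape for x in model.encoder.features])
```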
|
glasses/wide_resnet101_2 | 1e96f7f5a1e6a4b3ec80e231449e7561c5264a20 | 2021-11-30T20:20:06.000Z | [
"pytorch",
"arxiv:1605.07146",
"transformers"
] | null | false | glasses | null | glasses/wide_resnet101_2 | 2 | null | transformers | 23,984 | # wide_resnet101_2
Implementation of Wide ResNet proposed in [\"Wide Residual
Networks\"](https://arxiv.org/pdf/1605.07146.pdf)
Create a default model
``` python
WideResNet.wide_resnet50_2()
WideResNet.wide_resnet101_2()
# create a wide_resnet18_4
WideResNet.resnet18(block=WideResNetBottleNeckBlock, width_factor=4)
```
Examples:
``` python
# change activation
WideResNet.wide_resnet50_2(activation = nn.SELU)
# change number of classes (default is 1000)
WideResNet.wide_resnet50_2(n_classes=100)
# pass a different block
WideResNet.wide_resnet50_2(block=SENetBasicBlock)
# change the initial convolution
model = WideResNet.wide_resnet50_2()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = WideResNet.wide_resnet50_2()
features = []
x = model.encoder.gate(x)
for block in model.encoder.layers:
x = block(x)
features.append(x)
print([x.shape for x in features])
# [torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7])]
```
|
google/multiberts-seed_0-step_1200k | c1a97ddff320d4c82d2d01a2fd6b85d525464c2f | 2021-11-06T00:11:30.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_0",
"multiberts-seed_0-step_1200k",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_0-step_1200k | 2 | null | transformers | 23,985 | ---
language: en
tags:
- multiberts
- multiberts-seed_0
- multiberts-seed_0-step_1200k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1200k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #0, captured at step 1200k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1200k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1200k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_1200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_0-step_1400k | 653707ac7e09bcf698e5bdd8c453f8f4c6f91862 | 2021-11-06T00:15:06.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_0",
"multiberts-seed_0-step_1400k",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_0-step_1400k | 2 | null | transformers | 23,986 | ---
language: en
tags:
- multiberts
- multiberts-seed_0
- multiberts-seed_0-step_1400k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1400k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #0, captured at step 1400k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1400k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1400k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_1400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_0-step_160k | db56b9241f639ba6fb106e94e323c09fdb467ac1 | 2021-11-05T23:50:48.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_0",
"multiberts-seed_0-step_160k",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_0-step_160k | 2 | null | transformers | 23,987 | ---
language: en
tags:
- multiberts
- multiberts-seed_0
- multiberts-seed_0-step_160k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 160k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #0, captured at step 160k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_160k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_160k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_160k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_160k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_0-step_1700k | d1f04fe50d1d7147fdb4712969fbc13a5cf22fa3 | 2021-11-06T00:20:20.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_0",
"multiberts-seed_0-step_1700k",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_0-step_1700k | 2 | null | transformers | 23,988 | ---
language: en
tags:
- multiberts
- multiberts-seed_0
- multiberts-seed_0-step_1700k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1700k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #0, captured at step 1700k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1700k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1700k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1700k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_1700k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_0-step_2000k | acbb57400e56236672bd2d794196ff0280082a14 | 2021-11-06T00:25:17.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_0",
"multiberts-seed_0-step_2000k",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_0-step_2000k | 2 | null | transformers | 23,989 | ---
language: en
tags:
- multiberts
- multiberts-seed_0
- multiberts-seed_0-step_2000k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 2000k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #0, captured at step 2000k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_2000k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_2000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_2000k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_2000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_0-step_200k | 6eb8d7b244d784fab45aa1905917622063585c8a | 2021-11-05T23:54:26.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_0",
"multiberts-seed_0-step_200k",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_0-step_200k | 2 | null | transformers | 23,990 | ---
language: en
tags:
- multiberts
- multiberts-seed_0
- multiberts-seed_0-step_200k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 200k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #0, captured at step 200k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_200k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_200k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_0-step_700k | ae06621fa6d00385b52f5e91a8ad25a0a1a329c0 | 2021-11-06T00:02:59.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_0",
"multiberts-seed_0-step_700k",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_0-step_700k | 2 | null | transformers | 23,991 | ---
language: en
tags:
- multiberts
- multiberts-seed_0
- multiberts-seed_0-step_700k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 700k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #0, captured at step 700k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_700k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_700k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_700k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_700k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_0-step_800k | 7dff21ce1f35b24137b531900107eeb2c25260f0 | 2021-11-06T00:04:47.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_0",
"multiberts-seed_0-step_800k",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_0-step_800k | 2 | null | transformers | 23,992 | ---
language: en
tags:
- multiberts
- multiberts-seed_0
- multiberts-seed_0-step_800k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 800k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #0, captured at step 800k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_800k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_800k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_1-step_1000k | 0d46f8cefcf69ed3e8f7f91c9b0dd477feb004ef | 2021-11-06T01:03:18.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_1",
"multiberts-seed_1-step_1000k",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_1-step_1000k | 2 | null | transformers | 23,993 | ---
language: en
tags:
- multiberts
- multiberts-seed_1
- multiberts-seed_1-step_1000k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1000k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #1, captured at step 1000k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1000k')
model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1000k')
model = BertModel.from_pretrained("google/multiberts-seed_1-step_1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_1-step_100k | f971671d7c49aee360b3d18c24c414c6abebcba8 | 2021-11-06T00:41:44.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_1",
"multiberts-seed_1-step_100k",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_1-step_100k | 2 | null | transformers | 23,994 | ---
language: en
tags:
- multiberts
- multiberts-seed_1
- multiberts-seed_1-step_100k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 100k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #1, captured at step 100k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_100k')
model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_100k')
model = BertModel.from_pretrained("google/multiberts-seed_1-step_100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_1-step_1600k | 4d6efe4b69e715984733954a8673d544ee463935 | 2021-11-06T01:14:01.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_1",
"multiberts-seed_1-step_1600k",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_1-step_1600k | 2 | null | transformers | 23,995 | ---
language: en
tags:
- multiberts
- multiberts-seed_1
- multiberts-seed_1-step_1600k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1600k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #1, captured at step 1600k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1600k')
model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1600k')
model = BertModel.from_pretrained("google/multiberts-seed_1-step_1600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_1-step_1800k | 555c290f23fc1344c189554d3bec122ac9d49eeb | 2021-11-06T01:17:19.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_1",
"multiberts-seed_1-step_1800k",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_1-step_1800k | 2 | null | transformers | 23,996 | ---
language: en
tags:
- multiberts
- multiberts-seed_1
- multiberts-seed_1-step_1800k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1800k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #1, captured at step 1800k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is a TensorFlow example:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1800k')
model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1800k')
model = BertModel.from_pretrained("google/multiberts-seed_1-step_1800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_1-step_600k | 090348489654b6da4f3dba561a39bbb90ce0a039 | 2021-11-06T00:56:42.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_1",
"multiberts-seed_1-step_600k",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_1-step_600k | 2 | null | transformers | 23,997 | ---
language: en
tags:
- multiberts
- multiberts-seed_1
- multiberts-seed_1-step_600k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 600k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with hyper-parameters similar to those of [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which lead to variations in the initial weights and the order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #1, captured at step 600k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
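Because the checkpoint was trained with the MLM objective, a quick way to probe it is the fill-mask pipeline. This is a sketch: it assumes the released weights include a usable MLM head (the repository's `pretraining` tag suggests they do); otherwise `transformers` initialises the head randomly and warns.
```
from transformers import pipeline

# Sketch: probe the MLM objective; assumes the checkpoint ships an MLM head.
unmasker = pipeline("fill-mask", model="google/multiberts-seed_1-step_600k")
print(unmasker("The capital of France is [MASK]."))
```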
The intended uses, limitations, training data and training procedure for the fully trained model are similar to those of [BERT-base uncased](https://github.com/google-research/bert). Two major differences from the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is often comparable to that of the original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is a TensorFlow example:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_600k')
model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_600k')
model = BertModel.from_pretrained("google/multiberts-seed_1-step_600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_1-step_700k | cb4dd7d1b805cf63510517e3b0dd346687d41b73 | 2021-11-06T00:58:20.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_1",
"multiberts-seed_1-step_700k",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_1-step_700k | 2 | null | transformers | 23,998 | ---
language: en
tags:
- multiberts
- multiberts-seed_1
- multiberts-seed_1-step_700k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 700k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with hyper-parameters similar to those of [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which lead to variations in the initial weights and the order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #1, captured at step 700k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
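Since the checkpoint was also trained with the NSP objective, it can be probed with the dedicated next-sentence head. A minimal sketch; it assumes the released weights include the NSP head, otherwise `transformers` initialises it randomly and warns:
```
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

# Sketch: probe the NSP head. Logit index 0 = "B follows A", 1 = "B is random".
tok = BertTokenizer.from_pretrained("google/multiberts-seed_1-step_700k")
nsp = BertForNextSentencePrediction.from_pretrained("google/multiberts-seed_1-step_700k")
inputs = tok("The weather was sunny.", "We went to the beach.", return_tensors="pt")
print(torch.softmax(nsp(**inputs).logits, dim=-1))
```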
The intended uses, limitations, training data and training procedure for the fully trained model are similar to those of [BERT-base uncased](https://github.com/google-research/bert). Two major differences from the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is often comparable to that of the original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is a TensorFlow example:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_700k')
model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_700k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_700k')
model = BertModel.from_pretrained("google/multiberts-seed_1-step_700k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
google/multiberts-seed_2-step_1100k | f4fe2148b0867e8099f1d1d84b2a7b48bdd05cc6 | 2021-11-06T01:54:18.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2106.16163",
"arxiv:1908.08962",
"transformers",
"multiberts",
"multiberts-seed_2",
"multiberts-seed_2-step_1100k",
"license:apache-2.0"
] | null | false | google | null | google/multiberts-seed_2-step_1100k | 2 | null | transformers | 23,999 | ---
language: en
tags:
- multiberts
- multiberts-seed_2
- multiberts-seed_2-step_1100k
license: apache-2.0
---
# MultiBERTs, Intermediate Checkpoint - Seed 2, Step 1100k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with hyper-parameters similar to those of [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which lead to variations in the initial weights and the order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #2, captured at step 1100k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar to those of [BERT-base uncased](https://github.com/google-research/bert). Two major differences from the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is often comparable to that of the original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms the original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
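For GLUE-style comparisons such as the one above, each checkpoint is fine-tuned with a task-specific classification head. A minimal sketch of the setup only; the head is initialised randomly, and the training loop and hyper-parameters (which are not the ones from the paper) are omitted:
```
from transformers import BertForSequenceClassification, BertTokenizer

# Sketch: attach a randomly initialised 2-way classification head for
# GLUE-style fine-tuning. Training loop and hyper-parameters omitted.
tok = BertTokenizer.from_pretrained("google/multiberts-seed_2-step_1100k")
model = BertForSequenceClassification.from_pretrained(
    "google/multiberts-seed_2-step_1100k", num_labels=2
)
```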
### How to use
Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is a TensorFlow example:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_1100k')
model = TFBertModel.from_pretrained("google/multiberts-seed_2-step_1100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_1100k')
model = BertModel.from_pretrained("google/multiberts-seed_2-step_1100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|