modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
danurahul/distil | d77bcd0bb0d4c2c0bf1990ed14d95573c4d30631 | 2021-06-08T02:21:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | danurahul | null | danurahul/distil | 1 | null | transformers | 28,800 | Entry not found |
danurahul/ghosh_dentist | ed86362b7501f582bbe28ba986d30d6c4a950b08 | 2021-07-07T07:57:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | danurahul | null | danurahul/ghosh_dentist | 1 | null | transformers | 28,801 | Entry not found |
danurahul/gptneo_tarot | cf1b90179ed6789c2263749723ed789cc83b5a04 | 2021-05-16T11:01:00.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | danurahul | null | danurahul/gptneo_tarot | 1 | null | transformers | 28,802 | Entry not found |
danurahul/wav2vec2-large-xlsr-or | f8c84a7ccc70307b35684dc0c1cd48ede9d7516d | 2021-07-06T01:22:42.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"or",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | danurahul | null | danurahul/wav2vec2-large-xlsr-or | 1 | null | transformers | 28,803 | ---
language: or
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: odia XLSR Wav2Vec2 Large 2000
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice or
type: common_voice
args: or
metrics:
- name: Test WER
type: wer
value: 54.6
---
# Wav2Vec2-Large-XLSR-53-or
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "or", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-or")
model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-or")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Odia test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "or", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-or")
model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-or")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed dataset and collect the predicted strings
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 54.6 %
## Training
The Common Voice `train`, `validation`, and `test` datasets were used for training as well as for prediction and testing.
The script used for training can be found [here](https://github.com/rahul-art/wav2vec2_or). |
danurahul/yoav_gpt_neo1.3B | 7ffe0d32ba66828c8be7c1adeef778a9156684fe | 2021-06-18T03:52:45.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | danurahul | null | danurahul/yoav_gpt_neo1.3B | 1 | null | transformers | 28,804 | Entry not found |
danurahul/yoav_neo_spaces | 32ab011ab3cc6ccdf0fad9eca803c26c75ea11b7 | 2021-06-28T07:34:43.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | danurahul | null | danurahul/yoav_neo_spaces | 1 | null | transformers | 28,805 | Entry not found |
dark-knight/wav2vec2-base-timit-demo-colab | b8f307f17ef64909b151c859aa9e8367908cfacf | 2022-02-06T16:25:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | dark-knight | null | dark-knight/wav2vec2-base-timit-demo-colab | 1 | null | transformers | 28,806 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
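
The same settings expressed as Hugging Face `TrainingArguments` might look roughly like this (a sketch under the assumption that the standard `Trainer` was used; the output directory name is made up):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-demo-colab",  # assumed name, not confirmed by the card
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed-precision training
)
```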
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
darkzek/chickenbot-jon-snow | 0530347afe8849502b9da63aba0d831660139bca | 2022-02-17T01:51:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | darkzek | null | darkzek/chickenbot-jon-snow | 1 | null | transformers | 28,807 | ---
tags:
- conversational
---
# Chicken Bot's Jon Snow DialoGPT Model |
darthboii/DialoGPT-small-PickleRick | 79742c0527c1c7b17b53c42446ba182c30e33532 | 2021-09-15T07:48:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | darthboii | null | darthboii/DialoGPT-small-PickleRick | 1 | null | transformers | 28,808 | ---
tags:
- conversational
---
# Pickle Rick DialoGPT Model |
databuzzword/JointBERT-snips | 39e92ce5acb96c12319087acb39700edd983fe3b | 2021-09-22T14:02:14.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | databuzzword | null | databuzzword/JointBERT-snips | 1 | null | transformers | 28,809 | https://github.com/monologg/JointBERT |
davanstrien/eighteenth-century-distilbert | 561cb60bcc12d2cd7ed065e15986c292fd5ad032 | 2022-02-01T08:42:48.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | davanstrien | null | davanstrien/eighteenth-century-distilbert | 1 | null | transformers | 28,810 | Entry not found |
davidcechak/tss_bert_6 | 05e18f338a3e023d00fa4a6c3cec2851155d6eff | 2022-01-25T17:17:11.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | davidcechak | null | davidcechak/tss_bert_6 | 1 | null | transformers | 28,811 | Entry not found |
davidcechak/tss_bert_6_v1 | 96a2ed1a77c3ed60f9b971f67a17baab08f22dda | 2022-01-27T03:37:47.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | davidcechak | null | davidcechak/tss_bert_6_v1 | 1 | null | transformers | 28,812 | Entry not found |
day/first-bot-large | 86d1321e3098ab14e8a72350bb6da6d0978f0a12 | 2022-01-14T11:29:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | day | null | day/first-bot-large | 1 | null | transformers | 28,813 | Entry not found |
day/first-bot-medium | 9a988cb50329ea89ff5cd109ee35daf8b436aead | 2022-01-14T12:52:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | day | null | day/first-bot-medium | 1 | null | transformers | 28,814 | Entry not found |
day/first-bot-small | a45706d4f719a80e4202bca7f4dd14e11f83e24c | 2022-01-14T10:50:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | day | null | day/first-bot-small | 1 | null | transformers | 28,815 | Entry not found |
day/her-bot-small | 4131130a1490eb8867b4ef5413135ce7592d56b2 | 2022-01-16T15:49:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | day | null | day/her-bot-small | 1 | null | transformers | 28,816 | Entry not found |
dbernsohn/t5_measurement_time | 8cb68134da58f7785539faad606d27ddd9c4e707 | 2021-06-23T12:17:10.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:measurement_time",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | dbernsohn | null | dbernsohn/t5_measurement_time | 1 | null | transformers | 28,817 | # measurement_time
---
language: en
datasets:
- measurement_time
---
This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on [math_dataset/measurement_time](https://www.tensorflow.org/datasets/catalog/math_dataset#mathdatasetmeasurement_time) for solving **measurement time** problems.
To load the model (necessary packages: `pip install transformers sentencepiece`):
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/t5_measurement_time")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/t5_measurement_time")
```
You can then use this model to solve measurement time questions.
```python
query = "How many minutes are there between 2:09 PM and 2:27 PM?"
input_text = f"{query} </s>"
features = tokenizer([input_text], return_tensors='pt')
model.to('cuda')
output = model.generate(input_ids=features['input_ids'].cuda(),
attention_mask=features['attention_mask'].cuda())
tokenizer.decode(output[0])
# <pad> 18</s>
```
More examples:
+ How many minutes are there between 2:09 PM and 2:27 PM?
+ Answer: 18 Pred: 18
----
+ What is 116 minutes after 10:06 AM?
+ Answer: 12:02 PM Pred: 12:02 PM
----
+ What is 608 minutes after 3:14 PM?
+ Answer: 1:22 AM Pred: 1:22 AM
----
+ What is 64 minutes before 9:16 AM?
+ Answer: 8:12 AM Pred: 8:12 AM
----
+ What is 427 minutes before 4:27 AM?
+ Answer: 9:20 PM Pred: 9:20 PM
----
+ How many minutes are there between 6:36 PM and 12:15 AM?
+ Answer: 339 Pred: 339
----
+ What is 554 minutes before 5:24 PM?
+ Answer: 8:10 AM Pred: 8:10 AM
----
+ What is 307 minutes after 5:15 AM?
+ Answer: 10:22 AM Pred: 10:22 AM
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/MathLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
dbernsohn/t5_numbers_gcd | 6d5b21bb85463bc3d056bafa52535b7a44e7b188 | 2021-02-08T06:52:18.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:numbers_gcd",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | dbernsohn | null | dbernsohn/t5_numbers_gcd | 1 | null | transformers | 28,818 | # numbers_gcd
---
language: en
datasets:
- numbers_gcd
---
This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on [math_dataset/numbers_gcd](https://www.tensorflow.org/datasets/catalog/math_dataset#mathdatasetnumbers_gcd) for solving **greatest common divisor** problems.
To load the model (necessary packages: `pip install transformers sentencepiece`):
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/t5_numbers_gcd")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/t5_numbers_gcd")
```
You can then use this model to compute greatest common divisors.
```python
query = "What is the highest common factor of 4210884 and 72?"
input_text = f"{query} </s>"
features = tokenizer([input_text], return_tensors='pt')
model.to('cuda')
output = model.generate(input_ids=features['input_ids'].cuda(),
attention_mask=features['attention_mask'].cuda())
tokenizer.decode(output[0])
# <pad> 36</s>
```
More examples:
+ Calculate the greatest common factor of 3470 and 97090.
+ Answer: 10 Pred: 10
----
+ Calculate the highest common factor of 3480 and 775431.
+ Answer: 87 Pred: 87
----
+ What is the highest common divisor of 26 and 88049?
+ Answer: 13 Pred: 13
----
+ Calculate the highest common factor of 1416 and 24203688.
+ Answer: 1416 Pred: 1416
----
+ Calculate the highest common divisor of 124 and 69445828.
+ Answer: 124 Pred: 124
----
+ What is the greatest common factor of 657906 and 470?
+ Answer: 94 Pred: 94
----
+ What is the highest common factor of 4210884 and 72?
+ Answer: 36 Pred: 36
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/MathLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
dbmdz/bert-base-swedish-europeana-cased | 510c5372ecdb9e19aa27517a8e59ee3d1c6693b3 | 2021-11-18T21:35:46.000Z | [
"pytorch",
"jax",
"tensorboard",
"bert",
"fill-mask",
"swedish",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | dbmdz | null | dbmdz/bert-base-swedish-europeana-cased | 1 | null | transformers | 28,819 | ---
language: swedish
license: mit
widget:
- text: "Det vore [MASK] häller nödvändigt att be"
---
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink the corpus size
and use less noisy data (a short filtering sketch is given at the end of this subsection):
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
As for German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs (a short vocab-training sketch follows the table):
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
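
A minimal sketch of how a cased 32k WordPiece vocab could be built with the `tokenizers` library (the corpus file names are placeholders and the normalization settings are assumptions; the exact recipe used for these models is not given here):

```python
from tokenizers import BertWordPieceTokenizer

# Placeholder file names for the sub-sampled pretraining corpora from the table above.
corpus_files = [
    "german_10GB.txt", "french_10GB.txt", "english_10GB.txt",
    "finnish_9.5GB.txt", "swedish_9.7GB.txt",
]

# Cased WordPiece tokenizer, matching the "-cased" models in this card.
tokenizer = BertWordPieceTokenizer(lowercase=False, strip_accents=False)
tokenizer.train(files=corpus_files, vocab_size=32_000, min_frequency=2)
tokenizer.save_model(".", "historic-multilingual-32k")
```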
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora (a computation sketch follows the result tables below):
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
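
The fertility rate (average number of subwords per whitespace token) and the `[UNK]` portion reported above could be computed along these lines (a sketch; the toy sentence stands in for the CLEF-HIPE / NewsEye tokens, and the use of `AutoTokenizer` here is an assumption about the evaluation setup):

```python
from transformers import AutoTokenizer

def fertility_and_unk(words, tokenizer):
    """Return (subwords per word, fraction of [UNK] subwords) over a list of words."""
    n_subwords, n_unk = 0, 0
    for word in words:
        pieces = tokenizer.tokenize(word)
        n_subwords += len(pieces)
        n_unk += sum(piece == tokenizer.unk_token for piece in pieces)
    return n_subwords / len(words), n_unk / n_subwords

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-historic-multilingual-cased")
# Toy input; the numbers in the tables were computed on CLEF-HIPE / NewsEye tokens.
words = "Im Jahre 1888 erschien die Allgemeine Zeitung in München".split()
fertility, unk_portion = fertility_and_unk(words, tokenizer)
print(f"fertility: {fertility:.2f}  unknown portion: {unk_portion:.4f}")
```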
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Pretraining
## Multilingual model
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

## English model
The English BERT model - with texts from the British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Finnish model
The BERT model - with texts from the Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Swedish model
The BERT model - with texts from the Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
dbsamu/distilroberta-base-finetuned-ner | 21b5f55bcab9afba09efe4e920963699e70eda13 | 2022-01-23T17:53:07.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | dbsamu | null | dbsamu/distilroberta-base-finetuned-ner | 1 | null | transformers | 28,820 | Entry not found |
ddemszky/Feb25_09-02-16_combined_education_dataset_02252021.json_6.25e-05_hist1-truncated-acd81d | 0c7c21ecc96457144774181a1e955e6122749d68 | 2021-05-19T15:23:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"transformers"
] | null | false | ddemszky | null | ddemszky/Feb25_09-02-16_combined_education_dataset_02252021.json_6.25e-05_hist1-truncated-acd81d | 1 | null | transformers | 28,821 | Entry not found |
devin132/w2v-timit-ft-4001 | 55b7b88bd69435078117f3b9203728a1449c45ea | 2021-09-04T22:35:42.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | devin132 | null | devin132/w2v-timit-ft-4001 | 1 | null | transformers | 28,822 | # Fintuned Wav2Vec of Timit - 4001 checkpoint
|
dhanushlnaik/amySan | c68469e3a5e8cd675f8c145c0a98676d1118874f | 2021-08-28T06:39:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | dhanushlnaik | null | dhanushlnaik/amySan | 1 | null | transformers | 28,823 | ---
tags:
- conversational
---
# Amy San |
diegor2/t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetu-truncated-d22eed | 0c8e7fb6a22331b52e368a7dcec9451822764b4f | 2021-12-05T23:13:14.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16_en_ro_pre_processed",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | diegor2 | null | diegor2/t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetu-truncated-d22eed | 1 | null | transformers | 28,824 | ---
tags:
- generated_from_trainer
datasets:
- wmt16_en_ro_pre_processed
model-index:
- name: t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1
This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
diegor2/t5-tiny-random-length-96-learning_rate-0.0001-weight_decay-0.01-finetu-truncated-5e15da | ec98b8af44dce2e75fae3f59472c2d67cac475b9 | 2021-12-06T01:02:21.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | diegor2 | null | diegor2/t5-tiny-random-length-96-learning_rate-0.0001-weight_decay-0.01-finetu-truncated-5e15da | 1 | null | transformers | 28,825 | Entry not found |
diegor2/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetu-truncated-41f800 | abbc528360814723772cfb77ff83c5997667fff1 | 2021-12-06T00:23:37.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16_en_ro_pre_processed",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | diegor2 | null | diegor2/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetu-truncated-41f800 | 1 | null | transformers | 28,826 | ---
tags:
- generated_from_trainer
datasets:
- wmt16_en_ro_pre_processed
metrics:
- bleu
model-index:
- name: t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetuned-en-to-ro-TRAIN_EPOCHS-1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16_en_ro_pre_processed
type: wmt16_en_ro_pre_processed
args: enro
metrics:
- name: Bleu
type: bleu
value: 0.0002
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetuned-en-to-ro-TRAIN_EPOCHS-1
This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4897
- Bleu: 0.0002
- Gen Len: 9.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 6.2585 | 1.0 | 76290 | 6.4897 | 0.0002 | 9.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
diegor2/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1 | 28d5bef639f8027b408c02a712d1f972918208a1 | 2021-12-05T23:16:54.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16_en_ro_pre_processed",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | diegor2 | null | diegor2/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1 | 1 | null | transformers | 28,827 | ---
tags:
- generated_from_trainer
datasets:
- wmt16_en_ro_pre_processed
model-index:
- name: t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1
This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
diegozs97/chemprot-seed-0-0k | 8b96da64e4f4a6fd4960aa894cef7ea1dccc82e1 | 2021-12-06T23:14:56.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-0-0k | 1 | null | transformers | 28,828 | Entry not found |
diegozs97/chemprot-seed-0-100k | b52ed45bb0565f43b61e3ca35c5015e132eb413b | 2021-12-06T23:36:20.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-0-100k | 1 | null | transformers | 28,829 | Entry not found |
diegozs97/chemprot-seed-0-1800k | 93eee6ff6e3508de706ba5e609eb4f77b124547f | 2021-12-07T00:16:50.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-0-1800k | 1 | null | transformers | 28,830 | Entry not found |
diegozs97/chemprot-seed-0-2000k | 0b1c2bbb51792d51ee158ecd16873a8aab20d6e2 | 2021-12-07T00:26:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-0-2000k | 1 | null | transformers | 28,831 | Entry not found |
diegozs97/chemprot-seed-0-20k | da0c808f906438ead13dcc063b70e838852e6762 | 2021-12-06T23:26:03.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-0-20k | 1 | null | transformers | 28,832 | Entry not found |
diegozs97/chemprot-seed-0-400k | 16999268343d765d8cf24130ca78a765847f3220 | 2021-12-06T23:46:17.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-0-400k | 1 | null | transformers | 28,833 | Entry not found |
diegozs97/chemprot-seed-1-0k | 608c586f4c5852cdea979b439553a14f941e940d | 2021-12-07T01:41:55.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-1-0k | 1 | null | transformers | 28,834 | Entry not found |
diegozs97/chemprot-seed-1-1000k | 9a8cae5708d5d0dbda3e6ae8a6b7012cee895bfe | 2021-12-07T02:12:39.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-1-1000k | 1 | null | transformers | 28,835 | Entry not found |
diegozs97/chemprot-seed-1-100k | e134cb28365957a6f3242295f525fcab4db5a856 | 2021-12-07T02:23:47.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-1-100k | 1 | null | transformers | 28,836 | Entry not found |
diegozs97/chemprot-seed-1-1500k | d486f6526aaa0128894e95325611e9cbbdd2fbd4 | 2021-12-07T01:17:13.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-1-1500k | 1 | null | transformers | 28,837 | Entry not found |
diegozs97/chemprot-seed-1-1800k | b226dbdbd3d2d15121da9bc46f542454a34d4314 | 2021-12-07T01:24:46.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-1-1800k | 1 | null | transformers | 28,838 | Entry not found |
diegozs97/chemprot-seed-1-20k | 90e32cf5d32378fac325ffe4f54cdb25bc84912a | 2021-12-07T00:18:09.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-1-20k | 1 | null | transformers | 28,839 | Entry not found |
diegozs97/chemprot-seed-1-60k | 6f7660e742767969c8c7f7c9635df23d38d4fe85 | 2021-12-07T00:27:11.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-1-60k | 1 | null | transformers | 28,840 | Entry not found |
diegozs97/chemprot-seed-1-700k | aa4fc526fa09710fb0ffcb668119fb23afb9a97c | 2021-12-07T01:11:05.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-1-700k | 1 | null | transformers | 28,841 | Entry not found |
diegozs97/chemprot-seed-2-0k | 7f8c13dd8e5948f14254907f7ae2157f265fe4f2 | 2021-12-07T02:48:13.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-2-0k | 1 | null | transformers | 28,842 | Entry not found |
diegozs97/chemprot-seed-2-1000k | c10ce797cb99826e8899099328905e9ba744d67d | 2021-12-07T04:04:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-2-1000k | 1 | null | transformers | 28,843 | Entry not found |
diegozs97/chemprot-seed-2-100k | b2dad843e353e2526fc8a150f083a50a1ede83dc | 2021-12-07T03:06:46.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-2-100k | 1 | null | transformers | 28,844 | Entry not found |
diegozs97/chemprot-seed-2-1800k | e0ca01f93c82c4153e0ea00341173792e8adc69e | 2021-12-07T03:48:23.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-2-1800k | 1 | null | transformers | 28,845 | Entry not found |
diegozs97/chemprot-seed-2-200k | 4cd34834e72ba9cd0492ffeef9b03e91574f11af | 2021-12-07T03:13:53.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-2-200k | 1 | null | transformers | 28,846 | Entry not found |
diegozs97/chemprot-seed-2-20k | 9e743c7522bcc5bdceba72937d0a786e062799ab | 2021-12-07T02:53:05.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-2-20k | 1 | null | transformers | 28,847 | Entry not found |
diegozs97/chemprot-seed-2-400k | 930cd7a6e536e609a6a4718db4f3de52b084a970 | 2021-12-07T03:20:44.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-2-400k | 1 | null | transformers | 28,848 | Entry not found |
diegozs97/chemprot-seed-2-700k | 02ba5ab54118dcc198102c7a6a27703e13579668 | 2021-12-07T03:29:29.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-2-700k | 1 | null | transformers | 28,849 | Entry not found |
diegozs97/chemprot-seed-3-0k | 64aee58176dadc8d4851ea90d63082bf0bd034ad | 2021-12-07T05:24:49.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-3-0k | 1 | null | transformers | 28,850 | Entry not found |
diegozs97/chemprot-seed-3-1000k | 71d7668224714df0d26523a002f7556087e629dd | 2021-12-07T15:26:59.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-3-1000k | 1 | null | transformers | 28,851 | Entry not found |
diegozs97/chemprot-seed-3-100k | 367f8814a32090eaca334e59fc5fd3db0f5e0d91 | 2021-12-07T05:46:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-3-100k | 1 | null | transformers | 28,852 | Entry not found |
diegozs97/chemprot-seed-3-1800k | fbaab09ef98ef4dc57f1a609adf3a39ec523c0f1 | 2021-12-07T06:31:18.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-3-1800k | 1 | null | transformers | 28,853 | Entry not found |
diegozs97/chemprot-seed-3-200k | b18b5d7f0ccc3a2a7336d351d625df1cfc97db10 | 2021-12-07T05:51:38.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-3-200k | 1 | null | transformers | 28,854 | Entry not found |
diegozs97/chemprot-seed-3-20k | 44a259d833e6f7bd3800865a43c22e0fd769eb9a | 2021-12-07T05:32:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-3-20k | 1 | null | transformers | 28,855 | Entry not found |
diegozs97/chemprot-seed-3-400k | 09a73a179b2a76c64946acebde369b9840f54b16 | 2021-12-07T05:56:17.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-3-400k | 1 | null | transformers | 28,856 | Entry not found |
diegozs97/chemprot-seed-3-700k | a6caf07d7511ea3b828fc9271c50b2ef111e7f5b | 2021-12-07T06:00:56.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-3-700k | 1 | null | transformers | 28,857 | Entry not found |
diegozs97/chemprot-seed-4-0k | 33d5fc6e432a20cf8a5707431301eb6812339e00 | 2021-12-07T15:49:04.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-4-0k | 1 | null | transformers | 28,858 | Entry not found |
diegozs97/chemprot-seed-4-100k | 8ae99a01cac8b84d160782d06139397d420dfd3e | 2021-12-07T18:17:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-4-100k | 1 | null | transformers | 28,859 | Entry not found |
diegozs97/chemprot-seed-4-1500k | 1f0fc6b7555a51e12f9f21dab3b5fd66f9b796e5 | 2021-12-07T17:03:57.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-4-1500k | 1 | null | transformers | 28,860 | Entry not found |
diegozs97/chemprot-seed-4-1800k | cc9c35597edc91c01fe7bc12cdc56873eaeeb680 | 2021-12-07T17:10:51.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-4-1800k | 1 | null | transformers | 28,861 | Entry not found |
diegozs97/chemprot-seed-4-2000k | 66acffdec620f633b15c6fc9893161a3128b76d5 | 2021-12-07T17:15:33.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-4-2000k | 1 | null | transformers | 28,862 | Entry not found |
diegozs97/chemprot-seed-4-200k | acc697d6e82ad8a973308429e4a3a545b95c3159 | 2021-12-07T16:25:23.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-4-200k | 1 | null | transformers | 28,863 | Entry not found |
diegozs97/chemprot-seed-4-20k | a63bd1df8424a812a00c4668902a38454c7441c2 | 2021-12-07T15:57:04.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-4-20k | 1 | null | transformers | 28,864 | Entry not found |
diegozs97/chemprot-seed-4-60k | 9ad23e6adaeaf94c455467a9baabb0a9f8b6856a | 2021-12-07T16:05:44.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-4-60k | 1 | null | transformers | 28,865 | Entry not found |
diegozs97/chemprot-seed-4-700k | 2cc626617c4ecd60a1a6e20e1924c73d0db489a6 | 2021-12-07T16:39:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/chemprot-seed-4-700k | 1 | null | transformers | 28,866 | Entry not found |
diegozs97/sciie-seed-0-0k | c021bbb7683d89804c7e769e4b156fb89bfa0234 | 2021-12-08T19:42:13.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-0-0k | 1 | null | transformers | 28,867 | Entry not found |
diegozs97/sciie-seed-0-100k | ea3f5c6311a47ce6f20c370321ca9386b48b1d63 | 2021-12-08T18:53:22.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-0-100k | 1 | null | transformers | 28,868 | Entry not found |
diegozs97/sciie-seed-0-1500k | 6b6dec6f9b3dd3f82f00d94cd0ad0cf814095f1a | 2021-12-08T18:46:18.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-0-1500k | 1 | null | transformers | 28,869 | Entry not found |
diegozs97/sciie-seed-0-1800k | 0b966dc6f229c903f5db57e5472e58c83a57fb4d | 2021-12-08T20:02:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-0-1800k | 1 | null | transformers | 28,870 | Entry not found |
diegozs97/sciie-seed-0-2000k | a73c098edd38034b03d5dc306d663fd4cb1b4baf | 2021-12-08T21:26:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-0-2000k | 1 | null | transformers | 28,871 | Entry not found |
diegozs97/sciie-seed-0-20k | e40ef8dc8231799ecbd989cb5f72b705f62f6e01 | 2021-12-08T22:25:29.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-0-20k | 1 | null | transformers | 28,872 | Entry not found |
diegozs97/sciie-seed-0-400k | 9a944f730591285f3e7ec9c260f42dc9367e9cb5 | 2021-12-08T22:51:44.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-0-400k | 1 | null | transformers | 28,873 | Entry not found |
diegozs97/sciie-seed-1-0k | 42da34a8b838d2baf55620fccf7f8db275755b22 | 2021-12-07T02:57:43.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-1-0k | 1 | null | transformers | 28,874 | Entry not found |
diegozs97/sciie-seed-1-1000k | d5b319fbca7d44b34c07a6a0685d9eb26898e3f8 | 2021-12-07T03:39:18.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-1-1000k | 1 | null | transformers | 28,875 | Entry not found |
diegozs97/sciie-seed-1-100k | daa2b4bdf37f4753d95167c52c7ce52d67455173 | 2021-12-07T03:09:37.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-1-100k | 1 | null | transformers | 28,876 | Entry not found |
diegozs97/sciie-seed-1-1500k | 2bd0f1d6e961fe7e5177bda9f884177f1acad37d | 2021-12-07T02:24:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-1-1500k | 1 | null | transformers | 28,877 | Entry not found |
diegozs97/sciie-seed-1-1800k | 5caec790cb1ec4cb2d9e3e67a8b088386454d16f | 2021-12-07T04:02:52.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-1-1800k | 1 | null | transformers | 28,878 | Entry not found |
diegozs97/sciie-seed-1-2000k | 8be04bf4c4ebab61aae9c3dca15d5bbb659b8dd9 | 2021-12-07T03:57:09.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-1-2000k | 1 | null | transformers | 28,879 | Entry not found |
diegozs97/sciie-seed-1-200k | 32b5af7cca3bfbc374b88f128bb0787a2ccd686e | 2021-12-07T03:22:09.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-1-200k | 1 | null | transformers | 28,880 | Entry not found |
diegozs97/sciie-seed-1-20k | f5842c5eb83f461524547ce09d1d19e292d3e426 | 2021-12-07T01:29:25.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-1-20k | 1 | null | transformers | 28,881 | Entry not found |
diegozs97/sciie-seed-1-400k | b15d955984ae67ffc3f8e2c54d0a1f717ce262a9 | 2021-12-07T02:04:00.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-1-400k | 1 | null | transformers | 28,882 | Entry not found |
diegozs97/sciie-seed-1-60k | a0ee5cc370e1e19b1c9fa3dd333547b816476868 | 2021-12-07T01:40:35.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-1-60k | 1 | null | transformers | 28,883 | Entry not found |
diegozs97/sciie-seed-1-700k | fd6093b918f5aedcf7f49938373ee970a0bec0d7 | 2021-12-07T02:09:20.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-1-700k | 1 | null | transformers | 28,884 | Entry not found |
diegozs97/sciie-seed-2-1000k | e7b498ee80aaa16d48afaf69a6c2fdb28945f1f5 | 2021-12-07T05:18:07.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-2-1000k | 1 | null | transformers | 28,885 | Entry not found |
diegozs97/sciie-seed-2-100k | 91b5496c908c1a5926513af0e01d102ba4f6a262 | 2021-12-07T04:56:47.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-2-100k | 1 | null | transformers | 28,886 | Entry not found |
diegozs97/sciie-seed-2-1800k | f1e7d5e120b98a7baeef0cc4f8c82cce45aa4ab8 | 2021-12-07T05:27:43.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-2-1800k | 1 | null | transformers | 28,887 | Entry not found |
diegozs97/sciie-seed-2-2000k | 5af981a94333461d9f44d9bedfc6b0a50219a588 | 2021-12-07T05:36:37.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-2-2000k | 1 | null | transformers | 28,888 | Entry not found |
diegozs97/sciie-seed-2-200k | efcccf4bb5cc4e4511878338e7df7b5d08a568ff | 2021-12-07T06:22:18.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-2-200k | 1 | null | transformers | 28,889 | Entry not found |
diegozs97/sciie-seed-2-20k | c120377933a2a4016b24bfdb794f0fb53be516d2 | 2021-12-07T04:34:03.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-2-20k | 1 | null | transformers | 28,890 | Entry not found |
diegozs97/sciie-seed-2-400k | c88be99c9c3bad943d3939b5937a4923bc6ca0f3 | 2021-12-07T05:08:59.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-2-400k | 1 | null | transformers | 28,891 | Entry not found |
diegozs97/sciie-seed-2-60k | 0f0bc2b99758969b0a47a28eeafdea2338578e3d | 2021-12-07T04:52:15.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-2-60k | 1 | null | transformers | 28,892 | Entry not found |
diegozs97/sciie-seed-2-700k | 2676980acb13ba8920b4fdcbe846e79927415155 | 2021-12-07T05:13:38.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-2-700k | 1 | null | transformers | 28,893 | Entry not found |
diegozs97/sciie-seed-3-0k | 1f4523fc2a719ed468ed48ec4557be4427160ecd | 2021-12-07T18:34:08.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-3-0k | 1 | null | transformers | 28,894 | Entry not found |
diegozs97/sciie-seed-3-1000k | dcde2bd538c936d94eaba5b6cf9a78c3a5d5e02b | 2021-12-07T19:16:08.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-3-1000k | 1 | null | transformers | 28,895 | Entry not found |
diegozs97/sciie-seed-3-100k | e7174892b6bfcee8fa08f6be2b9cb809abd023a8 | 2021-12-07T18:46:25.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-3-100k | 1 | null | transformers | 28,896 | Entry not found |
diegozs97/sciie-seed-3-1500k | 571568a0b7cee3aa0dcef3ae310890315aa192f5 | 2021-12-07T16:26:50.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-3-1500k | 1 | null | transformers | 28,897 | Entry not found |
diegozs97/sciie-seed-3-1800k | 91f4d40f10a83b6ad89fa8d0456f4d0c933cefad | 2021-12-07T19:25:14.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-3-1800k | 1 | null | transformers | 28,898 | Entry not found |
diegozs97/sciie-seed-3-2000k | d8430dfa9ef85c99288b10d155dd1437931011d8 | 2021-12-07T17:01:55.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | diegozs97 | null | diegozs97/sciie-seed-3-2000k | 1 | null | transformers | 28,899 | Entry not found |