modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
s3h/mt5-small-finetuned-src-to-trg-testing | 4cc2bcde5b3bf42bca82f8daf733cff7b3ed19a8 | 2021-12-21T17:28:28.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | s3h | null | s3h/mt5-small-finetuned-src-to-trg-testing | 3 | null | transformers | 21,700 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-small-finetuned-src-to-trg-testing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-src-to-trg-testing
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 15.8614
- Bleu: 0.1222
- Gen Len: 3.75
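The card does not include an inference snippet; a minimal, hedged sketch for loading this checkpoint and generating output is shown below (the input sentence is a placeholder, since the source and target languages are not documented here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Minimal sketch, assuming a standard seq2seq setup; the input text is illustrative only.
tokenizer = AutoTokenizer.from_pretrained("s3h/mt5-small-finetuned-src-to-trg-testing")
model = AutoModelForSeq2SeqLM.from_pretrained("s3h/mt5-small-finetuned-src-to-trg-testing")

inputs = tokenizer("example source sentence", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```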
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 4 | 15.8782 | 0.1222 | 3.75 |
| No log | 2.0 | 8 | 15.7909 | 0.1222 | 3.75 |
| No log | 3.0 | 12 | 15.8614 | 0.1222 | 3.75 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.7.1
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
s87204/distilbert-base-uncased-finetuned-cola | 266cbd3fbc3107e0a9a476d3859326c24f8083ce | 2022-01-07T14:03:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | s87204 | null | s87204/distilbert-base-uncased-finetuned-cola | 3 | null | transformers | 21,701 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5365264430934975
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8505
- Matthews Correlation: 0.5365
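As a usage illustration (not part of the original card), a hedged sketch of running this checkpoint through the text-classification pipeline might look like this; the example sentence is made up:
```python
from transformers import pipeline

# Minimal sketch: CoLA-style acceptability classification with this checkpoint.
classifier = pipeline(
    "text-classification",
    model="s87204/distilbert-base-uncased-finetuned-cola",
)
print(classifier("The book was written by the author."))
```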
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5201 | 1.0 | 535 | 0.5345 | 0.4153 |
| 0.3469 | 2.0 | 1070 | 0.5033 | 0.5109 |
| 0.2367 | 3.0 | 1605 | 0.6589 | 0.5209 |
| 0.1705 | 4.0 | 2140 | 0.7778 | 0.5354 |
| 0.125 | 5.0 | 2675 | 0.8505 | 0.5365 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
saburbutt/testing | b6860ee37555235014a6cd5eea732dd5ce31683d | 2020-12-09T17:11:22.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saburbutt | null | saburbutt/testing | 3 | null | transformers | 21,702 | Entry not found |
sadakmed/dpr-passage_encoder-spanish | c029b37468a2ca1ac62c4302b93d50f4194ff02e | 2021-05-20T04:37:11.000Z | [
"pytorch",
"bert",
"es",
"transformers",
"dpr"
] | null | false | sadakmed | null | sadakmed/dpr-passage_encoder-spanish | 3 | null | transformers | 21,703 | ---
language: es
tags:
- dpr
---
This is a DPR passage_encoder model, finetuned with `dpr-question_encoder-spanish` on Spanish question answering data. |
saibo/random-roberta-mini | e5975979be8f930632c93595009c8d9965565ff3 | 2021-07-18T18:31:47.000Z | [
"pytorch",
"tf",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | saibo | null | saibo/random-roberta-mini | 3 | null | transformers | 21,704 | # random-roberta-mini
We introduce random-roberta-mini, an unpretrained version of a mini RoBERTa model (4 layers, 256 attention heads). The weights of random-roberta-mini are randomly initialized, which can be particularly useful when we aim to train a language model from scratch or to benchmark the effect of pretraining.
Note that the tokenizer of random-roberta-mini is the same as that of roberta-base, because building a random tokenizer is not trivial and would be less meaningful than randomizing the weights.
A debatable advantage of pulling random-roberta-mini from the Hugging Face Hub is that you get the same random initialization every time, without having to manage a random seed yourself.
The code used to obtain such a random model:
```python
from transformers import AutoTokenizer, RobertaConfig, RobertaModel

def get_custom_blank_roberta(h=768, l=12):
    # Initialize a RoBERTa configuration with the requested number of heads and layers
    configuration = RobertaConfig(num_attention_heads=h, num_hidden_layers=l)
    # Initialize a randomly weighted model from the configuration
    model = RobertaModel(configuration)
    return model

rank = "mini"
h = 256
l = 4
model_type = "roberta"
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model_name = "random-" + model_type + "-" + rank
model = get_custom_blank_roberta(h, l)
```
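As a hedged follow-up to the snippet above (not part of the original card), one might sanity-check the randomly initialized model with a forward pass and save it under the chosen name; the save path is an assumption for illustration:
```python
import torch

# Continues from the snippet above: `tokenizer`, `model` and `model_name` are already defined.
inputs = tokenizer("a quick sanity check", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)

# Save the random checkpoint locally (path is illustrative).
model.save_pretrained(model_name)
tokenizer.save_pretrained(model_name)
```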
|
saibo/random-roberta-tiny | c72295479db0e1332a060683e883d213fb21fe01 | 2021-07-18T18:28:26.000Z | [
"pytorch",
"tf",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | saibo | null | saibo/random-roberta-tiny | 3 | null | transformers | 21,705 | # random-roberta-tiny
We introduce random-roberta-tiny, an unpretrained version of a tiny RoBERTa model (2 layers, 128 attention heads). The weights of random-roberta-tiny are randomly initialized, which can be particularly useful when we aim to train a language model from scratch or to benchmark the effect of pretraining.
Note that the tokenizer of random-roberta-tiny is the same as that of roberta-base, because building a random tokenizer is not trivial and would be less meaningful than randomizing the weights.
A debatable advantage of pulling random-roberta-tiny from the Hugging Face Hub is that you get the same random initialization every time, without having to manage a random seed yourself.
The code used to obtain such a random model:
```python
from transformers import AutoTokenizer, RobertaConfig, RobertaModel

def get_custom_blank_roberta(h=768, l=12):
    # Initialize a RoBERTa configuration with the requested number of heads and layers
    configuration = RobertaConfig(num_attention_heads=h, num_hidden_layers=l)
    # Initialize a randomly weighted model from the configuration
    model = RobertaModel(configuration)
    return model

rank = "tiny"
h = 128
l = 2
model_type = "roberta"
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model_name = "random-" + model_type + "-" + rank
model = get_custom_blank_roberta(h, l)
```
|
salesken/clariq_gpt2 | fc78dbadf17a7957e35cca45134568de36a7a05d | 2021-05-23T12:22:04.000Z | [
"pytorch",
"jax",
"salesken",
"gpt2",
"lm-head",
"causal-lm",
"license:apache-2.0"
] | null | false | salesken | null | salesken/clariq_gpt2 | 3 | 1 | null | 21,706 |
---
tags:
- salesken
- gpt2
- lm-head
- causal-lm
- salesken
license: apache-2.0
inference: False
---
The ClariQ challenge [3] is organized as part of the Search-oriented Conversational AI (SCAI) EMNLP workshop in 2020. The main aim of conversational systems is to return an appropriate answer in response to user requests. However, some user requests might be ambiguous. In Information Retrieval (IR) settings, such a situation is handled mainly through diversification of the search result page. It is, however, much more challenging in dialogue settings. Hence, we aim to study the following situation for dialogue settings:<br />
A user asks an ambiguous question (i.e., a question to which more than one answer can be returned); instead of trying to answer it directly, the system should ask a good clarifying question.
__Query: Serve your models directly from Hugging Face infrastructure and run large scale NLP models in milliseconds with just a few lines of code__
***Top 5 clarifications generated:*** <br />
- are you looking for a suitable cloud platform to run your models on (Score: 0.3862) <br />
- are you looking for a quick test or a more complex model (Score: 0.3364) <br />
- how would you like your nlp model to be used (Score: 0.3249) <br />
- are you looking for a suitable ldl to use as a server or a client (Score: 0.3182) <br />
- how would you like to consume the nlp model (Score: 0.2842) <br />
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("salesken/clariq_gpt2")
model = AutoModelWithLMHead.from_pretrained("salesken/clariq_gpt2")

input_query = "Serve your models directly from Hugging Face infrastructure and run large scale NLP models in milliseconds with just a few lines of code"
query = input_query + " ~~ "

input_ids = tokenizer.encode(query.lower(), return_tensors='pt')
sample_outputs = model.generate(input_ids,
                                do_sample=True,
                                num_beams=1,
                                max_length=128,
                                temperature=0.9,
                                top_k=40,
                                num_return_sequences=10)

clarifications_gen = []
for i in range(len(sample_outputs)):
    r = tokenizer.decode(sample_outputs[i], skip_special_tokens=True).split('||')[0]
    r = r.split(' ~~ ~~')[1]
    if r not in clarifications_gen:
        clarifications_gen.append(r)

print(clarifications_gen)

# to select the top n results:
from sentence_transformers import SentenceTransformer, util
import torch

embedder = SentenceTransformer('paraphrase-distilroberta-base-v1')
corpus = clarifications_gen
corpus_embeddings = embedder.encode(corpus, convert_to_tensor=True)
query = input_query.lower()
query_embedding = embedder.encode(query, convert_to_tensor=True)

cos_scores = util.pytorch_cos_sim(query_embedding, corpus_embeddings)[0]
top_results = torch.topk(cos_scores, k=5)

print("Top clarifications generated :")
for score, idx in zip(top_results[0], top_results[1]):
    print(corpus[idx], "(Score: {:.4f})".format(score))
``` |
samitizerxu/wav2vec2-xls-r-300m-eo | 45c165446737b8fb0a54ed198a36f42f58b6cada | 2022-03-23T18:29:06.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"eo",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | samitizerxu | null | samitizerxu/wav2vec2-xls-r-300m-eo | 3 | null | transformers | 21,707 | ---
language:
- eo
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- eo
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-eo
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: eo
metrics:
- name: Test WER
type: wer
value: 34.72
- name: Test CER
type: cer
value: 7.54
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-eo
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - EO dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2584
- Wer: 0.3114
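The card does not show inference code; a hedged sketch using the ASR pipeline is given below ("sample.wav" is a placeholder for a 16 kHz Esperanto recording, not a file shipped with the model):
```python
from transformers import pipeline

# Minimal sketch: transcribe an Esperanto audio file with this checkpoint.
asr = pipeline("automatic-speech-recognition", model="samitizerxu/wav2vec2-xls-r-300m-eo")
print(asr("sample.wav"))  # "sample.wav" is a placeholder path
```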
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.1701 | 0.8 | 500 | 2.8105 | 1.0 |
| 1.9143 | 1.6 | 1000 | 0.5977 | 0.7002 |
| 1.1259 | 2.4 | 1500 | 0.5063 | 0.6157 |
| 0.9732 | 3.2 | 2000 | 0.4264 | 0.5673 |
| 0.8983 | 4.0 | 2500 | 0.4249 | 0.4902 |
| 0.8507 | 4.8 | 3000 | 0.3811 | 0.4536 |
| 0.8064 | 5.6 | 3500 | 0.3643 | 0.4467 |
| 0.7866 | 6.4 | 4000 | 0.3600 | 0.4453 |
| 0.7773 | 7.2 | 4500 | 0.3724 | 0.4470 |
| 0.747 | 8.0 | 5000 | 0.3501 | 0.4189 |
| 0.7279 | 8.8 | 5500 | 0.3500 | 0.4261 |
| 0.7153 | 9.6 | 6000 | 0.3328 | 0.3966 |
| 0.7 | 10.4 | 6500 | 0.3314 | 0.3869 |
| 0.6784 | 11.2 | 7000 | 0.3396 | 0.4051 |
| 0.6582 | 12.0 | 7500 | 0.3236 | 0.3899 |
| 0.6478 | 12.8 | 8000 | 0.3263 | 0.3832 |
| 0.6277 | 13.6 | 8500 | 0.3139 | 0.3769 |
| 0.6053 | 14.4 | 9000 | 0.2955 | 0.3536 |
| 0.5777 | 15.2 | 9500 | 0.2793 | 0.3413 |
| 0.5631 | 16.0 | 10000 | 0.2789 | 0.3353 |
| 0.5446 | 16.8 | 10500 | 0.2709 | 0.3264 |
| 0.528 | 17.6 | 11000 | 0.2693 | 0.3234 |
| 0.5169 | 18.4 | 11500 | 0.2656 | 0.3193 |
| 0.5041 | 19.2 | 12000 | 0.2575 | 0.3102 |
| 0.4971 | 20.0 | 12500 | 0.2584 | 0.3114 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id samitizerxu/wav2vec2-xls-r-300m-eo --dataset mozilla-foundation/common_voice_7_0 --config eo --split test
``` |
sammy786/wav2vec2-xlsr-breton | 3c63e90f648a6a21bf5a7e41a962544a7c4e9290 | 2022-03-23T18:33:06.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"br",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-xlsr-breton | 3 | null | transformers | 21,708 | ---
language:
- br
license: apache-2.0
tags:
- automatic-speech-recognition
- br
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-breton
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: br
metrics:
- name: Test WER
type: wer
value: 48.2
- name: Test CER
type: cer
value: 15.02
---
# sammy786/wav2vec2-xlsr-breton
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - br dataset.
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Breton train.tsv, dev.tsv and other.tsv
## Training procedure
To create the train dataset, all available splits were appended and a 90-10 split was used, as sketched below.
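The preprocessing code is not included in this card; a hedged sketch of how such a 90-10 split could be built with 🤗 Datasets follows (the exact splits and any filtering the author applied are assumptions):
```python
from datasets import load_dataset, concatenate_datasets

# Hedged sketch: merge the Common Voice Breton splits and hold out 10% for evaluation.
# Common Voice 8 is gated, so an access token is required.
cv = load_dataset("mozilla-foundation/common_voice_8_0", "br", use_auth_token=True)
combined = concatenate_datasets([cv["train"], cv["validation"], cv["other"]])
split = combined.train_test_split(test_size=0.1, seed=13)
train_ds, eval_ds = split["train"], split["test"]
```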
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 32
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-breton --dataset mozilla-foundation/common_voice_8_0 --config br --split test
``` |
sammy786/wav2vec2-xlsr-chuvash | 4d538d34ffcb21782ba93af5cc0450c5577f29f2 | 2022-03-24T11:58:35.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"cv",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-xlsr-chuvash | 3 | null | transformers | 21,709 | ---
language:
- cv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- cv
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-chuvash
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: cv
metrics:
- name: Test WER
type: wer
value: 27.81
- name: Test CER
type: cer
value: 5.79
---
# sammy786/wav2vec2-xlsr-chuvash
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - cv dataset.
It achieves the following results on the evaluation set (the 10 percent held out after merging the train, other and dev splits):
- Loss: 18.02
- Wer: 29.22
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Chuvash train.tsv, dev.tsv and other.tsv
## Training procedure
To create the train dataset, all available splits were appended and a 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
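For illustration only, the hyperparameters listed above could be expressed as a `TrainingArguments` sketch; the output directory and any unlisted defaults are assumptions, not taken from the author's training script:
```python
from transformers import TrainingArguments

# Hedged sketch mirroring the hyperparameters above; unlisted settings keep library defaults.
training_args = TrainingArguments(
    output_dir="wav2vec2-xlsr-chuvash",   # placeholder output directory
    learning_rate=4.5637994662983496e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,
    seed=13,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                            # "Native AMP" mixed precision
)
```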
### Training results
| Step | Training Loss | Validation Loss | Wer |
|:----:|:-------------:|:---------------:|:--------:|
| 200 | 6.559100 | 2.274687 | 1.000000 |
| 400 | 1.346100 | 0.508268 | 0.681995 |
| 600 | 0.797500 | 0.391174 | 0.572876 |
| 800 | 0.556300 | 0.308620 | 0.489283 |
| 1000 | 0.435800 | 0.273956 | 0.454014 |
| 1200 | 0.388700 | 0.311027 | 0.499415 |
| 1400 | 0.338300 | 0.243977 | 0.413874 |
| 1600 | 0.294000 | 0.214134 | 0.385230 |
| 1800 | 0.276000 | 0.245991 | 0.397311 |
| 2000 | 0.253900 | 0.208324 | 0.363016 |
| 2200 | 0.233600 | 0.222156 | 0.370811 |
| 2400 | 0.219700 | 0.202602 | 0.364186 |
| 2600 | 0.205000 | 0.241339 | 0.384451 |
| 2800 | 0.176000 | 0.263558 | 0.384061 |
| 3000 | 0.166700 | 0.211768 | 0.333398 |
| 3200 | 0.160600 | 0.198677 | 0.321512 |
| 3400 | 0.154600 | 0.208655 | 0.328722 |
| 3600 | 0.146800 | 0.188022 | 0.317810 |
| 3800 | 0.133200 | 0.181083 | 0.313133 |
| 4000 | 0.134200 | 0.190084 | 0.316251 |
| 4200 | 0.114200 | 0.193034 | 0.312159 |
| 4400 | 0.117300 | 0.194122 | 0.312354 |
| 4600 | 0.112300 | 0.191111 | 0.305534 |
| 4800 | 0.107800 | 0.185930 | 0.302611 |
| 5000 | 0.100400 | 0.178625 | 0.299883 |
| 5200 | 0.099800 | 0.176442 | 0.294622 |
| 5400 | 0.100800 | 0.177935 | 0.294427 |
| 5600 | 0.096300 | 0.182903 | 0.293843 |
| 5800 | 0.094200 | 0.181041 | 0.293453 |
| 6000 | 0.097600 | 0.179865 | 0.290725 |
| 6200 | 0.091600 | 0.180327 | 0.292868 |
| 6400 | 0.093100 | 0.180275 | 0.292284 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-chuvash --dataset mozilla-foundation/common_voice_8_0 --config cv --split test
``` |
sammy786/wav2vec2-xlsr-georgian | 60c826901a411f75d9c4f97d5afd265081b9d931 | 2022-03-24T11:56:11.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ka",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-xlsr-georgian | 3 | null | transformers | 21,710 | ---
language:
- ka
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ka
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-georgian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ka
metrics:
- name: Test WER
type: wer
value: 23.9
- name: Test CER
type: cer
value: 3.59
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ka
metrics:
- name: Test WER
type: wer
value: 75.07
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ka
metrics:
- name: Test WER
type: wer
value: 74.41
---
# sammy786/wav2vec2-xlsr-georgian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ka dataset.
It achieves the following results on the evaluation set (the 10 percent held out after merging the train, other and dev splits):
- Loss: 10.54
- Wer: 27.53
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Georgian train.tsv, dev.tsv and other.tsv
## Training procedure
To create the train dataset, all available splits were appended and a 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|:----:|:-------------:|:---------------:|:--------:|
| 200 | 4.152100 | 0.823672 | 0.967814 |
| 400 | 0.889500 | 0.196740 | 0.444792 |
| 600 | 0.493700 | 0.155659 | 0.366115 |
| 800 | 0.328000 | 0.138066 | 0.358069 |
| 1000 | 0.260600 | 0.119236 | 0.324989 |
| 1200 | 0.217200 | 0.114050 | 0.313366 |
| 1400 | 0.188800 | 0.112600 | 0.302190 |
| 1600 | 0.166900 | 0.111154 | 0.295485 |
| 1800 | 0.155500 | 0.109963 | 0.286544 |
| 2000 | 0.140400 | 0.107587 | 0.277604 |
| 2200 | 0.142600 | 0.105662 | 0.277157 |
| 2400 | 0.135400 | 0.105414 | 0.275369 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-georgian --dataset mozilla-foundation/common_voice_8_0 --config ka --split test
``` |
sancharidan/quantized_expfinder | 3bb7d910e0cd7363777f254ff2ed578744621822 | 2022-02-22T11:25:30.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:other"
] | text-classification | false | sancharidan | null | sancharidan/quantized_expfinder | 3 | null | transformers | 21,711 | ---
license: other
---
|
sanjaycode/demo_model | 083cf9c45265fa7fe26f0cb4c8159e3c05359c3e | 2021-09-07T04:22:09.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | sanjaycode | null | sanjaycode/demo_model | 3 | null | transformers | 21,712 | Entry not found |
sanqiang/qa_base | d2a0d57bc0ffea2941222c6158d13ae5f41cb8dd | 2021-10-21T21:27:40.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers"
] | feature-extraction | false | sanqiang | null | sanqiang/qa_base | 3 | null | transformers | 21,713 | Entry not found |
saraks/cuad-distil-governing_law-08-25-v1 | 54dbac6da583eb5de5cb80bd3629c6e0a48810f6 | 2021-08-25T16:31:01.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-governing_law-08-25-v1 | 3 | null | transformers | 21,714 | Entry not found |
sarnikowski/convbert-medium-small-da-cased | 2920e56f28103cf552b43c12fc94c8f4fb9826bb | 2021-03-18T22:27:12.000Z | [
"pytorch",
"tf",
"convbert",
"da",
"arxiv:2008.02496",
"transformers",
"license:cc-by-4.0"
] | null | false | sarnikowski | null | sarnikowski/convbert-medium-small-da-cased | 3 | null | transformers | 21,715 | ---
language: da
license: cc-by-4.0
---
# Danish ConvBERT medium small (cased)
[ConvBERT](https://arxiv.org/abs/2008.02496) model pretrained on a custom Danish corpus (~17.5gb).
For details regarding data sources and training procedure, along with benchmarks on downstream tasks, go to: https://github.com/sarnikowski/danish_transformers
## Usage
```python
from transformers import ConvBertTokenizer, ConvBertModel
tokenizer = ConvBertTokenizer.from_pretrained("sarnikowski/convbert-medium-small-da-cased")
model = ConvBertModel.from_pretrained("sarnikowski/convbert-medium-small-da-cased")
```
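As a small, hedged follow-up (not from the original card), the loaded model can be used to extract contextual embeddings for a Danish sentence; the sentence is illustrative only:
```python
# Continues from the snippet above: `tokenizer` and `model` are already defined.
inputs = tokenizer("Hej, hvordan går det?", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```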
## Questions?
If you have any questions feel free to open an issue on the [danish_transformers](https://github.com/sarnikowski/danish_transformers) repository, or send an email to [email protected]
|
seduerr/pai_pol | f1c78e740add53d59c8af81096d70648a116087b | 2021-06-25T06:24:06.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/pai_pol | 3 | null | transformers | 21,716 | Entry not found |
sefaozalpadl/stop_the_steal_relevancy_analysis-binary | e3f918a269c2ae0edec42bbe8dd00e0d2518cfc7 | 2021-11-07T16:57:11.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:sefaozalpadl/autonlp-data-stop_the_steal_relevancy_analysis",
"transformers",
"coe",
"co2_eq_emissions"
] | text-classification | false | sefaozalpadl | null | sefaozalpadl/stop_the_steal_relevancy_analysis-binary | 3 | null | transformers | 21,717 | ---
tags: coe
language: en
widget:
- text: "take our country back. Stop the steal! #trump2020"
datasets:
- sefaozalpadl/autonlp-data-stop_the_steal_relevancy_analysis
co2_eq_emissions: 0.6503024714880831
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 23995359
- CO2 Emissions (in grams): 0.6503024714880831
## Validation Metrics
- Loss: 0.49598395824432373
- Accuracy: 0.7907801418439716
- Precision: 0.7841726618705036
- Recall: 0.7898550724637681
- AUC: 0.8774154589371981
- F1: 0.7870036101083032
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/sefaozalpadl/stop_the_steal_relevancy_analysis-binary
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("sefaozalpadl/stop_the_steal_relevancy_analysis-binary", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("sefaozalpadl/stop_the_steal_relevancy_analysis-binary", use_auth_token=True)
inputs = tokenizer("take our country back. Stop the steal! #trump2020", return_tensors="pt")
outputs = model(**inputs)
``` |
sello-ralethe/bert-base-frozen-generics-mlm | 9aecb0488f70826d0ee70b2d1e6679ec6bed7ec2 | 2021-05-20T05:11:38.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sello-ralethe | null | sello-ralethe/bert-base-frozen-generics-mlm | 3 | null | transformers | 21,718 | BERT model fine-tuned for masked language modeling on a generics dataset, with all weights of the pretrained BERT frozen except the last layer. The aim is to investigate whether the model overgeneralizes generics and treats quantified statements such as 'All ducks lay eggs' or 'All tigers have stripes' as if they were generics.
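The fine-tuning code is not part of this card; a minimal, hedged sketch of the described setup (freezing every pretrained weight except the last encoder layer before masked-language-model fine-tuning) might look like this, where the base checkpoint and the choice of layer are assumptions based on the description above:
```python
from transformers import BertForMaskedLM

# Hedged sketch: freeze all pretrained weights, then unfreeze only the last encoder layer.
# "bert-base-uncased" and the layer choice are assumptions, not confirmed by the card.
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
for param in model.parameters():
    param.requires_grad = False
for param in model.bert.encoder.layer[-1].parameters():
    param.requires_grad = True
```
|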
seyonec/ChemBERTA_PubChem1M_shard00 | 83412d7b3bb604e2912e2a7258da186fa82f0cdf | 2021-05-20T20:50:55.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/ChemBERTA_PubChem1M_shard00 | 3 | null | transformers | 21,719 | Entry not found |
seyonec/ChemBERTA_PubChem1M_shard00_75k | d9f425d6043840cb02e285c4f40ec4e36f36a0d2 | 2021-05-20T20:54:57.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/ChemBERTA_PubChem1M_shard00_75k | 3 | null | transformers | 21,720 | Entry not found |
seyonec/PubChem10M_SMILES_BPE_390k | 922b97451583e4e54fd590946c9571a6b869313c | 2021-05-20T21:00:52.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/PubChem10M_SMILES_BPE_390k | 3 | null | transformers | 21,721 | Entry not found |
seyonec/SMILES_BPE_PubChem_100k_shard00 | e6b39d103d1ca94d0cf51e56c8e7a221d0d2dd00 | 2021-05-20T21:05:05.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | seyonec | null | seyonec/SMILES_BPE_PubChem_100k_shard00 | 3 | null | transformers | 21,722 | Entry not found |
sgugger/custom-resnet | 235083771e73d9fdaea63c012dfef9dbfa85e51c | 2022-02-09T14:47:38.000Z | [
"pytorch",
"resnet",
"transformers"
] | null | false | sgugger | null | sgugger/custom-resnet | 3 | null | transformers | 21,723 | Entry not found |
sgugger/esberto-small | 88c67f644f42f41bf35ff1d7e21fc79333e0b667 | 2021-07-26T20:53:03.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"dataset:oscar",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | fill-mask | false | sgugger | null | sgugger/esberto-small | 3 | null | transformers | 21,724 | ---
tags:
- generated_from_trainer
datasets:
- oscar
model_index:
- name: esberto-small
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: oscar
type: oscar
args: unshuffled_original_eo
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esberto-small
This model is a fine-tuned version of [](https://huggingface.co/) on the oscar dataset.
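Since the card has no usage example, a hedged fill-mask sketch for this Esperanto checkpoint is shown below; the sentence is illustrative only and the `<mask>` token assumes the RoBERTa convention:
```python
from transformers import pipeline

# Minimal sketch: query the Esperanto masked-language model.
fill_mask = pipeline("fill-mask", model="sgugger/esberto-small")
print(fill_mask("La suno brilas en la <mask>."))
```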
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.9.0+cu102
- Datasets 1.10.3.dev0
- Tokenizers 0.10.3
|
sgugger/finetuned-bert | 91ffe4fc44a670119a874124497f056eca12dd08 | 2021-06-23T19:45:24.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | false | sgugger | null | sgugger/finetuned-bert | 3 | null | transformers | 21,725 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model_index:
- name: finetuned-bert
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metric:
name: F1
type: f1
value: 0.9125214408233276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3916
- Accuracy: 0.875
- F1: 0.9125
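A hedged inference sketch (not from the original card) for the MRPC paraphrase task, which takes a sentence pair, could look as follows; the example sentences are made up:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal sketch: score a sentence pair for paraphrase equivalence (GLUE MRPC).
tokenizer = AutoTokenizer.from_pretrained("sgugger/finetuned-bert")
model = AutoModelForSequenceClassification.from_pretrained("sgugger/finetuned-bert")

inputs = tokenizer(
    "The company posted record profits.",
    "Profits at the company reached an all-time high.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probabilities over the two MRPC labels
```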
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.581 | 1.0 | 230 | 0.4086 | 0.8260 | 0.8711 |
| 0.366 | 2.0 | 460 | 0.3758 | 0.8480 | 0.8963 |
| 0.2328 | 3.0 | 690 | 0.3916 | 0.875 | 0.9125 |
### Framework versions
- Transformers 4.9.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.8.1.dev0
- Tokenizers 0.10.1
|
shaer/xlm-roberta-base-finetuned-marc-en-test-run | a55eeb586d7535c02cfc85bf9e080df6aeff8853 | 2021-10-22T13:12:39.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | shaer | null | shaer/xlm-roberta-base-finetuned-marc-en-test-run | 3 | null | transformers | 21,726 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en-test-run
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en-test-run
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8957
- Mae: 0.4390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1079 | 1.0 | 235 | 0.9742 | 0.5366 |
| 0.9488 | 2.0 | 470 | 0.8957 | 0.4390 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
shainahub/covid_qa_distillbert | 34fe91fea8afd148d8b615d5c682da4341cce2fb | 2021-12-15T19:10:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:covid_qa_deepset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | shainahub | null | shainahub/covid_qa_distillbert | 3 | null | transformers | 21,727 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
metrics:
- squad_v2
widget:
- text: "What is COVID-19?"
context: "Coronavirus disease 2019 (COVID-19) is a contagious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The first known case was identified in Wuhan, China, in December 2019.[7] The disease has since spread worldwide, leading to an ongoing pandemic."
- text: "Where was COVID-19 first discovered?"
context: "The first known infections from SARS-CoV-2 were discovered in Wuhan, China. The original source of viral transmission to humans remains unclear, as does whether the virus became pathogenic before or after the spillover event."
- text: "What is Post-COVID syndrome?"
context: "Long COVID, also known as post-COVID-19 syndrome, post-acute sequelae of COVID-19 (PASC), or chronic COVID syndrome (CCS) is a condition characterized by long-term sequelae appearing or persisting after the typical convalescence period of COVID-19. Long COVID can affect nearly every organ system, with sequelae including respiratory system disorders, nervous system and neurocognitive disorders, mental health disorders, metabolic disorders, cardiovascular disorders, gastrointestinal disorders, malaise, fatigue, musculoskeletal pain, and anemia. A wide range of symptoms are commonly reported, including fatigue, headaches, shortness of breath, anosmia (loss of smell), parosmia (distorted smell), muscle weakness, low fever and cognitive dysfunction."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the covid_qa_deepset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0976
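A hedged usage sketch (not part of the generated card) with the question-answering pipeline is shown below; the question/context pair mirrors the widget examples above:
```python
from transformers import pipeline

# Minimal sketch: extractive QA with this checkpoint.
qa = pipeline("question-answering", model="shainahub/covid_qa_distillbert")
result = qa(
    question="Where was COVID-19 first discovered?",
    context="The first known infections from SARS-CoV-2 were discovered in Wuhan, China.",
)
print(result["answer"], result["score"])
```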
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2502 | 1.0 | 3880 | 0.1824 |
| 0.2007 | 2.0 | 7760 | 0.1250 |
| 0.1338 | 3.0 | 11640 | 0.0976 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
shamikbose89/mt5-small-finetuned-arxiv-cs | a0dc7519a2e498e8b3c0731e44d275319cf47163 | 2021-11-19T17:48:21.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | shamikbose89 | null | shamikbose89/mt5-small-finetuned-arxiv-cs | 3 | null | transformers | 21,728 | ---
license: apache-2.0
tags:
- generated_from_trainer
- summarization
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-arxiv-cs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-arxiv-cs
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on a subset of the arxiv dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6922
- Rouge1: 0.7734
- Rouge2: 0.2865
- Rougel: 0.6665
- Rougelsum: 0.6743
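As a hedged usage illustration (not part of the generated card), the checkpoint can be driven through the summarization pipeline; the input text is a placeholder for a computer-science abstract:
```python
from transformers import pipeline

# Minimal sketch: abstractive summarization with this checkpoint.
summarizer = pipeline("summarization", model="shamikbose89/mt5-small-finetuned-arxiv-cs")
print(summarizer(
    "Replace this placeholder with a computer-science abstract to summarize.",
    max_length=64,
    min_length=8,
))
```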
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 14.0947 | 1.0 | 500 | 2.7666 | 1.2101 | 0.459 | 1.1426 | 1.1385 |
| 2.8524 | 2.0 | 1000 | 1.8208 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2623 | 3.0 | 1500 | 1.6922 | 0.7734 | 0.2865 | 0.6665 | 0.6743 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
shivam/wav2vec2-xls-r-hindi | 7142f5a4f435af9a41ecb75be68d48998d804532 | 2022-03-23T18:33:12.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shivam | null | shivam/wav2vec2-xls-r-hindi | 3 | 1 | transformers | 21,729 | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- hi
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
- cer
model-index:
- name: shivam/wav2vec2-xls-r-hindi
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice Corpus 7.0
type: mozilla-foundation/common_voice_7_0
args: hi
metrics:
- name: Test WER
type: wer
value: 52.3
- name: Test CER
type: cer
value: 26.09
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2282
- Wer: 0.6838
## Evaluation results on Common Voice 7 "test" (Running ./eval.py):
### With LM
- WER: 52.30
- CER: 26.09
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3155 | 3.4 | 500 | 4.5582 | 1.0 |
| 3.3369 | 6.8 | 1000 | 3.4269 | 1.0 |
| 2.1785 | 10.2 | 1500 | 1.7191 | 0.8831 |
| 1.579 | 13.6 | 2000 | 1.3604 | 0.7647 |
| 1.3773 | 17.01 | 2500 | 1.2737 | 0.7519 |
| 1.3165 | 20.41 | 3000 | 1.2457 | 0.7401 |
| 1.2274 | 23.81 | 3500 | 1.3617 | 0.7301 |
| 1.1787 | 27.21 | 4000 | 1.2068 | 0.7010 |
| 1.1467 | 30.61 | 4500 | 1.2416 | 0.6946 |
| 1.0801 | 34.01 | 5000 | 1.2312 | 0.6990 |
| 1.0709 | 37.41 | 5500 | 1.2984 | 0.7138 |
| 1.0307 | 40.81 | 6000 | 1.2049 | 0.6871 |
| 1.0003 | 44.22 | 6500 | 1.1956 | 0.6841 |
| 1.004 | 47.62 | 7000 | 1.2101 | 0.6793 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
|
shivam/xls-r-hindi | 4631df09751ca7151560f04dc38496e72bdfab81 | 2022-01-21T14:00:59.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | shivam | null | shivam/xls-r-hindi | 3 | 1 | transformers | 21,730 | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4484
- Wer: 1.0145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1844 | 3.4 | 500 | 5.2015 | 0.9999 |
| 3.3962 | 6.8 | 1000 | 3.4017 | 1.0002 |
| 2.5433 | 10.2 | 1500 | 1.6884 | 1.0222 |
| 1.5099 | 13.6 | 2000 | 0.7929 | 1.0188 |
| 1.2685 | 17.01 | 2500 | 0.6122 | 1.0191 |
| 1.1844 | 20.41 | 3000 | 0.5434 | 1.0197 |
| 1.0945 | 23.81 | 3500 | 0.5208 | 1.0316 |
| 1.0506 | 27.21 | 4000 | 0.4941 | 1.0139 |
| 1.0199 | 30.61 | 4500 | 0.4736 | 1.0106 |
| 0.9546 | 34.01 | 5000 | 0.4664 | 1.0164 |
| 0.9388 | 37.41 | 5500 | 0.4565 | 1.0085 |
| 0.9125 | 40.81 | 6000 | 0.4636 | 1.0148 |
| 0.8733 | 44.22 | 6500 | 0.4530 | 1.0154 |
| 0.8829 | 47.62 | 7000 | 0.4494 | 1.0152 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
shivangi/STS-B_64_128_output | e211276f8ca10fea2c5b1b7efe93b3ac24b5d0c9 | 2021-05-20T05:53:35.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | shivangi | null | shivangi/STS-B_64_128_output | 3 | null | transformers | 21,731 | Entry not found |
shiyue/roberta-large-pyrxsum | d23b1420759cd001d0a0c73b169b077a5036e544 | 2021-09-22T02:09:07.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | shiyue | null | shiyue/roberta-large-pyrxsum | 3 | null | transformers | 21,732 | Entry not found |
shiyue/roberta-large-realsumm-by-examples-fold3 | c75e5e21c267b80dae719a5062708e09e6186a60 | 2021-09-23T19:19:08.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | shiyue | null | shiyue/roberta-large-realsumm-by-examples-fold3 | 3 | null | transformers | 21,733 | Entry not found |
shiyue/roberta-large-realsumm-by-systems-fold1 | 691854fd714a64ac1bc9672e0084ff5d7534bdcc | 2021-09-23T19:36:42.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | shiyue | null | shiyue/roberta-large-realsumm-by-systems-fold1 | 3 | null | transformers | 21,734 | Entry not found |
shiyue/roberta-large-realsumm-by-systems-fold2 | ce3601810fac67b6ad3b37131f44f87a6e308b94 | 2021-09-23T19:39:21.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | shiyue | null | shiyue/roberta-large-realsumm-by-systems-fold2 | 3 | null | transformers | 21,735 | Entry not found |
shiyue/roberta-large-realsumm-by-systems-fold5 | 9916d08de9c2c68dba7443a7c16db8cb038431c9 | 2021-09-23T19:50:11.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | shiyue | null | shiyue/roberta-large-realsumm-by-systems-fold5 | 3 | null | transformers | 21,736 | Entry not found |
shiyue/roberta-large-tac08-tac09 | 9c229d6ca92f240e370487a1f496bf4ca218c066 | 2021-12-24T02:41:44.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | shiyue | null | shiyue/roberta-large-tac08-tac09 | 3 | null | transformers | 21,737 | Entry not found |
shokiokita/distilbert-base-uncased-finetuned-cola | 8401d32a56571b2ed422d3deeb66fff77d61b589 | 2021-11-05T10:27:36.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | shokiokita | null | shokiokita/distilbert-base-uncased-finetuned-cola | 3 | null | transformers | 21,738 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5536405531329313
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8455
- Matthews Correlation: 0.5536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.524 | 1.0 | 535 | 0.5547 | 0.3891 |
| 0.3463 | 2.0 | 1070 | 0.5250 | 0.5011 |
| 0.2329 | 3.0 | 1605 | 0.6321 | 0.5239 |
| 0.1677 | 4.0 | 2140 | 0.7752 | 0.5372 |
| 0.1197 | 5.0 | 2675 | 0.8455 | 0.5536 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
shreeshaaithal/DialoGPT-small-Michael-Scott | 4b14fcd47bb1c6924fdfdf015eae0dc32f032987 | 2021-07-07T11:56:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | shreeshaaithal | null | shreeshaaithal/DialoGPT-small-Michael-Scott | 3 | null | transformers | 21,739 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on WhatsApp chats
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on WhatsApp chats; you can also train this model on [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
Feel free to ask questions on the [Discord server](https://discord.gg/Gqhje8Z7DX).
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("harrydonni/DialoGPT-small-Michael-Scott")
model = AutoModelWithLMHead.from_pretrained("harrydonni/DialoGPT-small-Michael-Scott")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("Michael: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
This was made by Shreesha. Thank you. |
sibyl/BART-large-commongen | 8353dfbd66022cc25fe621fde65ee306118f4d76 | 2021-08-10T02:22:28.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:gem",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | sibyl | null | sibyl/BART-large-commongen | 3 | null | transformers | 21,740 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- gem
model_index:
- name: BART-large-commongen
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: gem
type: gem
args: common_gen
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BART-large-commongen
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the gem dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1409
- Spice: 0.4009
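The card has no inference example; a hedged sketch for the CommonGen-style task (generate a sentence covering a set of concepts) is shown below, where the space-separated concept formatting is an assumption about the GEM preprocessing:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Minimal sketch: generate a sentence from a set of concepts; input formatting is assumed.
tokenizer = AutoTokenizer.from_pretrained("sibyl/BART-large-commongen")
model = AutoModelForSeq2SeqLM.from_pretrained("sibyl/BART-large-commongen")

inputs = tokenizer("dog frisbee catch throw", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```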
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 6317
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spice |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.1086 | 0.05 | 100 | 4.9804 | 0.3736 |
| 4.4168 | 0.09 | 200 | 2.4402 | 0.4079 |
| 1.8158 | 0.14 | 300 | 1.1096 | 0.4258 |
| 1.1723 | 0.19 | 400 | 1.0845 | 0.4086 |
| 1.0894 | 0.24 | 500 | 1.0727 | 0.423 |
| 1.0949 | 0.28 | 600 | 1.0889 | 0.4224 |
| 1.0773 | 0.33 | 700 | 1.0977 | 0.4201 |
| 1.0708 | 0.38 | 800 | 1.1157 | 0.4213 |
| 1.0663 | 0.43 | 900 | 1.1798 | 0.421 |
| 1.0985 | 0.47 | 1000 | 1.1611 | 0.4025 |
| 1.0561 | 0.52 | 1100 | 1.1048 | 0.421 |
| 1.0594 | 0.57 | 1200 | 1.2044 | 0.3626 |
| 1.0689 | 0.62 | 1300 | 1.1409 | 0.4009 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.1.dev0
- Tokenizers 0.10.3
|
simjo/dummy-model | 13818bb63d1d1d8f049ee2ae37696fad5f058155 | 2021-11-29T21:51:23.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | simjo | null | simjo/dummy-model | 3 | null | transformers | 21,741 | Entry not found |
simonlevine/biomed_roberta_base-4096 | 5ff70e92dfbe1f7e362e95da136d62fe0591db0b | 2021-05-20T21:28:43.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonlevine | null | simonlevine/biomed_roberta_base-4096 | 3 | null | transformers | 21,742 | Entry not found |
simonmun/COHA1860s | d600d92eba3833708a1e4e1a52581a5a2639bef0 | 2021-05-20T21:34:42.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1860s | 3 | null | transformers | 21,743 | Entry not found |
simonmun/COHA1880s | 63aaad78a0573da6d534bb339f4f42d6c118cfed | 2021-05-20T21:37:16.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1880s | 3 | null | transformers | 21,744 | Entry not found |
simonmun/COHA1890s | c4ffa49e32ac37874f5de7b3ebe46f782e5960f6 | 2021-05-20T21:38:04.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1890s | 3 | null | transformers | 21,745 | Entry not found |
simonmun/COHA1940s | bd10ea7396fd7981e9b713fb53ab2b3b2180369e | 2021-05-20T21:43:36.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1940s | 3 | null | transformers | 21,746 | Entry not found |
simonmun/COHA1960s | 9e7175e61adff57d7bb6cc6793a0c0648c90bae5 | 2021-05-20T21:45:53.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simonmun | null | simonmun/COHA1960s | 3 | null | transformers | 21,747 | Entry not found |
simonmun/Eyse_SentenceClassification | 628cee44593400370c463d01bdc5f9a6e61606d1 | 2021-05-20T05:57:27.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | simonmun | null | simonmun/Eyse_SentenceClassification | 3 | null | transformers | 21,748 | Entry not found |
sismetanin/mbart_ru_sum_gazeta-ru-sentiment-liniscrowd | ffb93ed5edd4ce5ba55488e08d91cd1e22c8a0d4 | 2021-02-21T15:23:51.000Z | [
"pytorch",
"mbart",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/mbart_ru_sum_gazeta-ru-sentiment-liniscrowd | 3 | null | transformers | 21,749 | Entry not found |
sismetanin/mbart_ru_sum_gazeta-ru-sentiment-rutweetcorp | 7229a631968d2e2baf4aa80c7b66e49780f2811b | 2021-02-26T09:17:28.000Z | [
"pytorch",
"mbart",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/mbart_ru_sum_gazeta-ru-sentiment-rutweetcorp | 3 | null | transformers | 21,750 | Entry not found |
sismetanin/mbart_ru_sum_gazeta-ru-sentiment-sentirueval2016 | 6be5922e23e3b0796adce0b78f1865f47e8ae544 | 2021-02-25T02:51:46.000Z | [
"pytorch",
"mbart",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/mbart_ru_sum_gazeta-ru-sentiment-sentirueval2016 | 3 | null | transformers | 21,751 | Entry not found |
sismetanin/rubert_conversational-ru-sentiment-krnd | a2ac479145d5100714de6f970bccc5d4f03bb5f2 | 2021-05-20T06:17:56.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/rubert_conversational-ru-sentiment-krnd | 3 | null | transformers | 21,752 | Entry not found |
sismetanin/rubert_conversational-ru-sentiment-liniscrowd | c3d36406997bcadd1fea01a7c16083d8227c4176 | 2021-05-20T06:19:29.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/rubert_conversational-ru-sentiment-liniscrowd | 3 | null | transformers | 21,753 | Entry not found |
sismetanin/xlm_roberta_base-ru-sentiment-rutweetcorp | 132967300fe62094771ed75ff6600e7524b292be | 2021-02-22T02:27:30.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/xlm_roberta_base-ru-sentiment-rutweetcorp | 3 | null | transformers | 21,754 | Entry not found |
sismetanin/xlm_roberta_base-ru-sentiment-sentirueval2016 | d944d0c6619f24d48ca1fc371e34871a2ac60edf | 2021-02-25T02:52:13.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/xlm_roberta_base-ru-sentiment-sentirueval2016 | 3 | null | transformers | 21,755 | Entry not found |
sismetanin/xlm_roberta_large-ru-sentiment-rutweetcorp | 4bfde058505de3012ecd283c0af33d7059a5ea12 | 2021-02-22T02:27:46.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | sismetanin | null | sismetanin/xlm_roberta_large-ru-sentiment-rutweetcorp | 3 | null | transformers | 21,756 | Entry not found |
slider/ernie-gram | 8cbece2d121f5a34b5923b9b8fd629dce49aa784 | 2021-12-10T01:58:06.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | slider | null | slider/ernie-gram | 3 | null | transformers | 21,757 | Entry not found |
sm6342/FinRoberta | f4a58cfc5cd22b4886f3c27447625ff076271f67 | 2021-05-20T21:54:09.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sm6342 | null | sm6342/FinRoberta | 3 | null | transformers | 21,758 | "hello"
|
smallbenchnlp/ELECTRA-DeBERTa-Small | e2edf04bf05aba3225f5f5f7fa7cc948a4b0599f | 2021-10-25T05:51:07.000Z | [
"pytorch",
"deberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | smallbenchnlp | null | smallbenchnlp/ELECTRA-DeBERTa-Small | 3 | null | transformers | 21,759 | Entry not found |
smartpim/k2t_ru_03 | 6ff03ae7fc38c9eb9bff9d53176d318cfb05e178 | 2022-02-14T06:08:31.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | smartpim | null | smartpim/k2t_ru_03 | 3 | null | transformers | 21,760 | Entry not found |
smartpim/k2t_ru_04 | d286d23c00f6051537946f018012e54b28178cd1 | 2022-02-14T13:08:12.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:other",
"autotrain_compatible"
] | text2text-generation | false | smartpim | null | smartpim/k2t_ru_04 | 3 | null | transformers | 21,761 | ---
license: other
---
|
smeoni/electra-large-discriminator-clrp | e0f4aa6a09a1d31bd26e526b769b01b3657ce303 | 2021-06-23T09:56:24.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | smeoni | null | smeoni/electra-large-discriminator-clrp | 3 | null | transformers | 21,762 | Entry not found |
soheeyang/rdr-ctx_encoder-single-trivia-base | 0bb7797b445302490bf1727942862319755175c6 | 2021-04-15T15:52:44.000Z | [
"pytorch",
"tf",
"dpr",
"arxiv:2010.10999",
"transformers"
] | null | false | soheeyang | null | soheeyang/rdr-ctx_encoder-single-trivia-base | 3 | null | transformers | 21,763 | # rdr-ctx_encoder-single-trivia-base
Reader-Distilled Retriever (`RDR`)
Sohee Yang and Minjoon Seo, [Is Retriever Merely an Approximator of Reader?](https://arxiv.org/abs/2010.10999), arXiv 2020
The paper proposes to distill the reader into the retriever so that the retriever absorbs the strength of the reader while keeping its own benefit. The model is a DPR retriever further finetuned using knowledge distillation from the DPR reader. Using this approach, the answer recall rate increases by a large margin, especially at small numbers of top-k.
This model is the context encoder of RDR trained solely on TriviaQA (single-trivia). This model is trained by the authors and is the official checkpoint of RDR.
## Performance
The following is the answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0.
For the values of DPR, those in parentheses are directly taken from the paper. The values without parentheses are reported using the reproduction of DPR that consists of [this context encoder](https://huggingface.co/soheeyang/dpr-ctx_encoder-single-trivia-base) and [this question encoder](https://huggingface.co/soheeyang/dpr-question_encoder-single-trivia-base).
| | Top-K Passages | 1 | 5 | 20 | 50 | 100 |
|-------------|------------------|-----------|-----------|-----------|-----------|-----------|
|**TriviaQA Dev** | **DPR** | 54.27 | 71.11 | 79.53 | 82.72 | 85.07 |
| | **RDR (This Model)** | **61.84** | **75.93** | **82.56** | **85.35** | **87.00** |
|**TriviaQA Test**| **DPR** | 54.41 | 70.99 | 79.31 (79.4) | 82.90 | 84.99 (85.0) |
| | **RDR (This Model)** | **62.56** | **75.92** | **82.52** | **85.64** | **87.26** |
## How to Use
RDR shares the same architecture as DPR, so it uses `DPRContextEncoder` as the model class.
`AutoModel` cannot reliably detect whether a checkpoint is meant for `DPRContextEncoder` or `DPRQuestionEncoder`,
so please specify the exact class when loading the model.
```python
from transformers import DPRContextEncoder, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("soheeyang/rdr-ctx_encoder-single-trivia-base")
ctx_encoder = DPRContextEncoder.from_pretrained("soheeyang/rdr-ctx_encoder-single-trivia-base")
data = tokenizer("context comes here", return_tensors="pt")
ctx_embedding = ctx_encoder(**data).pooler_output # embedding vector for context
```
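For retrieval, a question embedding is scored against context embeddings with an inner product. Continuing the snippet above, here is a minimal sketch of that step; note that the checkpoint name used for the companion RDR question encoder is an assumption (it is not stated in this card), so substitute the correct question-encoder repository if it differs.
```python
import torch
from transformers import DPRQuestionEncoder, AutoTokenizer

# Assumed name of the matching question encoder (not confirmed by this card)
q_name = "soheeyang/rdr-question_encoder-single-trivia-base"
q_tokenizer = AutoTokenizer.from_pretrained(q_name)
q_encoder = DPRQuestionEncoder.from_pretrained(q_name)

q_data = q_tokenizer("question comes here", return_tensors="pt")
q_embedding = q_encoder(**q_data).pooler_output  # embedding vector for the question

# Retrieval score: inner product between question and context embeddings
score = torch.matmul(q_embedding, ctx_embedding.T)
```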
|
song/bert_cn_finetuning | 0d9854a8ff738ecdb1958faec953f4074b3e5ec6 | 2021-05-20T07:08:53.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | song | null | song/bert_cn_finetuning | 3 | null | transformers | 21,764 | Entry not found |
spacemanidol/neuralmagic-bert-squad-12layer-0sparse | 2a4c3c13af312b7813f73a7d19f8dbb9b0e80bfb | 2021-05-20T07:11:25.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | spacemanidol | null | spacemanidol/neuralmagic-bert-squad-12layer-0sparse | 3 | null | transformers | 21,765 | hello
|
spencerh/rightcenterpartisan | eda3914f5a57623a47870cde63302760e9977d86 | 2021-04-23T19:56:43.000Z | [
"pytorch",
"tf",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | spencerh | null | spencerh/rightcenterpartisan | 3 | null | transformers | 21,766 | Entry not found |
spentaur/post-here | 0b0b04fc85c23a88256ba01017eeae7111f09214 | 2020-11-11T18:38:10.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | spentaur | null | spentaur/post-here | 3 | null | transformers | 21,767 | Entry not found |
springml111/T5_Paraphrase_model | 3735446afbf1786057fe27936ae35ebd29f8b795 | 2021-12-01T05:51:56.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | springml111 | null | springml111/T5_Paraphrase_model | 3 | null | transformers | 21,768 | Entry not found |
sramasamy8/testModel | a12f24e8b282ec75608a2f19177ec38a1661a1d3 | 2021-05-20T20:58:24.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | sramasamy8 | null | sramasamy8/testModel | 3 | null | transformers | 21,769 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short code sketch is given after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
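As a minimal illustrative sketch (assuming PyTorch; this is not the original BERT pretraining code), the 80/10/10 scheme above can be written as:
```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Illustrative 80/10/10 masking; special tokens should be excluded in practice."""
    labels = input_ids.clone()
    # pick 15% of the tokens to be masked
    masked = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked] = -100  # loss is only computed on masked positions

    # 80% of the masked tokens are replaced by [MASK]
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked
    input_ids[replaced] = mask_token_id

    # 10% are replaced by a random token (half of the remaining 20%)
    random_tok = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked & ~replaced
    input_ids[random_tok] = torch.randint(vocab_size, labels.shape, dtype=torch.long)[random_tok]

    # the remaining 10% are left unchanged
    return input_ids, labels
```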
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a> |
ssardorf/t5-web-summ | 5ca51c226a52967f18d6c97bef35538cd5a18ea6 | 2022-02-20T16:27:25.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers"
] | feature-extraction | false | ssardorf | null | ssardorf/t5-web-summ | 3 | null | transformers | 21,770 | Entry not found |
sshleifer/student_cnn_9_9 | c1b4760109de4ce301bbdd91fec3f534f3a656b4 | 2021-06-14T09:25:20.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_cnn_9_9 | 3 | null | transformers | 21,771 | Entry not found |
sshleifer/student_xsum_12_1 | 712030af6ebc0980db38f4c466dc9329bedaa573 | 2021-06-14T09:40:24.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_xsum_12_1 | 3 | null | transformers | 21,772 | Entry not found |
sshleifer/student_xsum_12_9 | 070224b6611e69dc07a14bfc47ad87fc0cbcd41b | 2021-06-14T09:54:50.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_xsum_12_9 | 3 | null | transformers | 21,773 | Entry not found |
sshleifer/student_xsum_6_6 | c331a6bdad645c15d2c78afa19a3ca1e17dd1482 | 2021-06-14T10:10:51.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_xsum_6_6 | 3 | null | transformers | 21,774 | Entry not found |
sshleifer/student_xsum_9_12 | a9964851946debb85e1c2dbbc827447e0f60e56d | 2021-06-14T10:13:47.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_xsum_9_12 | 3 | null | transformers | 21,775 | Entry not found |
sshleifer/t5-tinier-random | 2165f97265e45a3b45f19007ae1aeacb23465fc5 | 2021-06-23T14:25:45.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/t5-tinier-random | 3 | null | transformers | 21,776 | Entry not found |
sszyr/finetuned-bert-bounti | c556b17e11edb8caa770e4eaf239b55d56cf6d7f | 2021-11-18T18:44:50.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | sszyr | null | sszyr/finetuned-bert-bounti | 3 | null | transformers | 21,777 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned-bert-bounti
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-bounti
This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-uncased](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased) on the BounTi Turkish Twitter sentiment dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1188
- Accuracy: 0.7246
- F1: 0.6845
- Precision: 0.6892
- Recall: 0.6806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 36
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0974 | 0.02 | 5 | 1.0790 | 0.3756 | 0.3064 | 0.3255 | 0.3232 |
| 1.1345 | 0.04 | 10 | 1.0784 | 0.3725 | 0.3037 | 0.3219 | 0.3197 |
| 1.1441 | 0.06 | 15 | 1.0776 | 0.3772 | 0.3072 | 0.3250 | 0.3234 |
| 1.122 | 0.08 | 20 | 1.0774 | 0.3787 | 0.3077 | 0.3244 | 0.3228 |
| 1.1201 | 0.1 | 25 | 1.0776 | 0.3787 | 0.3047 | 0.3193 | 0.3216 |
| 1.1489 | 0.13 | 30 | 1.0783 | 0.3787 | 0.3012 | 0.3120 | 0.3189 |
| 1.0716 | 0.15 | 35 | 1.0783 | 0.3897 | 0.3093 | 0.3212 | 0.3282 |
| 1.082 | 0.17 | 40 | 1.0767 | 0.3865 | 0.3060 | 0.3203 | 0.3238 |
| 1.1113 | 0.19 | 45 | 1.0738 | 0.3897 | 0.3058 | 0.3219 | 0.3211 |
| 1.0892 | 0.21 | 50 | 1.0715 | 0.4069 | 0.3290 | 0.3475 | 0.3374 |
| 1.0913 | 0.23 | 55 | 1.0719 | 0.4178 | 0.3283 | 0.3398 | 0.3361 |
| 1.1114 | 0.25 | 60 | 1.0694 | 0.4397 | 0.3479 | 0.3605 | 0.3538 |
| 1.1129 | 0.27 | 65 | 1.0682 | 0.4491 | 0.3593 | 0.3731 | 0.3648 |
| 1.1283 | 0.29 | 70 | 1.0671 | 0.4664 | 0.3719 | 0.3775 | 0.3780 |
| 1.1267 | 0.31 | 75 | 1.0714 | 0.4507 | 0.3826 | 0.3834 | 0.3835 |
| 1.1325 | 0.33 | 80 | 1.0762 | 0.4335 | 0.3909 | 0.3918 | 0.3954 |
| 1.0919 | 0.36 | 85 | 1.0723 | 0.4335 | 0.3930 | 0.3937 | 0.3982 |
| 1.0545 | 0.38 | 90 | 1.0694 | 0.4507 | 0.4161 | 0.4180 | 0.4279 |
| 1.1121 | 0.4 | 95 | 1.0698 | 0.4491 | 0.4151 | 0.4280 | 0.4324 |
| 1.0675 | 0.42 | 100 | 1.0711 | 0.4382 | 0.4005 | 0.4349 | 0.4494 |
| 1.0954 | 0.44 | 105 | 1.0720 | 0.4085 | 0.3690 | 0.4233 | 0.4326 |
| 1.1087 | 0.46 | 110 | 1.0562 | 0.4820 | 0.4463 | 0.4762 | 0.4841 |
| 1.0669 | 0.48 | 115 | 1.0459 | 0.5086 | 0.4746 | 0.4844 | 0.4997 |
| 1.0529 | 0.5 | 120 | 1.0364 | 0.5243 | 0.4935 | 0.4946 | 0.5119 |
| 1.0348 | 0.52 | 125 | 1.0248 | 0.5321 | 0.4953 | 0.4977 | 0.5067 |
| 1.0454 | 0.54 | 130 | 1.0169 | 0.5415 | 0.5089 | 0.5084 | 0.5232 |
| 1.0366 | 0.56 | 135 | 1.0071 | 0.5493 | 0.5176 | 0.5156 | 0.5344 |
| 1.0197 | 0.59 | 140 | 1.0010 | 0.5446 | 0.5132 | 0.5150 | 0.5350 |
| 1.0459 | 0.61 | 145 | 0.9966 | 0.5399 | 0.5094 | 0.5184 | 0.5383 |
| 1.0059 | 0.63 | 150 | 1.0011 | 0.5477 | 0.5222 | 0.5394 | 0.5617 |
| 0.9455 | 0.65 | 155 | 0.9898 | 0.5399 | 0.5173 | 0.5390 | 0.5583 |
| 0.9732 | 0.67 | 160 | 0.9750 | 0.5477 | 0.5207 | 0.5406 | 0.5601 |
| 1.0215 | 0.69 | 165 | 0.9494 | 0.5790 | 0.5495 | 0.5511 | 0.5759 |
| 0.99 | 0.71 | 170 | 0.9331 | 0.5696 | 0.5355 | 0.5372 | 0.5500 |
| 1.0102 | 0.73 | 175 | 0.9284 | 0.5759 | 0.5425 | 0.5488 | 0.5567 |
| 0.9633 | 0.75 | 180 | 0.9313 | 0.5837 | 0.5571 | 0.5726 | 0.5758 |
| 0.9388 | 0.77 | 185 | 0.9262 | 0.5869 | 0.5625 | 0.5830 | 0.5817 |
| 0.9606 | 0.79 | 190 | 0.9140 | 0.5915 | 0.5638 | 0.5728 | 0.5835 |
| 0.969 | 0.82 | 195 | 0.9170 | 0.5978 | 0.5712 | 0.5769 | 0.5964 |
| 0.8779 | 0.84 | 200 | 0.9089 | 0.5947 | 0.5696 | 0.5790 | 0.5925 |
| 0.9041 | 0.86 | 205 | 0.9013 | 0.6166 | 0.5874 | 0.5894 | 0.6083 |
| 0.8643 | 0.88 | 210 | 0.8783 | 0.6275 | 0.5961 | 0.5972 | 0.6140 |
| 0.8864 | 0.9 | 215 | 0.8651 | 0.6307 | 0.5984 | 0.6060 | 0.6152 |
| 0.9075 | 0.92 | 220 | 0.8562 | 0.6401 | 0.6107 | 0.6096 | 0.6313 |
| 0.8659 | 0.94 | 225 | 0.8407 | 0.6244 | 0.5896 | 0.5864 | 0.6085 |
| 0.8921 | 0.96 | 230 | 0.8171 | 0.6385 | 0.6014 | 0.5955 | 0.6138 |
| 0.9176 | 0.98 | 235 | 0.8120 | 0.6432 | 0.6052 | 0.6001 | 0.6183 |
| 0.8124 | 1.0 | 240 | 0.8084 | 0.6479 | 0.6087 | 0.6058 | 0.6229 |
| 0.7606 | 1.03 | 245 | 0.7978 | 0.6588 | 0.6198 | 0.6166 | 0.6258 |
| 0.7879 | 1.05 | 250 | 0.8361 | 0.6322 | 0.6002 | 0.6090 | 0.6310 |
| 0.8515 | 1.07 | 255 | 0.8527 | 0.6307 | 0.6063 | 0.6070 | 0.6368 |
| 0.7861 | 1.09 | 260 | 0.8300 | 0.6510 | 0.6229 | 0.6172 | 0.6449 |
| 0.8782 | 1.11 | 265 | 0.8068 | 0.6588 | 0.6262 | 0.6195 | 0.6412 |
| 0.6993 | 1.13 | 270 | 0.8127 | 0.6573 | 0.6245 | 0.6186 | 0.6414 |
| 0.7961 | 1.15 | 275 | 0.8302 | 0.6448 | 0.6129 | 0.6142 | 0.6382 |
| 0.829 | 1.17 | 280 | 0.8130 | 0.6416 | 0.6068 | 0.6047 | 0.6264 |
| 0.7315 | 1.19 | 285 | 0.8127 | 0.6714 | 0.6414 | 0.6348 | 0.6609 |
| 0.7115 | 1.21 | 290 | 0.8074 | 0.6651 | 0.6367 | 0.6297 | 0.6577 |
| 0.7937 | 1.23 | 295 | 0.8018 | 0.6667 | 0.6405 | 0.6338 | 0.6595 |
| 0.8213 | 1.26 | 300 | 0.7846 | 0.6651 | 0.6317 | 0.6313 | 0.6424 |
| 0.9309 | 1.28 | 305 | 0.7801 | 0.6651 | 0.6267 | 0.6314 | 0.6357 |
| 0.7616 | 1.3 | 310 | 0.8000 | 0.6635 | 0.6403 | 0.6352 | 0.6657 |
| 0.7075 | 1.32 | 315 | 0.8006 | 0.6635 | 0.6395 | 0.6354 | 0.6642 |
| 0.8925 | 1.34 | 320 | 0.8418 | 0.6385 | 0.6185 | 0.6205 | 0.6531 |
| 0.7579 | 1.36 | 325 | 0.8114 | 0.6541 | 0.6308 | 0.6281 | 0.6602 |
| 0.6983 | 1.38 | 330 | 0.7589 | 0.6745 | 0.6424 | 0.6356 | 0.6538 |
| 0.756 | 1.4 | 335 | 0.7540 | 0.6870 | 0.6423 | 0.6454 | 0.6436 |
| 0.8183 | 1.42 | 340 | 0.7762 | 0.6651 | 0.6304 | 0.6248 | 0.6486 |
| 0.7386 | 1.44 | 345 | 0.8212 | 0.6510 | 0.6244 | 0.6229 | 0.6535 |
| 0.7175 | 1.46 | 350 | 0.8002 | 0.6573 | 0.6269 | 0.6229 | 0.6512 |
| 0.7076 | 1.49 | 355 | 0.7799 | 0.6682 | 0.6310 | 0.6281 | 0.6506 |
| 0.7115 | 1.51 | 360 | 0.7525 | 0.6886 | 0.6576 | 0.6510 | 0.6697 |
| 0.7092 | 1.53 | 365 | 0.7882 | 0.6714 | 0.6272 | 0.6513 | 0.6330 |
| 0.6852 | 1.55 | 370 | 0.7909 | 0.6698 | 0.6287 | 0.6548 | 0.6363 |
| 0.673 | 1.57 | 375 | 0.7396 | 0.6901 | 0.6523 | 0.6536 | 0.6542 |
| 0.7115 | 1.59 | 380 | 0.7270 | 0.6933 | 0.6539 | 0.6532 | 0.6546 |
| 0.6391 | 1.61 | 385 | 0.7389 | 0.6964 | 0.6654 | 0.6576 | 0.6790 |
| 0.6018 | 1.63 | 390 | 0.7619 | 0.6886 | 0.6628 | 0.6571 | 0.6835 |
| 0.743 | 1.65 | 395 | 0.7635 | 0.6854 | 0.6579 | 0.6546 | 0.6780 |
| 0.6865 | 1.67 | 400 | 0.7457 | 0.7011 | 0.6709 | 0.6681 | 0.6855 |
| 0.6629 | 1.69 | 405 | 0.7309 | 0.7058 | 0.6752 | 0.6717 | 0.6861 |
| 0.6887 | 1.72 | 410 | 0.7389 | 0.6933 | 0.6628 | 0.6555 | 0.6809 |
| 0.6494 | 1.74 | 415 | 0.7742 | 0.6823 | 0.6565 | 0.6519 | 0.6831 |
| 0.6798 | 1.76 | 420 | 0.7751 | 0.6667 | 0.6337 | 0.6345 | 0.6614 |
| 0.6825 | 1.78 | 425 | 0.7798 | 0.6604 | 0.6269 | 0.6375 | 0.6594 |
| 0.7926 | 1.8 | 430 | 0.7085 | 0.7105 | 0.6726 | 0.6670 | 0.6804 |
| 0.6508 | 1.82 | 435 | 0.7455 | 0.6964 | 0.6439 | 0.6653 | 0.6460 |
| 0.7772 | 1.84 | 440 | 0.7669 | 0.6964 | 0.6531 | 0.6780 | 0.6594 |
| 0.7265 | 1.86 | 445 | 0.7454 | 0.7089 | 0.6722 | 0.6800 | 0.6826 |
| 0.5965 | 1.88 | 450 | 0.7700 | 0.6933 | 0.6670 | 0.6623 | 0.6931 |
| 0.6436 | 1.9 | 455 | 0.7910 | 0.6901 | 0.6654 | 0.6620 | 0.6951 |
| 0.6887 | 1.92 | 460 | 0.7752 | 0.6870 | 0.6590 | 0.6552 | 0.6872 |
| 0.7574 | 1.95 | 465 | 0.7511 | 0.6980 | 0.6686 | 0.6621 | 0.6925 |
| 0.6853 | 1.97 | 470 | 0.7446 | 0.7074 | 0.6775 | 0.6711 | 0.6981 |
| 0.7416 | 1.99 | 475 | 0.7151 | 0.7105 | 0.6783 | 0.6703 | 0.6938 |
| 0.723 | 2.01 | 480 | 0.6886 | 0.7105 | 0.6727 | 0.6691 | 0.6776 |
| 0.5993 | 2.03 | 485 | 0.6947 | 0.7152 | 0.6767 | 0.6711 | 0.6865 |
| 0.549 | 2.05 | 490 | 0.7140 | 0.7167 | 0.6833 | 0.6764 | 0.6969 |
| 0.5739 | 2.07 | 495 | 0.7372 | 0.7136 | 0.6843 | 0.6828 | 0.6961 |
| 0.6444 | 2.09 | 500 | 0.7733 | 0.7089 | 0.6796 | 0.6943 | 0.6920 |
| 0.5526 | 2.11 | 505 | 0.7368 | 0.7277 | 0.6954 | 0.6927 | 0.7074 |
| 0.5429 | 2.13 | 510 | 0.7194 | 0.7246 | 0.6886 | 0.6879 | 0.6913 |
| 0.5838 | 2.15 | 515 | 0.7465 | 0.7214 | 0.6818 | 0.6933 | 0.6866 |
| 0.6746 | 2.18 | 520 | 0.7644 | 0.7152 | 0.6865 | 0.6819 | 0.7054 |
| 0.7252 | 2.2 | 525 | 0.7564 | 0.7042 | 0.6713 | 0.6645 | 0.6918 |
| 0.5443 | 2.22 | 530 | 0.7337 | 0.7027 | 0.6636 | 0.6598 | 0.6782 |
| 0.5526 | 2.24 | 535 | 0.7324 | 0.7183 | 0.6795 | 0.6831 | 0.6865 |
| 0.692 | 2.26 | 540 | 0.7622 | 0.7121 | 0.6826 | 0.6841 | 0.6971 |
| 0.5897 | 2.28 | 545 | 0.7525 | 0.7089 | 0.6771 | 0.6708 | 0.6951 |
| 0.708 | 2.3 | 550 | 0.7366 | 0.7105 | 0.6763 | 0.6690 | 0.6938 |
| 0.6009 | 2.32 | 555 | 0.7232 | 0.7136 | 0.6741 | 0.6690 | 0.6843 |
| 0.6622 | 2.34 | 560 | 0.7104 | 0.7136 | 0.6763 | 0.6727 | 0.6816 |
| 0.8816 | 2.36 | 565 | 0.7150 | 0.7183 | 0.6830 | 0.6775 | 0.6932 |
| 0.6642 | 2.38 | 570 | 0.7545 | 0.6980 | 0.6681 | 0.6652 | 0.6961 |
| 0.5929 | 2.41 | 575 | 0.7167 | 0.7136 | 0.6778 | 0.6704 | 0.6930 |
| 0.6612 | 2.43 | 580 | 0.7078 | 0.7277 | 0.6912 | 0.6858 | 0.7023 |
| 0.4924 | 2.45 | 585 | 0.7138 | 0.7167 | 0.6809 | 0.6753 | 0.6938 |
| 0.544 | 2.47 | 590 | 0.7088 | 0.7183 | 0.6807 | 0.6749 | 0.6901 |
| 0.4047 | 2.49 | 595 | 0.7210 | 0.7199 | 0.6843 | 0.6775 | 0.6965 |
| 0.5416 | 2.51 | 600 | 0.7199 | 0.7214 | 0.6845 | 0.6777 | 0.6952 |
| 0.5407 | 2.53 | 605 | 0.7159 | 0.7293 | 0.6934 | 0.6873 | 0.7017 |
| 0.5775 | 2.55 | 610 | 0.7354 | 0.7308 | 0.6975 | 0.6902 | 0.7133 |
| 0.6107 | 2.57 | 615 | 0.7402 | 0.7261 | 0.6932 | 0.6863 | 0.7103 |
| 0.5679 | 2.59 | 620 | 0.7266 | 0.7293 | 0.6946 | 0.6869 | 0.7091 |
| 0.5599 | 2.62 | 625 | 0.7049 | 0.7136 | 0.6736 | 0.6716 | 0.6757 |
| 0.6608 | 2.64 | 630 | 0.7150 | 0.7183 | 0.6834 | 0.6761 | 0.6952 |
| 0.6886 | 2.66 | 635 | 0.7334 | 0.7230 | 0.6925 | 0.6856 | 0.7107 |
| 0.6524 | 2.68 | 640 | 0.7106 | 0.7324 | 0.6955 | 0.6907 | 0.7060 |
| 0.5027 | 2.7 | 645 | 0.7031 | 0.7261 | 0.6871 | 0.6896 | 0.6883 |
| 0.5327 | 2.72 | 650 | 0.7033 | 0.7230 | 0.6824 | 0.6863 | 0.6812 |
| 0.6561 | 2.74 | 655 | 0.7188 | 0.7183 | 0.6846 | 0.6770 | 0.6979 |
| 0.591 | 2.76 | 660 | 0.7449 | 0.7136 | 0.6844 | 0.6793 | 0.7087 |
| 0.4584 | 2.78 | 665 | 0.7220 | 0.7074 | 0.6732 | 0.6661 | 0.6855 |
| 0.501 | 2.8 | 670 | 0.7212 | 0.7199 | 0.6829 | 0.6830 | 0.6879 |
| 0.7118 | 2.82 | 675 | 0.7327 | 0.7167 | 0.6827 | 0.6775 | 0.6962 |
| 0.5037 | 2.85 | 680 | 0.7544 | 0.7121 | 0.6818 | 0.6742 | 0.7042 |
| 0.4921 | 2.87 | 685 | 0.7265 | 0.7136 | 0.6791 | 0.6714 | 0.6926 |
| 0.5255 | 2.89 | 690 | 0.7278 | 0.7074 | 0.6706 | 0.6659 | 0.6855 |
| 0.509 | 2.91 | 695 | 0.7334 | 0.7027 | 0.6654 | 0.6599 | 0.6806 |
| 0.4321 | 2.93 | 700 | 0.7358 | 0.7152 | 0.6805 | 0.6728 | 0.6944 |
| 0.6196 | 2.95 | 705 | 0.7406 | 0.7293 | 0.6971 | 0.6895 | 0.7119 |
| 0.5289 | 2.97 | 710 | 0.7363 | 0.7324 | 0.7017 | 0.6944 | 0.7162 |
| 0.6204 | 2.99 | 715 | 0.7401 | 0.7324 | 0.7024 | 0.6949 | 0.7182 |
| 0.5459 | 3.01 | 720 | 0.7360 | 0.7308 | 0.7010 | 0.6937 | 0.7152 |
| 0.4793 | 3.03 | 725 | 0.7363 | 0.7324 | 0.7007 | 0.6966 | 0.7123 |
| 0.5157 | 3.05 | 730 | 0.7330 | 0.7355 | 0.7026 | 0.6999 | 0.7107 |
| 0.4863 | 3.08 | 735 | 0.7231 | 0.7199 | 0.6842 | 0.6803 | 0.6887 |
| 0.423 | 3.1 | 740 | 0.7313 | 0.7230 | 0.6873 | 0.6816 | 0.6950 |
| 0.4879 | 3.12 | 745 | 0.7546 | 0.7199 | 0.6895 | 0.6828 | 0.7064 |
| 0.2499 | 3.14 | 750 | 0.7727 | 0.7214 | 0.6934 | 0.6913 | 0.7093 |
| 0.487 | 3.16 | 755 | 0.7621 | 0.7230 | 0.6906 | 0.6832 | 0.7052 |
| 0.3501 | 3.18 | 760 | 0.7966 | 0.7027 | 0.6689 | 0.6664 | 0.6919 |
| 0.5762 | 3.2 | 765 | 0.7694 | 0.7121 | 0.6747 | 0.6708 | 0.6896 |
| 0.4491 | 3.22 | 770 | 0.7482 | 0.7230 | 0.6873 | 0.6860 | 0.6887 |
| 0.4803 | 3.24 | 775 | 0.7584 | 0.7261 | 0.6895 | 0.6910 | 0.6934 |
| 0.3349 | 3.26 | 780 | 0.7874 | 0.7183 | 0.6870 | 0.6929 | 0.6956 |
| 0.5481 | 3.28 | 785 | 0.8124 | 0.7105 | 0.6856 | 0.6831 | 0.7075 |
| 0.3695 | 3.31 | 790 | 0.7935 | 0.7089 | 0.6798 | 0.6714 | 0.6995 |
| 0.3998 | 3.33 | 795 | 0.7702 | 0.7152 | 0.6811 | 0.6748 | 0.6912 |
| 0.5214 | 3.35 | 800 | 0.7705 | 0.7152 | 0.6765 | 0.6772 | 0.6759 |
| 0.4914 | 3.37 | 805 | 0.7796 | 0.7293 | 0.6954 | 0.6887 | 0.7048 |
| 0.4096 | 3.39 | 810 | 0.7912 | 0.7121 | 0.6818 | 0.6732 | 0.6999 |
| 0.4346 | 3.41 | 815 | 0.7758 | 0.7293 | 0.6958 | 0.6887 | 0.7060 |
| 0.4933 | 3.43 | 820 | 0.7802 | 0.7136 | 0.6795 | 0.6719 | 0.6942 |
| 0.4561 | 3.45 | 825 | 0.7670 | 0.7261 | 0.6929 | 0.6863 | 0.7020 |
| 0.5619 | 3.47 | 830 | 0.7656 | 0.7293 | 0.6916 | 0.6950 | 0.6915 |
| 0.4934 | 3.49 | 835 | 0.7875 | 0.7277 | 0.6872 | 0.7002 | 0.6866 |
| 0.545 | 3.51 | 840 | 0.7675 | 0.7199 | 0.6733 | 0.6852 | 0.6663 |
| 0.4279 | 3.54 | 845 | 0.7582 | 0.7136 | 0.6709 | 0.6735 | 0.6690 |
| 0.351 | 3.56 | 850 | 0.7599 | 0.7136 | 0.6728 | 0.6724 | 0.6741 |
| 0.3701 | 3.58 | 855 | 0.7602 | 0.7293 | 0.6922 | 0.6940 | 0.6915 |
| 0.5307 | 3.6 | 860 | 0.7689 | 0.7308 | 0.6936 | 0.6968 | 0.6940 |
| 0.3895 | 3.62 | 865 | 0.7657 | 0.7246 | 0.6897 | 0.6852 | 0.6952 |
| 0.4676 | 3.64 | 870 | 0.7715 | 0.7230 | 0.6875 | 0.6811 | 0.6965 |
| 0.4124 | 3.66 | 875 | 0.7795 | 0.7230 | 0.6899 | 0.6822 | 0.7024 |
| 0.464 | 3.68 | 880 | 0.7933 | 0.7214 | 0.6893 | 0.6829 | 0.7022 |
| 0.4911 | 3.7 | 885 | 0.8201 | 0.7324 | 0.6947 | 0.6999 | 0.7029 |
| 0.4753 | 3.72 | 890 | 0.7907 | 0.7324 | 0.6978 | 0.6928 | 0.7060 |
| 0.3981 | 3.74 | 895 | 0.7811 | 0.7214 | 0.6832 | 0.6823 | 0.6842 |
| 0.5685 | 3.77 | 900 | 0.7806 | 0.7277 | 0.6899 | 0.6880 | 0.6920 |
| 0.4643 | 3.79 | 905 | 0.7792 | 0.7308 | 0.6961 | 0.6942 | 0.6995 |
| 0.4609 | 3.81 | 910 | 0.7886 | 0.7152 | 0.6814 | 0.6738 | 0.6940 |
| 0.5575 | 3.83 | 915 | 0.8158 | 0.7011 | 0.6688 | 0.6656 | 0.6925 |
| 0.4409 | 3.85 | 920 | 0.7921 | 0.7074 | 0.6717 | 0.6657 | 0.6890 |
| 0.5152 | 3.87 | 925 | 0.7839 | 0.7214 | 0.6859 | 0.6783 | 0.7003 |
| 0.4547 | 3.89 | 930 | 0.7646 | 0.7387 | 0.7034 | 0.6998 | 0.7111 |
| 0.32 | 3.91 | 935 | 0.7502 | 0.7277 | 0.6885 | 0.6893 | 0.6881 |
| 0.2742 | 3.93 | 940 | 0.7583 | 0.7167 | 0.6734 | 0.6794 | 0.6686 |
| 0.5842 | 3.95 | 945 | 0.7613 | 0.7261 | 0.6885 | 0.6842 | 0.6942 |
| 0.4406 | 3.97 | 950 | 0.7951 | 0.7387 | 0.7056 | 0.7011 | 0.7178 |
| 0.5251 | 4.0 | 955 | 0.7932 | 0.7261 | 0.6918 | 0.6851 | 0.7056 |
| 0.4235 | 4.02 | 960 | 0.7839 | 0.7167 | 0.6818 | 0.6745 | 0.6949 |
| 0.3876 | 4.04 | 965 | 0.7668 | 0.7277 | 0.6918 | 0.6864 | 0.6987 |
| 0.4244 | 4.06 | 970 | 0.7622 | 0.7246 | 0.6851 | 0.6872 | 0.6834 |
| 0.3872 | 4.08 | 975 | 0.7696 | 0.7261 | 0.6879 | 0.6903 | 0.6867 |
| 0.3878 | 4.1 | 980 | 0.7760 | 0.7183 | 0.6781 | 0.6779 | 0.6787 |
| 0.3029 | 4.12 | 985 | 0.7897 | 0.7340 | 0.6971 | 0.6933 | 0.7027 |
| 0.3147 | 4.14 | 990 | 0.7987 | 0.7308 | 0.6946 | 0.6903 | 0.7003 |
| 0.3531 | 4.16 | 995 | 0.8009 | 0.7167 | 0.6750 | 0.6746 | 0.6753 |
| 0.393 | 4.18 | 1000 | 0.8072 | 0.7136 | 0.6724 | 0.6730 | 0.6718 |
| 0.5162 | 4.21 | 1005 | 0.8105 | 0.7277 | 0.6902 | 0.6861 | 0.6952 |
| 0.4582 | 4.23 | 1010 | 0.8124 | 0.7293 | 0.6919 | 0.6873 | 0.6977 |
| 0.4746 | 4.25 | 1015 | 0.8130 | 0.7340 | 0.7015 | 0.6944 | 0.7125 |
| 0.453 | 4.27 | 1020 | 0.8024 | 0.7418 | 0.7083 | 0.7019 | 0.7174 |
| 0.3852 | 4.29 | 1025 | 0.7856 | 0.7183 | 0.6778 | 0.6763 | 0.6798 |
| 0.3614 | 4.31 | 1030 | 0.7797 | 0.7167 | 0.6766 | 0.6757 | 0.6781 |
| 0.3222 | 4.33 | 1035 | 0.7949 | 0.7293 | 0.6897 | 0.6983 | 0.6899 |
| 0.3769 | 4.35 | 1040 | 0.8036 | 0.7246 | 0.6853 | 0.6974 | 0.6826 |
| 0.3626 | 4.37 | 1045 | 0.7951 | 0.7340 | 0.6947 | 0.7033 | 0.6925 |
| 0.335 | 4.39 | 1050 | 0.8133 | 0.7293 | 0.6999 | 0.6923 | 0.7139 |
| 0.4664 | 4.41 | 1055 | 0.8644 | 0.7074 | 0.6818 | 0.6747 | 0.7095 |
| 0.3939 | 4.44 | 1060 | 0.8280 | 0.7246 | 0.6949 | 0.6859 | 0.7140 |
| 0.3793 | 4.46 | 1065 | 0.7876 | 0.7293 | 0.6919 | 0.6879 | 0.6966 |
| 0.4559 | 4.48 | 1070 | 0.7933 | 0.7277 | 0.6837 | 0.6939 | 0.6787 |
| 0.362 | 4.5 | 1075 | 0.7908 | 0.7308 | 0.6886 | 0.6955 | 0.6862 |
| 0.3833 | 4.52 | 1080 | 0.8061 | 0.7246 | 0.6894 | 0.6912 | 0.6948 |
| 0.2983 | 4.54 | 1085 | 0.8001 | 0.7371 | 0.6958 | 0.7029 | 0.6956 |
| 0.4279 | 4.56 | 1090 | 0.7939 | 0.7340 | 0.6985 | 0.6970 | 0.7007 |
| 0.371 | 4.58 | 1095 | 0.8178 | 0.7355 | 0.7047 | 0.6957 | 0.7213 |
| 0.2119 | 4.6 | 1100 | 0.8276 | 0.7277 | 0.6953 | 0.6877 | 0.7129 |
| 0.4231 | 4.62 | 1105 | 0.8099 | 0.7402 | 0.7089 | 0.7007 | 0.7219 |
| 0.1754 | 4.64 | 1110 | 0.8107 | 0.7340 | 0.6973 | 0.7013 | 0.6991 |
| 0.2922 | 4.67 | 1115 | 0.8135 | 0.7324 | 0.6945 | 0.6989 | 0.6954 |
| 0.3584 | 4.69 | 1120 | 0.8163 | 0.7433 | 0.7120 | 0.7076 | 0.7192 |
| 0.3186 | 4.71 | 1125 | 0.8135 | 0.7449 | 0.7120 | 0.7076 | 0.7178 |
| 0.2247 | 4.73 | 1130 | 0.8224 | 0.7418 | 0.7103 | 0.7060 | 0.7166 |
| 0.5324 | 4.75 | 1135 | 0.8359 | 0.7402 | 0.7119 | 0.7071 | 0.7216 |
| 0.3348 | 4.77 | 1140 | 0.8277 | 0.7340 | 0.6964 | 0.6981 | 0.6991 |
| 0.2568 | 4.79 | 1145 | 0.8138 | 0.7340 | 0.6960 | 0.6974 | 0.6956 |
| 0.3209 | 4.81 | 1150 | 0.8127 | 0.7293 | 0.6892 | 0.6901 | 0.6883 |
| 0.4479 | 4.83 | 1155 | 0.8081 | 0.7340 | 0.6962 | 0.6930 | 0.6999 |
| 0.3882 | 4.85 | 1160 | 0.8195 | 0.7371 | 0.7053 | 0.6981 | 0.7156 |
| 0.3669 | 4.87 | 1165 | 0.8290 | 0.7293 | 0.6967 | 0.6885 | 0.7107 |
| 0.3157 | 4.9 | 1170 | 0.8288 | 0.7355 | 0.7019 | 0.6943 | 0.7135 |
| 0.4165 | 4.92 | 1175 | 0.8225 | 0.7340 | 0.6982 | 0.6948 | 0.7039 |
| 0.2225 | 4.94 | 1180 | 0.8172 | 0.7293 | 0.6896 | 0.6894 | 0.6903 |
| 0.3322 | 4.96 | 1185 | 0.8276 | 0.7246 | 0.6833 | 0.6856 | 0.6814 |
| 0.3355 | 4.98 | 1190 | 0.8414 | 0.7214 | 0.6813 | 0.6819 | 0.6838 |
| 0.3134 | 5.0 | 1195 | 0.8560 | 0.7324 | 0.6976 | 0.6927 | 0.7103 |
| 0.2255 | 5.02 | 1200 | 0.8507 | 0.7308 | 0.6970 | 0.6901 | 0.7070 |
| 0.3257 | 5.04 | 1205 | 0.8506 | 0.7214 | 0.6806 | 0.6834 | 0.6814 |
| 0.2508 | 5.06 | 1210 | 0.8652 | 0.7261 | 0.6840 | 0.6932 | 0.6805 |
| 0.2465 | 5.08 | 1215 | 0.8663 | 0.7246 | 0.6814 | 0.6902 | 0.6771 |
| 0.273 | 5.1 | 1220 | 0.8629 | 0.7199 | 0.6769 | 0.6790 | 0.6765 |
| 0.2377 | 5.13 | 1225 | 0.8664 | 0.7355 | 0.6996 | 0.6956 | 0.7052 |
| 0.2537 | 5.15 | 1230 | 0.8793 | 0.7324 | 0.6998 | 0.6947 | 0.7088 |
| 0.2031 | 5.17 | 1235 | 0.8715 | 0.7261 | 0.6928 | 0.6877 | 0.7005 |
| 0.2148 | 5.19 | 1240 | 0.8654 | 0.7355 | 0.6980 | 0.6962 | 0.7001 |
| 0.2889 | 5.21 | 1245 | 0.8712 | 0.7261 | 0.6872 | 0.6881 | 0.6863 |
| 0.368 | 5.23 | 1250 | 0.8732 | 0.7308 | 0.6917 | 0.6929 | 0.6913 |
| 0.2998 | 5.25 | 1255 | 0.8758 | 0.7293 | 0.6927 | 0.6905 | 0.6958 |
| 0.3705 | 5.27 | 1260 | 0.8713 | 0.7308 | 0.6939 | 0.6906 | 0.6975 |
| 0.2486 | 5.29 | 1265 | 0.8734 | 0.7277 | 0.6929 | 0.6872 | 0.7003 |
| 0.2424 | 5.31 | 1270 | 0.8772 | 0.7214 | 0.6847 | 0.6820 | 0.6909 |
| 0.3169 | 5.33 | 1275 | 0.8768 | 0.7230 | 0.6828 | 0.6847 | 0.6856 |
| 0.2918 | 5.36 | 1280 | 0.8836 | 0.7246 | 0.6856 | 0.6839 | 0.6913 |
| 0.2464 | 5.38 | 1285 | 0.8798 | 0.7246 | 0.6859 | 0.6835 | 0.6909 |
| 0.3308 | 5.4 | 1290 | 0.8762 | 0.7340 | 0.6947 | 0.6909 | 0.6995 |
| 0.2678 | 5.42 | 1295 | 0.8799 | 0.7340 | 0.6952 | 0.6900 | 0.7019 |
| 0.3768 | 5.44 | 1300 | 0.8762 | 0.7293 | 0.6880 | 0.6862 | 0.6907 |
| 0.3272 | 5.46 | 1305 | 0.8741 | 0.7246 | 0.6816 | 0.6831 | 0.6806 |
| 0.2762 | 5.48 | 1310 | 0.8801 | 0.7308 | 0.6872 | 0.6914 | 0.6850 |
| 0.3292 | 5.5 | 1315 | 0.8855 | 0.7324 | 0.6884 | 0.6922 | 0.6868 |
| 0.2974 | 5.52 | 1320 | 0.8856 | 0.7324 | 0.6879 | 0.6911 | 0.6868 |
| 0.3522 | 5.54 | 1325 | 0.8799 | 0.7214 | 0.6767 | 0.6759 | 0.6775 |
| 0.2946 | 5.56 | 1330 | 0.8815 | 0.7199 | 0.6783 | 0.6769 | 0.6804 |
| 0.2064 | 5.59 | 1335 | 0.8876 | 0.7293 | 0.6894 | 0.6839 | 0.6970 |
| 0.2353 | 5.61 | 1340 | 0.9266 | 0.7261 | 0.6938 | 0.6878 | 0.7087 |
| 0.2696 | 5.63 | 1345 | 0.9339 | 0.7152 | 0.6817 | 0.6789 | 0.6956 |
| 0.4084 | 5.65 | 1350 | 0.8897 | 0.7308 | 0.6886 | 0.6897 | 0.6901 |
| 0.3375 | 5.67 | 1355 | 0.8848 | 0.7246 | 0.6812 | 0.6874 | 0.6775 |
| 0.2449 | 5.69 | 1360 | 0.8848 | 0.7230 | 0.6789 | 0.6850 | 0.6749 |
| 0.2459 | 5.71 | 1365 | 0.8859 | 0.7246 | 0.6815 | 0.6832 | 0.6806 |
| 0.3471 | 5.73 | 1370 | 0.8895 | 0.7230 | 0.6818 | 0.6805 | 0.6832 |
| 0.3112 | 5.75 | 1375 | 0.9040 | 0.7261 | 0.6881 | 0.6876 | 0.6919 |
| 0.3404 | 5.77 | 1380 | 0.9397 | 0.7214 | 0.6836 | 0.6910 | 0.6897 |
| 0.2509 | 5.79 | 1385 | 0.9319 | 0.7277 | 0.6852 | 0.6963 | 0.6878 |
| 0.367 | 5.82 | 1390 | 0.8828 | 0.7261 | 0.6839 | 0.6861 | 0.6832 |
| 0.3158 | 5.84 | 1395 | 0.8770 | 0.7167 | 0.6741 | 0.6770 | 0.6729 |
| 0.1901 | 5.86 | 1400 | 0.8789 | 0.7183 | 0.6771 | 0.6783 | 0.6779 |
| 0.2183 | 5.88 | 1405 | 0.8804 | 0.7261 | 0.6845 | 0.6838 | 0.6856 |
| 0.3058 | 5.9 | 1410 | 0.8927 | 0.7277 | 0.6877 | 0.6921 | 0.6866 |
| 0.1906 | 5.92 | 1415 | 0.8929 | 0.7261 | 0.6859 | 0.6889 | 0.6856 |
| 0.2887 | 5.94 | 1420 | 0.8876 | 0.7293 | 0.6904 | 0.6908 | 0.6915 |
| 0.2236 | 5.96 | 1425 | 0.8900 | 0.7261 | 0.6866 | 0.6823 | 0.6918 |
| 0.3345 | 5.98 | 1430 | 0.8948 | 0.7293 | 0.6902 | 0.6884 | 0.6930 |
| 0.3004 | 6.0 | 1435 | 0.8938 | 0.7277 | 0.6871 | 0.6868 | 0.6873 |
| 0.3376 | 6.03 | 1440 | 0.8939 | 0.7308 | 0.6902 | 0.6895 | 0.6913 |
| 0.1774 | 6.05 | 1445 | 0.9019 | 0.7261 | 0.6893 | 0.6890 | 0.6915 |
| 0.1947 | 6.07 | 1450 | 0.8971 | 0.7308 | 0.6913 | 0.6917 | 0.6913 |
| 0.1641 | 6.09 | 1455 | 0.9135 | 0.7089 | 0.6639 | 0.6746 | 0.6574 |
| 0.3712 | 6.11 | 1460 | 0.9258 | 0.7089 | 0.6612 | 0.6755 | 0.6543 |
| 0.234 | 6.13 | 1465 | 0.8986 | 0.7261 | 0.6863 | 0.6868 | 0.6863 |
| 0.2605 | 6.15 | 1470 | 0.9004 | 0.7277 | 0.6875 | 0.6874 | 0.6881 |
| 0.1891 | 6.17 | 1475 | 0.9035 | 0.7293 | 0.6881 | 0.6867 | 0.6907 |
| 0.1988 | 6.19 | 1480 | 0.9032 | 0.7230 | 0.6807 | 0.6796 | 0.6824 |
| 0.1683 | 6.21 | 1485 | 0.9044 | 0.7293 | 0.6867 | 0.6876 | 0.6864 |
| 0.2669 | 6.23 | 1490 | 0.9156 | 0.7277 | 0.6879 | 0.6887 | 0.6885 |
| 0.2185 | 6.26 | 1495 | 0.9242 | 0.7324 | 0.6922 | 0.6927 | 0.6938 |
| 0.1485 | 6.28 | 1500 | 0.9264 | 0.7308 | 0.6916 | 0.6921 | 0.6925 |
| 0.1654 | 6.3 | 1505 | 0.9295 | 0.7308 | 0.6907 | 0.6913 | 0.6905 |
| 0.2177 | 6.32 | 1510 | 0.9347 | 0.7293 | 0.6884 | 0.6898 | 0.6871 |
| 0.1512 | 6.34 | 1515 | 0.9451 | 0.7261 | 0.6853 | 0.6842 | 0.6867 |
| 0.1006 | 6.36 | 1520 | 0.9623 | 0.7261 | 0.6869 | 0.6850 | 0.6911 |
| 0.1367 | 6.38 | 1525 | 0.9851 | 0.7277 | 0.6901 | 0.6916 | 0.6932 |
| 0.2743 | 6.4 | 1530 | 0.9740 | 0.7340 | 0.6958 | 0.6982 | 0.6960 |
| 0.2843 | 6.42 | 1535 | 0.9689 | 0.7261 | 0.6873 | 0.6892 | 0.6856 |
| 0.2563 | 6.44 | 1540 | 0.9781 | 0.7199 | 0.6757 | 0.6819 | 0.6706 |
| 0.2941 | 6.46 | 1545 | 0.9763 | 0.7246 | 0.6844 | 0.6915 | 0.6799 |
| 0.2245 | 6.49 | 1550 | 0.9718 | 0.7340 | 0.6948 | 0.6962 | 0.6952 |
| 0.1545 | 6.51 | 1555 | 0.9737 | 0.7324 | 0.6921 | 0.6921 | 0.6934 |
| 0.3361 | 6.53 | 1560 | 0.9692 | 0.7324 | 0.6944 | 0.6931 | 0.6966 |
| 0.162 | 6.55 | 1565 | 0.9704 | 0.7324 | 0.6946 | 0.6925 | 0.6982 |
| 0.2815 | 6.57 | 1570 | 0.9656 | 0.7340 | 0.6957 | 0.6962 | 0.6964 |
| 0.2087 | 6.59 | 1575 | 0.9639 | 0.7308 | 0.6927 | 0.6919 | 0.6952 |
| 0.2326 | 6.61 | 1580 | 0.9696 | 0.7324 | 0.6959 | 0.6929 | 0.7009 |
| 0.1923 | 6.63 | 1585 | 0.9611 | 0.7340 | 0.6981 | 0.6959 | 0.7019 |
| 0.1684 | 6.65 | 1590 | 0.9606 | 0.7355 | 0.6964 | 0.6978 | 0.6954 |
| 0.3993 | 6.67 | 1595 | 0.9609 | 0.7293 | 0.6888 | 0.6921 | 0.6860 |
| 0.3185 | 6.69 | 1600 | 0.9627 | 0.7355 | 0.6970 | 0.6974 | 0.6982 |
| 0.2099 | 6.72 | 1605 | 0.9814 | 0.7261 | 0.6910 | 0.6906 | 0.6962 |
| 0.1302 | 6.74 | 1610 | 0.9806 | 0.7308 | 0.6938 | 0.6922 | 0.6991 |
| 0.238 | 6.76 | 1615 | 0.9711 | 0.7324 | 0.6928 | 0.6940 | 0.6927 |
| 0.3351 | 6.78 | 1620 | 0.9749 | 0.7230 | 0.6788 | 0.6868 | 0.6738 |
| 0.3485 | 6.8 | 1625 | 0.9761 | 0.7308 | 0.6884 | 0.6937 | 0.6858 |
| 0.137 | 6.82 | 1630 | 0.9766 | 0.7324 | 0.6909 | 0.6947 | 0.6895 |
| 0.1751 | 6.84 | 1635 | 0.9776 | 0.7324 | 0.6932 | 0.6928 | 0.6946 |
| 0.1701 | 6.86 | 1640 | 0.9787 | 0.7355 | 0.6977 | 0.6954 | 0.7005 |
| 0.148 | 6.88 | 1645 | 0.9830 | 0.7387 | 0.7036 | 0.7001 | 0.7076 |
| 0.2204 | 6.9 | 1650 | 0.9860 | 0.7340 | 0.6949 | 0.6942 | 0.6960 |
| 0.1966 | 6.92 | 1655 | 0.9920 | 0.7214 | 0.6793 | 0.6817 | 0.6775 |
| 0.2242 | 6.95 | 1660 | 0.9979 | 0.7152 | 0.6727 | 0.6771 | 0.6688 |
| 0.157 | 6.97 | 1665 | 1.0002 | 0.7293 | 0.6876 | 0.6925 | 0.6852 |
| 0.2665 | 6.99 | 1670 | 1.0067 | 0.7230 | 0.6838 | 0.6860 | 0.6860 |
| 0.159 | 7.01 | 1675 | 1.0002 | 0.7230 | 0.6841 | 0.6834 | 0.6867 |
| 0.1399 | 7.03 | 1680 | 0.9954 | 0.7277 | 0.6887 | 0.6874 | 0.6909 |
| 0.16 | 7.05 | 1685 | 0.9981 | 0.7277 | 0.6878 | 0.6878 | 0.6889 |
| 0.1074 | 7.07 | 1690 | 1.0067 | 0.7277 | 0.6881 | 0.6886 | 0.6889 |
| 0.15 | 7.09 | 1695 | 1.0130 | 0.7261 | 0.6857 | 0.6860 | 0.6863 |
| 0.1956 | 7.11 | 1700 | 1.0177 | 0.7261 | 0.6858 | 0.6854 | 0.6871 |
| 0.0964 | 7.13 | 1705 | 1.0193 | 0.7277 | 0.6877 | 0.6884 | 0.6881 |
| 0.1922 | 7.15 | 1710 | 1.0224 | 0.7277 | 0.6867 | 0.6894 | 0.6854 |
| 0.1334 | 7.18 | 1715 | 1.0224 | 0.7261 | 0.6844 | 0.6883 | 0.6812 |
| 0.1071 | 7.2 | 1720 | 1.0252 | 0.7183 | 0.6746 | 0.6796 | 0.6704 |
| 0.1798 | 7.22 | 1725 | 1.0306 | 0.7214 | 0.6781 | 0.6851 | 0.6724 |
| 0.2293 | 7.24 | 1730 | 1.0302 | 0.7277 | 0.6878 | 0.6900 | 0.6865 |
| 0.1813 | 7.26 | 1735 | 1.0316 | 0.7261 | 0.6884 | 0.6898 | 0.6895 |
| 0.1884 | 7.28 | 1740 | 1.0327 | 0.7261 | 0.6884 | 0.6898 | 0.6895 |
| 0.1482 | 7.3 | 1745 | 1.0328 | 0.7261 | 0.6877 | 0.6900 | 0.6883 |
| 0.1044 | 7.32 | 1750 | 1.0387 | 0.7324 | 0.6947 | 0.6989 | 0.6946 |
| 0.3129 | 7.34 | 1755 | 1.0264 | 0.7261 | 0.6884 | 0.6905 | 0.6887 |
| 0.1136 | 7.36 | 1760 | 1.0226 | 0.7183 | 0.6789 | 0.6826 | 0.6759 |
| 0.1869 | 7.38 | 1765 | 1.0219 | 0.7214 | 0.6812 | 0.6852 | 0.6783 |
| 0.1363 | 7.41 | 1770 | 1.0230 | 0.7261 | 0.6865 | 0.6913 | 0.6836 |
| 0.0683 | 7.43 | 1775 | 1.0295 | 0.7230 | 0.6835 | 0.6885 | 0.6800 |
| 0.155 | 7.45 | 1780 | 1.0372 | 0.7214 | 0.6805 | 0.6870 | 0.6767 |
| 0.3063 | 7.47 | 1785 | 1.0365 | 0.7246 | 0.6849 | 0.6885 | 0.6834 |
| 0.0882 | 7.49 | 1790 | 1.0347 | 0.7214 | 0.6821 | 0.6856 | 0.6795 |
| 0.1951 | 7.51 | 1795 | 1.0363 | 0.7183 | 0.6786 | 0.6803 | 0.6771 |
| 0.1963 | 7.53 | 1800 | 1.0397 | 0.7261 | 0.6865 | 0.6878 | 0.6875 |
| 0.2286 | 7.55 | 1805 | 1.0406 | 0.7261 | 0.6868 | 0.6880 | 0.6883 |
| 0.1509 | 7.57 | 1810 | 1.0362 | 0.7293 | 0.6896 | 0.6930 | 0.6887 |
| 0.1184 | 7.59 | 1815 | 1.0418 | 0.7105 | 0.6661 | 0.6765 | 0.6584 |
| 0.1063 | 7.62 | 1820 | 1.0522 | 0.7105 | 0.6630 | 0.6777 | 0.6529 |
| 0.134 | 7.64 | 1825 | 1.0484 | 0.7199 | 0.6762 | 0.6882 | 0.6686 |
| 0.2583 | 7.66 | 1830 | 1.0450 | 0.7261 | 0.6826 | 0.6912 | 0.6789 |
| 0.1144 | 7.68 | 1835 | 1.0507 | 0.7277 | 0.6882 | 0.6944 | 0.6877 |
| 0.1107 | 7.7 | 1840 | 1.0511 | 0.7214 | 0.6839 | 0.6853 | 0.6877 |
| 0.2604 | 7.72 | 1845 | 1.0395 | 0.7246 | 0.6863 | 0.6858 | 0.6881 |
| 0.1464 | 7.74 | 1850 | 1.0398 | 0.7199 | 0.6787 | 0.6801 | 0.6777 |
| 0.2535 | 7.76 | 1855 | 1.0411 | 0.7246 | 0.6820 | 0.6869 | 0.6779 |
| 0.1572 | 7.78 | 1860 | 1.0406 | 0.7183 | 0.6765 | 0.6789 | 0.6743 |
| 0.1646 | 7.8 | 1865 | 1.0415 | 0.7183 | 0.6746 | 0.6796 | 0.6704 |
| 0.2349 | 7.82 | 1870 | 1.0426 | 0.7261 | 0.6844 | 0.6890 | 0.6816 |
| 0.2146 | 7.85 | 1875 | 1.0449 | 0.7277 | 0.6882 | 0.6907 | 0.6885 |
| 0.1505 | 7.87 | 1880 | 1.0456 | 0.7277 | 0.6915 | 0.6908 | 0.6944 |
| 0.2806 | 7.89 | 1885 | 1.0445 | 0.7261 | 0.6900 | 0.6894 | 0.6926 |
| 0.2245 | 7.91 | 1890 | 1.0402 | 0.7277 | 0.6908 | 0.6904 | 0.6916 |
| 0.1388 | 7.93 | 1895 | 1.0410 | 0.7293 | 0.6914 | 0.6919 | 0.6911 |
| 0.3175 | 7.95 | 1900 | 1.0403 | 0.7261 | 0.6876 | 0.6899 | 0.6856 |
| 0.2023 | 7.97 | 1905 | 1.0379 | 0.7230 | 0.6857 | 0.6885 | 0.6832 |
| 0.1165 | 7.99 | 1910 | 1.0389 | 0.7261 | 0.6881 | 0.6913 | 0.6852 |
| 0.1103 | 8.01 | 1915 | 1.0431 | 0.7246 | 0.6865 | 0.6899 | 0.6834 |
| 0.1822 | 8.03 | 1920 | 1.0520 | 0.7214 | 0.6820 | 0.6872 | 0.6775 |
| 0.1773 | 8.05 | 1925 | 1.0600 | 0.7121 | 0.6690 | 0.6790 | 0.6614 |
| 0.1259 | 8.08 | 1930 | 1.0601 | 0.7183 | 0.6773 | 0.6843 | 0.6716 |
| 0.1737 | 8.1 | 1935 | 1.0619 | 0.7183 | 0.6804 | 0.6845 | 0.6775 |
| 0.1776 | 8.12 | 1940 | 1.0646 | 0.7277 | 0.6901 | 0.6921 | 0.6905 |
| 0.112 | 8.14 | 1945 | 1.0652 | 0.7324 | 0.6965 | 0.6968 | 0.6982 |
| 0.1649 | 8.16 | 1950 | 1.0650 | 0.7324 | 0.6962 | 0.6960 | 0.6982 |
| 0.1296 | 8.18 | 1955 | 1.0660 | 0.7308 | 0.6958 | 0.6954 | 0.6976 |
| 0.1325 | 8.2 | 1960 | 1.0651 | 0.7277 | 0.6897 | 0.6905 | 0.6901 |
| 0.1422 | 8.22 | 1965 | 1.0680 | 0.7199 | 0.6782 | 0.6839 | 0.6738 |
| 0.3486 | 8.24 | 1970 | 1.0723 | 0.7183 | 0.6729 | 0.6821 | 0.6661 |
| 0.2213 | 8.26 | 1975 | 1.0700 | 0.7121 | 0.6632 | 0.6738 | 0.6563 |
| 0.1206 | 8.28 | 1980 | 1.0671 | 0.7152 | 0.6673 | 0.6766 | 0.6622 |
| 0.1196 | 8.31 | 1985 | 1.0657 | 0.7183 | 0.6723 | 0.6796 | 0.6692 |
| 0.1955 | 8.33 | 1990 | 1.0568 | 0.7183 | 0.6745 | 0.6812 | 0.6696 |
| 0.1085 | 8.35 | 1995 | 1.0566 | 0.7152 | 0.6735 | 0.6813 | 0.6672 |
| 0.1359 | 8.37 | 2000 | 1.0549 | 0.7230 | 0.6862 | 0.6890 | 0.6836 |
| 0.2431 | 8.39 | 2005 | 1.0555 | 0.7308 | 0.6960 | 0.6976 | 0.6944 |
| 0.1512 | 8.41 | 2010 | 1.0570 | 0.7324 | 0.6966 | 0.6972 | 0.6970 |
| 0.1002 | 8.43 | 2015 | 1.0601 | 0.7355 | 0.6997 | 0.7000 | 0.7005 |
| 0.1529 | 8.45 | 2020 | 1.0601 | 0.7277 | 0.6913 | 0.6915 | 0.6913 |
| 0.1633 | 8.47 | 2025 | 1.0618 | 0.7261 | 0.6881 | 0.6882 | 0.6883 |
| 0.068 | 8.49 | 2030 | 1.0657 | 0.7199 | 0.6816 | 0.6826 | 0.6812 |
| 0.1883 | 8.51 | 2035 | 1.0644 | 0.7261 | 0.6885 | 0.6881 | 0.6891 |
| 0.1484 | 8.54 | 2040 | 1.0624 | 0.7324 | 0.6961 | 0.6952 | 0.6970 |
| 0.1438 | 8.56 | 2045 | 1.0642 | 0.7340 | 0.6983 | 0.6973 | 0.6995 |
| 0.1164 | 8.58 | 2050 | 1.0660 | 0.7308 | 0.6950 | 0.6948 | 0.6952 |
| 0.1523 | 8.6 | 2055 | 1.0702 | 0.7246 | 0.6875 | 0.6895 | 0.6857 |
| 0.0793 | 8.62 | 2060 | 1.0749 | 0.7230 | 0.6832 | 0.6874 | 0.6797 |
| 0.0752 | 8.64 | 2065 | 1.0783 | 0.7214 | 0.6797 | 0.6853 | 0.6755 |
| 0.0825 | 8.66 | 2070 | 1.0854 | 0.7230 | 0.6798 | 0.6868 | 0.6745 |
| 0.1463 | 8.68 | 2075 | 1.0937 | 0.7199 | 0.6748 | 0.6837 | 0.6686 |
| 0.1806 | 8.7 | 2080 | 1.0951 | 0.7199 | 0.6786 | 0.6854 | 0.6741 |
| 0.1354 | 8.72 | 2085 | 1.0925 | 0.7277 | 0.6885 | 0.6918 | 0.6877 |
| 0.1348 | 8.74 | 2090 | 1.0896 | 0.7324 | 0.6960 | 0.6958 | 0.6982 |
| 0.174 | 8.77 | 2095 | 1.0875 | 0.7261 | 0.6908 | 0.6900 | 0.6918 |
| 0.1424 | 8.79 | 2100 | 1.0902 | 0.7261 | 0.6896 | 0.6897 | 0.6895 |
| 0.1056 | 8.81 | 2105 | 1.0938 | 0.7261 | 0.6886 | 0.6906 | 0.6867 |
| 0.1662 | 8.83 | 2110 | 1.0952 | 0.7261 | 0.6866 | 0.6900 | 0.6836 |
| 0.1077 | 8.85 | 2115 | 1.0970 | 0.7246 | 0.6853 | 0.6887 | 0.6830 |
| 0.2363 | 8.87 | 2120 | 1.0967 | 0.7230 | 0.6832 | 0.6872 | 0.6808 |
| 0.1287 | 8.89 | 2125 | 1.0975 | 0.7261 | 0.6875 | 0.6916 | 0.6860 |
| 0.141 | 8.91 | 2130 | 1.0982 | 0.7277 | 0.6890 | 0.6930 | 0.6877 |
| 0.1411 | 8.93 | 2135 | 1.0962 | 0.7230 | 0.6824 | 0.6861 | 0.6800 |
| 0.1088 | 8.95 | 2140 | 1.0954 | 0.7230 | 0.6823 | 0.6880 | 0.6777 |
| 0.1032 | 8.97 | 2145 | 1.0942 | 0.7214 | 0.6807 | 0.6866 | 0.6759 |
| 0.0683 | 9.0 | 2150 | 1.0915 | 0.7230 | 0.6825 | 0.6877 | 0.6785 |
| 0.1402 | 9.02 | 2155 | 1.0894 | 0.7277 | 0.6894 | 0.6934 | 0.6861 |
| 0.0853 | 9.04 | 2160 | 1.0914 | 0.7246 | 0.6841 | 0.6891 | 0.6802 |
| 0.1155 | 9.06 | 2165 | 1.0937 | 0.7214 | 0.6787 | 0.6846 | 0.6743 |
| 0.0675 | 9.08 | 2170 | 1.0961 | 0.7230 | 0.6801 | 0.6869 | 0.6753 |
| 0.0754 | 9.1 | 2175 | 1.0959 | 0.7246 | 0.6828 | 0.6881 | 0.6791 |
| 0.0974 | 9.12 | 2180 | 1.0975 | 0.7293 | 0.6892 | 0.6926 | 0.6867 |
| 0.1567 | 9.14 | 2185 | 1.0993 | 0.7246 | 0.6850 | 0.6886 | 0.6822 |
| 0.1691 | 9.16 | 2190 | 1.0999 | 0.7261 | 0.6866 | 0.6917 | 0.6824 |
| 0.1026 | 9.18 | 2195 | 1.1006 | 0.7246 | 0.6850 | 0.6904 | 0.6806 |
| 0.0727 | 9.21 | 2200 | 1.1029 | 0.7246 | 0.6850 | 0.6904 | 0.6806 |
| 0.0834 | 9.23 | 2205 | 1.1046 | 0.7199 | 0.6783 | 0.6843 | 0.6738 |
| 0.1159 | 9.25 | 2210 | 1.1049 | 0.7230 | 0.6823 | 0.6880 | 0.6777 |
| 0.1586 | 9.27 | 2215 | 1.1046 | 0.7214 | 0.6808 | 0.6852 | 0.6775 |
| 0.1292 | 9.29 | 2220 | 1.1043 | 0.7230 | 0.6824 | 0.6865 | 0.6793 |
| 0.0743 | 9.31 | 2225 | 1.1035 | 0.7246 | 0.6851 | 0.6889 | 0.6822 |
| 0.06 | 9.33 | 2230 | 1.1022 | 0.7277 | 0.6912 | 0.6927 | 0.6901 |
| 0.1545 | 9.35 | 2235 | 1.1039 | 0.7293 | 0.6916 | 0.6932 | 0.6907 |
| 0.1546 | 9.37 | 2240 | 1.1058 | 0.7230 | 0.6833 | 0.6861 | 0.6812 |
| 0.2023 | 9.39 | 2245 | 1.1066 | 0.7214 | 0.6808 | 0.6852 | 0.6775 |
| 0.1607 | 9.41 | 2250 | 1.1077 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.0658 | 9.44 | 2255 | 1.1090 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.0417 | 9.46 | 2260 | 1.1107 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.063 | 9.48 | 2265 | 1.1129 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.0988 | 9.5 | 2270 | 1.1147 | 0.7230 | 0.6833 | 0.6886 | 0.6789 |
| 0.1082 | 9.52 | 2275 | 1.1155 | 0.7230 | 0.6833 | 0.6886 | 0.6789 |
| 0.1984 | 9.54 | 2280 | 1.1154 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1793 | 9.56 | 2285 | 1.1153 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1324 | 9.58 | 2290 | 1.1152 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.1059 | 9.6 | 2295 | 1.1157 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.0473 | 9.62 | 2300 | 1.1158 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.1065 | 9.64 | 2305 | 1.1166 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.1373 | 9.67 | 2310 | 1.1173 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1248 | 9.69 | 2315 | 1.1177 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.0966 | 9.71 | 2320 | 1.1183 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.0742 | 9.73 | 2325 | 1.1189 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.0827 | 9.75 | 2330 | 1.1193 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.143 | 9.77 | 2335 | 1.1202 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1623 | 9.79 | 2340 | 1.1201 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1495 | 9.81 | 2345 | 1.1197 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.0965 | 9.83 | 2350 | 1.1195 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1297 | 9.85 | 2355 | 1.1194 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1164 | 9.87 | 2360 | 1.1195 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1759 | 9.9 | 2365 | 1.1195 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.2404 | 9.92 | 2370 | 1.1192 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1467 | 9.94 | 2375 | 1.1189 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1969 | 9.96 | 2380 | 1.1187 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1573 | 9.98 | 2385 | 1.1187 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.2614 | 10.0 | 2390 | 1.1188 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Tokenizers 0.10.3
|
stanleychu2/roberta-fever | 15491e847784e59c53d1c884017ba860fa28bba9 | 2021-06-15T21:43:15.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | stanleychu2 | null | stanleychu2/roberta-fever | 3 | null | transformers | 21,778 | Entry not found |
stefan-it/electra-base-gc4-64k-300000-cased-generator | 979473bc60a3d6b0d2538df775116951a8ce0e5b | 2021-05-01T11:18:30.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"de",
"dataset:german-nlp-group/german_common_crawl",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | stefan-it | null | stefan-it/electra-base-gc4-64k-300000-cased-generator | 3 | null | transformers | 21,779 | ---
language: de
license: mit
datasets:
- german-nlp-group/german_common_crawl
widget:
- text: "Heute ist ein [MASK] Tag"
---
# GC4LM: A Colossal (Biased) language model for German
This repository presents a colossal (and biased) language model for German trained on the recently released
["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html) (GC4),
with a total dataset size of ~844GB.
---
**Disclaimer**: the presented and trained language models in this repository are for **research only** purposes.
The GC4 corpus - which was used for training - contains crawled texts from the internet. Thus, the language models can
be considered highly biased, resulting in a model that encodes stereotypical associations along gender, race,
ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended
to read:
[On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf)
from Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell.
The aim of the released checkpoints is to boost research on large pre-trained language models for German, especially
for identifying biases and how to prevent them, as most research is currently done only for English.
---
Please use the new GitHub Discussions feature in order to discuss or present further research questions.
Feel free to use `#gc4lm` on Twitter 🐦.
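As a quick sanity check, here is a minimal fill-mask sketch for this generator checkpoint (a hedged example, not part of the original release notes; the prompt is taken from the widget above):
```python
from transformers import pipeline

# the ELECTRA generator is a masked language model, so the fill-mask pipeline applies
fill_mask = pipeline("fill-mask", model="stefan-it/electra-base-gc4-64k-300000-cased-generator")

for prediction in fill_mask("Heute ist ein [MASK] Tag"):
    print(prediction["token_str"], round(prediction["score"], 4))
```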
|
stefan-it/flair-ner-conll03 | 6184fd8983961469b6b12a0e689b71cb9d7f41a7 | 2020-12-11T10:07:20.000Z | [
"pytorch",
"en",
"flair",
"sequence-tagger-model",
"license:mit"
] | null | false | stefan-it | null | stefan-it/flair-ner-conll03 | 3 | null | flair | 21,780 | ---
language: en
tags:
- flair
- sequence-tagger-model
license: mit
---
# CoNLL-2003 NER Model
Imported sequence tagger model for Flair that was trained on the English CoNLL-2003 corpus for NER.
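A minimal usage sketch, assuming a Flair release recent enough to load tagger checkpoints directly from the Hugging Face Hub; the example sentence is made up:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the imported CoNLL-2003 NER tagger
tagger = SequenceTagger.load("stefan-it/flair-ner-conll03")

# run NER over an example sentence
sentence = Sentence("George Washington went to Washington.")
tagger.predict(sentence)

# print the detected entity spans
for entity in sentence.get_spans("ner"):
    print(entity)
```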
|
stfuowned/nek | ca2d149608996f8a211e05cd6e4b64ad67278cd0 | 2021-06-08T18:38:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | stfuowned | null | stfuowned/nek | 3 | null | transformers | 21,781 | ---
tags:
- conversational
---
# My Awesome Model |
subbareddyiiit/RobertaNLP | 4337b5e6c370e066ec0cf82b9005fc7a9e193672 | 2021-05-20T21:57:23.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | subbareddyiiit | null | subbareddyiiit/RobertaNLP | 3 | null | transformers | 21,782 | hello
|
subbareddyiiit/TeRobeRta | e103f5986a4cb8093b5223210048712cc89961d6 | 2021-05-20T21:58:55.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | subbareddyiiit | null | subbareddyiiit/TeRobeRta | 3 | null | transformers | 21,783 | Entry not found |
sukritin/hindi-bert | e837db3a8976dfb9a90041b5bc7e4205a3b9da5a | 2021-05-20T07:19:00.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sukritin | null | sukritin/hindi-bert | 3 | null | transformers | 21,784 | Entry not found |
superb/wav2vec2-large-superb-ks | cd6f4485d59f23c9e158e35815633aff8f1a583c | 2021-11-04T16:03:43.000Z | [
"pytorch",
"wav2vec2",
"audio-classification",
"en",
"dataset:superb",
"arxiv:2105.01051",
"transformers",
"speech",
"audio",
"license:apache-2.0"
] | audio-classification | false | superb | null | superb/wav2vec2-large-superb-ks | 3 | null | transformers | 21,785 | ---
language: en
datasets:
- superb
tags:
- speech
- audio
- wav2vec2
- audio-classification
license: apache-2.0
widget:
- example_title: Speech Commands "down"
src: https://cdn-media.huggingface.co/speech_samples/keyword_spotting_down.wav
- example_title: Speech Commands "go"
src: https://cdn-media.huggingface.co/speech_samples/keyword_spotting_go.wav
---
# Wav2Vec2-Large for Keyword Spotting
## Model description
This is a ported version of
[S3PRL's Wav2Vec2 for the SUPERB Keyword Spotting task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/speech_commands).
The base model is [wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60), which is pretrained on 16kHz
sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
## Task and dataset description
Keyword Spotting (KS) detects preregistered keywords by classifying utterances into a predefined set of
words. The task is usually performed on-device for a fast response time. Thus, accuracy, model size, and
inference time are all crucial. SUPERB uses the widely used
[Speech Commands dataset v1.0](https://www.tensorflow.org/datasets/catalog/speech_commands) for the task.
The dataset consists of ten classes of keywords, a class for silence, and an unknown class to cover
false positives.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ks-keyword-spotting).
## Usage examples
You can use the model via the Audio Classification pipeline:
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("anton-l/superb_demo", "ks", split="test")
classifier = pipeline("audio-classification", model="superb/wav2vec2-large-superb-ks")
labels = classifier(dataset[0]["file"], top_k=5)
```
Or use the model directly:
```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForSequenceClassification, Wav2Vec2FeatureExtractor
from torchaudio.sox_effects import apply_effects_file
effects = [["channels", "1"], ["rate", "16000"], ["gain", "-3.0"]]
def map_to_array(example):
speech, _ = apply_effects_file(example["file"], effects)
example["speech"] = speech.squeeze(0).numpy()
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "ks", split="test")
dataset = dataset.map(map_to_array)
model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-large-superb-ks")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/wav2vec2-large-superb-ks")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.9666` | `N/A` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` |
tamedai/marian-mt-es-de-epoch1-paracrawl-europarl-tilde-books-news | 8636701a474f144d7f1472f87ed1c052a0be5aa8 | 2021-12-11T16:11:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tamedai | null | tamedai/marian-mt-es-de-epoch1-paracrawl-europarl-tilde-books-news | 3 | null | transformers | 21,786 | Entry not found |
tanay/layoutlm-custom | 337a897033cde2a6754c234e824f10d4d710c947 | 2021-07-09T06:51:24.000Z | [
"pytorch",
"layoutlm",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tanay | null | tanay/layoutlm-custom | 3 | null | transformers | 21,787 | Entry not found |
taoroalin/12_aug_50k_labels | a6e6e75489185e41cd62367f97b8221ec627e212 | 2021-09-21T01:07:16.000Z | [
"pytorch",
"deberta",
"text-classification",
"transformers"
] | text-classification | false | taoroalin | null | taoroalin/12_aug_50k_labels | 3 | null | transformers | 21,788 | Entry not found |
tareknaous/t5-empathetic-dialogues | e0d62ce0d4f5a71798eb4f4c8a1315e582ecbf39 | 2022-02-21T08:54:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tareknaous | null | tareknaous/t5-empathetic-dialogues | 3 | null | transformers | 21,789 | Entry not found |
tcaputi/guns-relevant-b300 | c82389b47a1924d1054eacfe8fd4c9124be2d2b9 | 2021-05-20T07:24:39.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | tcaputi | null | tcaputi/guns-relevant-b300 | 3 | null | transformers | 21,790 | Entry not found |
teacookies/autonlp-roberta-base-squad2-24465516 | 17996508f48c40ec39823a4de1e656265c57534a | 2021-10-22T08:21:22.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"unk",
"dataset:teacookies/autonlp-data-roberta-base-squad2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | question-answering | false | teacookies | null | teacookies/autonlp-roberta-base-squad2-24465516 | 3 | null | transformers | 21,791 | ---
tags:
- autonlp
- question-answering
language: unk
widget:
- text: "Who loves AutoNLP?"
context: "Everyone loves AutoNLP"
datasets:
- teacookies/autonlp-data-roberta-base-squad2
co2_eq_emissions: 65.5797497320557
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- Model ID: 24465516
- CO2 Emissions (in grams): 65.5797497320557
## Validation Metrics
- Loss: 0.6545609831809998
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465516
```
Or Python API:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465516", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465516", use_auth_token=True)

question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')

# dummy answer-span labels; passing them makes the model also return a loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
``` |
textattack/facebook-bart-large-RTE | a0f41a32294a3471c256178b4e95d00a7f10fc78 | 2020-06-09T16:50:55.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/facebook-bart-large-RTE | 3 | null | transformers | 21,792 | Entry not found |
textattack/facebook-bart-large-WNLI | 7035cbb0022e7444722e6dd0f491c4863df8acc5 | 2020-06-09T16:52:24.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/facebook-bart-large-WNLI | 3 | null | transformers | 21,793 | Entry not found |
textattack/xlnet-base-cased-MRPC | 8efd034dcc1b7375dd30de77a8c70aecca584b51 | 2020-07-06T16:30:46.000Z | [
"pytorch",
"xlnet",
"text-generation",
"transformers"
] | text-generation | false | textattack | null | textattack/xlnet-base-cased-MRPC | 3 | null | transformers | 21,794 | ## TextAttack Model Card
This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 5e-05, and a maximum sequence length of 256.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8897058823529411, as measured by the
eval set accuracy, found after 2 epochs.
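A hedged usage sketch for paraphrase classification with this checkpoint (the sentence pair is invented and the index-to-label mapping is an assumption; check the checkpoint's config before relying on it):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("textattack/xlnet-base-cased-MRPC")
model = AutoModelForSequenceClassification.from_pretrained("textattack/xlnet-base-cased-MRPC")

# MRPC is a sentence-pair task: do the two sentences paraphrase each other?
inputs = tokenizer(
    "The company said profits rose sharply.",
    "Profits increased significantly, the company reported.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # assumed order: [not_paraphrase, paraphrase]
```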
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
textattack/xlnet-base-cased-QNLI | 02cd512cd4078cab27ed6f90e500600f62bb6f44 | 2020-06-09T16:56:10.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/xlnet-base-cased-QNLI | 3 | null | transformers | 21,795 | Entry not found |
thatdramebaazguy/roberta-base-MITmovie-squad | f08839e074d3c62464d536ce45d6c30b16f3e9e6 | 2022-07-01T18:56:48.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"question-answering",
"English",
"dataset:MIT Movie",
"dataset:SQuAD",
"transformers",
"roberta-base",
"qa",
"movies",
"license:cc-by-4.0",
"autotrain_compatible"
] | question-answering | false | thatdramebaazguy | null | thatdramebaazguy/roberta-base-MITmovie-squad | 3 | 1 | transformers | 21,796 | ---
datasets:
- MIT Movie
- SQuAD
language:
- English
thumbnail:
tags:
- roberta
- roberta-base
- question-answering
- qa
- movies
license: cc-by-4.0
---
# roberta-base + Task Transfer (NER) --> Domain-Specific QA
Objective:
This is RoBERTa Base without any Domain Adaptive Pretraining --> then trained for the NER task using the MIT Movie Dataset --> then the head was swapped to train on the SQuAD task. This makes a QA model capable of answering questions in the movie domain, with additional information coming from a different task (NER - Task Transfer).
https://huggingface.co/thatdramebaazguy/roberta-base-MITmovie was used as the Roberta Base + NER model.
```
model_name = "thatdramebaazguy/roberta-base-MITmovie-squad"
pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="question-answering")
```
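A hedged sketch of querying the pipeline built above; the question and context are made-up movie examples, not taken from the evaluation data:
```python
from transformers import pipeline

model_name = "thatdramebaazguy/roberta-base-MITmovie-squad"
qa = pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="question-answering")

result = qa(
    question="Who directed Inception?",
    context="Inception is a 2010 science fiction film written and directed by Christopher Nolan.",
)
print(result["answer"], result["score"])
```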
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** NER --> QA
**Training data:** MIT Movie, SQuADv1
**Eval data:** MoviesQA (From https://github.com/ibm-aur-nlp/domain-specific-QA)
**Infrastructure**: 4x Tesla v100
**Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/scripts/shell_scripts/movieR_NER_squad.sh)
## Hyperparameters
```
Num examples = 88567
Num Epochs = 3
Instantaneous batch size per device = 32
Total train batch size (w. parallel, distributed & accumulation) = 128
```
## Performance
### Eval on MoviesQA
- eval_samples = 5032
- exact_match = 55.80286
- f1 = 70.31451
### Eval on SQuADv1
- exact_match = 85.6859
- f1 = 91.96064
Github Repo:
- [Domain-Adaptation Project](https://github.com/adityaarunsinghal/Domain-Adaptation/)
---
|
theiconik/hermione-granger | 353d4d4c0c4f2b0680f33ba3c87c52c36c6ae6f1 | 2021-08-26T16:11:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | theiconik | null | theiconik/hermione-granger | 3 | null | transformers | 21,797 | ---
tags:
- conversational
---
# Hermione Granger Model |
thomwolf/codeparrot | f6657cfdaf922dee188c7e81412894dff2203d64 | 2021-07-21T14:19:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | thomwolf | null | thomwolf/codeparrot | 3 | 1 | transformers | 21,798 | Entry not found |
thyagosme/bert-base-cased-wikitext2 | b85f4e7230a5d2d48aa654d6aa9181d8a1690863 | 2022-02-09T03:44:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | thyagosme | null | thyagosme/bert-base-cased-wikitext2 | 3 | null | transformers | 21,799 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8517
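For intuition, and assuming this loss is the mean per-token cross-entropy, it corresponds to a perplexity of roughly exp(6.8517) ≈ 945.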
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0902 | 1.0 | 2346 | 7.0492 |
| 6.9027 | 2.0 | 4692 | 6.8692 |
| 6.8553 | 3.0 | 7038 | 6.8882 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|