Dataset columns: modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars)
modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
argv947059/example-based-ner-bert | 0405e52e4ff75f97262c9e3241a4fcfac5bee3c9 | 2021-05-19T00:02:49.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | argv947059 | null | argv947059/example-based-ner-bert | 2 | null | transformers | 23,700 | hello
|
aristotletan/t5-small-finetuned-xsum | 826e7dccf2be1edf69846f244b8db983b4ef4019 | 2021-07-22T00:18:39.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wsj_markets",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | aristotletan | null | aristotletan/t5-small-finetuned-xsum | 2 | null | transformers | 23,701 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wsj_markets
metrics:
- rouge
model_index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wsj_markets
type: wsj_markets
args: default
metric:
name: Rouge1
type: rouge
value: 10.4492
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wsj_markets dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1447
- Rouge1: 10.4492
- Rouge2: 3.9563
- Rougel: 9.3368
- Rougelsum: 9.9828
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
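For illustration only, these hyperparameters map roughly onto the `Trainer` API as follows (a minimal sketch, not taken from the original training script; the output directory and any argument not listed above are assumptions, and the Adam betas/epsilon listed above are the library defaults):
```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: reproduces the hyperparameters listed above as Trainer arguments.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-xsum",  # placeholder, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed precision
)
```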
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 2.2742 | 1.0 | 868 | 1.3135 | 9.4644 | 2.618 | 8.4048 | 8.9764 | 19.0 |
| 1.4607 | 2.0 | 1736 | 1.2134 | 9.6327 | 3.8535 | 9.0703 | 9.2466 | 19.0 |
| 1.3579 | 3.0 | 2604 | 1.1684 | 10.1616 | 3.5498 | 9.2294 | 9.4507 | 19.0 |
| 1.3314 | 4.0 | 3472 | 1.1514 | 10.0621 | 3.6907 | 9.1635 | 9.4955 | 19.0 |
| 1.3084 | 5.0 | 4340 | 1.1447 | 10.4492 | 3.9563 | 9.3368 | 9.9828 | 19.0 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.10.0
- Tokenizers 0.10.3
|
arjunth2001/priv_sum | 349d0b1f365406eb05a8afc428ec1bb50cf8255f | 2021-10-07T07:04:17.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | arjunth2001 | null | arjunth2001/priv_sum | 2 | null | transformers | 23,702 | Entry not found |
arman0320/bert-base-cased-wikitext2 | 0c077326d4943da35e4891f0160999c086cab2ae | 2022-01-25T05:51:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | arman0320 | null | arman0320/bert-base-cased-wikitext2 | 2 | null | transformers | 23,703 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0963 | 1.0 | 2346 | 7.0570 |
| 6.9063 | 2.0 | 4692 | 6.8721 |
| 6.8585 | 3.0 | 7038 | 6.8931 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
arnolfokam/roberta-base-kin | 68c304d76bc5dcf1042f46e4174fc5972f5ad5d7 | 2021-11-24T11:46:30.000Z | [
"pytorch",
"roberta",
"token-classification",
"kin",
"dataset:masakhaner",
"transformers",
"NER",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | arnolfokam | null | arnolfokam/roberta-base-kin | 2 | null | transformers | 23,704 | ---
language:
- kin
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU n’u Rwanda, bushingiye nanone ku bufatanye hagati y’imigabane ya Afurika n’u Burayi."
---
# Model description
**roberta-base-kin** is a fine-tuned RoBERTa base model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we capped the number of entity groups per sentence in this dataset at 10.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- Lack of analysis concerning bias and fairness in these models may make them dangerous if deployed in a production system.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**roberta-base-kin**| 76.26 | 80.58 |78.36
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/roberta-base-kin")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/roberta-base-kin")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Rayon Sports yasinyishije rutahizamu w’Umurundi"
ner_results = nlp(example)
print(ner_results)
``` |
artursz/wav2vec2-large-xls-r-300m-lv-v05 | 786710d3426c87d89f70a0fb46dcb8a1b07c604c | 2021-11-23T02:47:04.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | artursz | null | artursz/wav2vec2-large-xls-r-300m-lv-v05 | 2 | null | transformers | 23,705 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-lv-v05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-lv-v05
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3862
- Wer: 0.2588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
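As an aside, the total train batch size of 32 above is the product of the per-device batch size (16) and the gradient accumulation steps (2). A minimal, purely illustrative sketch of the corresponding arguments (the output directory and anything not listed above are assumptions):
```python
from transformers import TrainingArguments

# Sketch only: 16 examples per step x 2 accumulation steps = effective batch size 32.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-lv-v05",  # placeholder, not from the card
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=50,
    fp16=True,  # "Native AMP" mixed precision
)
```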
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8836 | 2.81 | 400 | 0.8722 | 0.7244 |
| 0.5365 | 5.63 | 800 | 0.4622 | 0.4812 |
| 0.277 | 8.45 | 1200 | 0.4348 | 0.4056 |
| 0.1947 | 11.27 | 1600 | 0.4223 | 0.3636 |
| 0.1655 | 14.08 | 2000 | 0.4084 | 0.3465 |
| 0.1441 | 16.9 | 2400 | 0.4329 | 0.3497 |
| 0.121 | 19.72 | 2800 | 0.4371 | 0.3324 |
| 0.1062 | 22.53 | 3200 | 0.4202 | 0.3198 |
| 0.0937 | 25.35 | 3600 | 0.4063 | 0.3265 |
| 0.0871 | 28.17 | 4000 | 0.4253 | 0.3255 |
| 0.0755 | 30.98 | 4400 | 0.4368 | 0.3194 |
| 0.0627 | 33.8 | 4800 | 0.4067 | 0.2908 |
| 0.0595 | 36.62 | 5200 | 0.3929 | 0.2973 |
| 0.0523 | 39.44 | 5600 | 0.3748 | 0.2817 |
| 0.0434 | 42.25 | 6000 | 0.3769 | 0.2711 |
| 0.0391 | 45.07 | 6400 | 0.3901 | 0.2653 |
| 0.0319 | 47.88 | 6800 | 0.3862 | 0.2588 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
arunkumar629/distilbert-base-uncased-finetuned-squad | 40857b79c934055573a668ef5b7b4c706506d6dd | 2022-07-01T15:39:27.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | arunkumar629 | null | arunkumar629/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 23,706 | Entry not found |
arvalinno/distilbert-base-uncased-finetuned-squad | 7da41471e3ad86123c9dda68d5a6db670555a84e | 2021-11-20T17:31:23.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | arvalinno | null | arvalinno/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 23,707 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7604 | 1.0 | 6366 | 1.5329 |
| 1.4784 | 2.0 | 12732 | 1.3930 |
| 1.3082 | 3.0 | 19098 | 1.4232 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
asahi417/lmqg-mt5-small-jaquad | 66ae0f8701ec85fe6a4bc2d9fc265838598c71a8 | 2022-06-09T00:45:37.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"ja",
"dataset:asahi417/qg_jaquad",
"transformers",
"question generation",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | asahi417 | null | asahi417/lmqg-mt5-small-jaquad | 2 | null | transformers | 23,708 | ---
language: ja
tags:
- question generation
license: cc-by-4.0
datasets:
- asahi417/qg_jaquad
metrics:
- bleu
- meteor
- rouge
- bertscore
widget:
- text: "ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。"
example_title: "Question Generation Example 1"
- text: "『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる<hl>3万5000部<hl>が刷られた。他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。"
example_title: "Question Generation Example 2"
- text: "フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め<hl>30数点<hl>しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。以下には若干の疑問作も含め、37点の基本情報を記載し、各作品について略説する。収録順序、推定制作年代は『「フェルメールとその時代展」図録』による。日本語の作品タイトルについては、上掲図録のほか、『「フェルメール展」図録』、『フェルメール生涯と作品』による。便宜上「1650年代の作品」「1660年代の作品」「1670年代の作品」の3つの節を設けたが、フェルメールの作品には制作年代不明のものが多く、推定制作年代については研究者や文献によって若干の差がある。"
example_title: "Question Generation Example 3"
- text: "東大寺は、六宗兼学の場として世に広く知られるようになった。六宗とはすなわち、法相宗(法性宗)、三論宗、倶舎宗(薩婆多宗)、成実宗、華厳宗(花厳宗)、律宗のことであり、すべて<hl>中国<hl>から起こり、伝来したものであった。当時の宗とは、教団というよりは仏教教理の学派に近い。それゆえ、兼学の場ができたとも言える。この様な兼学の形態は、南都の寺院では広く見られたものである。この六宗兼学の場(後、真言、天台加わって八宗兼学の場)の性格は、現在の東大寺でも見られるが、中でも重んじられたのが、本尊の大仏の性格が華厳経の教えに則ったものであることからも分かるように、華厳宗である。"
example_title: "Question Generation Example 4"
pipeline_tag: text2text-generation
---
# MT5 SMALL fine-tuned for Japanese Question Generation
MT5 SMALL Model fine-tuned on Japanese question generation dataset (JaQuAD) with an extensive hyper-parameter search.
- [Online Demo](https://autoqg.net/)
- [Project Repository](https://github.com/asahi417/lm-question-generation)
## Overview
**Language model:** mt5-small
**Language:** Japanese (ja)
**Downstream-task:** Question Generation
**Training data:** JaQuAD
**Eval data:** JaQuAD
**Code:** See [our repository](https://github.com/asahi417/lm-question-generation)
## Usage
### In Transformers
```python
from transformers import pipeline
model_path = 'asahi417/lmqg-mt5-small-jaquad'
pipe = pipeline("text2text-generation", model_path)
# Question Generation
paragraph = '東大寺は、六宗兼学の場として世に広く知られるようになった。六宗とはすなわち、法相宗(法性宗)、三論宗、倶舎宗(薩婆多宗)、成実宗、華厳宗(花厳宗)、律宗のことであり、すべて中国から起こり、伝来したものであった。'
# highlight an answer in the paragraph to generate question
answer = '中国'
highlight_token = '<hl>'
input_text = paragraph.replace(answer, '{0} {1} {0}'.format(highlight_token, answer))
generation = pipe(input_text)
print(generation)
>>> [{'generated_text': '六宗はどこから始まったの?'}]
```
## Evaluations
Evaluation on the test set of [JaQuAD QG dataset](https://huggingface.co/datasets/asahi417/qg_jaquad).
All evaluations were done using our [evaluation script](https://github.com/asahi417/lm-question-generation).
| BLEU 4 | ROUGE L | METEOR | BERTScore |
| ------ | -------- | ------ | --------- |
| 30.49 | 50.87 | 29.03 | 80.87 |
- [metric file](https://huggingface.co/asahi417/lmqg-mt5-small-jaquad/raw/main/eval/metric.first.sentence.paragraph_answer.question.asahi417_qg_jaquad.default.json)
## Fine-tuning Parameters
We ran a grid search to find the best hyper-parameters and continued fine-tuning until the validation metric stopped improving.
The best hyper-parameters can be found [here](https://huggingface.co/asahi417/lmqg-mt5-small-jaquad/raw/main/trainer_config.json), and the fine-tuning script is released in [our repository](https://github.com/asahi417/lm-question-generation).
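As the paragraph above notes, the selected hyper-parameters are published alongside the model; a minimal sketch of fetching them from the hub (assuming `trainer_config.json` is a plain JSON file at the repository root, as the link above suggests):
```python
import json
from huggingface_hub import hf_hub_download

# Download the released config and inspect the selected hyper-parameters.
path = hf_hub_download(repo_id="asahi417/lmqg-mt5-small-jaquad", filename="trainer_config.json")
with open(path) as f:
    print(json.load(f))
```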
## Citation
TBA
|
asahi417/lmqg-t5-small-squad | cf9e060a61fd91b5c0edcb9c99756b7b6be6a2e5 | 2022-06-09T18:16:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:asahi417/qg_squad",
"transformers",
"question generation",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | asahi417 | null | asahi417/lmqg-t5-small-squad | 2 | null | transformers | 23,709 | ---
language: en
tags:
- question generation
license: cc-by-4.0
datasets:
- asahi417/qg_squad
metrics:
- bleu
- meteor
- rouge
- bertscore
- moverscore
widget:
- text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 1"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 2"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Question Generation Example 3"
pipeline_tag: text2text-generation
---
# T5 SMALL fine-tuned for English Question Generation
T5 SMALL Model fine-tuned on English question generation dataset (SQuAD) with an extensive hyper-parameter search.
- [Online Demo](https://autoqg.net/)
- [Project Repository](https://github.com/asahi417/lm-question-generation)
## Overview
**Language model:** t5-small
**Language:** English (en)
**Downstream-task:** Question Generation
**Training data:** SQuAD
**Eval data:** SQuAD
**Code:** See [our repository](https://github.com/asahi417/lm-question-generation)
## Usage
### In Transformers
```python
from transformers import pipeline
model_path = 'asahi417/lmqg-t5-small-squad'
pipe = pipeline("text2text-generation", model_path)
paragraph = 'Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.'
# highlight an answer in the paragraph to generate question
answer = 'Etta James'
highlight_token = '<hl>'
input_text = paragraph.replace(answer, '{0} {1} {0}'.format(highlight_token, answer))
input_text = 'generate question: {}'.format(input_text) # add task specific prefix
generation = pipe(input_text)
print(generation)
>>> [{'generated_text': 'What is the name of the biopic that Beyonce starred in?'}]
```
## Evaluations
Evaluation on the test set of [SQuAD QG dataset](https://huggingface.co/datasets/asahi417/qg_squad).
The results are comparable with the [leaderboard](https://paperswithcode.com/sota/question-generation-on-squad11) and previous works.
All evaluations were done using our [evaluation script](https://github.com/asahi417/lm-question-generation).
| BLEU 4 | ROUGE L | METEOR | BERTScore | MoverScore |
| ------ | -------- | ------ | --------- | ---------- |
| 24.39 | 51.43 | 25.83 | 90.20 | 63.88 |
- [metric file](https://huggingface.co/asahi417/lmqg-t5-small-squad/raw/main/eval/metric.first.sentence.paragraph_answer.question.asahi417_qg_squad.default.json)
## Fine-tuning Parameters
We ran a grid search to find the best hyper-parameters and continued fine-tuning until the validation metric stopped improving.
The best hyper-parameters can be found [here](https://huggingface.co/asahi417/lmqg-t5-small-squad/raw/main/trainer_config.json), and the fine-tuning script is released in [our repository](https://github.com/asahi417/lm-question-generation).
## Citation
TBA
|
tner/xlm-roberta-base-conll2003 | 51b23257995bf6cd353476522327b7c11cb47a30 | 2021-02-13T00:07:07.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-conll2003 | 2 | null | transformers | 23,710 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-conll2003")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-conll2003")
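# Illustrative addition (not from the original card): run inference with the
# token-classification pipeline; the example sentence below is made up.
from transformers import pipeline
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
print(nlp("Jacob Collier is a Grammy awarded artist from London."))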
``` |
tner/xlm-roberta-base-uncased-all-english | 376fa301095f3615bf063b96863412aadab9fd9c | 2021-02-12T23:35:06.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-uncased-all-english | 2 | null | transformers | 23,711 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-all-english")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-all-english")
``` |
tner/xlm-roberta-base-uncased-panx-dataset-en | 502e2a70262e6190b1cbff176c15391c7260b948 | 2021-02-13T00:10:50.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-uncased-panx-dataset-en | 2 | null | transformers | 23,712 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-panx-dataset-en")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-panx-dataset-en")
``` |
tner/xlm-roberta-large-panx-dataset-es | d616f069e855ac85d86abd0cdc506b2308ff6456 | 2021-02-13T00:04:53.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-large-panx-dataset-es | 2 | null | transformers | 23,713 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-es")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-es")
``` |
tner/xlm-roberta-large-uncased-all-english | 3b20a99f250c04c3b591406cef825ddcb190472a | 2021-02-13T00:05:18.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-large-uncased-all-english | 2 | null | transformers | 23,714 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-all-english")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-all-english")
``` |
asakawa/distilroberta-base-finetuned-wikitext2 | 9242bb38a3a48ba2760395841c32dba6138fde84 | 2022-01-06T02:13:38.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | asakawa | null | asakawa/distilroberta-base-finetuned-wikitext2 | 2 | null | transformers | 23,715 | Entry not found |
aschvin/english_wav2_vec_classification | e1cdece4e744c1ae54cbf1efb750155441381113 | 2022-01-23T09:43:23.000Z | [
"pytorch",
"wav2vec2",
"transformers"
] | null | false | aschvin | null | aschvin/english_wav2_vec_classification | 2 | null | transformers | 23,716 | Entry not found |
assij/wav2vec2-common_voice-tr-demo-dist | 490f083e2d0e0a1be0282cd38a367bdb40e153ee | 2021-12-29T10:04:41.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | assij | null | assij/wav2vec2-common_voice-tr-demo-dist | 2 | null | transformers | 23,717 | Entry not found |
aszidon/distilbertcustom | add1cce72b600f43acd0144923529bf2f8a3c888 | 2021-11-04T03:40:07.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aszidon | null | aszidon/distilbertcustom | 2 | null | transformers | 23,718 | Entry not found |
aszidon/distilbertcustom2 | c284132d422004b400ad985068749394d9b5a19c | 2021-11-05T03:06:40.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aszidon | null | aszidon/distilbertcustom2 | 2 | null | transformers | 23,719 | Entry not found |
aszidon/distilbertcustom5 | 75f7dfb1dd8e460751978fe4e3901b5cd4630fc4 | 2021-11-08T04:00:23.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | aszidon | null | aszidon/distilbertcustom5 | 2 | null | transformers | 23,720 | Entry not found |
avioo1/distilbert-base-uncased-finetuned-squad | 35c81a4d95ae768fcd25594950645e9c5353e2f4 | 2021-09-12T01:58:39.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | avioo1 | null | avioo1/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 23,721 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: squad
type: squad
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2637 | 1.0 | 5533 | 1.2125 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
avnish100/DialoGPT-small-rick | 0200c757cb1f11b6e33dc1adcaafc6927e3c7201 | 2021-09-10T12:49:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | avnish100 | null | avnish100/DialoGPT-small-rick | 2 | null | transformers | 23,722 | ---
tags:
- conversational
---
# Rick DialoGPT model |
aws-ai/pairsupcon-bert-base-uncased | 80c78efa4f84fc7e3a5daba80ac9a128b7760aa4 | 2021-12-18T19:27:33.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | aws-ai | null | aws-ai/pairsupcon-bert-base-uncased | 2 | null | transformers | 23,723 | Entry not found |
ayameRushia/wav2vec2-large-xlsr-indo-base | 6c443cc964dfe10cfec172b1d799685d7fd7b0e3 | 2021-07-05T22:13:24.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"id",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ayameRushia | null | ayameRushia/wav2vec2-large-xlsr-indo-base | 2 | null | transformers | 23,724 | ---
language: id
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Indonesia by Ayame Rushia
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice id
type: common_voice
args: id
metrics:
- name: Test WER
type: wer
value: ???
---
# Wav2Vec2-Large-XLSR-53-Indonesia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Indonesian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model = Wav2Vec2ForCTC.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model = Wav2Vec2ForCTC.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**:
WER = 20.072720 %
## Training
The model was trained using the Common Voice dataset. |
azwierzc/plt5-base-pl-to-sql | 35e11c61a081e6c56e3afdf59e2823e792c4d57d | 2022-02-04T14:29:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | azwierzc | null | azwierzc/plt5-base-pl-to-sql | 2 | null | transformers | 23,725 | Entry not found |
baby-oogway/wav2vec2-timit_asr-oogway | dedeb9372ebd873cd5ff68614e7a82da4dfdbdb7 | 2021-11-27T20:14:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | baby-oogway | null | baby-oogway/wav2vec2-timit_asr-oogway | 2 | null | transformers | 23,726 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-timit_asr-oogway
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-timit_asr-oogway
This model is a fine-tuned version of [OthmaneJ/distil-wav2vec2](https://huggingface.co/OthmaneJ/distil-wav2vec2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
bada/test | 4eaa674bbdd8e1bedc030d34ee2457dcdfce3397 | 2021-05-19T12:06:17.000Z | [
"pytorch",
"jax",
"bert",
"pretraining",
"transformers"
] | null | false | bada | null | bada/test | 2 | null | transformers | 23,727 | "hello"
|
bala1802/model_1_test | a24589e0ebbc9c6ab5109d1bbac819cdca1288da | 2021-05-21T13:59:23.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | bala1802 | null | bala1802/model_1_test | 2 | null | transformers | 23,728 | Entry not found |
balamariannmt/LanguageModel_Trial_2 | f6c6b45e9b78119addff734adaade47845b800ef | 2021-05-21T14:01:08.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | balamariannmt | null | balamariannmt/LanguageModel_Trial_2 | 2 | null | transformers | 23,729 | Entry not found |
balawmt/LanguageModel_Trial_1 | 5bc9f3a1789d7e6b04976500f91b79c9e8ba5626 | 2021-05-21T14:03:49.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | balawmt | null | balawmt/LanguageModel_Trial_1 | 2 | null | transformers | 23,730 | Entry not found |
begar/distilgpt2-finetuned | e76d73de91316bc11c42cfa9ce69fb4e3b8c3047 | 2022-01-14T22:01:35.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | begar | null | begar/distilgpt2-finetuned | 2 | null | transformers | 23,731 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
beiluo/nlpload | 1b70bc5574b173ce92d3daeb624043498d11433a | 2021-11-04T06:47:17.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | beiluo | null | beiluo/nlpload | 2 | null | transformers | 23,732 | Entry not found |
benjamin/roberta-large-wechsel-hindi | 998bd9a3e9309e407544c6840ac9573d1f3a17d7 | 2021-11-11T10:31:36.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | benjamin | null | benjamin/roberta-large-wechsel-hindi | 2 | null | transformers | 23,733 | Entry not found |
benjamin/roberta-large-wechsel-tamil | 31629076c935eb169f71f5dd594d34b896179428 | 2021-11-11T10:39:28.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | benjamin | null | benjamin/roberta-large-wechsel-tamil | 2 | null | transformers | 23,734 | Entry not found |
benyong/testmodel | fcde8ce3a9cf059ed3485d4034f7a4593876548b | 2021-11-07T01:35:56.000Z | [
"pytorch",
"tf",
"jax",
"rust",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | benyong | null | benyong/testmodel | 2 | null | transformers | 23,735 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
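A minimal sketch of this masking rule using the library's MLM data collator, whose defaults implement the same 15% rate and 80/10/10 split (illustrative only; the example sentence is made up):
```python
from transformers import BertTokenizer, DataCollatorForLanguageModeling

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# mlm_probability controls the 15% masking rate; the [MASK]/random/unchanged
# split is applied internally by the collator.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
batch = collator([tokenizer("The quick brown fox jumps over the lazy dog.")])
print(batch["input_ids"])   # ids with some positions replaced by [MASK] or random tokens
print(batch["labels"])      # original ids at masked positions, -100 elsewhere
```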
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
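A rough sketch of the optimizer and schedule described above (illustrative only; the model class and parameter grouping are assumptions, and weight decay is applied to all parameters here for brevity):
```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained('bert-base-uncased')
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
# 10,000 warmup steps, then linear decay over the one million training steps.
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000)
```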
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
beomi/exKcBERT-kowiki | dae012c6b8af853c95005e246edea8d034aa5d9e | 2021-06-14T13:45:28.000Z | [
"pytorch",
"exbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | beomi | null | beomi/exKcBERT-kowiki | 2 | null | transformers | 23,736 | Entry not found |
bergurth/IceBERT-finetuned-ner | a9d634422f1c8e876a74d7b1b2667daaef9ce9b5 | 2021-10-05T21:48:37.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:gpl-3.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | bergurth | null | bergurth/IceBERT-finetuned-ner | 2 | null | transformers | 23,737 | ---
license: gpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: Bob Dillan beit Maríu Markan á barkann.
model-index:
- name: IceBERT-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8873049035270985
- name: Recall
type: recall
value: 0.8627076114231091
- name: F1
type: f1
value: 0.8748333939173634
- name: Accuracy
type: accuracy
value: 0.9848076353832492
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-ner
This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0783
- Precision: 0.8873
- Recall: 0.8627
- F1: 0.8748
- Accuracy: 0.9848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0539 | 1.0 | 2904 | 0.0768 | 0.8732 | 0.8453 | 0.8590 | 0.9833 |
| 0.0281 | 2.0 | 5808 | 0.0737 | 0.8781 | 0.8492 | 0.8634 | 0.9838 |
| 0.0166 | 3.0 | 8712 | 0.0783 | 0.8873 | 0.8627 | 0.8748 | 0.9848 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
bertin-project/bertin-base-pos-conll2002-es | 25fcd8ff6777f8a02ced43eea7aa239d56ddf202 | 2021-09-23T13:41:54.000Z | [
"pytorch",
"roberta",
"token-classification",
"es",
"transformers",
"spanish",
"ner",
"license:cc-by-4.0",
"autotrain_compatible"
] | token-classification | false | bertin-project | null | bertin-project/bertin-base-pos-conll2002-es | 2 | 1 | transformers | 23,738 | ---
language: es
license: cc-by-4.0
tags:
- spanish
- roberta
- ner
---
This checkpoint has been trained for the POS task using the CoNLL 2002-es dataset.
This checkpoint was created from **Bertin Gaussian 512**, which is a **RoBERTa-base** model trained from scratch in Spanish. Information on this base model may be found at [its own card](https://huggingface.co/bertin-project/bertin-base-gaussian-exp-512seqlen) and in greater detail on [the main project card](https://huggingface.co/bertin-project/bertin-roberta-base-spanish).
The training dataset for the base model is [mc4](https://huggingface.co/datasets/bertin-project/mc4-es-sampled), subsampling documents to a total of about 50 million examples. Sampling is biased towards average perplexity values (using a Gaussian function), more often discarding documents with very large values (poor quality) or very small values (short, repetitive texts).
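A toy sketch of this idea, purely for illustration (the actual weighting used by the project may differ; the perplexity values below are synthetic): weight each document by a Gaussian centred on the mean perplexity and sample with those weights.
```python
import numpy as np

def gaussian_sampling_weights(perplexities):
    """Toy Gaussian weighting around the mean perplexity (illustrative only)."""
    ppl = np.asarray(perplexities, dtype=float)
    weights = np.exp(-((ppl - ppl.mean()) ** 2) / (2 * ppl.std() ** 2))
    return weights / weights.sum()

ppl = np.random.lognormal(mean=3.0, sigma=0.5, size=10_000)               # synthetic perplexities
probs = gaussian_sampling_weights(ppl)
sampled = np.random.choice(len(ppl), size=1_000, replace=False, p=probs)  # biased subsample
```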
This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/), with TPU usage sponsored by Google.
## Team members
- Eduardo González ([edugp](https://huggingface.co/edugp))
- Javier de la Rosa ([versae](https://huggingface.co/versae))
- Manu Romero ([mrm8488](https://huggingface.co/))
- María Grandury ([mariagrandury](https://huggingface.co/))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Paulo Villegas ([paulo](https://huggingface.co/paulo)) |
bhavikardeshna/multilingual-bert-base-cased-chinese | c2efd38ab167f2a69570417e2afa644dc33c7948 | 2021-12-21T11:41:47.000Z | [
"pytorch",
"bert",
"question-answering",
"arxiv:2112.09866",
"transformers",
"autotrain_compatible"
] | question-answering | false | bhavikardeshna | null | bhavikardeshna/multilingual-bert-base-cased-chinese | 2 | null | transformers | 23,739 | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bhavikardeshna/multilingual-bert-base-cased-spanish | dfff6cecd37758bad0e82b5f966e352f84f4cb0b | 2021-12-21T11:43:55.000Z | [
"pytorch",
"bert",
"question-answering",
"arxiv:2112.09866",
"transformers",
"autotrain_compatible"
] | question-answering | false | bhavikardeshna | null | bhavikardeshna/multilingual-bert-base-cased-spanish | 2 | null | transformers | 23,740 | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bill/bert_finetuning_test1 | 45bf626e078ad4ad00e433f81175e1210f8b999f | 2021-09-03T15:39:15.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | bill | null | bill/bert_finetuning_test1 | 2 | null | transformers | 23,741 | Entry not found |
birgermoell/lm-swedish | 894b1777b8e8326a5aa268fbb2f26b86189b4105 | 2022-02-08T21:37:51.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv",
"dataset:common_voice",
"dataset:NST Swedish ASR Database",
"dataset:P4",
"transformers",
"audio",
"speech",
"license:cc0-1.0",
"model-index"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/lm-swedish | 2 | null | transformers | 23,742 | ---
language: sv
datasets:
- common_voice
- NST Swedish ASR Database
- P4
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: cc0-1.0
model-index:
- name: Wav2vec 2.0 large VoxRex Swedish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 9.914
---
# Wav2vec 2.0 large VoxRex Swedish (C)
An experiment with an added language model (LM).
**Disclaimer:** This is a work in progress. See [VoxRex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) for more details.
**Update 2022-01-10:** Updated to VoxRex-C version.
Fine-tuned version of KB's [VoxRex large](https://huggingface.co/KBLab/wav2vec2-large-voxrex) model using Swedish radio broadcasts, NST and Common Voice data. Evaluation without a language model gives the following: WER for the NST + Common Voice test set (2% of total sentences) is **2.5%**. WER for the Common Voice test set is **8.49%** directly and **7.37%** with a 4-gram language model.
When using this model, make sure that your speech input is sampled at 16kHz.
# Performance\*

<center><del>*<i>Chart shows performance without the additional 20k steps of Common Voice fine-tuning</i></del></center>
## Training
This model has been fine-tuned for 120000 updates on NST + CommonVoice<del> and then for an additional 20000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST+CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed]</del>.

## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-voxrex-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-voxrex-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
|
birgermoell/wav2vec2-large-xlrs-estonian | 96927200e328b887edee4c8adf73ef35527193e0 | 2021-07-05T23:07:04.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"et",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/wav2vec2-large-xlrs-estonian | 2 | null | transformers | 23,743 | ---
language: et
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Estonian by Birger Moell
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice Estonian
type: common_voice
args: et
metrics:
- name: Test WER
type: wer
value: 36.951816
---
# Wav2Vec2-Large-XLSR-53-Estonian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Estonian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "et", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\tlogits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Estonian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "et", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model on the test set and collect the predicted transcriptions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**:
WER: 36.951816
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found here
https://colab.research.google.com/drive/1VcWT92vBCwVn-5d-mkYxhgILPr11OHfR?usp=sharing
|
birgermoell/wav2vec2-speechdat | 884b9dd709fa74e56f59a2fe055e526007a731da | 2022-02-08T06:44:20.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/wav2vec2-speechdat | 2 | null | transformers | 23,744 | ---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
model-index:
- name: wav2vec2-speechdat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-speechdat
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4578
- Wer: 0.2927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
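For reference, here is a minimal sketch of the `TrainingArguments` implied by the list above; the dataset preparation, processor, and data collator are assumed to be set up elsewhere, and the actual training script may have differed in details.
```python
# Sketch of the Trainer configuration implied by the hyperparameters above
# (Adam betas/epsilon are the transformers defaults and are not set explicitly).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-speechdat",
    learning_rate=3e-4,                # 0.0003
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,     # effective train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=15.0,
    fp16=True,                         # mixed precision (native AMP)
    seed=42,
)
```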
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| No log | 0.01 | 100 | 3.6252 | 1.0 |
| No log | 0.02 | 200 | 3.1906 | 1.0 |
| No log | 0.03 | 300 | 3.1090 | 1.0 |
| No log | 0.04 | 400 | 1.8796 | 0.9955 |
| 6.2575 | 0.05 | 500 | 1.3515 | 0.9058 |
| 6.2575 | 0.06 | 600 | 1.1209 | 0.8328 |
| 6.2575 | 0.07 | 700 | 1.1404 | 0.8309 |
| 6.2575 | 0.09 | 800 | 1.0599 | 0.8021 |
| 6.2575 | 0.1 | 900 | 0.9901 | 0.8335 |
| 0.7737 | 0.11 | 1000 | 0.8846 | 0.7400 |
| 0.7737 | 0.12 | 1100 | 0.9971 | 0.7820 |
| 0.7737 | 0.13 | 1200 | 0.8665 | 0.7123 |
| 0.7737 | 0.14 | 1300 | 0.8490 | 0.7366 |
| 0.7737 | 0.15 | 1400 | 0.8250 | 0.6765 |
| 0.6183 | 0.16 | 1500 | 0.8291 | 0.6965 |
| 0.6183 | 0.17 | 1600 | 0.7946 | 0.6823 |
| 0.6183 | 0.18 | 1700 | 0.8239 | 0.6894 |
| 0.6183 | 0.19 | 1800 | 0.8282 | 0.6796 |
| 0.6183 | 0.2 | 1900 | 0.7645 | 0.6518 |
| 0.561 | 0.21 | 2000 | 0.7530 | 0.6367 |
| 0.561 | 0.22 | 2100 | 0.7296 | 0.6177 |
| 0.561 | 0.24 | 2200 | 0.7527 | 0.6498 |
| 0.561 | 0.25 | 2300 | 0.7210 | 0.6316 |
| 0.561 | 0.26 | 2400 | 0.7938 | 0.6757 |
| 0.5402 | 0.27 | 2500 | 0.7485 | 0.6372 |
| 0.5402 | 0.28 | 2600 | 0.7146 | 0.6133 |
| 0.5402 | 0.29 | 2700 | 0.7308 | 0.6626 |
| 0.5402 | 0.3 | 2800 | 0.7078 | 0.5949 |
| 0.5402 | 0.31 | 2900 | 0.7679 | 0.6373 |
| 0.5303 | 0.32 | 3000 | 0.7263 | 0.6502 |
| 0.5303 | 0.33 | 3100 | 0.6613 | 0.5846 |
| 0.5303 | 0.34 | 3200 | 0.6784 | 0.5783 |
| 0.5303 | 0.35 | 3300 | 0.6908 | 0.5833 |
| 0.5303 | 0.36 | 3400 | 0.6595 | 0.5826 |
| 0.503 | 0.37 | 3500 | 0.6717 | 0.5938 |
| 0.503 | 0.39 | 3600 | 0.6938 | 0.5791 |
| 0.503 | 0.4 | 3700 | 0.6677 | 0.6052 |
| 0.503 | 0.41 | 3800 | 0.6544 | 0.5554 |
| 0.503 | 0.42 | 3900 | 0.6514 | 0.5728 |
| 0.4959 | 0.43 | 4000 | 0.6847 | 0.6188 |
| 0.4959 | 0.44 | 4100 | 0.6626 | 0.5869 |
| 0.4959 | 0.45 | 4200 | 0.6670 | 0.5700 |
| 0.4959 | 0.46 | 4300 | 0.6596 | 0.5846 |
| 0.4959 | 0.47 | 4400 | 0.6523 | 0.5468 |
| 0.4824 | 0.48 | 4500 | 0.6392 | 0.5688 |
| 0.4824 | 0.49 | 4600 | 0.6561 | 0.5687 |
| 0.4824 | 0.5 | 4700 | 0.6697 | 0.5817 |
| 0.4824 | 0.51 | 4800 | 0.6348 | 0.5608 |
| 0.4824 | 0.52 | 4900 | 0.6561 | 0.5600 |
| 0.4714 | 0.54 | 5000 | 0.6522 | 0.6181 |
| 0.4714 | 0.55 | 5100 | 0.6858 | 0.5921 |
| 0.4714 | 0.56 | 5200 | 0.6706 | 0.5497 |
| 0.4714 | 0.57 | 5300 | 0.7123 | 0.5768 |
| 0.4714 | 0.58 | 5400 | 0.6599 | 0.6100 |
| 0.471 | 0.59 | 5500 | 0.6421 | 0.5626 |
| 0.471 | 0.6 | 5600 | 0.6395 | 0.5753 |
| 0.471 | 0.61 | 5700 | 0.6788 | 0.5481 |
| 0.471 | 0.62 | 5800 | 0.6386 | 0.5516 |
| 0.471 | 0.63 | 5900 | 0.6694 | 0.5913 |
| 0.4707 | 0.64 | 6000 | 0.6251 | 0.5699 |
| 0.4707 | 0.65 | 6100 | 0.6243 | 0.5567 |
| 0.4707 | 0.66 | 6200 | 0.6645 | 0.5629 |
| 0.4707 | 0.67 | 6300 | 0.6296 | 0.5895 |
| 0.4707 | 0.69 | 6400 | 0.6078 | 0.5183 |
| 0.4632 | 0.7 | 6500 | 0.6270 | 0.5619 |
| 0.4632 | 0.71 | 6600 | 0.6050 | 0.5336 |
| 0.4632 | 0.72 | 6700 | 0.6185 | 0.5449 |
| 0.4632 | 0.73 | 6800 | 0.6281 | 0.5645 |
| 0.4632 | 0.74 | 6900 | 0.5877 | 0.5084 |
| 0.4514 | 0.75 | 7000 | 0.6199 | 0.5403 |
| 0.4514 | 0.76 | 7100 | 0.6293 | 0.5275 |
| 0.4514 | 0.77 | 7200 | 0.6290 | 0.5447 |
| 0.4514 | 0.78 | 7300 | 0.6130 | 0.5373 |
| 0.4514 | 0.79 | 7400 | 0.6138 | 0.5285 |
| 0.4457 | 0.8 | 7500 | 0.6040 | 0.5259 |
| 0.4457 | 0.81 | 7600 | 0.6220 | 0.5686 |
| 0.4457 | 0.82 | 7700 | 0.5915 | 0.5164 |
| 0.4457 | 0.84 | 7800 | 0.6270 | 0.5289 |
| 0.4457 | 0.85 | 7900 | 0.6224 | 0.5515 |
| 0.4458 | 0.86 | 8000 | 0.6161 | 0.5323 |
| 0.4458 | 0.87 | 8100 | 0.5827 | 0.5122 |
| 0.4458 | 0.88 | 8200 | 0.6067 | 0.5202 |
| 0.4458 | 0.89 | 8300 | 0.6087 | 0.5192 |
| 0.4458 | 0.9 | 8400 | 0.6859 | 0.5796 |
| 0.4409 | 0.91 | 8500 | 0.6180 | 0.5131 |
| 0.4409 | 0.92 | 8600 | 0.5945 | 0.4948 |
| 0.4409 | 0.93 | 8700 | 0.5967 | 0.5532 |
| 0.4409 | 0.94 | 8800 | 0.5770 | 0.4961 |
| 0.4409 | 0.95 | 8900 | 0.5809 | 0.5203 |
| 0.4305 | 0.96 | 9000 | 0.5805 | 0.5039 |
| 0.4305 | 0.97 | 9100 | 0.5873 | 0.5188 |
| 0.4305 | 0.98 | 9200 | 0.6277 | 0.5516 |
| 0.4305 | 1.0 | 9300 | 0.5727 | 0.5052 |
| 0.4305 | 1.01 | 9400 | 0.5858 | 0.5123 |
| 0.4264 | 1.02 | 9500 | 0.5692 | 0.4968 |
| 0.4264 | 1.03 | 9600 | 0.5954 | 0.5117 |
| 0.4264 | 1.04 | 9700 | 0.5904 | 0.5076 |
| 0.4264 | 1.05 | 9800 | 0.6046 | 0.5101 |
| 0.4264 | 1.06 | 9900 | 0.5616 | 0.4926 |
| 0.4176 | 1.07 | 10000 | 0.5971 | 0.5368 |
| 0.4176 | 1.08 | 10100 | 0.5706 | 0.4940 |
| 0.4176 | 1.09 | 10200 | 0.5612 | 0.5032 |
| 0.4176 | 1.1 | 10300 | 0.5672 | 0.4944 |
| 0.4176 | 1.11 | 10400 | 0.5915 | 0.5218 |
| 0.4033 | 1.12 | 10500 | 0.5706 | 0.5051 |
| 0.4033 | 1.13 | 10600 | 0.5661 | 0.4934 |
| 0.4033 | 1.15 | 10700 | 0.5724 | 0.4903 |
| 0.4033 | 1.16 | 10800 | 0.5792 | 0.4940 |
| 0.4033 | 1.17 | 10900 | 0.5744 | 0.4911 |
| 0.392 | 1.18 | 11000 | 0.5767 | 0.5162 |
| 0.392 | 1.19 | 11100 | 0.5588 | 0.4835 |
| 0.392 | 1.2 | 11200 | 0.5609 | 0.4922 |
| 0.392 | 1.21 | 11300 | 0.5890 | 0.4914 |
| 0.392 | 1.22 | 11400 | 0.5525 | 0.4897 |
| 0.387 | 1.23 | 11500 | 0.5704 | 0.5051 |
| 0.387 | 1.24 | 11600 | 0.5539 | 0.5014 |
| 0.387 | 1.25 | 11700 | 0.5473 | 0.4882 |
| 0.387 | 1.26 | 11800 | 0.5662 | 0.5004 |
| 0.387 | 1.27 | 11900 | 0.5785 | 0.5220 |
| 0.3956 | 1.28 | 12000 | 0.5990 | 0.5114 |
| 0.3956 | 1.3 | 12100 | 0.5497 | 0.4895 |
| 0.3956 | 1.31 | 12200 | 0.5538 | 0.4895 |
| 0.3956 | 1.32 | 12300 | 0.5652 | 0.4913 |
| 0.3956 | 1.33 | 12400 | 0.5682 | 0.5128 |
| 0.4043 | 1.34 | 12500 | 0.5830 | 0.4999 |
| 0.4043 | 1.35 | 12600 | 0.5686 | 0.4865 |
| 0.4043 | 1.36 | 12700 | 0.5688 | 0.4937 |
| 0.4043 | 1.37 | 12800 | 0.5753 | 0.5034 |
| 0.4043 | 1.38 | 12900 | 0.5898 | 0.4865 |
| 0.3997 | 1.39 | 13000 | 0.5723 | 0.4963 |
| 0.3997 | 1.4 | 13100 | 0.5767 | 0.4986 |
| 0.3997 | 1.41 | 13200 | 0.5960 | 0.5084 |
| 0.3997 | 1.42 | 13300 | 0.5859 | 0.5096 |
| 0.3997 | 1.43 | 13400 | 0.5491 | 0.4784 |
| 0.3997 | 1.45 | 13500 | 0.5636 | 0.5049 |
| 0.3997 | 1.46 | 13600 | 0.5667 | 0.4708 |
| 0.3997 | 1.47 | 13700 | 0.5757 | 0.4862 |
| 0.3997 | 1.48 | 13800 | 0.5444 | 0.4816 |
| 0.3997 | 1.49 | 13900 | 0.5557 | 0.4792 |
| 0.3954 | 1.5 | 14000 | 0.5437 | 0.4810 |
| 0.3954 | 1.51 | 14100 | 0.5489 | 0.4674 |
| 0.3954 | 1.52 | 14200 | 0.5415 | 0.4674 |
| 0.3954 | 1.53 | 14300 | 0.5481 | 0.4902 |
| 0.3954 | 1.54 | 14400 | 0.5474 | 0.4763 |
| 0.3814 | 1.55 | 14500 | 0.5588 | 0.4731 |
| 0.3814 | 1.56 | 14600 | 0.5746 | 0.4820 |
| 0.3814 | 1.57 | 14700 | 0.5676 | 0.4884 |
| 0.3814 | 1.58 | 14800 | 0.5495 | 0.4711 |
| 0.3814 | 1.6 | 14900 | 0.5565 | 0.4782 |
| 0.3877 | 1.61 | 15000 | 0.5671 | 0.5135 |
| 0.3877 | 1.62 | 15100 | 0.5512 | 0.4868 |
| 0.3877 | 1.63 | 15200 | 0.5683 | 0.4650 |
| 0.3877 | 1.64 | 15300 | 0.5427 | 0.4717 |
| 0.3877 | 1.65 | 15400 | 0.5519 | 0.4651 |
| 0.387 | 1.66 | 15500 | 0.5327 | 0.4456 |
| 0.387 | 1.67 | 15600 | 0.5371 | 0.4673 |
| 0.387 | 1.68 | 15700 | 0.5337 | 0.4705 |
| 0.387 | 1.69 | 15800 | 0.5606 | 0.4992 |
| 0.387 | 1.7 | 15900 | 0.5254 | 0.4613 |
| 0.3877 | 1.71 | 16000 | 0.5619 | 0.4882 |
| 0.3877 | 1.72 | 16100 | 0.5212 | 0.4560 |
| 0.3877 | 1.73 | 16200 | 0.5369 | 0.4696 |
| 0.3877 | 1.75 | 16300 | 0.5392 | 0.4677 |
| 0.3877 | 1.76 | 16400 | 0.5353 | 0.4768 |
| 0.3739 | 1.77 | 16500 | 0.5435 | 0.4777 |
| 0.3739 | 1.78 | 16600 | 0.5343 | 0.4884 |
| 0.3739 | 1.79 | 16700 | 0.5309 | 0.4942 |
| 0.3739 | 1.8 | 16800 | 0.5373 | 0.4727 |
| 0.3739 | 1.81 | 16900 | 0.5550 | 0.4686 |
| 0.3884 | 1.82 | 17000 | 0.5486 | 0.4826 |
| 0.3884 | 1.83 | 17100 | 0.5508 | 0.4862 |
| 0.3884 | 1.84 | 17200 | 0.5423 | 0.4855 |
| 0.3884 | 1.85 | 17300 | 0.5478 | 0.4730 |
| 0.3884 | 1.86 | 17400 | 0.5438 | 0.4938 |
| 0.3842 | 1.87 | 17500 | 0.5571 | 0.4818 |
| 0.3842 | 1.88 | 17600 | 0.5402 | 0.4753 |
| 0.3842 | 1.9 | 17700 | 0.5679 | 0.4827 |
| 0.3842 | 1.91 | 17800 | 0.5385 | 0.4642 |
| 0.3842 | 1.92 | 17900 | 0.5519 | 0.4942 |
| 0.3953 | 1.93 | 18000 | 0.5559 | 0.4745 |
| 0.3953 | 1.94 | 18100 | 0.5657 | 0.4963 |
| 0.3953 | 1.95 | 18200 | 0.5296 | 0.4642 |
| 0.3953 | 1.96 | 18300 | 0.5529 | 0.4907 |
| 0.3953 | 1.97 | 18400 | 0.5380 | 0.4536 |
| 0.3745 | 1.98 | 18500 | 0.5276 | 0.4678 |
| 0.3745 | 1.99 | 18600 | 0.5544 | 0.4854 |
| 0.3745 | 2.0 | 18700 | 0.5195 | 0.4535 |
| 0.3745 | 2.01 | 18800 | 0.5165 | 0.4635 |
| 0.3745 | 2.02 | 18900 | 0.5062 | 0.4431 |
| 0.3538 | 2.03 | 19000 | 0.5255 | 0.4509 |
| 0.3538 | 2.04 | 19100 | 0.5125 | 0.4512 |
| 0.3538 | 2.06 | 19200 | 0.5105 | 0.4504 |
| 0.3538 | 2.07 | 19300 | 0.5000 | 0.4490 |
| 0.3538 | 2.08 | 19400 | 0.5150 | 0.4520 |
| 0.356 | 2.09 | 19500 | 0.5053 | 0.4383 |
| 0.356 | 2.1 | 19600 | 0.5085 | 0.4417 |
| 0.356 | 2.11 | 19700 | 0.5229 | 0.4490 |
| 0.356 | 2.12 | 19800 | 0.5326 | 0.4492 |
| 0.356 | 2.13 | 19900 | 0.5139 | 0.4491 |
| 0.3474 | 2.14 | 20000 | 0.5134 | 0.4384 |
| 0.3474 | 2.15 | 20100 | 0.5498 | 0.4606 |
| 0.3474 | 2.16 | 20200 | 0.5324 | 0.4540 |
| 0.3474 | 2.17 | 20300 | 0.5338 | 0.4548 |
| 0.3474 | 2.18 | 20400 | 0.5076 | 0.4425 |
| 0.345 | 2.19 | 20500 | 0.5253 | 0.4550 |
| 0.345 | 2.21 | 20600 | 0.5125 | 0.4618 |
| 0.345 | 2.22 | 20700 | 0.5171 | 0.4487 |
| 0.345 | 2.23 | 20800 | 0.5232 | 0.4464 |
| 0.345 | 2.24 | 20900 | 0.5298 | 0.4588 |
| 0.341 | 2.25 | 21000 | 0.5342 | 0.4576 |
| 0.341 | 2.26 | 21100 | 0.5515 | 0.4678 |
| 0.341 | 2.27 | 21200 | 0.5041 | 0.4495 |
| 0.341 | 2.28 | 21300 | 0.5169 | 0.4473 |
| 0.341 | 2.29 | 21400 | 0.5227 | 0.4494 |
| 0.354 | 2.3 | 21500 | 0.5214 | 0.4458 |
| 0.354 | 2.31 | 21600 | 0.5303 | 0.4587 |
| 0.354 | 2.32 | 21700 | 0.5237 | 0.4597 |
| 0.354 | 2.33 | 21800 | 0.5067 | 0.4460 |
| 0.354 | 2.34 | 21900 | 0.5117 | 0.4560 |
| 0.3333 | 2.36 | 22000 | 0.5104 | 0.4359 |
| 0.3333 | 2.37 | 22100 | 0.5326 | 0.4679 |
| 0.3333 | 2.38 | 22200 | 0.5098 | 0.4510 |
| 0.3333 | 2.39 | 22300 | 0.5044 | 0.4445 |
| 0.3333 | 2.4 | 22400 | 0.5219 | 0.4489 |
| 0.3514 | 2.41 | 22500 | 0.4987 | 0.4433 |
| 0.3514 | 2.42 | 22600 | 0.5009 | 0.4338 |
| 0.3514 | 2.43 | 22700 | 0.5252 | 0.4444 |
| 0.3514 | 2.44 | 22800 | 0.4861 | 0.4269 |
| 0.3514 | 2.45 | 22900 | 0.5157 | 0.4421 |
| 0.3444 | 2.46 | 23000 | 0.5277 | 0.4426 |
| 0.3444 | 2.47 | 23100 | 0.5213 | 0.4378 |
| 0.3444 | 2.48 | 23200 | 0.5172 | 0.4482 |
| 0.3444 | 2.49 | 23300 | 0.5142 | 0.4376 |
| 0.3444 | 2.51 | 23400 | 0.5044 | 0.4231 |
| 0.3536 | 2.52 | 23500 | 0.5268 | 0.4496 |
| 0.3536 | 2.53 | 23600 | 0.5176 | 0.4326 |
| 0.3536 | 2.54 | 23700 | 0.5032 | 0.4296 |
| 0.3536 | 2.55 | 23800 | 0.5211 | 0.4460 |
| 0.3536 | 2.56 | 23900 | 0.5093 | 0.4379 |
| 0.337 | 2.57 | 24000 | 0.4990 | 0.4311 |
| 0.337 | 2.58 | 24100 | 0.4962 | 0.4329 |
| 0.337 | 2.59 | 24200 | 0.5033 | 0.4289 |
| 0.337 | 2.6 | 24300 | 0.5260 | 0.4534 |
| 0.337 | 2.61 | 24400 | 0.5309 | 0.4441 |
| 0.3393 | 2.62 | 24500 | 0.5132 | 0.4346 |
| 0.3393 | 2.63 | 24600 | 0.5189 | 0.4233 |
| 0.3393 | 2.64 | 24700 | 0.5074 | 0.4326 |
| 0.3393 | 2.66 | 24800 | 0.5111 | 0.4254 |
| 0.3393 | 2.67 | 24900 | 0.4933 | 0.4254 |
| 0.3334 | 2.68 | 25000 | 0.5046 | 0.4407 |
| 0.3334 | 2.69 | 25100 | 0.5010 | 0.4404 |
| 0.3334 | 2.7 | 25200 | 0.5045 | 0.4236 |
| 0.3334 | 2.71 | 25300 | 0.4938 | 0.4305 |
| 0.3334 | 2.72 | 25400 | 0.5021 | 0.4383 |
| 0.3366 | 2.73 | 25500 | 0.4953 | 0.4202 |
| 0.3366 | 2.74 | 25600 | 0.4985 | 0.4338 |
| 0.3366 | 2.75 | 25700 | 0.4765 | 0.4161 |
| 0.3366 | 2.76 | 25800 | 0.4873 | 0.4292 |
| 0.3366 | 2.77 | 25900 | 0.4998 | 0.4189 |
| 0.3359 | 2.78 | 26000 | 0.4991 | 0.4248 |
| 0.3359 | 2.79 | 26100 | 0.5012 | 0.4307 |
| 0.3359 | 2.81 | 26200 | 0.5081 | 0.4151 |
| 0.3359 | 2.82 | 26300 | 0.4997 | 0.4305 |
| 0.3359 | 2.83 | 26400 | 0.4969 | 0.4302 |
| 0.3396 | 2.84 | 26500 | 0.4784 | 0.4271 |
| 0.3396 | 2.85 | 26600 | 0.4804 | 0.4149 |
| 0.3396 | 2.86 | 26700 | 0.4900 | 0.4192 |
| 0.3396 | 2.87 | 26800 | 0.5044 | 0.4325 |
| 0.3396 | 2.88 | 26900 | 0.4935 | 0.4376 |
| 0.3356 | 2.89 | 27000 | 0.5007 | 0.4269 |
| 0.3356 | 2.9 | 27100 | 0.4887 | 0.4178 |
| 0.3356 | 2.91 | 27200 | 0.4770 | 0.4170 |
| 0.3356 | 2.92 | 27300 | 0.4847 | 0.4167 |
| 0.3356 | 2.93 | 27400 | 0.4861 | 0.4139 |
| 0.3395 | 2.94 | 27500 | 0.4975 | 0.4291 |
| 0.3395 | 2.95 | 27600 | 0.5056 | 0.4471 |
| 0.3395 | 2.97 | 27700 | 0.5111 | 0.4375 |
| 0.3395 | 2.98 | 27800 | 0.5327 | 0.4577 |
| 0.3395 | 2.99 | 27900 | 0.5067 | 0.4393 |
| 0.3332 | 3.0 | 28000 | 0.4898 | 0.4188 |
| 0.3332 | 3.01 | 28100 | 0.4790 | 0.4093 |
| 0.3332 | 3.02 | 28200 | 0.4828 | 0.4202 |
| 0.3332 | 3.03 | 28300 | 0.4836 | 0.4146 |
| 0.3332 | 3.04 | 28400 | 0.4901 | 0.4242 |
| 0.2984 | 3.05 | 28500 | 0.4772 | 0.4118 |
| 0.2984 | 3.06 | 28600 | 0.5055 | 0.4213 |
| 0.2984 | 3.07 | 28700 | 0.4911 | 0.4100 |
| 0.2984 | 3.08 | 28800 | 0.4737 | 0.4087 |
| 0.2984 | 3.09 | 28900 | 0.4930 | 0.4216 |
| 0.3056 | 3.1 | 29000 | 0.4736 | 0.4109 |
| 0.3056 | 3.12 | 29100 | 0.4863 | 0.4058 |
| 0.3056 | 3.13 | 29200 | 0.4784 | 0.4184 |
| 0.3056 | 3.14 | 29300 | 0.4923 | 0.4240 |
| 0.3056 | 3.15 | 29400 | 0.4846 | 0.4226 |
| 0.2995 | 3.16 | 29500 | 0.4829 | 0.4086 |
| 0.2995 | 3.17 | 29600 | 0.4934 | 0.4240 |
| 0.2995 | 3.18 | 29700 | 0.4893 | 0.4152 |
| 0.2995 | 3.19 | 29800 | 0.4730 | 0.4227 |
| 0.2995 | 3.2 | 29900 | 0.5027 | 0.4330 |
| 0.2926 | 3.21 | 30000 | 0.4903 | 0.4112 |
| 0.2926 | 3.22 | 30100 | 0.4961 | 0.4157 |
| 0.2926 | 3.23 | 30200 | 0.4980 | 0.4269 |
| 0.2926 | 3.24 | 30300 | 0.4896 | 0.4126 |
| 0.2926 | 3.25 | 30400 | 0.4726 | 0.4062 |
| 0.301 | 3.27 | 30500 | 0.4733 | 0.3985 |
| 0.301 | 3.28 | 30600 | 0.4772 | 0.4047 |
| 0.301 | 3.29 | 30700 | 0.4806 | 0.4082 |
| 0.301 | 3.3 | 30800 | 0.4683 | 0.4011 |
| 0.301 | 3.31 | 30900 | 0.4775 | 0.4079 |
| 0.2933 | 3.32 | 31000 | 0.4729 | 0.4083 |
| 0.2933 | 3.33 | 31100 | 0.4628 | 0.4016 |
| 0.2933 | 3.34 | 31200 | 0.4753 | 0.4192 |
| 0.2933 | 3.35 | 31300 | 0.4687 | 0.4185 |
| 0.2933 | 3.36 | 31400 | 0.4806 | 0.4106 |
| 0.2957 | 3.37 | 31500 | 0.4889 | 0.4240 |
| 0.2957 | 3.38 | 31600 | 0.4882 | 0.4182 |
| 0.2957 | 3.39 | 31700 | 0.4798 | 0.4162 |
| 0.2957 | 3.4 | 31800 | 0.4718 | 0.4108 |
| 0.2957 | 3.42 | 31900 | 0.4685 | 0.4101 |
| 0.3039 | 3.43 | 32000 | 0.4816 | 0.4188 |
| 0.3039 | 3.44 | 32100 | 0.4874 | 0.4139 |
| 0.3039 | 3.45 | 32200 | 0.4899 | 0.4115 |
| 0.3039 | 3.46 | 32300 | 0.4852 | 0.4180 |
| 0.3039 | 3.47 | 32400 | 0.5074 | 0.4129 |
| 0.3006 | 3.48 | 32500 | 0.4837 | 0.4076 |
| 0.3006 | 3.49 | 32600 | 0.4927 | 0.4098 |
| 0.3006 | 3.5 | 32700 | 0.4999 | 0.4172 |
| 0.3006 | 3.51 | 32800 | 0.4773 | 0.4194 |
| 0.3006 | 3.52 | 32900 | 0.4859 | 0.4058 |
| 0.3089 | 3.53 | 33000 | 0.4783 | 0.4104 |
| 0.3089 | 3.54 | 33100 | 0.4622 | 0.4020 |
| 0.3089 | 3.55 | 33200 | 0.4840 | 0.4065 |
| 0.3089 | 3.57 | 33300 | 0.4756 | 0.4241 |
| 0.3089 | 3.58 | 33400 | 0.4831 | 0.4170 |
| 0.3061 | 3.59 | 33500 | 0.4794 | 0.4068 |
| 0.3061 | 3.6 | 33600 | 0.4730 | 0.4037 |
| 0.3061 | 3.61 | 33700 | 0.4808 | 0.4138 |
| 0.3061 | 3.62 | 33800 | 0.4924 | 0.4248 |
| 0.3061 | 3.63 | 33900 | 0.4749 | 0.4112 |
| 0.3047 | 3.64 | 34000 | 0.4924 | 0.4326 |
| 0.3047 | 3.65 | 34100 | 0.4745 | 0.4104 |
| 0.3047 | 3.66 | 34200 | 0.4760 | 0.4123 |
| 0.3047 | 3.67 | 34300 | 0.4788 | 0.4066 |
| 0.3047 | 3.68 | 34400 | 0.4627 | 0.4158 |
| 0.3042 | 3.69 | 34500 | 0.4974 | 0.4131 |
| 0.3042 | 3.7 | 34600 | 0.4593 | 0.4063 |
| 0.3042 | 3.72 | 34700 | 0.4549 | 0.3928 |
| 0.3042 | 3.73 | 34800 | 0.4690 | 0.3898 |
| 0.3042 | 3.74 | 34900 | 0.4560 | 0.4007 |
| 0.2963 | 3.75 | 35000 | 0.4606 | 0.3959 |
| 0.2963 | 3.76 | 35100 | 0.4762 | 0.4057 |
| 0.2963 | 3.77 | 35200 | 0.4750 | 0.4034 |
| 0.2963 | 3.78 | 35300 | 0.4772 | 0.4114 |
| 0.2963 | 3.79 | 35400 | 0.4669 | 0.3995 |
| 0.3012 | 3.8 | 35500 | 0.4709 | 0.4090 |
| 0.3012 | 3.81 | 35600 | 0.4722 | 0.4123 |
| 0.3012 | 3.82 | 35700 | 0.4913 | 0.4165 |
| 0.3012 | 3.83 | 35800 | 0.4814 | 0.4063 |
| 0.3012 | 3.84 | 35900 | 0.4869 | 0.4171 |
| 0.3015 | 3.85 | 36000 | 0.4791 | 0.4059 |
| 0.3015 | 3.87 | 36100 | 0.4535 | 0.3976 |
| 0.3015 | 3.88 | 36200 | 0.4706 | 0.4009 |
| 0.3015 | 3.89 | 36300 | 0.4679 | 0.4012 |
| 0.3015 | 3.9 | 36400 | 0.4736 | 0.4096 |
| 0.2965 | 3.91 | 36500 | 0.4756 | 0.4106 |
| 0.2965 | 3.92 | 36600 | 0.4669 | 0.4085 |
| 0.2965 | 3.93 | 36700 | 0.4796 | 0.4054 |
| 0.2965 | 3.94 | 36800 | 0.4583 | 0.3932 |
| 0.2965 | 3.95 | 36900 | 0.4430 | 0.3969 |
| 0.2993 | 3.96 | 37000 | 0.4560 | 0.3914 |
| 0.2993 | 3.97 | 37100 | 0.4739 | 0.4002 |
| 0.2993 | 3.98 | 37200 | 0.4598 | 0.3912 |
| 0.2993 | 3.99 | 37300 | 0.4607 | 0.3907 |
| 0.2993 | 4.0 | 37400 | 0.4709 | 0.3986 |
| 0.2886 | 4.01 | 37500 | 0.4642 | 0.4067 |
| 0.2886 | 4.03 | 37600 | 0.4684 | 0.3984 |
| 0.2886 | 4.04 | 37700 | 0.4690 | 0.3979 |
| 0.2886 | 4.05 | 37800 | 0.4722 | 0.3980 |
| 0.2886 | 4.06 | 37900 | 0.4734 | 0.3927 |
| 0.2534 | 4.07 | 38000 | 0.4724 | 0.3988 |
| 0.2534 | 4.08 | 38100 | 0.4665 | 0.3986 |
| 0.2534 | 4.09 | 38200 | 0.4659 | 0.4036 |
| 0.2534 | 4.1 | 38300 | 0.4694 | 0.3952 |
| 0.2534 | 4.11 | 38400 | 0.4719 | 0.3891 |
| 0.2596 | 4.12 | 38500 | 0.4687 | 0.3994 |
| 0.2596 | 4.13 | 38600 | 0.4705 | 0.3903 |
| 0.2596 | 4.14 | 38700 | 0.4601 | 0.3975 |
| 0.2596 | 4.15 | 38800 | 0.4666 | 0.3971 |
| 0.2596 | 4.16 | 38900 | 0.4772 | 0.3892 |
| 0.2643 | 4.18 | 39000 | 0.4810 | 0.4071 |
| 0.2643 | 4.19 | 39100 | 0.4980 | 0.4167 |
| 0.2643 | 4.2 | 39200 | 0.4657 | 0.3996 |
| 0.2643 | 4.21 | 39300 | 0.4869 | 0.4002 |
| 0.2643 | 4.22 | 39400 | 0.4656 | 0.3913 |
| 0.265 | 4.23 | 39500 | 0.4720 | 0.3947 |
| 0.265 | 4.24 | 39600 | 0.4711 | 0.3970 |
| 0.265 | 4.25 | 39700 | 0.4689 | 0.3933 |
| 0.265 | 4.26 | 39800 | 0.4728 | 0.4017 |
| 0.265 | 4.27 | 39900 | 0.4673 | 0.3847 |
| 0.2644 | 4.28 | 40000 | 0.4636 | 0.3960 |
| 0.2644 | 4.29 | 40100 | 0.4699 | 0.3864 |
| 0.2644 | 4.3 | 40200 | 0.4580 | 0.3874 |
| 0.2644 | 4.31 | 40300 | 0.4763 | 0.3951 |
| 0.2644 | 4.33 | 40400 | 0.4752 | 0.4141 |
| 0.2633 | 4.34 | 40500 | 0.4918 | 0.3994 |
| 0.2633 | 4.35 | 40600 | 0.4783 | 0.4026 |
| 0.2633 | 4.36 | 40700 | 0.4739 | 0.4034 |
| 0.2633 | 4.37 | 40800 | 0.4750 | 0.4000 |
| 0.2633 | 4.38 | 40900 | 0.4608 | 0.3943 |
| 0.2679 | 4.39 | 41000 | 0.4615 | 0.3891 |
| 0.2679 | 4.4 | 41100 | 0.4730 | 0.3984 |
| 0.2679 | 4.41 | 41200 | 0.4728 | 0.4011 |
| 0.2679 | 4.42 | 41300 | 0.4675 | 0.3932 |
| 0.2679 | 4.43 | 41400 | 0.4662 | 0.3929 |
| 0.2682 | 4.44 | 41500 | 0.4490 | 0.3837 |
| 0.2682 | 4.45 | 41600 | 0.4611 | 0.3838 |
| 0.2682 | 4.46 | 41700 | 0.4605 | 0.3945 |
| 0.2682 | 4.48 | 41800 | 0.4730 | 0.3938 |
| 0.2682 | 4.49 | 41900 | 0.4567 | 0.3874 |
| 0.2658 | 4.5 | 42000 | 0.4715 | 0.3869 |
| 0.2658 | 4.51 | 42100 | 0.4514 | 0.3833 |
| 0.2658 | 4.52 | 42200 | 0.4602 | 0.3898 |
| 0.2658 | 4.53 | 42300 | 0.4846 | 0.4022 |
| 0.2658 | 4.54 | 42400 | 0.4474 | 0.3810 |
| 0.2676 | 4.55 | 42500 | 0.4513 | 0.3820 |
| 0.2676 | 4.56 | 42600 | 0.4588 | 0.3928 |
| 0.2676 | 4.57 | 42700 | 0.4601 | 0.3894 |
| 0.2676 | 4.58 | 42800 | 0.4516 | 0.3792 |
| 0.2676 | 4.59 | 42900 | 0.4482 | 0.3848 |
| 0.2693 | 4.6 | 43000 | 0.4695 | 0.4008 |
| 0.2693 | 4.61 | 43100 | 0.4580 | 0.3871 |
| 0.2693 | 4.63 | 43200 | 0.4419 | 0.3857 |
| 0.2693 | 4.64 | 43300 | 0.4534 | 0.3796 |
| 0.2693 | 4.65 | 43400 | 0.4532 | 0.3856 |
| 0.2641 | 4.66 | 43500 | 0.4421 | 0.3809 |
| 0.2641 | 4.67 | 43600 | 0.4400 | 0.3844 |
| 0.2641 | 4.68 | 43700 | 0.4515 | 0.3833 |
| 0.2641 | 4.69 | 43800 | 0.4462 | 0.3808 |
| 0.2641 | 4.7 | 43900 | 0.4741 | 0.3926 |
| 0.2626 | 4.71 | 44000 | 0.4542 | 0.3931 |
| 0.2626 | 4.72 | 44100 | 0.4555 | 0.3885 |
| 0.2626 | 4.73 | 44200 | 0.4505 | 0.3845 |
| 0.2626 | 4.74 | 44300 | 0.4593 | 0.3871 |
| 0.2626 | 4.75 | 44400 | 0.4359 | 0.3830 |
| 0.2648 | 4.76 | 44500 | 0.4387 | 0.3736 |
| 0.2648 | 4.78 | 44600 | 0.4529 | 0.3807 |
| 0.2648 | 4.79 | 44700 | 0.4566 | 0.3837 |
| 0.2648 | 4.8 | 44800 | 0.4557 | 0.4067 |
| 0.2648 | 4.81 | 44900 | 0.4609 | 0.3852 |
| 0.2603 | 4.82 | 45000 | 0.4667 | 0.4005 |
| 0.2603 | 4.83 | 45100 | 0.4666 | 0.3836 |
| 0.2603 | 4.84 | 45200 | 0.4775 | 0.3946 |
| 0.2603 | 4.85 | 45300 | 0.4701 | 0.3925 |
| 0.2603 | 4.86 | 45400 | 0.4579 | 0.3889 |
| 0.2626 | 4.87 | 45500 | 0.4516 | 0.3884 |
| 0.2626 | 4.88 | 45600 | 0.4605 | 0.3878 |
| 0.2626 | 4.89 | 45700 | 0.4576 | 0.3802 |
| 0.2626 | 4.9 | 45800 | 0.4553 | 0.3780 |
| 0.2626 | 4.91 | 45900 | 0.4336 | 0.3752 |
| 0.2602 | 4.93 | 46000 | 0.4419 | 0.3881 |
| 0.2602 | 4.94 | 46100 | 0.4601 | 0.3843 |
| 0.2602 | 4.95 | 46200 | 0.4437 | 0.3956 |
| 0.2602 | 4.96 | 46300 | 0.4524 | 0.3844 |
| 0.2602 | 4.97 | 46400 | 0.4709 | 0.4031 |
| 0.2609 | 4.98 | 46500 | 0.4500 | 0.3872 |
| 0.2609 | 4.99 | 46600 | 0.4366 | 0.3846 |
| 0.2609 | 5.0 | 46700 | 0.4653 | 0.3884 |
| 0.2609 | 5.01 | 46800 | 0.4602 | 0.3932 |
| 0.2609 | 5.02 | 46900 | 0.4668 | 0.3854 |
| 0.2472 | 5.03 | 47000 | 0.4616 | 0.3891 |
| 0.2472 | 5.04 | 47100 | 0.4543 | 0.3836 |
| 0.2472 | 5.05 | 47200 | 0.4526 | 0.3822 |
| 0.2472 | 5.06 | 47300 | 0.4539 | 0.3741 |
| 0.2472 | 5.07 | 47400 | 0.4776 | 0.3818 |
| 0.2278 | 5.09 | 47500 | 0.4771 | 0.3794 |
| 0.2278 | 5.1 | 47600 | 0.4662 | 0.3831 |
| 0.2278 | 5.11 | 47700 | 0.4558 | 0.4032 |
| 0.2278 | 5.12 | 47800 | 0.4904 | 0.3918 |
| 0.2278 | 5.13 | 47900 | 0.4765 | 0.3890 |
| 0.2311 | 5.14 | 48000 | 0.4674 | 0.3882 |
| 0.2311 | 5.15 | 48100 | 0.4609 | 0.3947 |
| 0.2311 | 5.16 | 48200 | 0.4588 | 0.3837 |
| 0.2311 | 5.17 | 48300 | 0.4827 | 0.3845 |
| 0.2311 | 5.18 | 48400 | 0.4711 | 0.3839 |
| 0.229 | 5.19 | 48500 | 0.4583 | 0.3873 |
| 0.229 | 5.2 | 48600 | 0.4800 | 0.3858 |
| 0.229 | 5.21 | 48700 | 0.4611 | 0.3800 |
| 0.229 | 5.22 | 48800 | 0.4504 | 0.3889 |
| 0.229 | 5.24 | 48900 | 0.4569 | 0.3761 |
| 0.2313 | 5.25 | 49000 | 0.4732 | 0.3915 |
| 0.2313 | 5.26 | 49100 | 0.4728 | 0.3832 |
| 0.2313 | 5.27 | 49200 | 0.4667 | 0.3815 |
| 0.2313 | 5.28 | 49300 | 0.4912 | 0.3856 |
| 0.2313 | 5.29 | 49400 | 0.4790 | 0.3946 |
| 0.2266 | 5.3 | 49500 | 0.4597 | 0.3763 |
| 0.2266 | 5.31 | 49600 | 0.4580 | 0.3778 |
| 0.2266 | 5.32 | 49700 | 0.4439 | 0.3721 |
| 0.2266 | 5.33 | 49800 | 0.4611 | 0.3704 |
| 0.2266 | 5.34 | 49900 | 0.4599 | 0.3769 |
| 0.235 | 5.35 | 50000 | 0.4543 | 0.3808 |
| 0.235 | 5.36 | 50100 | 0.4555 | 0.3773 |
| 0.235 | 5.37 | 50200 | 0.4525 | 0.3815 |
| 0.235 | 5.39 | 50300 | 0.4557 | 0.3814 |
| 0.235 | 5.4 | 50400 | 0.4604 | 0.3754 |
| 0.2299 | 5.41 | 50500 | 0.4658 | 0.3770 |
| 0.2299 | 5.42 | 50600 | 0.4658 | 0.3884 |
| 0.2299 | 5.43 | 50700 | 0.4701 | 0.3919 |
| 0.2299 | 5.44 | 50800 | 0.4495 | 0.3818 |
| 0.2299 | 5.45 | 50900 | 0.4703 | 0.3886 |
| 0.2307 | 5.46 | 51000 | 0.4395 | 0.3743 |
| 0.2307 | 5.47 | 51100 | 0.4487 | 0.3751 |
| 0.2307 | 5.48 | 51200 | 0.4355 | 0.3733 |
| 0.2307 | 5.49 | 51300 | 0.4622 | 0.3811 |
| 0.2307 | 5.5 | 51400 | 0.4443 | 0.3801 |
| 0.2383 | 5.51 | 51500 | 0.4411 | 0.3743 |
| 0.2383 | 5.52 | 51600 | 0.4438 | 0.3778 |
| 0.2383 | 5.54 | 51700 | 0.4559 | 0.3784 |
| 0.2383 | 5.55 | 51800 | 0.4309 | 0.3656 |
| 0.2383 | 5.56 | 51900 | 0.4455 | 0.3660 |
| 0.23 | 5.57 | 52000 | 0.4436 | 0.3598 |
| 0.23 | 5.58 | 52100 | 0.4344 | 0.3685 |
| 0.23 | 5.59 | 52200 | 0.4282 | 0.3690 |
| 0.23 | 5.6 | 52300 | 0.4464 | 0.3800 |
| 0.23 | 5.61 | 52400 | 0.4458 | 0.3909 |
| 0.2305 | 5.62 | 52500 | 0.4483 | 0.3756 |
| 0.2305 | 5.63 | 52600 | 0.4547 | 0.3785 |
| 0.2305 | 5.64 | 52700 | 0.4671 | 0.3820 |
| 0.2305 | 5.65 | 52800 | 0.4449 | 0.3658 |
| 0.2305 | 5.66 | 52900 | 0.4596 | 0.3716 |
| 0.2237 | 5.67 | 53000 | 0.4399 | 0.3669 |
| 0.2237 | 5.69 | 53100 | 0.4410 | 0.3719 |
| 0.2237 | 5.7 | 53200 | 0.4574 | 0.3619 |
| 0.2237 | 5.71 | 53300 | 0.4443 | 0.3690 |
| 0.2237 | 5.72 | 53400 | 0.4381 | 0.3678 |
| 0.2337 | 5.73 | 53500 | 0.4490 | 0.3687 |
| 0.2337 | 5.74 | 53600 | 0.4427 | 0.3752 |
| 0.2337 | 5.75 | 53700 | 0.4423 | 0.3858 |
| 0.2337 | 5.76 | 53800 | 0.4702 | 0.3825 |
| 0.2337 | 5.77 | 53900 | 0.4724 | 0.3800 |
| 0.23 | 5.78 | 54000 | 0.4476 | 0.3827 |
| 0.23 | 5.79 | 54100 | 0.4508 | 0.3919 |
| 0.23 | 5.8 | 54200 | 0.4564 | 0.3788 |
| 0.23 | 5.81 | 54300 | 0.4602 | 0.3888 |
| 0.23 | 5.82 | 54400 | 0.4538 | 0.3732 |
| 0.2334 | 5.84 | 54500 | 0.4500 | 0.3808 |
| 0.2334 | 5.85 | 54600 | 0.4475 | 0.3705 |
| 0.2334 | 5.86 | 54700 | 0.4415 | 0.3772 |
| 0.2334 | 5.87 | 54800 | 0.4515 | 0.3771 |
| 0.2334 | 5.88 | 54900 | 0.4410 | 0.3677 |
| 0.2259 | 5.89 | 55000 | 0.4555 | 0.3702 |
| 0.2259 | 5.9 | 55100 | 0.4509 | 0.3894 |
| 0.2259 | 5.91 | 55200 | 0.4472 | 0.3692 |
| 0.2259 | 5.92 | 55300 | 0.4438 | 0.3754 |
| 0.2259 | 5.93 | 55400 | 0.4399 | 0.3698 |
| 0.2289 | 5.94 | 55500 | 0.4496 | 0.3753 |
| 0.2289 | 5.95 | 55600 | 0.4506 | 0.3752 |
| 0.2289 | 5.96 | 55700 | 0.4482 | 0.3766 |
| 0.2289 | 5.97 | 55800 | 0.4415 | 0.3772 |
| 0.2289 | 5.98 | 55900 | 0.4447 | 0.3750 |
| 0.2281 | 6.0 | 56000 | 0.4566 | 0.3842 |
| 0.2281 | 6.01 | 56100 | 0.4694 | 0.3774 |
| 0.2281 | 6.02 | 56200 | 0.4454 | 0.3788 |
| 0.2281 | 6.03 | 56300 | 0.4676 | 0.3718 |
| 0.2281 | 6.04 | 56400 | 0.4650 | 0.3751 |
| 0.1979 | 6.05 | 56500 | 0.4601 | 0.3765 |
| 0.1979 | 6.06 | 56600 | 0.4647 | 0.3840 |
| 0.1979 | 6.07 | 56700 | 0.4782 | 0.3756 |
| 0.1979 | 6.08 | 56800 | 0.4709 | 0.3736 |
| 0.1979 | 6.09 | 56900 | 0.4707 | 0.3734 |
| 0.1923 | 6.1 | 57000 | 0.4704 | 0.3751 |
| 0.1923 | 6.11 | 57100 | 0.4542 | 0.3721 |
| 0.1923 | 6.12 | 57200 | 0.4542 | 0.3735 |
| 0.1923 | 6.13 | 57300 | 0.4587 | 0.3804 |
| 0.1923 | 6.15 | 57400 | 0.4428 | 0.3687 |
| 0.2012 | 6.16 | 57500 | 0.4456 | 0.3748 |
| 0.2012 | 6.17 | 57600 | 0.4578 | 0.3762 |
| 0.2012 | 6.18 | 57700 | 0.4699 | 0.3722 |
| 0.2012 | 6.19 | 57800 | 0.4499 | 0.3756 |
| 0.2012 | 6.2 | 57900 | 0.4633 | 0.3680 |
| 0.1951 | 6.21 | 58000 | 0.4548 | 0.3712 |
| 0.1951 | 6.22 | 58100 | 0.4520 | 0.3759 |
| 0.1951 | 6.23 | 58200 | 0.4458 | 0.3616 |
| 0.1951 | 6.24 | 58300 | 0.4307 | 0.3637 |
| 0.1951 | 6.25 | 58400 | 0.4546 | 0.3621 |
| 0.1967 | 6.26 | 58500 | 0.4459 | 0.3623 |
| 0.1967 | 6.27 | 58600 | 0.4535 | 0.3690 |
| 0.1967 | 6.28 | 58700 | 0.4574 | 0.3771 |
| 0.1967 | 6.3 | 58800 | 0.4493 | 0.3744 |
| 0.1967 | 6.31 | 58900 | 0.4494 | 0.3769 |
| 0.1998 | 6.32 | 59000 | 0.4529 | 0.3644 |
| 0.1998 | 6.33 | 59100 | 0.4416 | 0.3662 |
| 0.1998 | 6.34 | 59200 | 0.4468 | 0.3785 |
| 0.1998 | 6.35 | 59300 | 0.4377 | 0.3664 |
| 0.1998 | 6.36 | 59400 | 0.4647 | 0.3755 |
| 0.2009 | 6.37 | 59500 | 0.4700 | 0.3824 |
| 0.2009 | 6.38 | 59600 | 0.4488 | 0.3685 |
| 0.2009 | 6.39 | 59700 | 0.4649 | 0.3804 |
| 0.2009 | 6.4 | 59800 | 0.4389 | 0.3689 |
| 0.2009 | 6.41 | 59900 | 0.4456 | 0.3531 |
| 0.2007 | 6.42 | 60000 | 0.4572 | 0.3658 |
| 0.2007 | 6.43 | 60100 | 0.4464 | 0.3669 |
| 0.2007 | 6.45 | 60200 | 0.4666 | 0.3711 |
| 0.2007 | 6.46 | 60300 | 0.4399 | 0.3660 |
| 0.2007 | 6.47 | 60400 | 0.4445 | 0.3631 |
| 0.2005 | 6.48 | 60500 | 0.4450 | 0.3621 |
| 0.2005 | 6.49 | 60600 | 0.4346 | 0.3571 |
| 0.2005 | 6.5 | 60700 | 0.4358 | 0.3581 |
| 0.2005 | 6.51 | 60800 | 0.4344 | 0.3646 |
| 0.2005 | 6.52 | 60900 | 0.4377 | 0.3621 |
| 0.2038 | 6.53 | 61000 | 0.4262 | 0.3570 |
| 0.2038 | 6.54 | 61100 | 0.4269 | 0.3614 |
| 0.2038 | 6.55 | 61200 | 0.4297 | 0.3592 |
| 0.2038 | 6.56 | 61300 | 0.4433 | 0.3682 |
| 0.2038 | 6.57 | 61400 | 0.4474 | 0.3644 |
| 0.199 | 6.58 | 61500 | 0.4464 | 0.3678 |
| 0.199 | 6.6 | 61600 | 0.4397 | 0.3562 |
| 0.199 | 6.61 | 61700 | 0.4415 | 0.3612 |
| 0.199 | 6.62 | 61800 | 0.4362 | 0.3601 |
| 0.199 | 6.63 | 61900 | 0.4442 | 0.3623 |
| 0.1995 | 6.64 | 62000 | 0.4558 | 0.3662 |
| 0.1995 | 6.65 | 62100 | 0.4477 | 0.3647 |
| 0.1995 | 6.66 | 62200 | 0.4542 | 0.3699 |
| 0.1995 | 6.67 | 62300 | 0.4411 | 0.3632 |
| 0.1995 | 6.68 | 62400 | 0.4408 | 0.3658 |
| 0.2014 | 6.69 | 62500 | 0.4426 | 0.3691 |
| 0.2014 | 6.7 | 62600 | 0.4246 | 0.3645 |
| 0.2014 | 6.71 | 62700 | 0.4466 | 0.3676 |
| 0.2014 | 6.72 | 62800 | 0.4493 | 0.3566 |
| 0.2014 | 6.73 | 62900 | 0.4336 | 0.3621 |
| 0.2015 | 6.75 | 63000 | 0.4367 | 0.3604 |
| 0.2015 | 6.76 | 63100 | 0.4424 | 0.3754 |
| 0.2015 | 6.77 | 63200 | 0.4679 | 0.3733 |
| 0.2015 | 6.78 | 63300 | 0.4483 | 0.3752 |
| 0.2015 | 6.79 | 63400 | 0.4746 | 0.3822 |
| 0.2048 | 6.8 | 63500 | 0.4340 | 0.3731 |
| 0.2048 | 6.81 | 63600 | 0.4346 | 0.3631 |
| 0.2048 | 6.82 | 63700 | 0.4525 | 0.3680 |
| 0.2048 | 6.83 | 63800 | 0.4360 | 0.3641 |
| 0.2048 | 6.84 | 63900 | 0.4299 | 0.3558 |
| 0.2017 | 6.85 | 64000 | 0.4370 | 0.3533 |
| 0.2017 | 6.86 | 64100 | 0.4293 | 0.3617 |
| 0.2017 | 6.87 | 64200 | 0.4431 | 0.3660 |
| 0.2017 | 6.88 | 64300 | 0.4362 | 0.3688 |
| 0.2017 | 6.9 | 64400 | 0.4507 | 0.3648 |
| 0.2045 | 6.91 | 64500 | 0.4439 | 0.3613 |
| 0.2045 | 6.92 | 64600 | 0.4249 | 0.3493 |
| 0.2045 | 6.93 | 64700 | 0.4362 | 0.3612 |
| 0.2045 | 6.94 | 64800 | 0.4336 | 0.3585 |
| 0.2045 | 6.95 | 64900 | 0.4387 | 0.3568 |
| 0.1977 | 6.96 | 65000 | 0.4313 | 0.3542 |
| 0.1977 | 6.97 | 65100 | 0.4287 | 0.3552 |
| 0.1977 | 6.98 | 65200 | 0.4372 | 0.3586 |
| 0.1977 | 6.99 | 65300 | 0.4378 | 0.3629 |
| 0.1977 | 7.0 | 65400 | 0.4518 | 0.3640 |
| 0.1971 | 7.01 | 65500 | 0.4480 | 0.3557 |
| 0.1971 | 7.02 | 65600 | 0.4530 | 0.3560 |
| 0.1971 | 7.03 | 65700 | 0.4581 | 0.3582 |
| 0.1971 | 7.04 | 65800 | 0.4492 | 0.3543 |
| 0.1971 | 7.06 | 65900 | 0.4448 | 0.3608 |
| 0.1672 | 7.07 | 66000 | 0.4469 | 0.3543 |
| 0.1672 | 7.08 | 66100 | 0.4262 | 0.3488 |
| 0.1672 | 7.09 | 66200 | 0.4289 | 0.3570 |
| 0.1672 | 7.1 | 66300 | 0.4455 | 0.3545 |
| 0.1672 | 7.11 | 66400 | 0.4449 | 0.3563 |
| 0.169 | 7.12 | 66500 | 0.4555 | 0.3565 |
| 0.169 | 7.13 | 66600 | 0.4432 | 0.3656 |
| 0.169 | 7.14 | 66700 | 0.4399 | 0.3610 |
| 0.169 | 7.15 | 66800 | 0.4383 | 0.3554 |
| 0.169 | 7.16 | 66900 | 0.4376 | 0.3536 |
| 0.1724 | 7.17 | 67000 | 0.4383 | 0.3572 |
| 0.1724 | 7.18 | 67100 | 0.4452 | 0.3535 |
| 0.1724 | 7.19 | 67200 | 0.4610 | 0.3668 |
| 0.1724 | 7.21 | 67300 | 0.4534 | 0.3546 |
| 0.1724 | 7.22 | 67400 | 0.4506 | 0.3604 |
| 0.1729 | 7.23 | 67500 | 0.4463 | 0.3507 |
| 0.1729 | 7.24 | 67600 | 0.4440 | 0.3630 |
| 0.1729 | 7.25 | 67700 | 0.4361 | 0.3550 |
| 0.1729 | 7.26 | 67800 | 0.4397 | 0.3643 |
| 0.1729 | 7.27 | 67900 | 0.4328 | 0.3548 |
| 0.1736 | 7.28 | 68000 | 0.4546 | 0.3614 |
| 0.1736 | 7.29 | 68100 | 0.4506 | 0.3558 |
| 0.1736 | 7.3 | 68200 | 0.4361 | 0.3513 |
| 0.1736 | 7.31 | 68300 | 0.4223 | 0.3500 |
| 0.1736 | 7.32 | 68400 | 0.4474 | 0.3497 |
| 0.1733 | 7.33 | 68500 | 0.4303 | 0.3549 |
| 0.1733 | 7.34 | 68600 | 0.4265 | 0.3483 |
| 0.1733 | 7.36 | 68700 | 0.4339 | 0.3558 |
| 0.1733 | 7.37 | 68800 | 0.4266 | 0.3491 |
| 0.1733 | 7.38 | 68900 | 0.4423 | 0.3565 |
| 0.1764 | 7.39 | 69000 | 0.4410 | 0.3554 |
| 0.1764 | 7.4 | 69100 | 0.4482 | 0.3703 |
| 0.1764 | 7.41 | 69200 | 0.4480 | 0.3641 |
| 0.1764 | 7.42 | 69300 | 0.4361 | 0.3500 |
| 0.1764 | 7.43 | 69400 | 0.4399 | 0.3632 |
| 0.1711 | 7.44 | 69500 | 0.4383 | 0.3591 |
| 0.1711 | 7.45 | 69600 | 0.4523 | 0.3636 |
| 0.1711 | 7.46 | 69700 | 0.4388 | 0.3502 |
| 0.1711 | 7.47 | 69800 | 0.4305 | 0.3565 |
| 0.1711 | 7.48 | 69900 | 0.4290 | 0.3538 |
| 0.1748 | 7.49 | 70000 | 0.4359 | 0.3511 |
| 0.1748 | 7.51 | 70100 | 0.4315 | 0.3460 |
| 0.1748 | 7.52 | 70200 | 0.4268 | 0.3555 |
| 0.1748 | 7.53 | 70300 | 0.4267 | 0.3455 |
| 0.1748 | 7.54 | 70400 | 0.4359 | 0.3517 |
| 0.1739 | 7.55 | 70500 | 0.4299 | 0.3491 |
| 0.1739 | 7.56 | 70600 | 0.4423 | 0.3409 |
| 0.1739 | 7.57 | 70700 | 0.4251 | 0.3420 |
| 0.1739 | 7.58 | 70800 | 0.4300 | 0.3414 |
| 0.1739 | 7.59 | 70900 | 0.4349 | 0.3422 |
| 0.1763 | 7.6 | 71000 | 0.4328 | 0.3418 |
| 0.1763 | 7.61 | 71100 | 0.4313 | 0.3452 |
| 0.1763 | 7.62 | 71200 | 0.4240 | 0.3534 |
| 0.1763 | 7.63 | 71300 | 0.4274 | 0.3474 |
| 0.1763 | 7.64 | 71400 | 0.4304 | 0.3467 |
| 0.171 | 7.66 | 71500 | 0.4331 | 0.3510 |
| 0.171 | 7.67 | 71600 | 0.4263 | 0.3478 |
| 0.171 | 7.68 | 71700 | 0.4301 | 0.3447 |
| 0.171 | 7.69 | 71800 | 0.4046 | 0.3452 |
| 0.171 | 7.7 | 71900 | 0.4300 | 0.3528 |
| 0.1792 | 7.71 | 72000 | 0.4253 | 0.3492 |
| 0.1792 | 7.72 | 72100 | 0.4296 | 0.3491 |
| 0.1792 | 7.73 | 72200 | 0.4118 | 0.3451 |
| 0.1792 | 7.74 | 72300 | 0.4348 | 0.3345 |
| 0.1792 | 7.75 | 72400 | 0.4283 | 0.3447 |
| 0.1801 | 7.76 | 72500 | 0.4232 | 0.3449 |
| 0.1801 | 7.77 | 72600 | 0.4491 | 0.3486 |
| 0.1801 | 7.78 | 72700 | 0.4261 | 0.3343 |
| 0.1801 | 7.79 | 72800 | 0.4382 | 0.3455 |
| 0.1801 | 7.81 | 72900 | 0.4301 | 0.3415 |
| 0.1731 | 7.82 | 73000 | 0.4236 | 0.3438 |
| 0.1731 | 7.83 | 73100 | 0.4257 | 0.3419 |
| 0.1731 | 7.84 | 73200 | 0.4368 | 0.3410 |
| 0.1731 | 7.85 | 73300 | 0.4207 | 0.3398 |
| 0.1731 | 7.86 | 73400 | 0.4118 | 0.3418 |
| 0.1748 | 7.87 | 73500 | 0.4357 | 0.3429 |
| 0.1748 | 7.88 | 73600 | 0.4277 | 0.3452 |
| 0.1748 | 7.89 | 73700 | 0.4173 | 0.3476 |
| 0.1748 | 7.9 | 73800 | 0.4191 | 0.3478 |
| 0.1748 | 7.91 | 73900 | 0.4197 | 0.3457 |
| 0.1745 | 7.92 | 74000 | 0.4197 | 0.3436 |
| 0.1745 | 7.93 | 74100 | 0.4253 | 0.3512 |
| 0.1745 | 7.94 | 74200 | 0.4217 | 0.3463 |
| 0.1745 | 7.95 | 74300 | 0.4305 | 0.3473 |
| 0.1745 | 7.97 | 74400 | 0.4215 | 0.3507 |
| 0.1743 | 7.98 | 74500 | 0.4127 | 0.3408 |
| 0.1743 | 7.99 | 74600 | 0.4191 | 0.3468 |
| 0.1743 | 8.0 | 74700 | 0.4381 | 0.3491 |
| 0.1743 | 8.01 | 74800 | 0.4510 | 0.3477 |
| 0.1743 | 8.02 | 74900 | 0.4482 | 0.3471 |
| 0.1588 | 8.03 | 75000 | 0.4471 | 0.3430 |
| 0.1588 | 8.04 | 75100 | 0.4296 | 0.3393 |
| 0.1588 | 8.05 | 75200 | 0.4480 | 0.3398 |
| 0.1588 | 8.06 | 75300 | 0.4302 | 0.3452 |
| 0.1588 | 8.07 | 75400 | 0.4410 | 0.3431 |
| 0.144 | 8.08 | 75500 | 0.4263 | 0.3455 |
| 0.144 | 8.09 | 75600 | 0.4523 | 0.3495 |
| 0.144 | 8.1 | 75700 | 0.4455 | 0.3511 |
| 0.144 | 8.12 | 75800 | 0.4379 | 0.3445 |
| 0.144 | 8.13 | 75900 | 0.4418 | 0.3411 |
| 0.1483 | 8.14 | 76000 | 0.4491 | 0.3463 |
| 0.1483 | 8.15 | 76100 | 0.4386 | 0.3467 |
| 0.1483 | 8.16 | 76200 | 0.4327 | 0.3524 |
| 0.1483 | 8.17 | 76300 | 0.4360 | 0.3613 |
| 0.1483 | 8.18 | 76400 | 0.4352 | 0.3498 |
| 0.1541 | 8.19 | 76500 | 0.4376 | 0.3414 |
| 0.1541 | 8.2 | 76600 | 0.4408 | 0.3464 |
| 0.1541 | 8.21 | 76700 | 0.4415 | 0.3445 |
| 0.1541 | 8.22 | 76800 | 0.4455 | 0.3482 |
| 0.1541 | 8.23 | 76900 | 0.4542 | 0.3415 |
| 0.1479 | 8.24 | 77000 | 0.4462 | 0.3426 |
| 0.1479 | 8.25 | 77100 | 0.4460 | 0.3413 |
| 0.1479 | 8.27 | 77200 | 0.4434 | 0.3375 |
| 0.1479 | 8.28 | 77300 | 0.4397 | 0.3473 |
| 0.1479 | 8.29 | 77400 | 0.4379 | 0.3484 |
| 0.1479 | 8.3 | 77500 | 0.4441 | 0.3494 |
| 0.1479 | 8.31 | 77600 | 0.4301 | 0.3466 |
| 0.1479 | 8.32 | 77700 | 0.4420 | 0.3474 |
| 0.1479 | 8.33 | 77800 | 0.4520 | 0.3589 |
| 0.1479 | 8.34 | 77900 | 0.4283 | 0.3482 |
| 0.1531 | 8.35 | 78000 | 0.4325 | 0.3446 |
| 0.1531 | 8.36 | 78100 | 0.4380 | 0.3469 |
| 0.1531 | 8.37 | 78200 | 0.4463 | 0.3503 |
| 0.1531 | 8.38 | 78300 | 0.4479 | 0.3499 |
| 0.1531 | 8.39 | 78400 | 0.4477 | 0.3529 |
| 0.1507 | 8.4 | 78500 | 0.4709 | 0.3551 |
| 0.1507 | 8.42 | 78600 | 0.4533 | 0.3531 |
| 0.1507 | 8.43 | 78700 | 0.4507 | 0.3522 |
| 0.1507 | 8.44 | 78800 | 0.4562 | 0.3583 |
| 0.1507 | 8.45 | 78900 | 0.4421 | 0.3577 |
| 0.1545 | 8.46 | 79000 | 0.4485 | 0.3547 |
| 0.1545 | 8.47 | 79100 | 0.4389 | 0.3465 |
| 0.1545 | 8.48 | 79200 | 0.4397 | 0.3502 |
| 0.1545 | 8.49 | 79300 | 0.4403 | 0.3471 |
| 0.1545 | 8.5 | 79400 | 0.4394 | 0.3482 |
| 0.153 | 8.51 | 79500 | 0.4393 | 0.3474 |
| 0.153 | 8.52 | 79600 | 0.4343 | 0.3495 |
| 0.153 | 8.53 | 79700 | 0.4395 | 0.3539 |
| 0.153 | 8.54 | 79800 | 0.4497 | 0.3535 |
| 0.153 | 8.55 | 79900 | 0.4443 | 0.3540 |
| 0.1558 | 8.57 | 80000 | 0.4495 | 0.3554 |
| 0.1558 | 8.58 | 80100 | 0.4387 | 0.3460 |
| 0.1558 | 8.59 | 80200 | 0.4378 | 0.3520 |
| 0.1558 | 8.6 | 80300 | 0.4446 | 0.3527 |
| 0.1558 | 8.61 | 80400 | 0.4513 | 0.3508 |
| 0.1527 | 8.62 | 80500 | 0.4396 | 0.3537 |
| 0.1527 | 8.63 | 80600 | 0.4405 | 0.3507 |
| 0.1527 | 8.64 | 80700 | 0.4398 | 0.3450 |
| 0.1527 | 8.65 | 80800 | 0.4458 | 0.3508 |
| 0.1527 | 8.66 | 80900 | 0.4380 | 0.3465 |
| 0.1522 | 8.67 | 81000 | 0.4373 | 0.3482 |
| 0.1522 | 8.68 | 81100 | 0.4363 | 0.3410 |
| 0.1522 | 8.69 | 81200 | 0.4290 | 0.3447 |
| 0.1522 | 8.7 | 81300 | 0.4409 | 0.3515 |
| 0.1522 | 8.72 | 81400 | 0.4363 | 0.3433 |
| 0.1502 | 8.73 | 81500 | 0.4313 | 0.3429 |
| 0.1502 | 8.74 | 81600 | 0.4263 | 0.3451 |
| 0.1502 | 8.75 | 81700 | 0.4297 | 0.3452 |
| 0.1502 | 8.76 | 81800 | 0.4449 | 0.3411 |
| 0.1502 | 8.77 | 81900 | 0.4465 | 0.3455 |
| 0.151 | 8.78 | 82000 | 0.4274 | 0.3425 |
| 0.151 | 8.79 | 82100 | 0.4525 | 0.3532 |
| 0.151 | 8.8 | 82200 | 0.4282 | 0.3502 |
| 0.151 | 8.81 | 82300 | 0.4189 | 0.3507 |
| 0.151 | 8.82 | 82400 | 0.4379 | 0.3451 |
| 0.1529 | 8.83 | 82500 | 0.4378 | 0.3419 |
| 0.1529 | 8.84 | 82600 | 0.4283 | 0.3392 |
| 0.1529 | 8.85 | 82700 | 0.4359 | 0.3399 |
| 0.1529 | 8.87 | 82800 | 0.4308 | 0.3358 |
| 0.1529 | 8.88 | 82900 | 0.4296 | 0.3335 |
| 0.151 | 8.89 | 83000 | 0.4387 | 0.3372 |
| 0.151 | 8.9 | 83100 | 0.4335 | 0.3420 |
| 0.151 | 8.91 | 83200 | 0.4329 | 0.3374 |
| 0.151 | 8.92 | 83300 | 0.4353 | 0.3404 |
| 0.151 | 8.93 | 83400 | 0.4384 | 0.3447 |
| 0.1522 | 8.94 | 83500 | 0.4444 | 0.3353 |
| 0.1522 | 8.95 | 83600 | 0.4413 | 0.3481 |
| 0.1522 | 8.96 | 83700 | 0.4247 | 0.3474 |
| 0.1522 | 8.97 | 83800 | 0.4197 | 0.3386 |
| 0.1522 | 8.98 | 83900 | 0.4216 | 0.3384 |
| 0.1511 | 8.99 | 84000 | 0.4159 | 0.3396 |
| 0.1511 | 9.0 | 84100 | 0.4213 | 0.3416 |
| 0.1511 | 9.01 | 84200 | 0.4399 | 0.3379 |
| 0.1511 | 9.03 | 84300 | 0.4318 | 0.3437 |
| 0.1511 | 9.04 | 84400 | 0.4356 | 0.3371 |
| 0.1336 | 9.05 | 84500 | 0.4403 | 0.3373 |
| 0.1336 | 9.06 | 84600 | 0.4545 | 0.3381 |
| 0.1336 | 9.07 | 84700 | 0.4313 | 0.3331 |
| 0.1336 | 9.08 | 84800 | 0.4257 | 0.3360 |
| 0.1336 | 9.09 | 84900 | 0.4285 | 0.3372 |
| 0.1315 | 9.1 | 85000 | 0.4378 | 0.3332 |
| 0.1315 | 9.11 | 85100 | 0.4352 | 0.3282 |
| 0.1315 | 9.12 | 85200 | 0.4360 | 0.3339 |
| 0.1315 | 9.13 | 85300 | 0.4404 | 0.3365 |
| 0.1315 | 9.14 | 85400 | 0.4345 | 0.3356 |
| 0.1272 | 9.15 | 85500 | 0.4468 | 0.3375 |
| 0.1272 | 9.16 | 85600 | 0.4331 | 0.3363 |
| 0.1272 | 9.18 | 85700 | 0.4330 | 0.3309 |
| 0.1272 | 9.19 | 85800 | 0.4424 | 0.3301 |
| 0.1272 | 9.2 | 85900 | 0.4520 | 0.3326 |
| 0.1289 | 9.21 | 86000 | 0.4421 | 0.3326 |
| 0.1289 | 9.22 | 86100 | 0.4480 | 0.3335 |
| 0.1289 | 9.23 | 86200 | 0.4351 | 0.3380 |
| 0.1289 | 9.24 | 86300 | 0.4350 | 0.3427 |
| 0.1289 | 9.25 | 86400 | 0.4362 | 0.3320 |
| 0.1333 | 9.26 | 86500 | 0.4260 | 0.3342 |
| 0.1333 | 9.27 | 86600 | 0.4357 | 0.3360 |
| 0.1333 | 9.28 | 86700 | 0.4505 | 0.3372 |
| 0.1333 | 9.29 | 86800 | 0.4342 | 0.3359 |
| 0.1333 | 9.3 | 86900 | 0.4295 | 0.3367 |
| 0.1318 | 9.31 | 87000 | 0.4320 | 0.3335 |
| 0.1318 | 9.33 | 87100 | 0.4332 | 0.3344 |
| 0.1318 | 9.34 | 87200 | 0.4373 | 0.3330 |
| 0.1318 | 9.35 | 87300 | 0.4490 | 0.3316 |
| 0.1318 | 9.36 | 87400 | 0.4188 | 0.3429 |
| 0.1275 | 9.37 | 87500 | 0.4502 | 0.3383 |
| 0.1275 | 9.38 | 87600 | 0.4463 | 0.3387 |
| 0.1275 | 9.39 | 87700 | 0.4385 | 0.3308 |
| 0.1275 | 9.4 | 87800 | 0.4464 | 0.3414 |
| 0.1275 | 9.41 | 87900 | 0.4563 | 0.3405 |
| 0.1331 | 9.42 | 88000 | 0.4286 | 0.3374 |
| 0.1331 | 9.43 | 88100 | 0.4389 | 0.3352 |
| 0.1331 | 9.44 | 88200 | 0.4301 | 0.3340 |
| 0.1331 | 9.45 | 88300 | 0.4417 | 0.3373 |
| 0.1331 | 9.46 | 88400 | 0.4450 | 0.3425 |
| 0.1266 | 9.48 | 88500 | 0.4456 | 0.3451 |
| 0.1266 | 9.49 | 88600 | 0.4517 | 0.3403 |
| 0.1266 | 9.5 | 88700 | 0.4447 | 0.3419 |
| 0.1266 | 9.51 | 88800 | 0.4486 | 0.3428 |
| 0.1266 | 9.52 | 88900 | 0.4591 | 0.3411 |
| 0.1316 | 9.53 | 89000 | 0.4481 | 0.3387 |
| 0.1316 | 9.54 | 89100 | 0.4308 | 0.3349 |
| 0.1316 | 9.55 | 89200 | 0.4411 | 0.3405 |
| 0.1316 | 9.56 | 89300 | 0.4378 | 0.3390 |
| 0.1316 | 9.57 | 89400 | 0.4448 | 0.3365 |
| 0.1325 | 9.58 | 89500 | 0.4575 | 0.3416 |
| 0.1325 | 9.59 | 89600 | 0.4608 | 0.3422 |
| 0.1325 | 9.6 | 89700 | 0.4396 | 0.3350 |
| 0.1325 | 9.61 | 89800 | 0.4380 | 0.3398 |
| 0.1325 | 9.63 | 89900 | 0.4337 | 0.3388 |
| 0.1324 | 9.64 | 90000 | 0.4376 | 0.3388 |
| 0.1324 | 9.65 | 90100 | 0.4185 | 0.3380 |
| 0.1324 | 9.66 | 90200 | 0.4394 | 0.3384 |
| 0.1324 | 9.67 | 90300 | 0.4472 | 0.3400 |
| 0.1324 | 9.68 | 90400 | 0.4523 | 0.3390 |
| 0.1361 | 9.69 | 90500 | 0.4466 | 0.3389 |
| 0.1361 | 9.7 | 90600 | 0.4414 | 0.3383 |
| 0.1361 | 9.71 | 90700 | 0.4288 | 0.3348 |
| 0.1361 | 9.72 | 90800 | 0.4445 | 0.3374 |
| 0.1361 | 9.73 | 90900 | 0.4252 | 0.3322 |
| 0.1353 | 9.74 | 91000 | 0.4312 | 0.3338 |
| 0.1353 | 9.75 | 91100 | 0.4326 | 0.3319 |
| 0.1353 | 9.76 | 91200 | 0.4212 | 0.3400 |
| 0.1353 | 9.78 | 91300 | 0.4191 | 0.3374 |
| 0.1353 | 9.79 | 91400 | 0.4399 | 0.3332 |
| 0.1308 | 9.8 | 91500 | 0.4340 | 0.3349 |
| 0.1308 | 9.81 | 91600 | 0.4280 | 0.3379 |
| 0.1308 | 9.82 | 91700 | 0.4419 | 0.3376 |
| 0.1308 | 9.83 | 91800 | 0.4309 | 0.3333 |
| 0.1308 | 9.84 | 91900 | 0.4274 | 0.3352 |
| 0.1321 | 9.85 | 92000 | 0.4147 | 0.3337 |
| 0.1321 | 9.86 | 92100 | 0.4252 | 0.3316 |
| 0.1321 | 9.87 | 92200 | 0.4378 | 0.3381 |
| 0.1321 | 9.88 | 92300 | 0.4265 | 0.3355 |
| 0.1321 | 9.89 | 92400 | 0.4247 | 0.3331 |
| 0.1358 | 9.9 | 92500 | 0.4099 | 0.3379 |
| 0.1358 | 9.91 | 92600 | 0.4142 | 0.3356 |
| 0.1358 | 9.93 | 92700 | 0.4220 | 0.3332 |
| 0.1358 | 9.94 | 92800 | 0.4219 | 0.3369 |
| 0.1358 | 9.95 | 92900 | 0.4178 | 0.3332 |
| 0.1331 | 9.96 | 93000 | 0.4305 | 0.3353 |
| 0.1331 | 9.97 | 93100 | 0.4324 | 0.3307 |
| 0.1331 | 9.98 | 93200 | 0.4315 | 0.3344 |
| 0.1331 | 9.99 | 93300 | 0.4212 | 0.3314 |
| 0.1331 | 10.0 | 93400 | 0.4203 | 0.3332 |
| 0.1304 | 10.01 | 93500 | 0.4424 | 0.3351 |
| 0.1304 | 10.02 | 93600 | 0.4474 | 0.3341 |
| 0.1304 | 10.03 | 93700 | 0.4466 | 0.3378 |
| 0.1304 | 10.04 | 93800 | 0.4388 | 0.3327 |
| 0.1304 | 10.05 | 93900 | 0.4312 | 0.3360 |
| 0.1152 | 10.06 | 94000 | 0.4471 | 0.3307 |
| 0.1152 | 10.07 | 94100 | 0.4472 | 0.3316 |
| 0.1152 | 10.09 | 94200 | 0.4462 | 0.3324 |
| 0.1152 | 10.1 | 94300 | 0.4383 | 0.3344 |
| 0.1152 | 10.11 | 94400 | 0.4671 | 0.3365 |
| 0.1097 | 10.12 | 94500 | 0.4596 | 0.3307 |
| 0.1097 | 10.13 | 94600 | 0.4517 | 0.3382 |
| 0.1097 | 10.14 | 94700 | 0.4285 | 0.3380 |
| 0.1097 | 10.15 | 94800 | 0.4628 | 0.3363 |
| 0.1097 | 10.16 | 94900 | 0.4478 | 0.3365 |
| 0.1153 | 10.17 | 95000 | 0.4464 | 0.3346 |
| 0.1153 | 10.18 | 95100 | 0.4432 | 0.3392 |
| 0.1153 | 10.19 | 95200 | 0.4326 | 0.3330 |
| 0.1153 | 10.2 | 95300 | 0.4480 | 0.3327 |
| 0.1153 | 10.21 | 95400 | 0.4436 | 0.3260 |
| 0.1149 | 10.22 | 95500 | 0.4549 | 0.3311 |
| 0.1149 | 10.24 | 95600 | 0.4573 | 0.3353 |
| 0.1149 | 10.25 | 95700 | 0.4373 | 0.3369 |
| 0.1149 | 10.26 | 95800 | 0.4459 | 0.3358 |
| 0.1149 | 10.27 | 95900 | 0.4288 | 0.3270 |
| 0.1169 | 10.28 | 96000 | 0.4474 | 0.3330 |
| 0.1169 | 10.29 | 96100 | 0.4524 | 0.3298 |
| 0.1169 | 10.3 | 96200 | 0.4517 | 0.3258 |
| 0.1169 | 10.31 | 96300 | 0.4366 | 0.3288 |
| 0.1169 | 10.32 | 96400 | 0.4574 | 0.3324 |
| 0.1137 | 10.33 | 96500 | 0.4507 | 0.3343 |
| 0.1137 | 10.34 | 96600 | 0.4414 | 0.3301 |
| 0.1137 | 10.35 | 96700 | 0.4524 | 0.3366 |
| 0.1137 | 10.36 | 96800 | 0.4563 | 0.3435 |
| 0.1137 | 10.37 | 96900 | 0.4315 | 0.3375 |
| 0.1162 | 10.39 | 97000 | 0.4429 | 0.3365 |
| 0.1162 | 10.4 | 97100 | 0.4489 | 0.3380 |
| 0.1162 | 10.41 | 97200 | 0.4352 | 0.3357 |
| 0.1162 | 10.42 | 97300 | 0.4390 | 0.3319 |
| 0.1162 | 10.43 | 97400 | 0.4570 | 0.3303 |
| 0.1151 | 10.44 | 97500 | 0.4692 | 0.3344 |
| 0.1151 | 10.45 | 97600 | 0.4605 | 0.3332 |
| 0.1151 | 10.46 | 97700 | 0.4457 | 0.3238 |
| 0.1151 | 10.47 | 97800 | 0.4298 | 0.3304 |
| 0.1151 | 10.48 | 97900 | 0.4619 | 0.3274 |
| 0.1105 | 10.49 | 98000 | 0.4362 | 0.3244 |
| 0.1105 | 10.5 | 98100 | 0.4568 | 0.3289 |
| 0.1105 | 10.51 | 98200 | 0.4522 | 0.3336 |
| 0.1105 | 10.52 | 98300 | 0.4302 | 0.3257 |
| 0.1105 | 10.54 | 98400 | 0.4505 | 0.3238 |
| 0.1164 | 10.55 | 98500 | 0.4430 | 0.3301 |
| 0.1164 | 10.56 | 98600 | 0.4575 | 0.3283 |
| 0.1164 | 10.57 | 98700 | 0.4447 | 0.3277 |
| 0.1164 | 10.58 | 98800 | 0.4400 | 0.3301 |
| 0.1164 | 10.59 | 98900 | 0.4427 | 0.3288 |
| 0.1113 | 10.6 | 99000 | 0.4538 | 0.3248 |
| 0.1113 | 10.61 | 99100 | 0.4519 | 0.3298 |
| 0.1113 | 10.62 | 99200 | 0.4290 | 0.3249 |
| 0.1113 | 10.63 | 99300 | 0.4501 | 0.3220 |
| 0.1113 | 10.64 | 99400 | 0.4410 | 0.3218 |
| 0.1159 | 10.65 | 99500 | 0.4478 | 0.3211 |
| 0.1159 | 10.66 | 99600 | 0.4462 | 0.3250 |
| 0.1159 | 10.67 | 99700 | 0.4543 | 0.3302 |
| 0.1159 | 10.69 | 99800 | 0.4462 | 0.3301 |
| 0.1159 | 10.7 | 99900 | 0.4468 | 0.3229 |
| 0.1161 | 10.71 | 100000 | 0.4515 | 0.3241 |
| 0.1161 | 10.72 | 100100 | 0.4404 | 0.3276 |
| 0.1161 | 10.73 | 100200 | 0.4439 | 0.3222 |
| 0.1161 | 10.74 | 100300 | 0.4392 | 0.3257 |
| 0.1161 | 10.75 | 100400 | 0.4476 | 0.3314 |
| 0.1199 | 10.76 | 100500 | 0.4493 | 0.3270 |
| 0.1199 | 10.77 | 100600 | 0.4462 | 0.3224 |
| 0.1199 | 10.78 | 100700 | 0.4467 | 0.3311 |
| 0.1199 | 10.79 | 100800 | 0.4198 | 0.3228 |
| 0.1199 | 10.8 | 100900 | 0.4349 | 0.3225 |
| 0.1146 | 10.81 | 101000 | 0.4371 | 0.3272 |
| 0.1146 | 10.82 | 101100 | 0.4525 | 0.3210 |
| 0.1146 | 10.84 | 101200 | 0.4293 | 0.3219 |
| 0.1146 | 10.85 | 101300 | 0.4238 | 0.3216 |
| 0.1146 | 10.86 | 101400 | 0.4377 | 0.3252 |
| 0.118 | 10.87 | 101500 | 0.4371 | 0.3208 |
| 0.118 | 10.88 | 101600 | 0.4216 | 0.3174 |
| 0.118 | 10.89 | 101700 | 0.4312 | 0.3189 |
| 0.118 | 10.9 | 101800 | 0.4317 | 0.3204 |
| 0.118 | 10.91 | 101900 | 0.4303 | 0.3235 |
| 0.114 | 10.92 | 102000 | 0.4416 | 0.3158 |
| 0.114 | 10.93 | 102100 | 0.4240 | 0.3195 |
| 0.114 | 10.94 | 102200 | 0.4340 | 0.3149 |
| 0.114 | 10.95 | 102300 | 0.4311 | 0.3215 |
| 0.114 | 10.96 | 102400 | 0.4261 | 0.3238 |
| 0.1152 | 10.97 | 102500 | 0.4263 | 0.3206 |
| 0.1152 | 10.98 | 102600 | 0.4325 | 0.3294 |
| 0.1152 | 11.0 | 102700 | 0.4327 | 0.3187 |
| 0.1152 | 11.01 | 102800 | 0.4423 | 0.3195 |
| 0.1152 | 11.02 | 102900 | 0.4341 | 0.3277 |
| 0.1084 | 11.03 | 103000 | 0.4232 | 0.3243 |
| 0.1084 | 11.04 | 103100 | 0.4355 | 0.3184 |
| 0.1084 | 11.05 | 103200 | 0.4374 | 0.3274 |
| 0.1084 | 11.06 | 103300 | 0.4484 | 0.3305 |
| 0.1084 | 11.07 | 103400 | 0.4423 | 0.3226 |
| 0.1003 | 11.08 | 103500 | 0.4518 | 0.3224 |
| 0.1003 | 11.09 | 103600 | 0.4518 | 0.3243 |
| 0.1003 | 11.1 | 103700 | 0.4282 | 0.3207 |
| 0.1003 | 11.11 | 103800 | 0.4418 | 0.3220 |
| 0.1003 | 11.12 | 103900 | 0.4411 | 0.3216 |
| 0.1009 | 11.13 | 104000 | 0.4474 | 0.3238 |
| 0.1009 | 11.15 | 104100 | 0.4406 | 0.3245 |
| 0.1009 | 11.16 | 104200 | 0.4384 | 0.3242 |
| 0.1009 | 11.17 | 104300 | 0.4702 | 0.3265 |
| 0.1009 | 11.18 | 104400 | 0.4611 | 0.3266 |
| 0.0992 | 11.19 | 104500 | 0.4425 | 0.3211 |
| 0.0992 | 11.2 | 104600 | 0.4575 | 0.3222 |
| 0.0992 | 11.21 | 104700 | 0.4449 | 0.3208 |
| 0.0992 | 11.22 | 104800 | 0.4715 | 0.3208 |
| 0.0992 | 11.23 | 104900 | 0.4469 | 0.3223 |
| 0.1021 | 11.24 | 105000 | 0.4536 | 0.3225 |
| 0.1021 | 11.25 | 105100 | 0.4629 | 0.3234 |
| 0.1021 | 11.26 | 105200 | 0.4550 | 0.3205 |
| 0.1021 | 11.27 | 105300 | 0.4598 | 0.3213 |
| 0.1021 | 11.28 | 105400 | 0.4522 | 0.3179 |
| 0.1021 | 11.3 | 105500 | 0.4658 | 0.3211 |
| 0.1021 | 11.31 | 105600 | 0.4664 | 0.3196 |
| 0.1021 | 11.32 | 105700 | 0.4736 | 0.3177 |
| 0.1021 | 11.33 | 105800 | 0.4587 | 0.3158 |
| 0.1021 | 11.34 | 105900 | 0.4589 | 0.3194 |
| 0.1025 | 11.35 | 106000 | 0.4692 | 0.3214 |
| 0.1025 | 11.36 | 106100 | 0.4382 | 0.3181 |
| 0.1025 | 11.37 | 106200 | 0.4556 | 0.3185 |
| 0.1025 | 11.38 | 106300 | 0.4445 | 0.3191 |
| 0.1025 | 11.39 | 106400 | 0.4379 | 0.3163 |
| 0.104 | 11.4 | 106500 | 0.4454 | 0.3220 |
| 0.104 | 11.41 | 106600 | 0.4463 | 0.3201 |
| 0.104 | 11.42 | 106700 | 0.4550 | 0.3173 |
| 0.104 | 11.43 | 106800 | 0.4404 | 0.3168 |
| 0.104 | 11.45 | 106900 | 0.4569 | 0.3170 |
| 0.1016 | 11.46 | 107000 | 0.4529 | 0.3168 |
| 0.1016 | 11.47 | 107100 | 0.4587 | 0.3173 |
| 0.1016 | 11.48 | 107200 | 0.4505 | 0.3172 |
| 0.1016 | 11.49 | 107300 | 0.4489 | 0.3159 |
| 0.1016 | 11.5 | 107400 | 0.4528 | 0.3130 |
| 0.1001 | 11.51 | 107500 | 0.4473 | 0.3181 |
| 0.1001 | 11.52 | 107600 | 0.4434 | 0.3176 |
| 0.1001 | 11.53 | 107700 | 0.4597 | 0.3186 |
| 0.1001 | 11.54 | 107800 | 0.4351 | 0.3159 |
| 0.1001 | 11.55 | 107900 | 0.4471 | 0.3185 |
| 0.1005 | 11.56 | 108000 | 0.4457 | 0.3191 |
| 0.1005 | 11.57 | 108100 | 0.4544 | 0.3293 |
| 0.1005 | 11.58 | 108200 | 0.4436 | 0.3221 |
| 0.1005 | 11.6 | 108300 | 0.4642 | 0.3270 |
| 0.1005 | 11.61 | 108400 | 0.4474 | 0.3270 |
| 0.1031 | 11.62 | 108500 | 0.4458 | 0.3196 |
| 0.1031 | 11.63 | 108600 | 0.4723 | 0.3205 |
| 0.1031 | 11.64 | 108700 | 0.4507 | 0.3226 |
| 0.1031 | 11.65 | 108800 | 0.4424 | 0.3213 |
| 0.1031 | 11.66 | 108900 | 0.4511 | 0.3213 |
| 0.1014 | 11.67 | 109000 | 0.4422 | 0.3205 |
| 0.1014 | 11.68 | 109100 | 0.4498 | 0.3180 |
| 0.1014 | 11.69 | 109200 | 0.4303 | 0.3167 |
| 0.1014 | 11.7 | 109300 | 0.4483 | 0.3108 |
| 0.1014 | 11.71 | 109400 | 0.4548 | 0.3169 |
| 0.0981 | 11.72 | 109500 | 0.4406 | 0.3122 |
| 0.0981 | 11.73 | 109600 | 0.4293 | 0.3114 |
| 0.0981 | 11.75 | 109700 | 0.4369 | 0.3159 |
| 0.0981 | 11.76 | 109800 | 0.4364 | 0.3164 |
| 0.0981 | 11.77 | 109900 | 0.4358 | 0.3189 |
| 0.1023 | 11.78 | 110000 | 0.4281 | 0.3183 |
| 0.1023 | 11.79 | 110100 | 0.4404 | 0.3159 |
| 0.1023 | 11.8 | 110200 | 0.4471 | 0.3135 |
| 0.1023 | 11.81 | 110300 | 0.4498 | 0.3201 |
| 0.1023 | 11.82 | 110400 | 0.4527 | 0.3161 |
| 0.0988 | 11.83 | 110500 | 0.4440 | 0.3173 |
| 0.0988 | 11.84 | 110600 | 0.4356 | 0.3136 |
| 0.0988 | 11.85 | 110700 | 0.4308 | 0.3135 |
| 0.0988 | 11.86 | 110800 | 0.4294 | 0.3192 |
| 0.0988 | 11.87 | 110900 | 0.4241 | 0.3168 |
| 0.1022 | 11.88 | 111000 | 0.4420 | 0.3157 |
| 0.1022 | 11.9 | 111100 | 0.4313 | 0.3125 |
| 0.1022 | 11.91 | 111200 | 0.4213 | 0.3168 |
| 0.1022 | 11.92 | 111300 | 0.4352 | 0.3135 |
| 0.1022 | 11.93 | 111400 | 0.4297 | 0.3116 |
| 0.1032 | 11.94 | 111500 | 0.4218 | 0.3137 |
| 0.1032 | 11.95 | 111600 | 0.4334 | 0.3123 |
| 0.1032 | 11.96 | 111700 | 0.4373 | 0.3175 |
| 0.1032 | 11.97 | 111800 | 0.4299 | 0.3160 |
| 0.1032 | 11.98 | 111900 | 0.4326 | 0.3189 |
| 0.0969 | 11.99 | 112000 | 0.4208 | 0.3186 |
| 0.0969 | 12.0 | 112100 | 0.4385 | 0.3169 |
| 0.0969 | 12.01 | 112200 | 0.4453 | 0.3156 |
| 0.0969 | 12.02 | 112300 | 0.4596 | 0.3133 |
| 0.0969 | 12.03 | 112400 | 0.4509 | 0.3093 |
| 0.0901 | 12.04 | 112500 | 0.4535 | 0.3138 |
| 0.0901 | 12.06 | 112600 | 0.4371 | 0.3144 |
| 0.0901 | 12.07 | 112700 | 0.4499 | 0.3154 |
| 0.0901 | 12.08 | 112800 | 0.4615 | 0.3198 |
| 0.0901 | 12.09 | 112900 | 0.4523 | 0.3177 |
| 0.0889 | 12.1 | 113000 | 0.4412 | 0.3130 |
| 0.0889 | 12.11 | 113100 | 0.4471 | 0.3181 |
| 0.0889 | 12.12 | 113200 | 0.4530 | 0.3169 |
| 0.0889 | 12.13 | 113300 | 0.4670 | 0.3149 |
| 0.0889 | 12.14 | 113400 | 0.4594 | 0.3141 |
| 0.0917 | 12.15 | 113500 | 0.4623 | 0.3127 |
| 0.0917 | 12.16 | 113600 | 0.4460 | 0.3133 |
| 0.0917 | 12.17 | 113700 | 0.4512 | 0.3191 |
| 0.0917 | 12.18 | 113800 | 0.4681 | 0.3136 |
| 0.0917 | 12.19 | 113900 | 0.4564 | 0.3129 |
| 0.0906 | 12.21 | 114000 | 0.4482 | 0.3107 |
| 0.0906 | 12.22 | 114100 | 0.4595 | 0.3133 |
| 0.0906 | 12.23 | 114200 | 0.4510 | 0.3118 |
| 0.0906 | 12.24 | 114300 | 0.4472 | 0.3131 |
| 0.0906 | 12.25 | 114400 | 0.4499 | 0.3130 |
| 0.0918 | 12.26 | 114500 | 0.4503 | 0.3138 |
| 0.0918 | 12.27 | 114600 | 0.4518 | 0.3135 |
| 0.0918 | 12.28 | 114700 | 0.4493 | 0.3114 |
| 0.0918 | 12.29 | 114800 | 0.4574 | 0.3133 |
| 0.0918 | 12.3 | 114900 | 0.4683 | 0.3200 |
| 0.0869 | 12.31 | 115000 | 0.4608 | 0.3165 |
| 0.0869 | 12.32 | 115100 | 0.4618 | 0.3183 |
| 0.0869 | 12.33 | 115200 | 0.4689 | 0.3173 |
| 0.0869 | 12.34 | 115300 | 0.4681 | 0.3224 |
| 0.0869 | 12.36 | 115400 | 0.4576 | 0.3231 |
| 0.0885 | 12.37 | 115500 | 0.4831 | 0.3176 |
| 0.0885 | 12.38 | 115600 | 0.4602 | 0.3181 |
| 0.0885 | 12.39 | 115700 | 0.4493 | 0.3168 |
| 0.0885 | 12.4 | 115800 | 0.4564 | 0.3149 |
| 0.0885 | 12.41 | 115900 | 0.4585 | 0.3158 |
| 0.091 | 12.42 | 116000 | 0.4713 | 0.3193 |
| 0.091 | 12.43 | 116100 | 0.4581 | 0.3139 |
| 0.091 | 12.44 | 116200 | 0.4637 | 0.3131 |
| 0.091 | 12.45 | 116300 | 0.4572 | 0.3124 |
| 0.091 | 12.46 | 116400 | 0.4489 | 0.3163 |
| 0.0886 | 12.47 | 116500 | 0.4679 | 0.3159 |
| 0.0886 | 12.48 | 116600 | 0.4712 | 0.3151 |
| 0.0886 | 12.49 | 116700 | 0.4750 | 0.3186 |
| 0.0886 | 12.51 | 116800 | 0.4673 | 0.3176 |
| 0.0886 | 12.52 | 116900 | 0.4601 | 0.3113 |
| 0.0917 | 12.53 | 117000 | 0.4341 | 0.3125 |
| 0.0917 | 12.54 | 117100 | 0.4462 | 0.3077 |
| 0.0917 | 12.55 | 117200 | 0.4502 | 0.3099 |
| 0.0917 | 12.56 | 117300 | 0.4482 | 0.3116 |
| 0.0917 | 12.57 | 117400 | 0.4459 | 0.3131 |
| 0.0881 | 12.58 | 117500 | 0.4464 | 0.3122 |
| 0.0881 | 12.59 | 117600 | 0.4471 | 0.3125 |
| 0.0881 | 12.6 | 117700 | 0.4319 | 0.3122 |
| 0.0881 | 12.61 | 117800 | 0.4421 | 0.3103 |
| 0.0881 | 12.62 | 117900 | 0.4326 | 0.3108 |
| 0.0913 | 12.63 | 118000 | 0.4414 | 0.3068 |
| 0.0913 | 12.64 | 118100 | 0.4421 | 0.3083 |
| 0.0913 | 12.66 | 118200 | 0.4449 | 0.3103 |
| 0.0913 | 12.67 | 118300 | 0.4380 | 0.3128 |
| 0.0913 | 12.68 | 118400 | 0.4390 | 0.3136 |
| 0.0921 | 12.69 | 118500 | 0.4452 | 0.3104 |
| 0.0921 | 12.7 | 118600 | 0.4378 | 0.3122 |
| 0.0921 | 12.71 | 118700 | 0.4459 | 0.3080 |
| 0.0921 | 12.72 | 118800 | 0.4369 | 0.3051 |
| 0.0921 | 12.73 | 118900 | 0.4474 | 0.3076 |
| 0.0886 | 12.74 | 119000 | 0.4508 | 0.3066 |
| 0.0886 | 12.75 | 119100 | 0.4456 | 0.3097 |
| 0.0886 | 12.76 | 119200 | 0.4503 | 0.3078 |
| 0.0886 | 12.77 | 119300 | 0.4460 | 0.3081 |
| 0.0886 | 12.78 | 119400 | 0.4404 | 0.3080 |
| 0.0897 | 12.79 | 119500 | 0.4351 | 0.3100 |
| 0.0897 | 12.81 | 119600 | 0.4446 | 0.3120 |
| 0.0897 | 12.82 | 119700 | 0.4407 | 0.3098 |
| 0.0897 | 12.83 | 119800 | 0.4406 | 0.3084 |
| 0.0897 | 12.84 | 119900 | 0.4492 | 0.3067 |
| 0.09 | 12.85 | 120000 | 0.4546 | 0.3098 |
| 0.09 | 12.86 | 120100 | 0.4547 | 0.3074 |
| 0.09 | 12.87 | 120200 | 0.4517 | 0.3111 |
| 0.09 | 12.88 | 120300 | 0.4320 | 0.3064 |
| 0.09 | 12.89 | 120400 | 0.4294 | 0.3072 |
| 0.0898 | 12.9 | 120500 | 0.4412 | 0.3050 |
| 0.0898 | 12.91 | 120600 | 0.4254 | 0.3074 |
| 0.0898 | 12.92 | 120700 | 0.4409 | 0.3071 |
| 0.0898 | 12.93 | 120800 | 0.4362 | 0.3071 |
| 0.0898 | 12.94 | 120900 | 0.4579 | 0.3090 |
| 0.0892 | 12.95 | 121000 | 0.4492 | 0.3059 |
| 0.0892 | 12.97 | 121100 | 0.4404 | 0.3105 |
| 0.0892 | 12.98 | 121200 | 0.4365 | 0.3066 |
| 0.0892 | 12.99 | 121300 | 0.4368 | 0.3048 |
| 0.0892 | 13.0 | 121400 | 0.4410 | 0.3033 |
| 0.085 | 13.01 | 121500 | 0.4450 | 0.3047 |
| 0.085 | 13.02 | 121600 | 0.4633 | 0.3013 |
| 0.085 | 13.03 | 121700 | 0.4600 | 0.3054 |
| 0.085 | 13.04 | 121800 | 0.4541 | 0.3047 |
| 0.085 | 13.05 | 121900 | 0.4546 | 0.3058 |
| 0.0791 | 13.06 | 122000 | 0.4536 | 0.3045 |
| 0.0791 | 13.07 | 122100 | 0.4589 | 0.3066 |
| 0.0791 | 13.08 | 122200 | 0.4581 | 0.3057 |
| 0.0791 | 13.09 | 122300 | 0.4546 | 0.3048 |
| 0.0791 | 13.1 | 122400 | 0.4673 | 0.3006 |
| 0.0789 | 13.12 | 122500 | 0.4551 | 0.3019 |
| 0.0789 | 13.13 | 122600 | 0.4467 | 0.3025 |
| 0.0789 | 13.14 | 122700 | 0.4593 | 0.3015 |
| 0.0789 | 13.15 | 122800 | 0.4598 | 0.3037 |
| 0.0789 | 13.16 | 122900 | 0.4532 | 0.3038 |
| 0.077 | 13.17 | 123000 | 0.4607 | 0.3015 |
| 0.077 | 13.18 | 123100 | 0.4385 | 0.3005 |
| 0.077 | 13.19 | 123200 | 0.4590 | 0.3041 |
| 0.077 | 13.2 | 123300 | 0.4359 | 0.3047 |
| 0.077 | 13.21 | 123400 | 0.4458 | 0.3039 |
| 0.0771 | 13.22 | 123500 | 0.4506 | 0.3075 |
| 0.0771 | 13.23 | 123600 | 0.4457 | 0.3079 |
| 0.0771 | 13.24 | 123700 | 0.4448 | 0.3048 |
| 0.0771 | 13.25 | 123800 | 0.4398 | 0.3036 |
| 0.0771 | 13.27 | 123900 | 0.4510 | 0.3055 |
| 0.0804 | 13.28 | 124000 | 0.4507 | 0.3059 |
| 0.0804 | 13.29 | 124100 | 0.4544 | 0.3076 |
| 0.0804 | 13.3 | 124200 | 0.4534 | 0.3073 |
| 0.0804 | 13.31 | 124300 | 0.4441 | 0.3061 |
| 0.0804 | 13.32 | 124400 | 0.4391 | 0.3075 |
| 0.0774 | 13.33 | 124500 | 0.4527 | 0.3063 |
| 0.0774 | 13.34 | 124600 | 0.4638 | 0.3057 |
| 0.0774 | 13.35 | 124700 | 0.4541 | 0.3064 |
| 0.0774 | 13.36 | 124800 | 0.4617 | 0.3078 |
| 0.0774 | 13.37 | 124900 | 0.4584 | 0.3041 |
| 0.0795 | 13.38 | 125000 | 0.4663 | 0.3032 |
| 0.0795 | 13.39 | 125100 | 0.4546 | 0.3025 |
| 0.0795 | 13.4 | 125200 | 0.4616 | 0.3021 |
| 0.0795 | 13.42 | 125300 | 0.4603 | 0.3016 |
| 0.0795 | 13.43 | 125400 | 0.4616 | 0.3040 |
| 0.0791 | 13.44 | 125500 | 0.4548 | 0.3021 |
| 0.0791 | 13.45 | 125600 | 0.4560 | 0.3025 |
| 0.0791 | 13.46 | 125700 | 0.4516 | 0.3037 |
| 0.0791 | 13.47 | 125800 | 0.4500 | 0.3013 |
| 0.0791 | 13.48 | 125900 | 0.4540 | 0.3009 |
| 0.0776 | 13.49 | 126000 | 0.4581 | 0.3026 |
| 0.0776 | 13.5 | 126100 | 0.4598 | 0.3028 |
| 0.0776 | 13.51 | 126200 | 0.4587 | 0.3038 |
| 0.0776 | 13.52 | 126300 | 0.4514 | 0.3024 |
| 0.0776 | 13.53 | 126400 | 0.4495 | 0.3036 |
| 0.0793 | 13.54 | 126500 | 0.4556 | 0.3016 |
| 0.0793 | 13.55 | 126600 | 0.4603 | 0.3025 |
| 0.0793 | 13.57 | 126700 | 0.4496 | 0.2995 |
| 0.0793 | 13.58 | 126800 | 0.4483 | 0.2969 |
| 0.0793 | 13.59 | 126900 | 0.4462 | 0.2980 |
| 0.0816 | 13.6 | 127000 | 0.4521 | 0.2982 |
| 0.0816 | 13.61 | 127100 | 0.4580 | 0.3019 |
| 0.0816 | 13.62 | 127200 | 0.4669 | 0.3009 |
| 0.0816 | 13.63 | 127300 | 0.4513 | 0.3017 |
| 0.0816 | 13.64 | 127400 | 0.4602 | 0.3015 |
| 0.0779 | 13.65 | 127500 | 0.4592 | 0.2998 |
| 0.0779 | 13.66 | 127600 | 0.4700 | 0.2981 |
| 0.0779 | 13.67 | 127700 | 0.4727 | 0.2978 |
| 0.0779 | 13.68 | 127800 | 0.4600 | 0.2983 |
| 0.0779 | 13.69 | 127900 | 0.4472 | 0.2978 |
| 0.0779 | 13.7 | 128000 | 0.4483 | 0.2984 |
| 0.0779 | 13.72 | 128100 | 0.4512 | 0.2968 |
| 0.0779 | 13.73 | 128200 | 0.4549 | 0.2988 |
| 0.0779 | 13.74 | 128300 | 0.4576 | 0.2992 |
| 0.0779 | 13.75 | 128400 | 0.4400 | 0.2974 |
| 0.0793 | 13.76 | 128500 | 0.4433 | 0.3009 |
| 0.0793 | 13.77 | 128600 | 0.4456 | 0.2982 |
| 0.0793 | 13.78 | 128700 | 0.4560 | 0.3019 |
| 0.0793 | 13.79 | 128800 | 0.4551 | 0.3008 |
| 0.0793 | 13.8 | 128900 | 0.4513 | 0.3007 |
| 0.0769 | 13.81 | 129000 | 0.4518 | 0.3008 |
| 0.0769 | 13.82 | 129100 | 0.4567 | 0.2981 |
| 0.0769 | 13.83 | 129200 | 0.4437 | 0.2985 |
| 0.0769 | 13.84 | 129300 | 0.4424 | 0.2970 |
| 0.0769 | 13.85 | 129400 | 0.4423 | 0.3010 |
| 0.0785 | 13.87 | 129500 | 0.4495 | 0.2999 |
| 0.0785 | 13.88 | 129600 | 0.4483 | 0.2975 |
| 0.0785 | 13.89 | 129700 | 0.4485 | 0.2982 |
| 0.0785 | 13.9 | 129800 | 0.4429 | 0.2972 |
| 0.0785 | 13.91 | 129900 | 0.4430 | 0.2958 |
| 0.0792 | 13.92 | 130000 | 0.4495 | 0.2954 |
| 0.0792 | 13.93 | 130100 | 0.4485 | 0.2947 |
| 0.0792 | 13.94 | 130200 | 0.4395 | 0.2972 |
| 0.0792 | 13.95 | 130300 | 0.4379 | 0.2973 |
| 0.0792 | 13.96 | 130400 | 0.4428 | 0.2989 |
| 0.0795 | 13.97 | 130500 | 0.4385 | 0.3000 |
| 0.0795 | 13.98 | 130600 | 0.4490 | 0.2983 |
| 0.0795 | 13.99 | 130700 | 0.4568 | 0.2970 |
| 0.0795 | 14.0 | 130800 | 0.4482 | 0.2963 |
| 0.0795 | 14.01 | 130900 | 0.4479 | 0.2962 |
| 0.075 | 14.03 | 131000 | 0.4565 | 0.2968 |
| 0.075 | 14.04 | 131100 | 0.4623 | 0.2962 |
| 0.075 | 14.05 | 131200 | 0.4617 | 0.2965 |
| 0.075 | 14.06 | 131300 | 0.4687 | 0.2949 |
| 0.075 | 14.07 | 131400 | 0.4718 | 0.2929 |
| 0.0709 | 14.08 | 131500 | 0.4720 | 0.2945 |
| 0.0709 | 14.09 | 131600 | 0.4604 | 0.2953 |
| 0.0709 | 14.1 | 131700 | 0.4655 | 0.2955 |
| 0.0709 | 14.11 | 131800 | 0.4695 | 0.2958 |
| 0.0709 | 14.12 | 131900 | 0.4666 | 0.2945 |
| 0.0705 | 14.13 | 132000 | 0.4605 | 0.2959 |
| 0.0705 | 14.14 | 132100 | 0.4581 | 0.2947 |
| 0.0705 | 14.15 | 132200 | 0.4597 | 0.2948 |
| 0.0705 | 14.16 | 132300 | 0.4612 | 0.2943 |
| 0.0705 | 14.18 | 132400 | 0.4611 | 0.2959 |
| 0.0727 | 14.19 | 132500 | 0.4569 | 0.2958 |
| 0.0727 | 14.2 | 132600 | 0.4556 | 0.2951 |
| 0.0727 | 14.21 | 132700 | 0.4597 | 0.2955 |
| 0.0727 | 14.22 | 132800 | 0.4472 | 0.2935 |
| 0.0727 | 14.23 | 132900 | 0.4573 | 0.2943 |
| 0.0723 | 14.24 | 133000 | 0.4572 | 0.2943 |
| 0.0723 | 14.25 | 133100 | 0.4582 | 0.2956 |
| 0.0723 | 14.26 | 133200 | 0.4599 | 0.2968 |
| 0.0723 | 14.27 | 133300 | 0.4633 | 0.2962 |
| 0.0723 | 14.28 | 133400 | 0.4604 | 0.2972 |
| 0.071 | 14.29 | 133500 | 0.4587 | 0.2971 |
| 0.071 | 14.3 | 133600 | 0.4598 | 0.2973 |
| 0.071 | 14.31 | 133700 | 0.4579 | 0.2976 |
| 0.071 | 14.33 | 133800 | 0.4539 | 0.2969 |
| 0.071 | 14.34 | 133900 | 0.4628 | 0.2961 |
| 0.0703 | 14.35 | 134000 | 0.4627 | 0.2974 |
| 0.0703 | 14.36 | 134100 | 0.4611 | 0.2974 |
| 0.0703 | 14.37 | 134200 | 0.4607 | 0.2977 |
| 0.0703 | 14.38 | 134300 | 0.4638 | 0.2983 |
| 0.0703 | 14.39 | 134400 | 0.4628 | 0.2969 |
| 0.0736 | 14.4 | 134500 | 0.4543 | 0.2965 |
| 0.0736 | 14.41 | 134600 | 0.4585 | 0.2963 |
| 0.0736 | 14.42 | 134700 | 0.4636 | 0.2950 |
| 0.0736 | 14.43 | 134800 | 0.4636 | 0.2964 |
| 0.0736 | 14.44 | 134900 | 0.4630 | 0.2958 |
| 0.0715 | 14.45 | 135000 | 0.4611 | 0.2968 |
| 0.0715 | 14.46 | 135100 | 0.4633 | 0.2966 |
| 0.0715 | 14.48 | 135200 | 0.4664 | 0.2954 |
| 0.0715 | 14.49 | 135300 | 0.4670 | 0.2945 |
| 0.0715 | 14.5 | 135400 | 0.4638 | 0.2961 |
| 0.073 | 14.51 | 135500 | 0.4635 | 0.2965 |
| 0.073 | 14.52 | 135600 | 0.4639 | 0.2956 |
| 0.073 | 14.53 | 135700 | 0.4617 | 0.2948 |
| 0.073 | 14.54 | 135800 | 0.4609 | 0.2933 |
| 0.073 | 14.55 | 135900 | 0.4614 | 0.2947 |
| 0.0717 | 14.56 | 136000 | 0.4567 | 0.2958 |
| 0.0717 | 14.57 | 136100 | 0.4615 | 0.2934 |
| 0.0717 | 14.58 | 136200 | 0.4606 | 0.2929 |
| 0.0717 | 14.59 | 136300 | 0.4652 | 0.2934 |
| 0.0717 | 14.6 | 136400 | 0.4664 | 0.2934 |
| 0.0717 | 14.61 | 136500 | 0.4657 | 0.2923 |
| 0.0717 | 14.63 | 136600 | 0.4633 | 0.2931 |
| 0.0717 | 14.64 | 136700 | 0.4624 | 0.2943 |
| 0.0717 | 14.65 | 136800 | 0.4615 | 0.2949 |
| 0.0717 | 14.66 | 136900 | 0.4619 | 0.2930 |
| 0.0707 | 14.67 | 137000 | 0.4608 | 0.2936 |
| 0.0707 | 14.68 | 137100 | 0.4615 | 0.2945 |
| 0.0707 | 14.69 | 137200 | 0.4605 | 0.2941 |
| 0.0707 | 14.7 | 137300 | 0.4598 | 0.2931 |
| 0.0707 | 14.71 | 137400 | 0.4596 | 0.2943 |
| 0.0694 | 14.72 | 137500 | 0.4624 | 0.2927 |
| 0.0694 | 14.73 | 137600 | 0.4614 | 0.2931 |
| 0.0694 | 14.74 | 137700 | 0.4621 | 0.2924 |
| 0.0694 | 14.75 | 137800 | 0.4589 | 0.2920 |
| 0.0694 | 14.76 | 137900 | 0.4590 | 0.2926 |
| 0.0706 | 14.78 | 138000 | 0.4588 | 0.2931 |
| 0.0706 | 14.79 | 138100 | 0.4583 | 0.2928 |
| 0.0706 | 14.8 | 138200 | 0.4552 | 0.2934 |
| 0.0706 | 14.81 | 138300 | 0.4551 | 0.2923 |
| 0.0706 | 14.82 | 138400 | 0.4555 | 0.2927 |
| 0.0717 | 14.83 | 138500 | 0.4547 | 0.2930 |
| 0.0717 | 14.84 | 138600 | 0.4546 | 0.2930 |
| 0.0717 | 14.85 | 138700 | 0.4553 | 0.2934 |
| 0.0717 | 14.86 | 138800 | 0.4554 | 0.2924 |
| 0.0717 | 14.87 | 138900 | 0.4573 | 0.2924 |
| 0.0722 | 14.88 | 139000 | 0.4582 | 0.2927 |
| 0.0722 | 14.89 | 139100 | 0.4586 | 0.2926 |
| 0.0722 | 14.9 | 139200 | 0.4570 | 0.2926 |
| 0.0722 | 14.91 | 139300 | 0.4571 | 0.2923 |
| 0.0722 | 14.93 | 139400 | 0.4564 | 0.2925 |
| 0.0698 | 14.94 | 139500 | 0.4573 | 0.2927 |
| 0.0698 | 14.95 | 139600 | 0.4574 | 0.2927 |
| 0.0698 | 14.96 | 139700 | 0.4573 | 0.2927 |
| 0.0698 | 14.97 | 139800 | 0.4576 | 0.2921 |
| 0.0698 | 14.98 | 139900 | 0.4578 | 0.2923 |
| 0.0705 | 14.99 | 140000 | 0.4579 | 0.2928 |
| 0.0705 | 15.0 | 140100 | 0.4578 | 0.2927 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
bochaowei/t5-small-finetuned-xsum-wei0 | 95438d8769ee6482bb65eb0da78515c71f9e7095 | 2021-10-20T15:10:46.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | bochaowei | null | bochaowei/t5-small-finetuned-xsum-wei0 | 2 | null | transformers | 23,745 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum-wei0
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 25.7398
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-wei0
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6289
- Rouge1: 25.7398
- Rouge2: 6.1361
- Rougel: 19.8262
- Rougelsum: 19.8284
- Gen Len: 18.7984
## Model description
More information needed
## Intended uses & limitations
More information needed
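As a starting point, a minimal summarization sketch with this checkpoint (the input article below is a placeholder, and the generation settings are illustrative rather than values used by the author) might look like:

```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint through the summarization pipeline.
summarizer = pipeline("summarization", model="bochaowei/t5-small-finetuned-xsum-wei0")

article = "Replace this with the news article you want to summarize."  # placeholder text
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```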
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.858 | 1.0 | 1701 | 2.6289 | 25.7398 | 6.1361 | 19.8262 | 19.8284 | 18.7984 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
bochaowei/t5-small-finetuned-xsum-wei1 | 95099d64137cbd722bd284e6a5f73a263d4032b4 | 2021-10-20T18:33:31.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | bochaowei | null | bochaowei/t5-small-finetuned-xsum-wei1 | 2 | null | transformers | 23,746 | Fine-tuned on 20% of the XSum training data.
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum-wei1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 27.5875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-wei1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5287
- Rouge1: 27.5875
- Rouge2: 7.4083
- Rougel: 21.5654
- Rougelsum: 21.5716
- Gen Len: 18.8205
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7677 | 1.0 | 3401 | 2.5441 | 27.4235 | 7.2208 | 21.3535 | 21.3636 | 18.8311 |
| 2.735 | 2.0 | 6802 | 2.5287 | 27.5875 | 7.4083 | 21.5654 | 21.5716 | 18.8205 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
bookemdan/DialoGPT-small-harrypotter | 4c0b7dc33d592551406706f3f5bed7cd9047e22e | 2021-08-30T19:54:26.000Z | [
"pytorch",
"conversational"
] | conversational | false | bookemdan | null | bookemdan/DialoGPT-small-harrypotter | 2 | null | null | 23,747 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
boran/berkbot | 6fcd660132afad37604066baa16848c5226a9ab1 | 2021-08-27T19:37:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | boran | null | boran/berkbot | 2 | null | transformers | 23,748 | ---
tags:
- conversational
---
# berk |
brimeggi/inexis-bot | 937b226ebb11d3561841e89e59af64dffd07d240 | 2021-10-21T04:40:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | brimeggi | null | brimeggi/inexis-bot | 2 | null | transformers | 23,749 | Entry not found |
briverse/vi-electra-base-cased | bee53523c11e6d8daa4e092237ffac1175e368bb | 2021-02-04T15:28:43.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | briverse | null | briverse/vi-electra-base-cased | 2 | null | transformers | 23,750 | Entry not found |
briverse/vi-electra-large-uncased-800 | 19df6287829d37b05efdc3da90b2c4a5f99ad0bc | 2021-02-04T15:22:00.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | briverse | null | briverse/vi-electra-large-uncased-800 | 2 | null | transformers | 23,751 | Entry not found |
brokentx/newbrokiev2 | ff11d73e86f22e1c68e610c901123a0463fb6000 | 2021-06-05T11:14:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | brokentx | null | brokentx/newbrokiev2 | 2 | null | transformers | 23,752 | ---
tags:
- conversational
---
# My Awesome Model |
bryan6aero/wav2vec2-base-timit-demo-colab | 7b515433f658e58d0ea974474db2ae0cbe01e9a4 | 2022-02-17T22:00:53.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | bryan6aero | null | bryan6aero/wav2vec2-base-timit-demo-colab | 2 | null | transformers | 23,753 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4779
- Wer: 0.3453
## Model description
More information needed
## Intended uses & limitations
More information needed
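As a starting point, a minimal inference sketch, assuming a 16 kHz mono recording (the file path below is a placeholder), could be:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("bryan6aero/wav2vec2-base-timit-demo-colab")
model = Wav2Vec2ForCTC.from_pretrained("bryan6aero/wav2vec2-base-timit-demo-colab")

# Wav2Vec2 expects 16 kHz mono input; resample if the recording uses another rate.
speech, sample_rate = torchaudio.load("example.wav")  # placeholder path
if sample_rate != 16_000:
    speech = torchaudio.transforms.Resample(sample_rate, 16_000)(speech)

inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```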
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4307 | 4.0 | 500 | 1.4129 | 0.9980 |
| 0.626 | 8.0 | 1000 | 0.4605 | 0.4499 |
| 0.2199 | 12.0 | 1500 | 0.4457 | 0.3898 |
| 0.1303 | 16.0 | 2000 | 0.4418 | 0.3771 |
| 0.0851 | 20.0 | 2500 | 0.4647 | 0.3548 |
| 0.0604 | 24.0 | 3000 | 0.4603 | 0.3499 |
| 0.0461 | 28.0 | 3500 | 0.4779 | 0.3453 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
btk/gpt2_data_random | 9b5d53aa00caa8751efdfe3cf03b23bc34ebac8d | 2021-05-21T14:30:55.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | btk | null | btk/gpt2_data_random | 2 | null | transformers | 23,754 | Entry not found |
btk/gpt2jt | 8cfe24765354b62c143afd3e87eff53805059444 | 2021-05-21T14:33:54.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | btk | null | btk/gpt2jt | 2 | null | transformers | 23,755 | Entry not found |
cahya/wav2vec2-base-artificial | 1f30a9d4bacf4e818664b45f454606161ed4cd7b | 2021-07-05T23:38:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-base-artificial | 2 | null | transformers | 23,756 | Entry not found |
cahya/wav2vec2-large-xlsr-sundanese | beffe9e905ecd0787c3e87f271f8df7142f23b5b | 2021-07-06T00:00:07.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"su",
"dataset:openslr",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-large-xlsr-sundanese | 2 | null | transformers | 23,757 | ---
language: su
datasets:
- openslr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Sundanese by cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR High quality TTS data for Sundanese
type: OpenSLR
args: su
metrics:
- name: Test WER
type: wer
value: 6.19
---
# Wav2Vec2-Large-XLSR-Sundanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [OpenSLR High quality TTS data for Sundanese](https://openslr.org/44/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, Dataset
from datasets.utils.download_manager import DownloadManager
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from pathlib import Path
import pandas as pd
def load_dataset_sundanese():
urls = [
"https://www.openslr.org/resources/44/su_id_female.zip",
"https://www.openslr.org/resources/44/su_id_male.zip"
]
dm = DownloadManager()
download_dirs = dm.download_and_extract(urls)
data_dirs = [
Path(download_dirs[0])/"su_id_female/wavs",
Path(download_dirs[1])/"su_id_male/wavs",
]
filenames = [
Path(download_dirs[0])/"su_id_female/line_index.tsv",
Path(download_dirs[1])/"su_id_male/line_index.tsv",
]
dfs = []
dfs.append(pd.read_csv(filenames[0], sep='\t4?\t', names=["path", "sentence"]))
dfs.append(pd.read_csv(filenames[1], sep='\t\t', names=["path", "sentence"]))
for i, dir in enumerate(data_dirs):
dfs[i]["path"] = dfs[i].apply(lambda row: str(data_dirs[i]) + "/" + row + ".wav", axis=1)
df = pd.concat(dfs)
# df = df.sample(frac=1, random_state=1).reset_index(drop=True)
dataset = Dataset.from_pandas(df)
dataset = dataset.remove_columns('__index_level_0__')
return dataset.train_test_split(test_size=0.1, seed=1)
dataset = load_dataset_sundanese()
test_dataset = dataset['test']
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows or using the [notebook](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Sundanese.ipynb).
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, Dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets.utils.download_manager import DownloadManager
import re
from pathlib import Path
import pandas as pd
def load_dataset_sundanese():
urls = [
"https://www.openslr.org/resources/44/su_id_female.zip",
"https://www.openslr.org/resources/44/su_id_male.zip"
]
dm = DownloadManager()
download_dirs = dm.download_and_extract(urls)
data_dirs = [
Path(download_dirs[0])/"su_id_female/wavs",
Path(download_dirs[1])/"su_id_male/wavs",
]
filenames = [
Path(download_dirs[0])/"su_id_female/line_index.tsv",
Path(download_dirs[1])/"su_id_male/line_index.tsv",
]
dfs = []
dfs.append(pd.read_csv(filenames[0], sep='\t4?\t', names=["path", "sentence"]))
dfs.append(pd.read_csv(filenames[1], sep='\t\t', names=["path", "sentence"]))
for i, dir in enumerate(data_dirs):
dfs[i]["path"] = dfs[i].apply(lambda row: str(data_dirs[i]) + "/" + row + ".wav", axis=1)
df = pd.concat(dfs)
# df = df.sample(frac=1, random_state=1).reset_index(drop=True)
dataset = Dataset.from_pandas(df)
dataset = dataset.remove_columns('__index_level_0__')
return dataset.train_test_split(test_size=0.1, seed=1)
dataset = load_dataset_sundanese()
test_dataset = dataset['test']
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-sundanese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”_\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 6.19 %
## Training
[OpenSLR High quality TTS data for Sundanese](https://openslr.org/44/) was used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Sundanese.ipynb),
and the notebook used to [evaluate it](https://github.com/cahya-wirawan/indonesian-speech-recognition/blob/main/XLSR_Wav2Vec2_for_Indonesian_Evaluation-Sundanese.ipynb) is also available.
|
calbert/indic-bert | a4b89055473ad90117d438b4614dcebcb9ce6911 | 2021-10-28T02:17:32.000Z | [
"pytorch",
"albert",
"feature-extraction",
"transformers"
] | feature-extraction | false | calbert | null | calbert/indic-bert | 2 | null | transformers | 23,758 | Entry not found |
cambridgeltl/mirrorwic-roberta-base | 3ff6d724303b5e5f7d16adb2ea44b6ad99fe9fcb | 2021-10-25T19:27:09.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | cambridgeltl | null | cambridgeltl/mirrorwic-roberta-base | 2 | null | transformers | 23,759 | Entry not found |
camille/bert-base-pruned-voc-esw0.1-40000-en-fr-cased | b681787db2d547861cf621392817be03c9bbb9a9 | 2021-05-19T13:49:02.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | camille | null | camille/bert-base-pruned-voc-esw0.1-40000-en-fr-cased | 2 | null | transformers | 23,760 | Entry not found |
camille/bert-base-pruned-voc-esw0.3-40000-en-de-cased | a5ea87bcdce6f4fe00f4551ea8a05b78e5c1d7f6 | 2021-05-19T13:49:57.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | camille | null | camille/bert-base-pruned-voc-esw0.3-40000-en-de-cased | 2 | null | transformers | 23,761 | Entry not found |
camille/bert-base-pruned-voc-esw0.3-40000-en-fr-cased | 9943df7dc4df83e659f27ad3db73e32a0ba25911 | 2021-05-19T13:51:33.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | camille | null | camille/bert-base-pruned-voc-esw0.3-40000-en-fr-cased | 2 | null | transformers | 23,762 | Entry not found |
cammy/bart-large-cnn-finetuned-weaksup-1000 | 570ae9fe39bab349b361a07dbf444dfbec39cb2d | 2022-02-22T06:34:42.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/bart-large-cnn-finetuned-weaksup-1000 | 2 | null | transformers | 23,763 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-weaksup-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-1000
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6325
- Rouge1: 26.1954
- Rouge2: 10.7128
- Rougel: 19.3873
- Rougelsum: 22.785
- Gen Len: 66.85
## Model description
More information needed
## Intended uses & limitations
More information needed
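As a starting point, a minimal sketch with explicit beam-search generation (the input text and generation settings are illustrative; the card above reports an average summary length of roughly 67 tokens):

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("cammy/bart-large-cnn-finetuned-weaksup-1000")
model = BartForConditionalGeneration.from_pretrained("cammy/bart-large-cnn-finetuned-weaksup-1000")

document = "Replace this with the article to summarize."  # placeholder input
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=1024)

with torch.no_grad():
    # max_length=100 comfortably covers the ~67-token average summary length reported above.
    summary_ids = model.generate(**inputs, num_beams=4, max_length=100)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```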
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.3896 | 1.0 | 1000 | 1.6325 | 26.1954 | 10.7128 | 19.3873 | 22.785 | 66.85 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
carlosejimenez/vokenization-bert-small-k-10-v1-epoch0039 | 847457a6639b82abfa646b19f3c2b1d4c3fc2994 | 2021-12-06T00:00:45.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | carlosejimenez | null | carlosejimenez/vokenization-bert-small-k-10-v1-epoch0039 | 2 | null | transformers | 23,764 | Entry not found |
carlosejimenez/vokenization-bert-small-v1-epoch0039 | 9d4af17ebb91687831d6805476c63e5d409718e9 | 2021-12-04T22:40:08.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | carlosejimenez | null | carlosejimenez/vokenization-bert-small-v1-epoch0039 | 2 | null | transformers | 23,765 | Entry not found |
carlosejimenez/wiki103_bert_small_non_visual_only_e27 | e034d188f2bf31f8511e3c30a3b6f97654822b58 | 2021-12-14T17:05:35.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | carlosejimenez | null | carlosejimenez/wiki103_bert_small_non_visual_only_e27 | 2 | null | transformers | 23,766 | Entry not found |
cartyparty/DialoGPT-small-iteration1 | d4e83828658050fe7194b40c37c63835fe78ef20 | 2021-08-30T18:29:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | cartyparty | null | cartyparty/DialoGPT-small-iteration1 | 2 | null | transformers | 23,767 | ---
tags:
- conversational
---
# Iteration 1 |
castorini/dkrr-dpr-nq-retriever | 7052adf67b403b0625f0360f3f2f46e7b7abae34 | 2022-02-13T17:46:38.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2012.04584",
"transformers"
] | feature-extraction | false | castorini | null | castorini/dkrr-dpr-nq-retriever | 2 | null | transformers | 23,768 | This model is converted from the original DKRR [repo](https://github.com/facebookresearch/FiD) and ported into Pyserini:
```
@misc{izacard2020distilling,
title={Distilling Knowledge from Reader to Retriever for Question Answering},
author={Gautier Izacard and Edouard Grave},
year={2020},
eprint={2012.04584},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
cd-dvd/testmodel2 | 9d208029d6d8479f34e6048b6a68654bc9fe8f91 | 2022-01-27T19:45:14.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"Text Generation"
] | text-generation | false | cd-dvd | null | cd-dvd/testmodel2 | 2 | null | transformers | 23,769 | ---
tags:
- Text Generation
---
# GIMPLEARN knows modeltest2
# To generate a conversation, use an input such as `Human: What should I do?\nAI:`
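A minimal generation sketch using that prompt format (sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="cd-dvd/testmodel2")

# The prompt follows the Human/AI turn format described above; generation continues after "AI:".
prompt = "Human: What should I do?\nAI:"
output = generator(prompt, max_length=60, do_sample=True, top_p=0.9)
print(output[0]["generated_text"])
``` |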
cestwc/bart-base-concise-baseline | daf2c38e470f67bc12f4b1ab65b6fb5cfdac0e85 | 2022-01-06T10:37:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cestwc | null | cestwc/bart-base-concise-baseline | 2 | null | transformers | 23,770 | Entry not found |
cfinley/punct_restore_fr | a874b865e945131d35c151b7e8b3a779d38a1da4 | 2021-06-27T19:03:56.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] | token-classification | false | cfinley | null | cfinley/punct_restore_fr | 2 | 1 | transformers | 23,771 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: punct_restore_fr
results:
- task:
name: Token Classification
type: token-classification
metric:
name: Accuracy
type: accuracy
value: 0.991500810518732
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# punct_restore_fr
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on a raw French OpenSubtitles dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0301
- Precision: 0.9601
- Recall: 0.9527
- F1: 0.9564
- Accuracy: 0.9915
## Model description
Classifies each token as either the beginning of a French sentence (B-SENT) or anything else (O).
## Intended uses & limitations
This model aims to help restore punctuation in French YouTube auto-generated subtitles. With sentence boundaries recovered, one can compute corpus measures such as words per sentence, grammar structures per sentence, etc.
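A minimal sketch of recovering sentence boundaries with this checkpoint (the example sentence is illustrative):

```python
from transformers import pipeline

# Token classification; tokens tagged B-SENT mark likely sentence beginnings.
# By default the pipeline drops plain "O" tokens from its output.
tagger = pipeline("token-classification", model="cfinley/punct_restore_fr", aggregation_strategy="simple")

text = "bonjour tout le monde aujourd'hui on parle de ponctuation c'est parti"
for entity in tagger(text):
    print(entity["entity_group"], entity["word"], entity["score"])
```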
## Training and evaluation data
1 million Open Subtitles (French) sentences. 80%/10%/10% training/validation/test split.
The sentences:
- were lower-cased
- had end punctuation (.?!) removed
- were of length between 7 and 70 words
- had the first word of each sentence tagged B-SENT.
- had all other words marked O.
Token/tag pairs were batched together in groups of 64. This exposes the model to B-SENT and O tags in a variety of positions and keeps training examples from consisting of a single sentence; otherwise only the first word of each sequence would ever be labeled B-SENT.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.1
- Pytorch 1.9.0+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
|
cgou/fin_RoBERTa-v1-finetuned-squad | 77ad8d601915ce5a7dea5b538827e1cbfae103ef | 2021-12-14T21:36:06.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | cgou | null | cgou/fin_RoBERTa-v1-finetuned-squad | 2 | null | transformers | 23,772 | Entry not found |
chaitanya97/german_trained | 73a78d380a463d8ff8f765f38a1aa974cf0e3ef8 | 2021-10-26T12:37:19.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chaitanya97 | null | chaitanya97/german_trained | 2 | null | transformers | 23,773 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: german_trained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german_trained
This model is a fine-tuned version of [flozi00/wav2vec-xlsr-german](https://huggingface.co/flozi00/wav2vec-xlsr-german) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9367
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 12.0352 | 5.0 | 5 | 12.6165 | 1.0 |
| 4.0249 | 10.0 | 10 | 6.6453 | 1.0 |
| 2.6661 | 15.0 | 15 | 5.7873 | 1.0 |
| 2.4123 | 20.0 | 20 | 4.3250 | 1.0 |
| 1.9481 | 25.0 | 25 | 3.9899 | 1.0 |
| 1.7533 | 30.0 | 30 | 3.9367 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews-baseline-final | efa8e2706e717f89f1d64df2a54739ac0173ac2d | 2021-12-05T18:45:24.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chandank | null | chandank/bart-base-finetuned-kaggglenews-baseline-final | 2 | null | transformers | 23,774 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-kaggglenews-baseline-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-baseline-final
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6942
- Rouge1: 28.581
- Rouge2: 16.3417
- Rougel: 24.1277
- Rougelsum: 25.9797
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.7514 | 27.911 | 15.7038 | 23.6466 | 25.2111 | 20.0 |
| 2.0585 | 2.0 | 990 | 1.6655 | 28.7581 | 16.4875 | 24.2669 | 26.1676 | 20.0 |
| 1.4173 | 3.0 | 1485 | 1.6942 | 28.581 | 16.3417 | 24.1277 | 25.9797 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews-batch8 | 1fe651ad0cc3f73590483f86bcb7e6180fc762c8 | 2021-12-02T09:16:30.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chandank | null | chandank/bart-base-finetuned-kaggglenews-batch8 | 2 | null | transformers | 23,775 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-kaggglenews-batch8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.6409 | 27.9647 | 15.4352 | 23.611 | 25.107 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews-fact-corrector-I | 015191523cccb8f3e8b895c4ab850f23ae8f8564 | 2021-12-05T20:45:53.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chandank | null | chandank/bart-base-finetuned-kaggglenews-fact-corrector-I | 2 | null | transformers | 23,776 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-kaggglenews-fact-corrector-I
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-fact-corrector-I
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 432 | 1.5483 | 28.9811 | 16.5711 | 24.7826 | 26.4132 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kagglenews-entityfiltering | dabdfecb4cf49a59cc694b38066696942953d961 | 2021-10-27T01:06:10.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chandank | null | chandank/bart-base-finetuned-kagglenews-entityfiltering | 2 | null | transformers | 23,777 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-kagglenews-entityfiltering
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kagglenews-entityfiltering
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5703
- Rouge1: 28.2719
- Rouge2: 15.6883
- Rougel: 24.0674
- Rougelsum: 25.616
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9187 | 1.0 | 863 | 1.5703 | 28.2719 | 15.6883 | 24.0674 | 25.616 | 20.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
|
charsiu/en_w2v2_ctc_libris_and_cv | 70f5061463f2927a27236d7e9d309cf0fd5282b3 | 2021-10-03T04:59:47.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | charsiu | null | charsiu/en_w2v2_ctc_libris_and_cv | 2 | 1 | transformers | 23,778 | Entry not found |
charsiu/en_w2v2_tiny_fc_10ms | 8272e6ec07582696d212c2b15bdd271c92ae64ee | 2021-12-17T02:18:12.000Z | [
"pytorch",
"wav2vec2",
"transformers"
] | null | false | charsiu | null | charsiu/en_w2v2_tiny_fc_10ms | 2 | 2 | transformers | 23,779 | Entry not found |
charsiu/zh_xlsr_fc_20ms | 292fb78943f6e9390bb60c77df19b59a95c7ae0b | 2021-12-15T18:53:10.000Z | [
"pytorch",
"wav2vec2",
"transformers"
] | null | false | charsiu | null | charsiu/zh_xlsr_fc_20ms | 2 | null | transformers | 23,780 | Entry not found |
chicaaago/coomaa_sensei | 0ac40933a5462a3f0cbd19d5328a1048082ebad5 | 2021-11-12T20:53:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | chicaaago | null | chicaaago/coomaa_sensei | 2 | null | transformers | 23,781 | Entry not found |
chinhon/distilgpt2-sgnews | f574b255e0cfa7d7de905074c20913110f446167 | 2021-10-28T14:12:13.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | chinhon | null | chinhon/distilgpt2-sgnews | 2 | null | transformers | 23,782 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-sgnews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-sgnews
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1516
## Model description
More information needed
## Intended uses & limitations
More information needed
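As a starting point, a minimal text-generation sketch (the prompt below is an arbitrary placeholder, and the sampling settings are illustrative) could be:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="chinhon/distilgpt2-sgnews")

prompt = "The ministry said on Monday that"  # placeholder prompt
output = generator(prompt, max_length=50, do_sample=True, num_return_sequences=1)
print(output[0]["generated_text"])
```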
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.3558 | 1.0 | 23769 | 3.2316 |
| 3.2558 | 2.0 | 47538 | 3.1683 |
| 3.2321 | 3.0 | 71307 | 3.1516 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
chip/DialoGPT-small-chizuru | d79ea54ca63b874a581d9fc7f24b738fabfa6147 | 2021-09-12T07:00:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | chip | null | chip/DialoGPT-small-chizuru | 2 | null | transformers | 23,783 | ---
tags:
- conversational
---
Chizuru Ichinose DialoGPT Model. |
chmanoj/xls-r-1B-te | 09da099798a421404b3a7790982a37c7fdc53865 | 2022-03-24T11:53:32.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"te",
"dataset:openslr",
"dataset:SLR66",
"transformers",
"openslr_SLR66",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chmanoj | null | chmanoj/xls-r-1B-te | 2 | null | transformers | 23,784 | ---
language:
- te
license: apache-2.0
tags:
- automatic-speech-recognition
- openslr_SLR66
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- openslr
- SLR66
metrics:
- wer
model-index:
- name: xls-r-1B-te
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: openslr
name: Open SLR
args: SLR66
metrics:
- type: wer
value: 20.624
name: Test WER
- type: cer
value: 3.979
name: Test CER
- type: wer
value: 26.14777618364419
name: Test WER (without LM)
- type: cer
value: 4.932543184970369
name: Test CER (without LM)
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the OPENSLR_SLR66 - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3119
- Wer: 0.2613
### Evaluation metrics
| Metric | Split | Decode with LM | Value |
|:------:|:------:|:--------------:|:---------:|
| WER | Train | No | 5.36 |
| CER | Train | No | 1.11 |
| WER | Test | No | 26.14 |
| CER | Test | No | 4.93 |
| WER | Train | Yes | 5.04 |
| CER | Train | Yes | 1.07 |
| WER | Test | Yes | 20.69 |
| CER | Test | Yes | 3.986 |
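As a quick reference, a minimal transcription sketch with the ASR pipeline (the audio path is a placeholder; whether the hosted processor bundles the language model behind the "with LM" rows is not documented here):

```python
from transformers import pipeline

# chunk_length_s splits long recordings into chunks so inference fits in memory.
asr = pipeline("automatic-speech-recognition", model="chmanoj/xls-r-1B-te", chunk_length_s=10)

print(asr("telugu_sample.wav"))  # placeholder path to a 16 kHz recording
```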
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 150.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 2.9038 | 4.8 | 500 | 3.0125 | 1.0 |
| 1.3777 | 9.61 | 1000 | 0.8681 | 0.8753 |
| 1.1436 | 14.42 | 1500 | 0.6256 | 0.7961 |
| 1.0997 | 19.23 | 2000 | 0.5244 | 0.6875 |
| 1.0363 | 24.04 | 2500 | 0.4585 | 0.6276 |
| 0.7996 | 28.84 | 3000 | 0.4072 | 0.5295 |
| 0.825 | 33.65 | 3500 | 0.3590 | 0.5222 |
| 0.8018 | 38.46 | 4000 | 0.3678 | 0.4671 |
| 0.7545 | 43.27 | 4500 | 0.3474 | 0.3962 |
| 0.7375 | 48.08 | 5000 | 0.3224 | 0.3869 |
| 0.6198 | 52.88 | 5500 | 0.3233 | 0.3630 |
| 0.6608 | 57.69 | 6000 | 0.3029 | 0.3308 |
| 0.645 | 62.5 | 6500 | 0.3195 | 0.3722 |
| 0.5249 | 67.31 | 7000 | 0.3004 | 0.3202 |
| 0.4875 | 72.11 | 7500 | 0.2826 | 0.2992 |
| 0.5171 | 76.92 | 8000 | 0.2962 | 0.2976 |
| 0.4974 | 81.73 | 8500 | 0.2990 | 0.2933 |
| 0.4387 | 86.54 | 9000 | 0.2834 | 0.2755 |
| 0.4511 | 91.34 | 9500 | 0.2886 | 0.2787 |
| 0.4112 | 96.15 | 10000 | 0.3093 | 0.2976 |
| 0.4064 | 100.96 | 10500 | 0.3123 | 0.2863 |
| 0.4047 | 105.77 | 11000 | 0.2968 | 0.2719 |
| 0.3519 | 110.57 | 11500 | 0.3106 | 0.2832 |
| 0.3719 | 115.38 | 12000 | 0.3030 | 0.2737 |
| 0.3669 | 120.19 | 12500 | 0.2964 | 0.2714 |
| 0.3386 | 125.0 | 13000 | 0.3101 | 0.2714 |
| 0.3137 | 129.8 | 13500 | 0.3063 | 0.2710 |
| 0.3008 | 134.61 | 14000 | 0.3082 | 0.2617 |
| 0.301 | 139.42 | 14500 | 0.3121 | 0.2628 |
| 0.3291 | 144.23 | 15000 | 0.3105 | 0.2612 |
| 0.3133 | 149.04 | 15500 | 0.3114 | 0.2624 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
chmanoj/xls-r-300m-sv | d62004c679185147fcbf705031c4f7e02d76a96c | 2022-01-26T00:01:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chmanoj | null | chmanoj/xls-r-300m-sv | 2 | null | transformers | 23,785 | ---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8004
- Wer: 0.7139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.6683 | 1.45 | 500 | 1.7698 | 1.0041 |
| 1.9548 | 2.91 | 1000 | 1.0890 | 0.8602 |
| 1.9568 | 4.36 | 1500 | 1.0878 | 0.8680 |
| 1.9497 | 5.81 | 2000 | 1.1501 | 0.8838 |
| 1.8453 | 7.27 | 2500 | 1.0452 | 0.8418 |
| 1.6952 | 8.72 | 3000 | 0.9153 | 0.7823 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.10.3
|
chmanoj/xls-r-300m-ta | fcd1725bd313658bf7746c960e78fbdcfacca62a | 2022-01-29T10:51:55.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | chmanoj | null | chmanoj/xls-r-300m-ta | 2 | null | transformers | 23,786 | Entry not found |
chmanoj/xls-r-demo-test | 2255388f9298cbd07a56cae9e89bddf3f0b57468 | 2022-01-25T19:44:24.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | chmanoj | null | chmanoj/xls-r-demo-test | 2 | null | transformers | 23,787 | ---
language:
- ab
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8786
- Wer: 1.3460
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
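The WER figure reported above is the word error rate on the evaluation split. As a hedged illustration (not from the original card), the snippet below shows how such a score can be computed with the `evaluate` library, using placeholder transcripts.
```python
# Hedged illustration, not from the original card: computing word error rate
# with the evaluate library (the "wer" metric requires the jiwer package).
import evaluate

wer_metric = evaluate.load("wer")

references = ["this is a test"]        # placeholder reference transcript
predictions = ["this is the test"]     # placeholder model output
print(wer_metric.compute(references=references, predictions=predictions))
```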
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.10.3
|
christophalt/test-model | a70320f1b36b0f2f362b86a51dda598ef2b653d5 | 2021-11-19T09:03:01.000Z | [
"pytorch",
"transformers"
] | null | false | christophalt | null | christophalt/test-model | 2 | null | transformers | 23,788 | Entry not found |
cimm-kzn/endr-bert | 69ed91b1dfd0a46cead8f70ec05a9aaab2d64f94 | 2020-12-11T21:35:42.000Z | [
"pytorch",
"ru",
"en",
"arxiv:2004.03659",
"transformers"
] | null | false | cimm-kzn | null | cimm-kzn/endr-bert | 2 | null | transformers | 23,789 | ---
language:
- ru
- en
---
## EnDR-BERT
EnDR-BERT - Multilingual, Cased, pretrained on the English collection of consumer comments on drug administration from [2]. Pre-training was based on the [original BERT code](https://github.com/google-research/bert) provided by Google. In particular, Multi-BERT was used for initialization, and all parameters are the same as in Multi-BERT. Training details are described in our paper. \
link: https://yadi.sk/d/-PTn0xhk1PqvgQ
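As a hedged sketch (not part of the original card), the snippet below loads the checkpoint hosted under this repository id with the standard `transformers` auto classes to obtain contextual embeddings; whether the Hub copy mirrors the Yandex Disk weights above is an assumption.
```python
# Hedged sketch, not part of the original card: load the checkpoint hosted
# under this repository id and extract contextual embeddings.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cimm-kzn/endr-bert")
model = AutoModel.from_pretrained("cimm-kzn/endr-bert")

inputs = tokenizer("The drug relieved my headache quickly.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence length, hidden size)
```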
## Citing & Authors
If you find this repository helpful, feel free to cite our publication:
[1] Tutubalina E, Alimova I, Miftahutdinov Z, et al. The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews.//Bioinformatics. - 2020.
preprint: https://arxiv.org/abs/2004.03659
```
@article{10.1093/bioinformatics/btaa675,
author = {Tutubalina, Elena and Alimova, Ilseyar and Miftahutdinov, Zulfat and Sakhovskiy, Andrey and Malykh, Valentin and Nikolenko, Sergey},
title = "{The Russian Drug Reaction Corpus and Neural Models for Drug Reactions and Effectiveness Detection in User Reviews}",
journal = {Bioinformatics},
year = {2020},
month = {07},
issn = {1367-4803},
doi = {10.1093/bioinformatics/btaa675},
url = {https://doi.org/10.1093/bioinformatics/btaa675},
note = {btaa675},
eprint = {https://academic.oup.com/bioinformatics/advance-article-pdf/doi/10.1093/bioinformatics/btaa675/33539752/btaa675.pdf},
}
```
[2] Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE. Using semantic analysis of texts for the identification of drugs with similar therapeutic effects. // Russian Chemical Bulletin. 2017. Vol. 66, No. 11, pp. 2180-2189.
[link to paper](https://www.researchgate.net/profile/Elena_Tutubalina/publication/323751823_Using_semantic_analysis_of_texts_for_the_identification_of_drugs_with_similar_therapeutic_effects/links/5bf7cfc3299bf1a0202cbc1f/Using-semantic-analysis-of-texts-for-the-identification-of-drugs-with-similar-therapeutic-effects.pdf)
```
@article{tutubalina2017using,
title={Using semantic analysis of texts for the identification of drugs with similar therapeutic effects},
author={Tutubalina, EV and Miftahutdinov, Z Sh and Nugmanov, RI and Madzhidov, TI and Nikolenko, SI and Alimova, IS and Tropsha, AE},
journal={Russian Chemical Bulletin},
volume={66},
number={11},
pages={2180--2189},
year={2017},
publisher={Springer}
}
```
|
ck46/camembert-base | 78e0e20263bf80aa2a35201134eb2ccc60fb4122 | 2021-11-07T00:01:45.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ck46 | null | ck46/camembert-base | 2 | null | transformers | 23,790 | Entry not found |
ck46/t5-base-qg-prefix | 5f489e2a5c7715791156dc8ff0216f50667624f4 | 2021-12-24T14:32:30.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ck46 | null | ck46/t5-base-qg-prefix | 2 | null | transformers | 23,791 | Entry not found |
ck46/t5-base-squad-qa-qg | 11d03e07b88d510d9dcdbbcc2497d9d032792e61 | 2021-12-24T15:02:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ck46 | null | ck46/t5-base-squad-qa-qg | 2 | null | transformers | 23,792 | Entry not found |
cl-nagoya/defsent-bert-base-uncased-max | a962345843e4f744bb6614c226bb471d19792038 | 2021-08-05T05:38:53.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cl-nagoya | null | cl-nagoya/defsent-bert-base-uncased-max | 2 | null | transformers | 23,793 | Entry not found |
cl-nagoya/defsent-bert-base-uncased-mean | b6d0e0ef9c59461a4a2a2d5b8d76666897ef1aad | 2021-08-05T05:35:05.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cl-nagoya | null | cl-nagoya/defsent-bert-base-uncased-mean | 2 | null | transformers | 23,794 | Entry not found |
cl-nagoya/defsent-bert-large-uncased-max | b43bd33acb1c94a3648fa401ce660fb06172826d | 2021-08-05T05:47:35.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cl-nagoya | null | cl-nagoya/defsent-bert-large-uncased-max | 2 | null | transformers | 23,795 | Entry not found |
cl-nagoya/defsent-roberta-base-mean | c2026661dbab8291b5ef9a234ea9e0297398e492 | 2021-08-05T05:47:46.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cl-nagoya | null | cl-nagoya/defsent-roberta-base-mean | 2 | null | transformers | 23,796 | Entry not found |
cl-nagoya/defsent-roberta-large-max | 76691980e30dbc591180b8dadfa20f4938b3d5d6 | 2021-08-05T05:49:11.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cl-nagoya | null | cl-nagoya/defsent-roberta-large-max | 2 | null | transformers | 23,797 | Entry not found |
cl-nagoya/defsent-roberta-large-mean | d17bca04b03a9b2f1c6cd275060d20f846e736de | 2021-08-05T05:49:00.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cl-nagoya | null | cl-nagoya/defsent-roberta-large-mean | 2 | null | transformers | 23,798 | Entry not found |
clairesb/kindness_bot_repo | 0dd4f5cdfea4b1f07e0987d7cda42f004cd7f01e | 2021-10-25T04:33:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | clairesb | null | clairesb/kindness_bot_repo | 2 | null | transformers | 23,799 | ---
tags:
- conversational
---
# Affirmation Bot |