modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
osanseviero/dummy-model | 3b13361728535c178e4f14fb9d98dd96eb142a4f | 2021-07-05T16:23:35.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | osanseviero | null | osanseviero/dummy-model | 1 | null | transformers | 30,100 | Entry not found |
osanseviero/just-a-test | 19599b0401f3370a835ed08bcd61e7943608c690 | 2022-07-01T13:51:55.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"sentence-transformers",
"causal-lm",
"license:cc-by-sa-4.0",
"sentence-similarity"
] | sentence-similarity | false | osanseviero | null | osanseviero/just-a-test | 1 | null | sentence-transformers | 30,101 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- causal-lm
license:
- cc-by-sa-4.0
---
# TODO: Name of Model
TODO: Description
## Model Description
TODO: Add relevant content
(0) Base Transformer Type: RobertaModel
(1) Pooling mean
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence"]
model = SentenceTransformer(TODO)
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Define a pooling function (you can swap in your own).
# Max Pooling - take the max value over time for every dimension.
def max_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    token_embeddings[input_mask_expanded == 0] = -1e9  # Set padding tokens to a large negative value
    max_over_time = torch.max(token_embeddings, 1)[0]
    return max_over_time
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained(TODO)
model = AutoModel.from_pretrained(TODO)
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = max_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## TODO: Training Procedure
## TODO: Evaluation Results
## TODO: Citing & Authors
|
osanseviero/my-new-sentence-transformer | a2feea590b30db4c579a39a0a12372f6cb430c29 | 2021-06-28T10:36:13.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | osanseviero | null | osanseviero/my-new-sentence-transformer | 1 | null | sentence-transformers | 30,102 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/paraphrase-xlm-r-multilingual-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-xlm-r-multilingual-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
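Since the card names semantic search as a target use case, here is a minimal sketch of ranking sentences by cosine similarity on top of the embeddings above. It is not part of the original card; it assumes the `util.cos_sim` helper shipped with recent `sentence-transformers` releases, and the query/corpus strings are made-up examples.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/paraphrase-xlm-r-multilingual-v1')

# Embed a query and a small corpus, then score the corpus by cosine similarity.
query_embedding = model.encode("How do I reset my password?", convert_to_tensor=True)
corpus_embeddings = model.encode(
    ["Click 'Forgot password' on the login page.", "Our office opens at 9am."],
    convert_to_tensor=True,
)
scores = util.cos_sim(query_embedding, corpus_embeddings)  # shape: (1, num_corpus_sentences)
print(scores)
```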
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-xlm-r-multilingual-v1')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-xlm-r-multilingual-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-xlm-r-multilingual-v1)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
osyvokon/xslr-commonvoice | b078d4c6985e9277586fb9ee7c7055569d7c8a9d | 2021-11-02T14:56:08.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | osyvokon | null | osyvokon/xslr-commonvoice | 1 | null | transformers | 30,103 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: xslr-commonvoice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xslr-commonvoice
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3835
- Wer: 0.3450
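As a reminder of what the WER figure above measures, here is a minimal sketch of computing word error rate from decoded predictions and reference transcripts. It is not from the original card and assumes the third-party `jiwer` package with made-up example strings; the training script itself may have used a different metric implementation.
```python
import jiwer

references = ["bu bir deneme cümlesidir"]   # ground-truth transcripts (hypothetical examples)
hypotheses = ["bu bir deneme cümlesi"]      # model outputs after CTC decoding

wer = jiwer.wer(references, hypotheses)     # fraction of word-level edits needed to match the reference
print(f"WER: {wer:.4f}")
```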
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
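The list above maps roughly onto `transformers.TrainingArguments` as sketched below. This is an illustration rather than the exact script this card was generated from; the output directory is hypothetical, and the effective train batch size of 32 comes from 16 per device × 2 gradient-accumulation steps.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./xslr-commonvoice",   # hypothetical output directory
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,     # 16 * 2 = 32 effective train batch size
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=15.0,
    fp16=True,                         # "Native AMP" mixed precision
)
```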
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.92 | 100 | 3.5761 | 1.0 |
| No log | 1.83 | 200 | 3.0512 | 0.9999 |
| No log | 2.75 | 300 | 1.0185 | 0.8188 |
| No log | 3.67 | 400 | 0.5936 | 0.6411 |
| 3.2139 | 4.59 | 500 | 0.4986 | 0.5267 |
| 3.2139 | 5.5 | 600 | 0.4327 | 0.4732 |
| 3.2139 | 6.42 | 700 | 0.4227 | 0.4462 |
| 3.2139 | 7.34 | 800 | 0.4213 | 0.4291 |
| 3.2139 | 8.26 | 900 | 0.4016 | 0.4033 |
| 0.22 | 9.17 | 1000 | 0.3987 | 0.3825 |
| 0.22 | 10.09 | 1100 | 0.4065 | 0.3867 |
| 0.22 | 11.01 | 1200 | 0.3929 | 0.3842 |
| 0.22 | 11.93 | 1300 | 0.3775 | 0.3687 |
| 0.22 | 12.84 | 1400 | 0.3891 | 0.3536 |
| 0.1005 | 13.76 | 1500 | 0.3850 | 0.3492 |
| 0.1005 | 14.68 | 1600 | 0.3823 | 0.3441 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
|
othrif/wav2vec_test | 7f73f6cff9dcc9fe8b4d144360b4e8600b53b4a0 | 2021-03-29T02:48:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"dataset:https://arabicspeech.org/",
"transformers",
"audio",
"speech",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | othrif | null | othrif/wav2vec_test | 1 | null | transformers | 30,104 | ---
language: ar
datasets:
- https://arabicspeech.org/
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Egyptian by Zaid Alyafeai and Othmane Rifki
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: arabicspeech.org MGB-3
type: arabicspeech.org MGB-3
args: ar
metrics:
- name: Test WER
type: wer
value: 55.2
---
# Test Wav2Vec2 with Egyptian Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Egyptian Arabic using the [arabicspeech.org MGB-3](https://arabicspeech.org/mgb3-asr/) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("arabic_speech_corpus", split="test")
processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec_test")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec_test")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
``` |
pablouribe/xls-r-ab-test | 7cd3678c4a6a1c381afd67198aefabb8fed5092c | 2022-01-30T05:13:34.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | pablouribe | null | pablouribe/xls-r-ab-test | 1 | null | transformers | 30,105 | ---
language:
- ab
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 133.2596
- Wer: 19.1571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
pashin/DialoGPT-small-ironman-3 | e15befb54d853aab93b5f52b55ea2b7152b1f140 | 2021-10-08T16:53:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | pashin | null | pashin/DialoGPT-small-ironman-3 | 1 | null | transformers | 30,106 | ---
tags:
- conversational
---
# iron man 3 |
pashin/DialoGPT-small-ironman1 | 84c5d94ca232d1ea6527d1a12664db7d6b5a7c84 | 2021-10-06T06:19:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | pashin | null | pashin/DialoGPT-small-ironman1 | 1 | null | transformers | 30,107 | ---
tags:
- conversational
---
# Iron Man 1 DialoGPT Model |
patNike/baby_model | 037a09cea683a2b7947a7919b258605b1b445ded | 2021-11-02T14:23:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | patNike | null | patNike/baby_model | 1 | null | transformers | 30,108 | Entry not found |
patrickvonplaten/data2vec-base-960h | 807ee1e1102f40aa8f971a73558e65fddf594c10 | 2022-02-18T18:14:34.000Z | [
"pytorch",
"data2vec-audio",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/data2vec-base-960h | 1 | 1 | transformers | 30,109 | Entry not found |
patrickvonplaten/hello_2b | a17fddf29d666cf1b17dfb5fc62999ef6c57c886 | 2021-11-03T19:58:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/hello_2b | 1 | null | transformers | 30,110 | ---
language:
- tr
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: hello_2b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hello_2b
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-2b](https://huggingface.co/facebook/wav2vec2-xls-r-2b) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2725
- Wer: 0.9531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1646 | 0.92 | 100 | 3.2106 | 1.0 |
| 0.368 | 1.85 | 200 | 2.9963 | 1.0 |
| 0.2252 | 2.77 | 300 | 2.8078 | 0.9999 |
| 0.1546 | 3.7 | 400 | 2.3458 | 0.9996 |
| 0.1468 | 4.63 | 500 | 2.0086 | 0.9986 |
| 0.1261 | 5.55 | 600 | 1.8269 | 0.9985 |
| 0.1206 | 6.48 | 700 | 1.7347 | 0.9956 |
| 0.1959 | 7.4 | 800 | 1.6819 | 0.9955 |
| 0.0502 | 8.33 | 900 | 1.6809 | 0.9965 |
| 0.0811 | 9.26 | 1000 | 1.6674 | 0.9916 |
| 0.0534 | 10.18 | 1100 | 1.5719 | 0.9898 |
| 0.0402 | 11.11 | 1200 | 1.4620 | 0.9821 |
| 0.057 | 12.04 | 1300 | 1.3015 | 0.9554 |
| 0.0385 | 12.96 | 1400 | 1.3798 | 0.9600 |
| 0.0422 | 13.88 | 1500 | 1.3538 | 0.9699 |
| 0.014 | 14.81 | 1600 | 1.2507 | 0.9443 |
| 0.0232 | 15.74 | 1700 | 1.3318 | 0.9465 |
| 0.0554 | 16.66 | 1800 | 1.2784 | 0.9462 |
| 0.0316 | 17.59 | 1900 | 1.2503 | 0.9481 |
| 0.0524 | 18.51 | 2000 | 1.3920 | 0.9604 |
| 0.0142 | 19.44 | 2100 | 1.4224 | 0.9698 |
| 0.0288 | 20.37 | 2200 | 1.3475 | 0.9635 |
| 0.0106 | 21.29 | 2300 | 1.2232 | 0.9264 |
| 0.0396 | 22.22 | 2400 | 1.3323 | 0.9615 |
| 0.0349 | 23.15 | 2500 | 1.2741 | 0.9587 |
| 0.0121 | 24.07 | 2600 | 1.2671 | 0.9586 |
| 0.0224 | 24.99 | 2700 | 1.3001 | 0.9611 |
| 0.0449 | 25.92 | 2800 | 1.2777 | 0.9572 |
| 0.0186 | 26.85 | 2900 | 1.2766 | 0.9607 |
| 0.0365 | 27.77 | 3000 | 1.2935 | 0.9598 |
| 0.0105 | 28.7 | 3100 | 1.2761 | 0.9588 |
| 0.021 | 29.63 | 3200 | 1.2686 | 0.9528 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/hello_2b_2 | bafcbc64e62a7992e26f73889c212e9ab1d39094 | 2021-11-04T05:07:39.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/hello_2b_2 | 1 | null | transformers | 30,111 | ---
language:
- tr
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: hello_2b_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hello_2b_2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-2b](https://huggingface.co/facebook/wav2vec2-xls-r-2b) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5324
- Wer: 0.5109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3543 | 0.92 | 100 | 3.4342 | 1.0 |
| 3.0521 | 1.85 | 200 | 3.1243 | 1.0 |
| 1.4905 | 2.77 | 300 | 1.1760 | 0.9876 |
| 0.5852 | 3.7 | 400 | 0.7678 | 0.7405 |
| 0.4442 | 4.63 | 500 | 0.7637 | 0.7179 |
| 0.3816 | 5.55 | 600 | 0.7114 | 0.6726 |
| 0.2923 | 6.48 | 700 | 0.7109 | 0.6837 |
| 0.2771 | 7.4 | 800 | 0.6800 | 0.6530 |
| 0.1643 | 8.33 | 900 | 0.6031 | 0.6089 |
| 0.2931 | 9.26 | 1000 | 0.6467 | 0.6308 |
| 0.1495 | 10.18 | 1100 | 0.6042 | 0.6085 |
| 0.2093 | 11.11 | 1200 | 0.5850 | 0.5889 |
| 0.1329 | 12.04 | 1300 | 0.5557 | 0.5567 |
| 0.1005 | 12.96 | 1400 | 0.5964 | 0.5814 |
| 0.2162 | 13.88 | 1500 | 0.5692 | 0.5626 |
| 0.0923 | 14.81 | 1600 | 0.5508 | 0.5462 |
| 0.075 | 15.74 | 1700 | 0.5477 | 0.5307 |
| 0.2029 | 16.66 | 1800 | 0.5501 | 0.5300 |
| 0.0985 | 17.59 | 1900 | 0.5350 | 0.5303 |
| 0.1674 | 18.51 | 2000 | 0.5429 | 0.5241 |
| 0.1305 | 19.44 | 2100 | 0.5645 | 0.5443 |
| 0.0774 | 20.37 | 2200 | 0.5313 | 0.5216 |
| 0.1372 | 21.29 | 2300 | 0.5644 | 0.5392 |
| 0.1095 | 22.22 | 2400 | 0.5577 | 0.5306 |
| 0.0958 | 23.15 | 2500 | 0.5461 | 0.5273 |
| 0.0544 | 24.07 | 2600 | 0.5290 | 0.5055 |
| 0.0579 | 24.99 | 2700 | 0.5295 | 0.5150 |
| 0.1213 | 25.92 | 2800 | 0.5311 | 0.5221 |
| 0.0691 | 26.85 | 2900 | 0.5228 | 0.5095 |
| 0.1729 | 27.77 | 3000 | 0.5340 | 0.5095 |
| 0.0697 | 28.7 | 3100 | 0.5334 | 0.5139 |
| 0.0734 | 29.63 | 3200 | 0.5323 | 0.5140 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/rag-sequence-gen-prev | 9701fbc93993df55cda5f433d8563ada09500e10 | 2020-09-24T12:42:35.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | patrickvonplaten | null | patrickvonplaten/rag-sequence-gen-prev | 1 | null | transformers | 30,112 | Entry not found |
patrickvonplaten/roberta2roberta-cnn_dailymail-fp16 | c6a58c60c13bebca223a2d8ed7055dc73c4acc72 | 2020-12-11T21:59:23.000Z | [
"pytorch",
"encoder_decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | patrickvonplaten | null | patrickvonplaten/roberta2roberta-cnn_dailymail-fp16 | 1 | null | transformers | 30,113 | # Roberta2Roberta Summarization with 🤗 EncoderDecoder Framework
This model is a Roberta2Roberta model fine-tuned on summarization.
Roberta2Roberta is an `EncoderDecoderModel`, meaning that both the encoder and the decoder are `roberta-base`
RoBERTa models. Leveraging the [EncoderDecoderFramework](https://huggingface.co/transformers/model_doc/encoderdecoder.html#encoder-decoder-models), the
two pretrained models can simply be loaded into the framework via:
```python
roberta2roberta = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base")
```
The decoder of an `EncoderDecoder` model needs cross-attention layers and usually makes use of causal
masking for auto-regressive generation.
Thus, ``roberta2roberta`` was fine-tuned on the `CNN/Daily Mail` dataset, and the resulting model
`roberta2roberta-cnn_dailymail-fp16` is uploaded here.
## Example
The model is by no means a state-of-the-art model, but nevertheless
produces reasonable summarization results. It was mainly fine-tuned
as a proof-of-concept for the 🤗 EncoderDecoder Framework.
The model can be used as follows:
```python
from transformers import RobertaTokenizer, EncoderDecoderModel
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/roberta2roberta-cnn_dailymail-fp16")
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
article = """(CNN)Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members singing a racist chant. SAE's national chapter suspended the students, but University of Oklahoma President David B
oren took it a step further, saying the university's affiliation with the fraternity is permanently done. The news is shocking, but it's not the first time SAE has faced controversy. SAE was founded March 9, 185
6, at the University of Alabama, five years before the American Civil War, according to the fraternity website. When the war began, the group had fewer than 400 members, of which "369 went to war for the Confede
rate States and seven for the Union Army," the website says. The fraternity now boasts more than 200,000 living alumni, along with about 15,000 undergraduates populating 219 chapters and 20 "colonies" seeking fu
ll membership at universities. SAE has had to work hard to change recently after a string of member deaths, many blamed on the hazing of new recruits, SAE national President Bradley Cohen wrote in a message on t
he fraternity's website. The fraternity's website lists more than 130 chapters cited or suspended for "health and safety incidents" since 2010. At least 30 of the incidents involved hazing, and dozens more invol
ved alcohol. However, the list is missing numerous incidents from recent months. Among them, according to various media outlets: Yale University banned the SAEs from campus activities last month after members al
legedly tried to interfere with a sexual misconduct investigation connected to an initiation rite. Stanford University in December suspended SAE housing privileges after finding sorority members attending a frat
ernity function were subjected to graphic sexual content. And Johns Hopkins University in November suspended the fraternity for underage drinking. "The media has labeled us as the 'nation's deadliest fraternity,
' " Cohen said. In 2011, for example, a student died while being coerced into excessive alcohol consumption, according to a lawsuit. SAE's previous insurer dumped the fraternity. "As a result, we are paying Lloy
d's of London the highest insurance rates in the Greek-letter world," Cohen said. Universities have turned down SAE's attempts to open new chapters, and the fraternity had to close 12 in 18 months over hazing in
cidents."""
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# should produce
# Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members singing racist chants. The fraternity's national chapter has had to close 12 in 18 months over hazing.
# Sigma has had more than 130 chapters in 18 states. University of Oklahoma president says fraternity has been "deteriorated".
```
## Training script:
**IMPORTANT**: In order for this code to work, make sure you check out the branch
[more_general_trainer_metric](https://github.com/huggingface/transformers/tree/more_general_trainer_metric), which slightly adapts
the `Trainer` for `EncoderDecoderModel`s according to this PR: https://github.com/huggingface/transformers/pull/5840.
The following code shows the complete training script that was used to fine-tune `roberta2roberta-cnn_dailymail-fp16` for reproducibility. Training took ~9h on a standard GPU.
```python
#!/usr/bin/env python3
import nlp
import logging
from transformers import RobertaTokenizer, EncoderDecoderModel, Trainer, TrainingArguments
logging.basicConfig(level=logging.INFO)
model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base")
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
# load train and validation data
train_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="train")
val_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="validation[:5%]")
# load rouge for validation
rouge = nlp.load_metric("rouge", experiment_id=0)
# set decoding params
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.config.max_length = 142
model.config.min_length = 56
model.config.no_repeat_ngram_size = 3
model.config.early_stopping = True
model.config.length_penalty = 2.0
model.config.num_beams = 4
encoder_length = 512
decoder_length = 128
batch_size = 16
# map data correctly
def map_to_encoder_decoder_inputs(batch):
    # Tokenizer will automatically set [BOS] <text> [EOS]
    # cut off articles at the encoder max length (512)
    inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_length)
    # force summaries to at most decoder_length (128) tokens
    outputs = tokenizer(batch["highlights"], padding="max_length", truncation=True, max_length=decoder_length)
    batch["input_ids"] = inputs.input_ids
    batch["attention_mask"] = inputs.attention_mask
    batch["decoder_input_ids"] = outputs.input_ids
    batch["labels"] = outputs.input_ids.copy()
    # mask loss for padding
    batch["labels"] = [
        [-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]
    ]
    batch["decoder_attention_mask"] = outputs.attention_mask
    assert all([len(x) == encoder_length for x in inputs.input_ids])
    assert all([len(x) == decoder_length for x in outputs.input_ids])
    return batch
def compute_metrics(pred):
    labels_ids = pred.label_ids
    pred_ids = pred.predictions
    # all unnecessary tokens are removed
    pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    labels_ids[labels_ids == -100] = tokenizer.eos_token_id
    label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
    rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid
    return {
        "rouge2_precision": round(rouge_output.precision, 4),
        "rouge2_recall": round(rouge_output.recall, 4),
        "rouge2_fmeasure": round(rouge_output.fmeasure, 4),
    }
# make train dataset ready
train_dataset = train_dataset.map(
    map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"],
)
train_dataset.set_format(
    type="torch", columns=["input_ids", "attention_mask", "decoder_attention_mask", "decoder_input_ids", "labels"],
)
# same for validation dataset
val_dataset = val_dataset.map(
    map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"],
)
val_dataset.set_format(
    type="torch", columns=["input_ids", "decoder_attention_mask", "attention_mask", "decoder_input_ids", "labels"],
)
# set training arguments - these params are not really tuned, feel free to change
training_args = TrainingArguments(
    output_dir="./",
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    predict_from_generate=True,
    evaluate_during_training=True,
    do_train=True,
    do_eval=True,
    logging_steps=1000,
    save_steps=1000,
    eval_steps=1000,
    overwrite_output_dir=True,
    warmup_steps=2000,
    save_total_limit=3,
    fp16=True,
)
# instantiate trainer
trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
)
# start training
trainer.train()
```
## Evaluation
The following script evaluates the model on the test set of
CNN/Daily Mail.
```python
#!/usr/bin/env python3
import nlp
from transformers import RobertaTokenizer, EncoderDecoderModel
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/roberta2roberta-cnn_dailymail-fp16")
model.to("cuda")
test_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="test")
batch_size = 128
# map data correctly
def generate_summary(batch):
    # Tokenizer will automatically set [BOS] <text> [EOS]
    # cut off at the RoBERTa max length of 512
    inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
    input_ids = inputs.input_ids.to("cuda")
    attention_mask = inputs.attention_mask.to("cuda")
    outputs = model.generate(input_ids, attention_mask=attention_mask)
    # all special tokens will be removed
    output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    batch["pred"] = output_str
    return batch
results = test_dataset.map(generate_summary, batched=True, batch_size=batch_size, remove_columns=["article"])
# load rouge for validation
rouge = nlp.load_metric("rouge")
pred_str = results["pred"]
label_str = results["highlights"]
rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid
print(rouge_output)
```
The obtained results should be:
| - | Rouge2 - mid - precision | Rouge2 - mid - recall | Rouge2 - mid - fmeasure |
|----------|:-------------:|:------:|:------:|
| **CNN/Daily Mail** | 15.79 | 19.05 | **16.79** |
|
patrickvonplaten/sat-base | 94d99c42b44977b7cef9a6af66005ea306bc1053 | 2021-10-22T17:51:13.000Z | [
"pytorch",
"tensorboard",
"unispeech-sat",
"automatic-speech-recognition",
"dataset:timit_asr",
"transformers",
"timit_asr",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/sat-base | 1 | null | transformers | 30,114 | ---
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: sat-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sat-base
This model is a fine-tuned version of [microsoft/unispeech-sat-base](https://huggingface.co/microsoft/unispeech-sat-base) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7014
- Wer: 0.5374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.9958 | 0.69 | 100 | 6.7171 | 1.0 |
| 3.0453 | 1.38 | 200 | 3.0374 | 1.0 |
| 2.9989 | 2.07 | 300 | 2.9807 | 1.0 |
| 2.969 | 2.76 | 400 | 2.9579 | 1.0 |
| 2.903 | 3.45 | 500 | 2.9072 | 1.0 |
| 2.8565 | 4.14 | 600 | 2.8804 | 1.0 |
| 2.8195 | 4.83 | 700 | 2.7916 | 1.0 |
| 2.3134 | 5.52 | 800 | 2.1456 | 1.0004 |
| 1.5475 | 6.21 | 900 | 1.4663 | 0.9549 |
| 1.1295 | 6.9 | 1000 | 1.1140 | 0.7227 |
| 1.0181 | 7.59 | 1100 | 0.9258 | 0.6497 |
| 1.0252 | 8.28 | 1200 | 0.8430 | 0.6255 |
| 0.835 | 8.97 | 1300 | 0.8063 | 0.6032 |
| 0.662 | 9.66 | 1400 | 0.7595 | 0.5931 |
| 0.5558 | 10.34 | 1500 | 0.7322 | 0.5819 |
| 0.7596 | 11.03 | 1600 | 0.7120 | 0.5708 |
| 0.6169 | 11.72 | 1700 | 0.7073 | 0.5606 |
| 0.4565 | 12.41 | 1800 | 0.7124 | 0.5586 |
| 0.4554 | 13.1 | 1900 | 0.6880 | 0.5501 |
| 0.6216 | 13.79 | 2000 | 0.6783 | 0.5494 |
| 0.5393 | 14.48 | 2100 | 0.7067 | 0.5499 |
| 0.4095 | 15.17 | 2200 | 0.7014 | 0.5438 |
| 0.3551 | 15.86 | 2300 | 0.7000 | 0.5426 |
| 0.5112 | 16.55 | 2400 | 0.6866 | 0.5426 |
| 0.5139 | 17.24 | 2500 | 0.7134 | 0.5446 |
| 0.3638 | 17.93 | 2600 | 0.7130 | 0.5434 |
| 0.3327 | 18.62 | 2700 | 0.6980 | 0.5377 |
| 0.4385 | 19.31 | 2800 | 0.7017 | 0.5390 |
| 0.4986 | 20.0 | 2900 | 0.7014 | 0.5374 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/unispeech-sat-base-timit-ft | 3722323daab20f755e0cd1f2e5b3497db2aa4ab3 | 2021-10-27T10:51:18.000Z | [
"pytorch",
"tensorboard",
"unispeech-sat",
"automatic-speech-recognition",
"dataset:timit_asr",
"transformers",
"timit_asr",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/unispeech-sat-base-timit-ft | 1 | null | transformers | 30,115 | ---
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: unispeech-sat-base-timit-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unispeech-sat-base-timit-ft
This model is a fine-tuned version of [microsoft/unispeech-sat-base](https://huggingface.co/microsoft/unispeech-sat-base) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6712
- Wer: 0.4101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2582 | 0.69 | 100 | 3.1651 | 1.0 |
| 2.9542 | 1.38 | 200 | 2.9567 | 1.0 |
| 2.9656 | 2.07 | 300 | 2.9195 | 1.0 |
| 2.8946 | 2.76 | 400 | 2.8641 | 1.0 |
| 1.9305 | 3.45 | 500 | 1.7680 | 1.0029 |
| 1.0134 | 4.14 | 600 | 1.0184 | 0.6942 |
| 0.8355 | 4.83 | 700 | 0.7769 | 0.6080 |
| 0.8724 | 5.52 | 800 | 0.7182 | 0.6035 |
| 0.5619 | 6.21 | 900 | 0.6823 | 0.5406 |
| 0.4247 | 6.9 | 1000 | 0.6279 | 0.5237 |
| 0.4257 | 7.59 | 1100 | 0.6056 | 0.5000 |
| 0.5007 | 8.28 | 1200 | 0.5870 | 0.4918 |
| 0.3854 | 8.97 | 1300 | 0.6200 | 0.4804 |
| 0.264 | 9.66 | 1400 | 0.6030 | 0.4600 |
| 0.1989 | 10.34 | 1500 | 0.6049 | 0.4588 |
| 0.3196 | 11.03 | 1600 | 0.5946 | 0.4599 |
| 0.2622 | 11.72 | 1700 | 0.6282 | 0.4422 |
| 0.1697 | 12.41 | 1800 | 0.6559 | 0.4413 |
| 0.1464 | 13.1 | 1900 | 0.6349 | 0.4328 |
| 0.2277 | 13.79 | 2000 | 0.6133 | 0.4284 |
| 0.221 | 14.48 | 2100 | 0.6617 | 0.4219 |
| 0.1391 | 15.17 | 2200 | 0.6705 | 0.4235 |
| 0.112 | 15.86 | 2300 | 0.6207 | 0.4218 |
| 0.1717 | 16.55 | 2400 | 0.6749 | 0.4184 |
| 0.2081 | 17.24 | 2500 | 0.6756 | 0.4169 |
| 0.1244 | 17.93 | 2600 | 0.6750 | 0.4181 |
| 0.0978 | 18.62 | 2700 | 0.6500 | 0.4115 |
| 0.128 | 19.31 | 2800 | 0.6750 | 0.4106 |
| 0.1791 | 20.0 | 2900 | 0.6712 | 0.4101 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-2-bart-base_test | 00d996fa2fcdecf86bd1b8e22c73050c50437cbf | 2021-12-28T12:28:49.000Z | [
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-2-bart-base_test | 1 | null | transformers | 30,116 | Entry not found |
patrickvonplaten/wav2vec2-base-repro-timit | 394cd6beb5ec40b779cf7cdbc954ef18f350cea7 | 2021-10-25T16:17:50.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:timit_asr",
"transformers",
"timit_asr",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-base-repro-timit | 1 | null | transformers | 30,117 | ---
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: wav2vec2-base-repro-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-repro-timit
This model is a fine-tuned version of [patrickvonplaten/wav2vec2-base-repro-960h-libri-85k-steps](https://huggingface.co/patrickvonplaten/wav2vec2-base-repro-960h-libri-85k-steps) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8562
- Wer: 0.5484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.9793 | 0.69 | 100 | 5.4532 | 1.0 |
| 2.9066 | 1.38 | 200 | 2.9070 | 1.0 |
| 2.2562 | 2.07 | 300 | 2.0323 | 1.0 |
| 1.5273 | 2.76 | 400 | 1.1510 | 0.8001 |
| 1.1085 | 3.45 | 500 | 0.9521 | 0.7053 |
| 0.813 | 4.14 | 600 | 0.8617 | 0.6702 |
| 0.8434 | 4.83 | 700 | 0.8068 | 0.6393 |
| 0.9631 | 5.52 | 800 | 0.7863 | 0.6248 |
| 0.707 | 6.21 | 900 | 0.7476 | 0.5973 |
| 0.5568 | 6.9 | 1000 | 0.7350 | 0.5911 |
| 0.6171 | 7.59 | 1100 | 0.7171 | 0.5841 |
| 0.7011 | 8.28 | 1200 | 0.7318 | 0.5798 |
| 0.5546 | 8.97 | 1300 | 0.7447 | 0.5767 |
| 0.4278 | 9.66 | 1400 | 0.7481 | 0.5650 |
| 0.3576 | 10.34 | 1500 | 0.7443 | 0.5713 |
| 0.5506 | 11.03 | 1600 | 0.7574 | 0.5664 |
| 0.4127 | 11.72 | 1700 | 0.8043 | 0.5631 |
| 0.3251 | 12.41 | 1800 | 0.7738 | 0.5550 |
| 0.3119 | 13.1 | 1900 | 0.7829 | 0.5516 |
| 0.4371 | 13.79 | 2000 | 0.8025 | 0.5556 |
| 0.3772 | 14.48 | 2100 | 0.8451 | 0.5559 |
| 0.2942 | 15.17 | 2200 | 0.8300 | 0.5556 |
| 0.2503 | 15.86 | 2300 | 0.8417 | 0.5541 |
| 0.3671 | 16.55 | 2400 | 0.8568 | 0.5528 |
| 0.3867 | 17.24 | 2500 | 0.8521 | 0.5510 |
| 0.2614 | 17.93 | 2600 | 0.8479 | 0.5523 |
| 0.2441 | 18.62 | 2700 | 0.8558 | 0.5494 |
| 0.3059 | 19.31 | 2800 | 0.8553 | 0.5474 |
| 0.3734 | 20.0 | 2900 | 0.8562 | 0.5484 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-large-repro-960h-libri-120k-steps | 3a62136561cf01744650fdc80d7fa8e79e5d26fa | 2021-10-08T14:12:07.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"transformers"
] | null | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-large-repro-960h-libri-120k-steps | 1 | null | transformers | 30,118 | https://wandb.ai/patrickvonplaten/pretraining-wav2vec2/reports/Wav2Vec2-Large--VmlldzoxMTAwODM4?accessToken=wm3qzcnldrwsa31tkvf2pdmilw3f63d4twtffs86ou016xjbyilh55uoi3mo1qzc |
patrickvonplaten/wav2vec2-large-xls-r-300m-turkish-colab | bb5da748d510b47d906ab7c19c40d11fe1e72022 | 2022-05-09T20:22:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-large-xls-r-300m-turkish-colab | 1 | null | transformers | 30,119 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3864
- Wer: 0.3570
## Model description
More information needed
## Intended uses & limitations
More information needed
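Although the card leaves this section empty, a common way to try a fine-tuned CTC checkpoint like this one is through the `automatic-speech-recognition` pipeline. The sketch below is an illustration, not part of the original card; the audio file name is a placeholder, and the input should be 16 kHz speech.
```python
from transformers import pipeline

# Load the checkpoint named in this card from the Hub.
asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/wav2vec2-large-xls-r-300m-turkish-colab",
)

# "sample_tr.wav" stands in for a 16 kHz Turkish speech recording.
print(asr("sample_tr.wav"))
```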
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.8302 | 3.67 | 400 | 0.6702 | 0.6903 |
| 0.4098 | 7.34 | 800 | 0.4574 | 0.4939 |
| 0.1908 | 11.01 | 1200 | 0.4350 | 0.4557 |
| 0.1279 | 14.68 | 1600 | 0.4204 | 0.4213 |
| 0.0966 | 18.35 | 2000 | 0.4238 | 0.3991 |
| 0.0782 | 22.02 | 2400 | 0.3822 | 0.3906 |
| 0.0613 | 25.69 | 2800 | 0.3982 | 0.3714 |
| 0.0477 | 29.36 | 3200 | 0.3864 | 0.3570 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-large-xlsr-53-common_voice-tr-ft | 9a65aae7291f105ca880d7c67c04f4294c8decb2 | 2021-11-14T16:47:13.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"xls_r_repro_common_voice_tr",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-large-xlsr-53-common_voice-tr-ft | 1 | null | transformers | 30,120 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- xls_r_repro_common_voice_tr
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-common_voice-tr-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-common_voice-tr-ft
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4231
- Wer: 0.3104
- Cer: 0.0737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
See the *Training Metrics* tab.
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-random | 9097b448acdad53da3e2741f1d56c300ca149154 | 2021-10-22T17:20:59.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:timit_asr",
"transformers",
"timit_asr",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-random | 1 | null | transformers | 30,121 | ---
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: wav2vec2-random
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-random
This model is a fine-tuned version of [patrickvonplaten/wav2vec2-base-random](https://huggingface.co/patrickvonplaten/wav2vec2-base-random) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1593
- Wer: 0.8364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9043 | 0.69 | 100 | 2.9683 | 1.0 |
| 2.8537 | 1.38 | 200 | 2.9281 | 0.9997 |
| 2.7803 | 2.07 | 300 | 2.7330 | 0.9999 |
| 2.6806 | 2.76 | 400 | 2.5792 | 1.0 |
| 2.4136 | 3.45 | 500 | 2.4327 | 0.9948 |
| 2.1682 | 4.14 | 600 | 2.3508 | 0.9877 |
| 2.2577 | 4.83 | 700 | 2.2176 | 0.9773 |
| 2.355 | 5.52 | 800 | 2.1753 | 0.9542 |
| 1.8588 | 6.21 | 900 | 2.0650 | 0.8851 |
| 1.6831 | 6.9 | 1000 | 2.0109 | 0.8618 |
| 1.888 | 7.59 | 1100 | 1.9660 | 0.8418 |
| 2.0066 | 8.28 | 1200 | 1.9847 | 0.8531 |
| 1.7044 | 8.97 | 1300 | 1.9760 | 0.8527 |
| 1.3168 | 9.66 | 1400 | 2.0708 | 0.8327 |
| 1.2143 | 10.34 | 1500 | 2.0601 | 0.8419 |
| 1.6189 | 11.03 | 1600 | 2.0960 | 0.8299 |
| 1.13 | 11.72 | 1700 | 2.2540 | 0.8408 |
| 0.8001 | 12.41 | 1800 | 2.4260 | 0.8306 |
| 0.7769 | 13.1 | 1900 | 2.4182 | 0.8445 |
| 1.2165 | 13.79 | 2000 | 2.3666 | 0.8284 |
| 0.8026 | 14.48 | 2100 | 2.7118 | 0.8662 |
| 0.5148 | 15.17 | 2200 | 2.7957 | 0.8526 |
| 0.4921 | 15.86 | 2300 | 2.8244 | 0.8346 |
| 0.7629 | 16.55 | 2400 | 2.8944 | 0.8370 |
| 0.5762 | 17.24 | 2500 | 3.0335 | 0.8367 |
| 0.4076 | 17.93 | 2600 | 3.0776 | 0.8358 |
| 0.3395 | 18.62 | 2700 | 3.1572 | 0.8261 |
| 0.4862 | 19.31 | 2800 | 3.1319 | 0.8414 |
| 0.5061 | 20.0 | 2900 | 3.1593 | 0.8364 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-xlarge-dotdotdot-common_voice-tr-demo | ca52a9779bbd73a9ed34a736e86fbfcf3db8872c | 2021-10-27T10:41:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-xlarge-dotdotdot-common_voice-tr-demo | 1 | 0 | transformers | 30,122 | ---
language:
- tr
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xlarge-...-common_voice-tr-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlarge-...-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-xlarge-xlsr-...](https://huggingface.co/facebook/wav2vec2-xlarge-xlsr-...) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2701
- Wer: 0.2309
- Cer: 0.0527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00005
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4388 | 3.7 | 400 | 1.366 | 0.9701 |
| 0.3766 | 7.4 | 800 | 0.4914 | 0.5374 |
| 0.2295 | 11.11 | 1200 | 0.3934 | 0.4125 |
| 0.1121 | 14.81 | 1600 | 0.3264 | 0.2904 |
| 0.1473 | 18.51 | 2000 | 0.3103 | 0.2671 |
| 0.1013 | 22.22 | 2400 | 0.2589 | 0.2324 |
| 0.0704 | 25.92 | 2800 | 0.2826 | 0.2339 |
| 0.0537 | 29.63 | 3200 | 0.2704 | 0.2309 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wavlm-libri-clean-100h-base | 4b5f3af55ea5c6fd93efb929912f3ea6da950474 | 2021-12-20T12:59:09.000Z | [
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"transformers",
"librispeech_asr",
"generated_from_trainer",
"wavlm_libri_finetune",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wavlm-libri-clean-100h-base | 1 | null | transformers | 30,123 | ---
tags:
- automatic-speech-recognition
- librispeech_asr
- generated_from_trainer
- wavlm_libri_finetune
model-index:
- name: wavlm-libri-clean-100h-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-libri-clean-100h-base
This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on the LIBRISPEECH_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0829
- Wer: 0.0675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.8805 | 0.34 | 300 | 2.8686 | 1.0 |
| 0.2459 | 0.67 | 600 | 0.1858 | 0.1554 |
| 0.1114 | 1.01 | 900 | 0.1379 | 0.1191 |
| 0.0867 | 1.35 | 1200 | 0.1130 | 0.0961 |
| 0.0698 | 1.68 | 1500 | 0.1032 | 0.0877 |
| 0.0663 | 2.02 | 1800 | 0.0959 | 0.0785 |
| 0.0451 | 2.35 | 2100 | 0.0887 | 0.0748 |
| 0.0392 | 2.69 | 2400 | 0.0859 | 0.0698 |
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/xls-r-300m-sv-phoneme | 34a361c678f773c82fb00c24803141ed2948ee16 | 2021-12-21T11:15:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"mozilla-foundation/common_voice_3_0",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/xls-r-300m-sv-phoneme | 1 | 1 | transformers | 30,124 | ---
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_3_0
- generated_from_trainer
model-index:
- name: xls-r-300m-sv-phoneme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-sv-phoneme
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the mozilla-foundation/common_voice_3_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4879
- Wer: 0.0997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000075
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
See Training Metrics Tab.
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/xprophetnet-large-wiki100-cased_old | afbabbb4702c28f7ac1a12a33f957855b8d90cd7 | 2020-10-16T13:05:43.000Z | [
"pytorch",
"xlm-prophetnet",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | patrickvonplaten | null | patrickvonplaten/xprophetnet-large-wiki100-cased_old | 1 | null | transformers | 30,125 | Entry not found |
pcuenq/wav2vec2-large-xlsr-53-eu | 661b20f2b9756e322439f781b753d98dff62aec5 | 2021-03-28T19:35:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"eu",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | pcuenq | null | pcuenq/wav2vec2-large-xlsr-53-eu | 1 | null | transformers | 30,126 | ---
language: eu
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Basque by pcuenq
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice eu
type: common_voice
args: eu
metrics:
- name: Test WER
type: wer
value: 15.34
---
# Wav2Vec2-Large-XLSR-53-EU
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Basque using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "eu", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-eu")
model = Wav2Vec2ForCTC.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-eu")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Basque test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "eu", split="test")
wer = load_metric("wer")
model_name = "pcuenq/wav2vec2-large-xlsr-53-eu"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)
model.to("cuda")
## Text pre-processing
chars_to_ignore_regex = '[\,\¿\?\.\¡\!\-\;\:\"\“\%\‘\”\\…\’\ː\'\‹\›\`\´\®\—\→]'
chars_to_ignore_pattern = re.compile(chars_to_ignore_regex)
def remove_special_characters(batch):
batch["sentence"] = chars_to_ignore_pattern.sub('', batch["sentence"]).lower() + " "
return batch
## Audio pre-processing
import librosa
def speech_file_to_array_fn(batch):
speech_array, sample_rate = torchaudio.load(batch["path"])
batch["speech"] = librosa.resample(speech_array.squeeze().numpy(), sample_rate, 16_000)
return batch
# Text transformation and audio resampling
def cv_prepare(batch):
batch = remove_special_characters(batch)
batch = speech_file_to_array_fn(batch)
return batch
# Number of CPUs or None
num_proc = 16
test_dataset = test_dataset.map(cv_prepare, remove_columns=['path'], num_proc=num_proc)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
# WER Metric computation
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 15.34 %
## Training
The Common Voice `train` and `validation` datasets were used for training. Training was performed for 22 + 20 epochs with the following parameters:
- Batch size 16, 2 gradient accumulation steps.
- Learning rate: 2.5e-4
- Activation dropout: 0.05
- Attention dropout: 0.1
- Hidden dropout: 0.05
- Feature proj. dropout: 0.05
- Mask time probability: 0.08
- Layer dropout: 0.05
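The regularisation values above correspond to standard `Wav2Vec2Config` fields, so a hedged sketch of the model instantiation (the actual fine-tuning script is not included in this card) would look like:
```python
from transformers import Wav2Vec2ForCTC
# Sketch only: maps the dropout/masking settings listed above onto Wav2Vec2Config fields.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    activation_dropout=0.05,
    attention_dropout=0.1,
    hidden_dropout=0.05,
    feat_proj_dropout=0.05,
    mask_time_prob=0.08,
    layerdrop=0.05,
)
```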
|
pelican/COMP0087_GPT2 | 0247b59d07b7fbf0e85ca8c6e47cd07f5d2ca941 | 2021-05-30T16:43:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | pelican | null | pelican/COMP0087_GPT2 | 1 | null | transformers | 30,127 | Entry not found |
pelican/COMP0087_GPT2_tokenizer | c43919ae6f19e8fa44b24de088aad4f741f7ea14 | 2021-05-30T16:32:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | pelican | null | pelican/COMP0087_GPT2_tokenizer | 1 | null | transformers | 30,128 | Entry not found |
pere/norwegian-t5 | e625264abd3cf382acd8f8da5d47bfb0a71aea10 | 2021-09-23T16:19:43.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"no",
"dataset:oscar",
"transformers",
"summary",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | pere | null | pere/norwegian-t5 | 1 | null | transformers | 30,129 | ---
language: no
license: cc-by-4.0
tags:
- summary
datasets:
- oscar
widget:
- text: 'translate Bokmål to Nynorsk: Dette er en test!'
---
# Norwegian T5 - small - Oscar
## Description
This is a sample reference model trained only on the Oscar Corpus for a day on a TPU v3-8. Do not use this model as anything other than a simple reference point. |
peril10/play_time | 3cec9d4d7acb33d38ef51bdbd9ad588aae94308e | 2021-05-23T10:58:48.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | peril10 | null | peril10/play_time | 1 | null | transformers | 30,130 | Entry not found |
peterhsu/dummy-model | 37ca4166c8f7980e930d4fc4d59b3b0d72e7f470 | 2021-12-25T05:56:47.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | peterhsu | null | peterhsu/dummy-model | 1 | null | transformers | 30,131 | Entry not found |
pgperrone/dummy-model | cb4a9d0323a021b18d42fa996961ddcdf0dcacba | 2021-08-13T18:24:39.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | pgperrone | null | pgperrone/dummy-model | 1 | null | transformers | 30,132 | Entry not found |
phantomcoder1996/wav2vec2-large-xls-r-300m-arabic | 0b8edcc3c527efbe8d889c7300802b52900de833 | 2022-03-23T18:30:05.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | phantomcoder1996 | null | phantomcoder1996/wav2vec2-large-xls-r-300m-arabic | 1 | null | transformers | 30,133 | ---
language:
- ar
thumbnail: wav2vec2-large-xls-r-300m-arabic fine-tuned for Modern Standard Arabic
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
license: apache-2.0
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- WER
model-index:
- name: wav2vec2-large-xls-r-300m-arabic
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: ar
metrics:
- name: Test WER
type: wer
value: 57.8
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ar
metrics:
- name: Test WER
type: wer
value: 95.07
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ar
metrics:
- name: Test WER
type: wer
value: 93.58
---
# XLS-R-300m-Arabic |
phdf33/trialbert-base | bdb4a744431c3bd6ea168b4c621f33e7c57fcbf6 | 2021-09-28T15:40:59.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | phdf33 | null | phdf33/trialbert-base | 1 | 1 | transformers | 30,134 | Entry not found |
philippelaban/summary_loop24 | e6edf02e703b5e4efe0beb87520271038eb1eb11 | 2022-02-09T22:01:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:cnn_dailymail",
"transformers",
"summarization",
"license:apache-2.0"
] | summarization | false | philippelaban | null | philippelaban/summary_loop24 | 1 | 2 | transformers | 30,135 | ---
language:
- en
tags:
- summarization
license: apache-2.0
datasets:
- cnn_dailymail
---
# Try out in the Hosted inference API
In the right panel, you can try out the model (although it only handles a short sequence length).
Enter the document you want to summarize in the panel on the right.
# Model Loading
The model (based on a GPT2 base architecture) can be loaded in the following way:
```
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
model = GPT2LMHeadModel.from_pretrained("philippelaban/summary_loop46")
tokenizer = GPT2TokenizerFast.from_pretrained("philippelaban/summary_loop46")
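model = model.cuda()  # the generation example below moves inputs to GPU with .cuda(); keep model and inputs on the same device (drop both .cuda() calls to run on CPU)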
```
# Example Use
```
document = "Bouncing Boulders Point to Quakes on Mars. A preponderance of boulder tracks on the red planet may be evidence of recent seismic activity. If a rock falls on Mars, and no one is there to see it, does it leave a trace? Yes, and it's a beautiful herringbone-like pattern, new research reveals. Scientists have now spotted thousands of tracks on the red planet created by tumbling boulders. Delicate chevron-shaped piles of Martian dust and sand frame the tracks, the team showed, and most fade over the course of a few years. Rockfalls have been spotted elsewhere in the solar system, including on the moon and even a comet. But a big open question is the timing of these processes on other worlds — are they ongoing or did they predominantly occur in the past?"
tokenized_document = tokenizer([document], max_length=300, truncation=True, return_tensors="pt")["input_ids"].cuda()
input_shape = tokenized_document.shape
outputs = model.generate(tokenized_document, do_sample=False, max_length=500, num_beams=4, num_return_sequences=4, no_repeat_ngram_size=6, return_dict_in_generate=True, output_scores=True)
candidate_sequences = outputs.sequences[:, input_shape[1]:] # Remove the encoded text, keep only the summary
candidate_scores = outputs.sequences_scores.tolist()
for candidate_tokens, score in zip(candidate_sequences, candidate_scores):
summary = tokenizer.decode(candidate_tokens)
print("[Score: %.3f] %s" % (score, summary[:summary.index("END")]))
```
# Example output
```
[Score: -0.113] These tracks have been spotted elsewhere in the solar system, including on the red planet, and no one is there to see it, does it leave a trace? Yes, and
[Score: -0.119] Now researchers have spotted thousands of tracks on the red planet created by tumbling boulders in Mars, and no one is there to see it, does it leave a trace?
[Score: -0.214] Here are answers to those questions posed by scientists investigating the tracks discovered by scientists examining the tracks discovered by scientists exploring the tracks discovered by scientists exploring the tracks discovered by scientists exploring the
[Score: -0.388] These are the kinds of questions swirling around whether these tracks exist on Mars, and whether they should be noticed sooner rather than later. Here are some answers: -- The tracks detected
```
# Github repo
You can find more information, the scoring function, the training script, and an example training log in the Github repo: https://github.com/CannyLab/summary_loop |
philschmid/distilroberta-base-ner-wikiann-conll2003-4-class | a5e552e199083d66542498375906d35e8a47bd40 | 2021-05-24T18:53:58.000Z | [
"pytorch",
"roberta",
"token-classification",
"dataset:wikiann-conll2003",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | philschmid | null | philschmid/distilroberta-base-ner-wikiann-conll2003-4-class | 1 | null | transformers | 30,136 | ---
license: apache-2.0
tags:
- token-classification
datasets:
- wikiann-conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilroberta-base-ner-wikiann-conll2003-4-class
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann-conll2003
type: wikiann-conll2003
metrics:
- name: Precision
type: precision
value: 0.9492143658810326
- name: Recall
type: recall
value: 0.9585379675103891
- name: F1
type: f1
value: 0.9538533834586467
- name: Accuracy
type: accuracy
value: 0.9882022644288301
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-ner-wikiann-conll2003-4-class
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on a merged wikiann and conll2003 dataset. It uses the conll2003 label set:
O (0), B-PER (1), I-PER (2), B-ORG (3), I-ORG (4), B-LOC (5), I-LOC (6), B-MISC (7), I-MISC (8).
eval F1-Score: **95.39** (merged dataset)
test F1-Score: **90.75** (merged dataset)
## Model Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("philschmid/distilroberta-base-ner-wikiann-conll2003-4-class")
model = AutoModelForTokenClassification.from_pretrained("philschmid/distilroberta-base-ner-wikiann-conll2003-4-class")
nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)
example = "My name is Philipp and live in Germany"
nlp(example)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.9086903597787154e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
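For reference, a hedged sketch of how these settings map onto `transformers.TrainingArguments` (the original training script is not part of this card; the output directory name is only a placeholder):
```python
from transformers import TrainingArguments
# Sketch only: restates the hyperparameters listed above in Trainer terms.
training_args = TrainingArguments(
    output_dir="distilroberta-base-ner-wikiann-conll2003-4-class",  # placeholder
    learning_rate=4.9086903597787154e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    fp16=True,  # Native AMP mixed precision
)
```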
### Training results
It achieves the following results on the evaluation set:
- Loss: 0.0705
- Precision: 0.9492
- Recall: 0.9585
- F1: 0.9539
- Accuracy: 0.9882
It achieves the following results on the test set:
- Loss: 0.239
- Precision: 0.8984
- Recall: 0.9168
- F1: 0.9075
- Accuracy: 0.9741
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.6.2
- Tokenizers 0.10.2
|
phongdtd/wavLM-VLSP-vi-base | 04ba95ee601120c3dad6a5100b7a6d294a172b0d | 2022-02-21T13:01:14.000Z | [
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"transformers",
"phongdtd/VinDataVLSP",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | phongdtd | null | phongdtd/wavLM-VLSP-vi-base | 1 | null | transformers | 30,137 | ---
tags:
- automatic-speech-recognition
- phongdtd/VinDataVLSP
- generated_from_trainer
model-index:
- name: wavLM-VLSP-vi-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavLM-VLSP-vi-base
This model is a fine-tuned version of [microsoft/wavlm-base-plus](https://huggingface.co/microsoft/wavlm-base-plus) on the PHONGDTD/VINDATAVLSP - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0390
- Wer: 0.9995
- Cer: 0.9414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
phongdtd/wavLM-VLSP-vi | d547cbd08350ef490d482b6382e3ce6b085789b8 | 2022-02-19T00:36:24.000Z | [
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"transformers",
"phongdtd/VinDataVLSP",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | phongdtd | null | phongdtd/wavLM-VLSP-vi | 1 | null | transformers | 30,138 | ---
tags:
- automatic-speech-recognition
- phongdtd/VinDataVLSP
- generated_from_trainer
model-index:
- name: wavLM-VLSP-vi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavLM-VLSP-vi
This model is a fine-tuned version of [microsoft/wavlm-base-plus](https://huggingface.co/microsoft/wavlm-base-plus) on the PHONGDTD/VINDATAVLSP - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 45.8892
- Wer: 0.9999
- Cer: 0.9973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|
| 3.4482 | 9.41 | 40000 | 3.4480 | 0.9999 | 0.9974 |
| 3.4619 | 18.81 | 80000 | 3.4514 | 0.9999 | 0.9974 |
| 3.7961 | 28.22 | 120000 | 3.8732 | 0.9999 | 0.9974 |
| 24.3843 | 37.62 | 160000 | 22.5457 | 0.9999 | 0.9973 |
| 48.5691 | 47.03 | 200000 | 45.8892 | 0.9999 | 0.9973 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
phongdtd/wavlm-vindata-demo-dist | 1b2ddf0f96f95de0faaa6712889bb5b72b45ead1 | 2022-02-17T05:00:57.000Z | [
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"dataset:vin_data_vlsp",
"transformers",
"phongdtd/VinDataVLSP",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | phongdtd | null | phongdtd/wavlm-vindata-demo-dist | 1 | null | transformers | 30,139 | ---
tags:
- automatic-speech-recognition
- phongdtd/VinDataVLSP
- generated_from_trainer
datasets:
- vin_data_vlsp
model-index:
- name: wavlm-vindata-demo-dist
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-vindata-demo-dist
This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on the PHONGDTD/VINDATAVLSP - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4439
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:---:|
| 4.0704 | 0.01 | 100 | 3.8768 | 1.0 |
| 3.6236 | 0.01 | 200 | 3.4611 | 1.0 |
| 6.597 | 0.02 | 300 | 3.4557 | 1.0 |
| 3.4744 | 0.03 | 400 | 3.4567 | 1.0 |
| 5.3992 | 0.04 | 500 | 3.4631 | 1.0 |
| 4.5348 | 0.04 | 600 | 3.4651 | 1.0 |
| 3.2457 | 0.05 | 700 | 3.4917 | 1.0 |
| 3.9245 | 0.06 | 800 | 3.4680 | 1.0 |
| 3.2904 | 0.07 | 900 | 3.4518 | 1.0 |
| 3.4768 | 0.07 | 1000 | 3.4506 | 1.0 |
| 3.2418 | 0.08 | 1100 | 3.4474 | 1.0 |
| 3.3111 | 0.09 | 1200 | 3.4684 | 1.0 |
| 3.986 | 0.09 | 1300 | 3.4465 | 1.0 |
| 4.3206 | 0.1 | 1400 | 3.4723 | 1.0 |
| 4.682 | 0.11 | 1500 | 3.4732 | 1.0 |
| 4.858 | 0.12 | 1600 | 3.4416 | 1.0 |
| 3.2949 | 0.12 | 1700 | 3.4481 | 1.0 |
| 3.4435 | 0.13 | 1800 | 3.4570 | 1.0 |
| 5.0695 | 0.14 | 1900 | 3.4448 | 1.0 |
| 3.4962 | 0.14 | 2000 | 3.4416 | 1.0 |
| 3.4891 | 0.15 | 2100 | 3.4455 | 1.0 |
| 4.1281 | 0.16 | 2200 | 3.4447 | 1.0 |
| 3.5956 | 0.17 | 2300 | 3.4512 | 1.0 |
| 3.6312 | 0.17 | 2400 | 3.4484 | 1.0 |
| 4.5383 | 0.18 | 2500 | 3.4435 | 1.0 |
| 6.1329 | 0.19 | 2600 | 3.4530 | 1.0 |
| 3.709 | 0.2 | 2700 | 3.4466 | 1.0 |
| 3.289 | 0.2 | 2800 | 3.4463 | 1.0 |
| 4.3301 | 0.21 | 2900 | 3.4418 | 1.0 |
| 4.6656 | 0.22 | 3000 | 3.4447 | 1.0 |
| 3.4288 | 0.22 | 3100 | 3.4715 | 1.0 |
| 3.5506 | 0.23 | 3200 | 3.4437 | 1.0 |
| 3.7497 | 0.24 | 3300 | 3.4910 | 1.0 |
| 3.5198 | 0.25 | 3400 | 3.4574 | 1.0 |
| 3.4183 | 0.25 | 3500 | 3.4607 | 1.0 |
| 4.5573 | 0.26 | 3600 | 3.4421 | 1.0 |
| 3.5737 | 0.27 | 3700 | 3.4481 | 1.0 |
| 4.9008 | 0.28 | 3800 | 3.4411 | 1.0 |
| 4.8725 | 0.28 | 3900 | 3.4422 | 1.0 |
| 3.5799 | 0.29 | 4000 | 3.4659 | 1.0 |
| 3.3257 | 0.3 | 4100 | 3.4519 | 1.0 |
| 3.6887 | 0.3 | 4200 | 3.4827 | 1.0 |
| 3.3037 | 0.31 | 4300 | 3.4632 | 1.0 |
| 5.5543 | 0.32 | 4400 | 3.4480 | 1.0 |
| 3.2898 | 0.33 | 4500 | 3.4404 | 1.0 |
| 3.2794 | 0.33 | 4600 | 3.4633 | 1.0 |
| 3.7896 | 0.34 | 4700 | 3.4439 | 1.0 |
| 3.6662 | 0.35 | 4800 | 3.4587 | 1.0 |
| 3.588 | 0.35 | 4900 | 3.4520 | 1.0 |
| 4.0535 | 0.36 | 5000 | 3.4450 | 1.0 |
| 3.4335 | 0.37 | 5100 | 3.4577 | 1.0 |
| 3.6317 | 0.38 | 5200 | 3.4443 | 1.0 |
| 5.2564 | 0.38 | 5300 | 3.4505 | 1.0 |
| 3.8781 | 0.39 | 5400 | 3.4418 | 1.0 |
| 4.6269 | 0.4 | 5500 | 3.4425 | 1.0 |
| 3.6095 | 0.41 | 5600 | 3.4581 | 1.0 |
| 4.6164 | 0.41 | 5700 | 3.4404 | 1.0 |
| 3.117 | 0.42 | 5800 | 3.4596 | 1.0 |
| 4.3939 | 0.43 | 5900 | 3.4401 | 1.0 |
| 3.5856 | 0.43 | 6000 | 3.4413 | 1.0 |
| 3.5187 | 0.44 | 6100 | 3.4452 | 1.0 |
| 4.7991 | 0.45 | 6200 | 3.4481 | 1.0 |
| 3.3905 | 0.46 | 6300 | 3.4420 | 1.0 |
| 3.5086 | 0.46 | 6400 | 3.4494 | 1.0 |
| 4.8217 | 0.47 | 6500 | 3.4477 | 1.0 |
| 3.3193 | 0.48 | 6600 | 3.4382 | 1.0 |
| 5.3482 | 0.49 | 6700 | 3.4580 | 1.0 |
| 3.3947 | 0.49 | 6800 | 3.4767 | 1.0 |
| 6.3352 | 0.5 | 6900 | 3.4476 | 1.0 |
| 3.4448 | 0.51 | 7000 | 3.4557 | 1.0 |
| 3.5358 | 0.51 | 7100 | 3.4438 | 1.0 |
| 3.3499 | 0.52 | 7200 | 3.4445 | 1.0 |
| 3.6932 | 0.53 | 7300 | 3.4463 | 1.0 |
| 6.9058 | 0.54 | 7400 | 3.4482 | 1.0 |
| 4.5514 | 0.54 | 7500 | 3.4422 | 1.0 |
| 3.517 | 0.55 | 7600 | 3.4505 | 1.0 |
| 7.4479 | 0.56 | 7700 | 3.4461 | 1.0 |
| 3.3761 | 0.56 | 7800 | 3.4511 | 1.0 |
| 4.5925 | 0.57 | 7900 | 3.4389 | 1.0 |
| 5.2682 | 0.58 | 8000 | 3.4563 | 1.0 |
| 5.6748 | 0.59 | 8100 | 3.4601 | 1.0 |
| 4.4335 | 0.59 | 8200 | 3.4439 | 1.0 |
| 5.1686 | 0.6 | 8300 | 3.4444 | 1.0 |
| 3.5245 | 0.61 | 8400 | 3.4629 | 1.0 |
| 4.9426 | 0.62 | 8500 | 3.4389 | 1.0 |
| 4.4654 | 0.62 | 8600 | 3.4427 | 1.0 |
| 3.5626 | 0.63 | 8700 | 3.4521 | 1.0 |
| 4.7086 | 0.64 | 8800 | 3.4489 | 1.0 |
| 3.238 | 0.64 | 8900 | 3.4478 | 1.0 |
| 4.2738 | 0.65 | 9000 | 3.4510 | 1.0 |
| 3.4468 | 0.66 | 9100 | 3.4411 | 1.0 |
| 3.2292 | 0.67 | 9200 | 3.4416 | 1.0 |
| 3.4972 | 0.67 | 9300 | 3.4643 | 1.0 |
| 7.3434 | 0.68 | 9400 | 3.4587 | 1.0 |
| 3.708 | 0.69 | 9500 | 3.4799 | 1.0 |
| 4.6466 | 0.69 | 9600 | 3.4490 | 1.0 |
| 3.3347 | 0.7 | 9700 | 3.4532 | 1.0 |
| 5.1486 | 0.71 | 9800 | 3.4427 | 1.0 |
| 3.6456 | 0.72 | 9900 | 3.4492 | 1.0 |
| 5.3904 | 0.72 | 10000 | 3.4497 | 1.0 |
| 4.8832 | 0.73 | 10100 | 3.4476 | 1.0 |
| 3.4482 | 0.74 | 10200 | 3.4539 | 1.0 |
| 3.617 | 0.75 | 10300 | 3.4547 | 1.0 |
| 5.4691 | 0.75 | 10400 | 3.4663 | 1.0 |
| 4.2759 | 0.76 | 10500 | 3.4401 | 1.0 |
| 8.2106 | 0.77 | 10600 | 3.4404 | 1.0 |
| 3.4894 | 0.77 | 10700 | 3.4426 | 1.0 |
| 3.6875 | 0.78 | 10800 | 3.4439 | 1.0 |
| 3.3277 | 0.79 | 10900 | 3.4446 | 1.0 |
| 4.5175 | 0.8 | 11000 | 3.4456 | 1.0 |
| 5.2161 | 0.8 | 11100 | 3.4388 | 1.0 |
| 3.5234 | 0.81 | 11200 | 3.4418 | 1.0 |
| 4.2212 | 0.82 | 11300 | 3.4392 | 1.0 |
| 3.6923 | 0.83 | 11400 | 3.4494 | 1.0 |
| 3.4863 | 0.83 | 11500 | 3.4572 | 1.0 |
| 6.3201 | 0.84 | 11600 | 3.4377 | 1.0 |
| 3.7543 | 0.85 | 11700 | 3.4533 | 1.0 |
| 3.3959 | 0.85 | 11800 | 3.4600 | 1.0 |
| 3.5691 | 0.86 | 11900 | 3.4673 | 1.0 |
| 3.49 | 0.87 | 12000 | 3.4407 | 1.0 |
| 7.1165 | 0.88 | 12100 | 3.4427 | 1.0 |
| 6.731 | 0.88 | 12200 | 3.4394 | 1.0 |
| 4.4682 | 0.89 | 12300 | 3.4407 | 1.0 |
| 3.3696 | 0.9 | 12400 | 3.4415 | 1.0 |
| 4.0241 | 0.9 | 12500 | 3.4454 | 1.0 |
| 3.521 | 0.91 | 12600 | 3.4379 | 1.0 |
| 5.5273 | 0.92 | 12700 | 3.4423 | 1.0 |
| 3.4781 | 0.93 | 12800 | 3.4635 | 1.0 |
| 3.4542 | 0.93 | 12900 | 3.4411 | 1.0 |
| 3.2363 | 0.94 | 13000 | 3.4396 | 1.0 |
| 5.3009 | 0.95 | 13100 | 3.4458 | 1.0 |
| 3.498 | 0.96 | 13200 | 3.4398 | 1.0 |
| 6.3325 | 0.96 | 13300 | 3.4514 | 1.0 |
| 3.5368 | 0.97 | 13400 | 3.4437 | 1.0 |
| 5.1164 | 0.98 | 13500 | 3.4623 | 1.0 |
| 3.6144 | 0.98 | 13600 | 3.4512 | 1.0 |
| 6.6018 | 0.99 | 13700 | 3.4493 | 1.0 |
| 3.7539 | 1.0 | 13800 | 3.4597 | 1.0 |
| 3.2903 | 1.01 | 13900 | 3.4813 | 1.0 |
| 3.3243 | 1.01 | 14000 | 3.4510 | 1.0 |
| 3.3485 | 1.02 | 14100 | 3.4389 | 1.0 |
| 3.6197 | 1.03 | 14200 | 3.4519 | 1.0 |
| 3.322 | 1.04 | 14300 | 3.4399 | 1.0 |
| 3.2897 | 1.04 | 14400 | 3.4378 | 1.0 |
| 3.3969 | 1.05 | 14500 | 3.4476 | 1.0 |
| 3.3289 | 1.06 | 14600 | 3.4646 | 1.0 |
| 3.3556 | 1.06 | 14700 | 3.4520 | 1.0 |
| 3.2527 | 1.07 | 14800 | 3.4575 | 1.0 |
| 3.4003 | 1.08 | 14900 | 3.4443 | 1.0 |
| 3.3171 | 1.09 | 15000 | 3.4434 | 1.0 |
| 3.4034 | 1.09 | 15100 | 3.4448 | 1.0 |
| 3.4363 | 1.1 | 15200 | 3.4560 | 1.0 |
| 3.3969 | 1.11 | 15300 | 3.4405 | 1.0 |
| 3.4134 | 1.11 | 15400 | 3.4408 | 1.0 |
| 3.5059 | 1.12 | 15500 | 3.4395 | 1.0 |
| 3.3963 | 1.13 | 15600 | 3.4488 | 1.0 |
| 3.2937 | 1.14 | 15700 | 3.4482 | 1.0 |
| 3.5635 | 1.14 | 15800 | 3.4621 | 1.0 |
| 3.4463 | 1.15 | 15900 | 3.4433 | 1.0 |
| 3.2588 | 1.16 | 16000 | 3.4434 | 1.0 |
| 3.3617 | 1.17 | 16100 | 3.4542 | 1.0 |
| 3.3721 | 1.17 | 16200 | 3.4388 | 1.0 |
| 3.3867 | 1.18 | 16300 | 3.4577 | 1.0 |
| 3.34 | 1.19 | 16400 | 3.4510 | 1.0 |
| 3.3676 | 1.19 | 16500 | 3.4434 | 1.0 |
| 3.5519 | 1.2 | 16600 | 3.4410 | 1.0 |
| 3.3129 | 1.21 | 16700 | 3.4507 | 1.0 |
| 3.3368 | 1.22 | 16800 | 3.4718 | 1.0 |
| 3.3107 | 1.22 | 16900 | 3.4439 | 1.0 |
| 3.2987 | 1.23 | 17000 | 3.4471 | 1.0 |
| 3.3102 | 1.24 | 17100 | 3.4435 | 1.0 |
| 3.2089 | 1.25 | 17200 | 3.4432 | 1.0 |
| 3.415 | 1.25 | 17300 | 3.4472 | 1.0 |
| 3.2884 | 1.26 | 17400 | 3.4388 | 1.0 |
| 3.3837 | 1.27 | 17500 | 3.4444 | 1.0 |
| 3.3181 | 1.27 | 17600 | 3.4438 | 1.0 |
| 3.3071 | 1.28 | 17700 | 3.4406 | 1.0 |
| 3.389 | 1.29 | 17800 | 3.4573 | 1.0 |
| 3.3246 | 1.3 | 17900 | 3.4580 | 1.0 |
| 3.3122 | 1.3 | 18000 | 3.4455 | 1.0 |
| 3.282 | 1.31 | 18100 | 3.4606 | 1.0 |
| 3.2671 | 1.32 | 18200 | 3.4378 | 1.0 |
| 3.3441 | 1.32 | 18300 | 3.4432 | 1.0 |
| 3.3115 | 1.33 | 18400 | 3.4458 | 1.0 |
| 3.3542 | 1.34 | 18500 | 3.4617 | 1.0 |
| 3.3924 | 1.35 | 18600 | 3.4549 | 1.0 |
| 3.4895 | 1.35 | 18700 | 3.4557 | 1.0 |
| 3.4071 | 1.36 | 18800 | 3.4462 | 1.0 |
| 3.3373 | 1.37 | 18900 | 3.4606 | 1.0 |
| 3.3497 | 1.38 | 19000 | 3.4458 | 1.0 |
| 3.3088 | 1.38 | 19100 | 3.4712 | 1.0 |
| 3.333 | 1.39 | 19200 | 3.4483 | 1.0 |
| 3.3773 | 1.4 | 19300 | 3.4455 | 1.0 |
| 3.357 | 1.4 | 19400 | 3.4379 | 1.0 |
| 3.3506 | 1.41 | 19500 | 3.4477 | 1.0 |
| 3.2944 | 1.42 | 19600 | 3.4478 | 1.0 |
| 3.241 | 1.43 | 19700 | 3.4492 | 1.0 |
| 3.4317 | 1.43 | 19800 | 3.4441 | 1.0 |
| 3.3478 | 1.44 | 19900 | 3.4385 | 1.0 |
| 3.3952 | 1.45 | 20000 | 3.4437 | 1.0 |
| 3.4808 | 1.46 | 20100 | 3.4644 | 1.0 |
| 3.3625 | 1.46 | 20200 | 3.4529 | 1.0 |
| 3.4842 | 1.47 | 20300 | 3.4524 | 1.0 |
| 3.3887 | 1.48 | 20400 | 3.4551 | 1.0 |
| 3.3198 | 1.48 | 20500 | 3.4433 | 1.0 |
| 3.3397 | 1.49 | 20600 | 3.4448 | 1.0 |
| 3.3173 | 1.5 | 20700 | 3.4590 | 1.0 |
| 3.3687 | 1.51 | 20800 | 3.4720 | 1.0 |
| 3.257 | 1.51 | 20900 | 3.4461 | 1.0 |
| 3.4451 | 1.52 | 21000 | 3.4541 | 1.0 |
| 3.2979 | 1.53 | 21100 | 3.4556 | 1.0 |
| 3.3566 | 1.53 | 21200 | 3.4438 | 1.0 |
| 3.3466 | 1.54 | 21300 | 3.4422 | 1.0 |
| 3.308 | 1.55 | 21400 | 3.4637 | 1.0 |
| 3.3952 | 1.56 | 21500 | 3.4435 | 1.0 |
| 3.4009 | 1.56 | 21600 | 3.4434 | 1.0 |
| 3.7952 | 1.57 | 21700 | 3.4675 | 1.0 |
| 3.3891 | 1.58 | 21800 | 3.4565 | 1.0 |
| 3.31 | 1.59 | 21900 | 3.4538 | 1.0 |
| 3.3186 | 1.59 | 22000 | 3.4492 | 1.0 |
| 3.3512 | 1.6 | 22100 | 3.4381 | 1.0 |
| 3.309 | 1.61 | 22200 | 3.4558 | 1.0 |
| 3.597 | 1.61 | 22300 | 3.4484 | 1.0 |
| 3.4474 | 1.62 | 22400 | 3.4574 | 1.0 |
| 3.3316 | 1.63 | 22500 | 3.4498 | 1.0 |
| 3.3909 | 1.64 | 22600 | 3.4384 | 1.0 |
| 3.6999 | 1.64 | 22700 | 3.4503 | 1.0 |
| 3.6071 | 1.65 | 22800 | 3.4578 | 1.0 |
| 3.2812 | 1.66 | 22900 | 3.4563 | 1.0 |
| 3.2921 | 1.67 | 23000 | 3.4564 | 1.0 |
| 3.3291 | 1.67 | 23100 | 3.4490 | 1.0 |
| 3.3454 | 1.68 | 23200 | 3.4403 | 1.0 |
| 3.4212 | 1.69 | 23300 | 3.4409 | 1.0 |
| 3.5481 | 1.69 | 23400 | 3.4534 | 1.0 |
| 3.2784 | 1.7 | 23500 | 3.4486 | 1.0 |
| 3.4625 | 1.71 | 23600 | 3.4413 | 1.0 |
| 3.2427 | 1.72 | 23700 | 3.4694 | 1.0 |
| 3.8438 | 1.72 | 23800 | 3.4444 | 1.0 |
| 3.4009 | 1.73 | 23900 | 3.4505 | 1.0 |
| 3.8029 | 1.74 | 24000 | 3.4712 | 1.0 |
| 3.36 | 1.74 | 24100 | 3.4552 | 1.0 |
| 3.2751 | 1.75 | 24200 | 3.4511 | 1.0 |
| 3.309 | 1.76 | 24300 | 3.4368 | 1.0 |
| 3.4597 | 1.77 | 24400 | 3.4517 | 1.0 |
| 3.2812 | 1.77 | 24500 | 3.4475 | 1.0 |
| 3.4425 | 1.78 | 24600 | 3.4413 | 1.0 |
| 3.3968 | 1.79 | 24700 | 3.4482 | 1.0 |
| 3.35 | 1.8 | 24800 | 3.4473 | 1.0 |
| 3.3156 | 1.8 | 24900 | 3.4435 | 1.0 |
| 3.3008 | 1.81 | 25000 | 3.4439 | 1.0 |
| 3.3365 | 1.82 | 25100 | 3.4382 | 1.0 |
| 3.5473 | 1.82 | 25200 | 3.4396 | 1.0 |
| 3.3568 | 1.83 | 25300 | 3.4577 | 1.0 |
| 3.28 | 1.84 | 25400 | 3.4458 | 1.0 |
| 3.4389 | 1.85 | 25500 | 3.4436 | 1.0 |
| 3.345 | 1.85 | 25600 | 3.4435 | 1.0 |
| 3.3295 | 1.86 | 25700 | 3.4428 | 1.0 |
| 4.4622 | 1.87 | 25800 | 3.4638 | 1.0 |
| 3.3717 | 1.88 | 25900 | 3.4450 | 1.0 |
| 3.3 | 1.88 | 26000 | 3.4616 | 1.0 |
| 3.3399 | 1.89 | 26100 | 3.4391 | 1.0 |
| 3.4243 | 1.9 | 26200 | 3.4375 | 1.0 |
| 3.326 | 1.9 | 26300 | 3.4533 | 1.0 |
| 3.3337 | 1.91 | 26400 | 3.4538 | 1.0 |
| 3.2655 | 1.92 | 26500 | 3.4460 | 1.0 |
| 3.2963 | 1.93 | 26600 | 3.4443 | 1.0 |
| 3.3967 | 1.93 | 26700 | 3.4392 | 1.0 |
| 3.3203 | 1.94 | 26800 | 3.4609 | 1.0 |
| 3.4581 | 1.95 | 26900 | 3.4388 | 1.0 |
| 3.2519 | 1.95 | 27000 | 3.4434 | 1.0 |
| 3.488 | 1.96 | 27100 | 3.4653 | 1.0 |
| 3.3446 | 1.97 | 27200 | 3.4465 | 1.0 |
| 3.4035 | 1.98 | 27300 | 3.4535 | 1.0 |
| 3.2898 | 1.98 | 27400 | 3.4442 | 1.0 |
| 3.3309 | 1.99 | 27500 | 3.4491 | 1.0 |
| 3.2765 | 2.0 | 27600 | 3.4477 | 1.0 |
| 3.3352 | 2.01 | 27700 | 3.4540 | 1.0 |
| 3.4456 | 2.01 | 27800 | 3.4602 | 1.0 |
| 3.6378 | 2.02 | 27900 | 3.4578 | 1.0 |
| 6.4491 | 2.03 | 28000 | 3.4494 | 1.0 |
| 6.1705 | 2.03 | 28100 | 3.4570 | 1.0 |
| 3.4253 | 2.04 | 28200 | 3.4504 | 1.0 |
| 3.4053 | 2.05 | 28300 | 3.4399 | 1.0 |
| 3.6719 | 2.06 | 28400 | 3.4464 | 1.0 |
| 3.2769 | 2.06 | 28500 | 3.4473 | 1.0 |
| 3.3132 | 2.07 | 28600 | 3.4484 | 1.0 |
| 3.3756 | 2.08 | 28700 | 3.4413 | 1.0 |
| 5.5583 | 2.08 | 28800 | 3.4411 | 1.0 |
| 3.6191 | 2.09 | 28900 | 3.4406 | 1.0 |
| 3.4681 | 2.1 | 29000 | 3.4461 | 1.0 |
| 4.463 | 2.11 | 29100 | 3.4409 | 1.0 |
| 3.4645 | 2.11 | 29200 | 3.4556 | 1.0 |
| 3.6549 | 2.12 | 29300 | 3.4545 | 1.0 |
| 3.437 | 2.13 | 29400 | 3.4410 | 1.0 |
| 3.5002 | 2.14 | 29500 | 3.4370 | 1.0 |
| 3.4375 | 2.14 | 29600 | 3.4407 | 1.0 |
| 3.3798 | 2.15 | 29700 | 3.4390 | 1.0 |
| 3.6778 | 2.16 | 29800 | 3.4386 | 1.0 |
| 3.4647 | 2.16 | 29900 | 3.4600 | 1.0 |
| 3.4328 | 2.17 | 30000 | 3.4492 | 1.0 |
| 3.4381 | 2.18 | 30100 | 3.4406 | 1.0 |
| 3.3253 | 2.19 | 30200 | 3.4461 | 1.0 |
| 3.4112 | 2.19 | 30300 | 3.4478 | 1.0 |
| 3.6158 | 2.2 | 30400 | 3.4482 | 1.0 |
| 3.5541 | 2.21 | 30500 | 3.4424 | 1.0 |
| 4.3339 | 2.22 | 30600 | 3.4432 | 1.0 |
| 3.818 | 2.22 | 30700 | 3.4453 | 1.0 |
| 3.8914 | 2.23 | 30800 | 3.4457 | 1.0 |
| 5.5706 | 2.24 | 30900 | 3.4605 | 1.0 |
| 4.3359 | 2.24 | 31000 | 3.4700 | 1.0 |
| 3.6418 | 2.25 | 31100 | 3.4558 | 1.0 |
| 3.4288 | 2.26 | 31200 | 3.4396 | 1.0 |
| 3.4512 | 2.27 | 31300 | 3.4411 | 1.0 |
| 3.3326 | 2.27 | 31400 | 3.4473 | 1.0 |
| 3.5872 | 2.28 | 31500 | 3.4400 | 1.0 |
| 3.5426 | 2.29 | 31600 | 3.4469 | 1.0 |
| 4.2227 | 2.29 | 31700 | 3.4499 | 1.0 |
| 3.5461 | 2.3 | 31800 | 3.4388 | 1.0 |
| 3.5507 | 2.31 | 31900 | 3.4503 | 1.0 |
| 3.5177 | 2.32 | 32000 | 3.4429 | 1.0 |
| 3.7237 | 2.32 | 32100 | 3.4617 | 1.0 |
| 3.3513 | 2.33 | 32200 | 3.4487 | 1.0 |
| 3.3827 | 2.34 | 32300 | 3.4678 | 1.0 |
| 3.3311 | 2.35 | 32400 | 3.4441 | 1.0 |
| 3.2852 | 2.35 | 32500 | 3.4433 | 1.0 |
| 3.5712 | 2.36 | 32600 | 3.4514 | 1.0 |
| 4.6259 | 2.37 | 32700 | 3.4520 | 1.0 |
| 3.8864 | 2.37 | 32800 | 3.4544 | 1.0 |
| 3.3284 | 2.38 | 32900 | 3.4444 | 1.0 |
| 3.6078 | 2.39 | 33000 | 3.4450 | 1.0 |
| 3.4026 | 2.4 | 33100 | 3.4454 | 1.0 |
| 3.7527 | 2.4 | 33200 | 3.4541 | 1.0 |
| 3.3741 | 2.41 | 33300 | 3.4386 | 1.0 |
| 3.4498 | 2.42 | 33400 | 3.4518 | 1.0 |
| 3.3424 | 2.43 | 33500 | 3.4554 | 1.0 |
| 4.8226 | 2.43 | 33600 | 3.4412 | 1.0 |
| 3.3503 | 2.44 | 33700 | 3.4434 | 1.0 |
| 3.509 | 2.45 | 33800 | 3.4393 | 1.0 |
| 3.586 | 2.45 | 33900 | 3.4375 | 1.0 |
| 3.5242 | 2.46 | 34000 | 3.4402 | 1.0 |
| 3.4351 | 2.47 | 34100 | 3.4389 | 1.0 |
| 3.4445 | 2.48 | 34200 | 3.4416 | 1.0 |
| 6.6676 | 2.48 | 34300 | 3.4571 | 1.0 |
| 4.3937 | 2.49 | 34400 | 3.4560 | 1.0 |
| 3.4177 | 2.5 | 34500 | 3.4482 | 1.0 |
| 3.3966 | 2.5 | 34600 | 3.4640 | 1.0 |
| 3.2845 | 2.51 | 34700 | 3.4538 | 1.0 |
| 3.438 | 2.52 | 34800 | 3.4555 | 1.0 |
| 3.3874 | 2.53 | 34900 | 3.4524 | 1.0 |
| 3.5068 | 2.53 | 35000 | 3.4448 | 1.0 |
| 4.2406 | 2.54 | 35100 | 3.4503 | 1.0 |
| 3.2986 | 2.55 | 35200 | 3.4538 | 1.0 |
| 3.4044 | 2.56 | 35300 | 3.4443 | 1.0 |
| 3.3105 | 2.56 | 35400 | 3.4391 | 1.0 |
| 3.4048 | 2.57 | 35500 | 3.4411 | 1.0 |
| 3.5645 | 2.58 | 35600 | 3.4488 | 1.0 |
| 3.4912 | 2.58 | 35700 | 3.4400 | 1.0 |
| 3.4028 | 2.59 | 35800 | 3.4390 | 1.0 |
| 3.4601 | 2.6 | 35900 | 3.4455 | 1.0 |
| 3.6066 | 2.61 | 36000 | 3.4441 | 1.0 |
| 4.5312 | 2.61 | 36100 | 3.4414 | 1.0 |
| 3.6372 | 2.62 | 36200 | 3.4421 | 1.0 |
| 4.1912 | 2.63 | 36300 | 3.4572 | 1.0 |
| 3.4793 | 2.64 | 36400 | 3.4419 | 1.0 |
| 4.5538 | 2.64 | 36500 | 3.4407 | 1.0 |
| 3.3823 | 2.65 | 36600 | 3.4446 | 1.0 |
| 3.3592 | 2.66 | 36700 | 3.4396 | 1.0 |
| 3.4974 | 2.66 | 36800 | 3.4529 | 1.0 |
| 3.4599 | 2.67 | 36900 | 3.4380 | 1.0 |
| 4.7097 | 2.68 | 37000 | 3.4654 | 1.0 |
| 6.7037 | 2.69 | 37100 | 3.4386 | 1.0 |
| 3.3465 | 2.69 | 37200 | 3.4652 | 1.0 |
| 4.9762 | 2.7 | 37300 | 3.4506 | 1.0 |
| 3.9189 | 2.71 | 37400 | 3.4427 | 1.0 |
| 3.4746 | 2.71 | 37500 | 3.4465 | 1.0 |
| 3.3842 | 2.72 | 37600 | 3.4470 | 1.0 |
| 3.2445 | 2.73 | 37700 | 3.4480 | 1.0 |
| 3.382 | 2.74 | 37800 | 3.4456 | 1.0 |
| 3.7279 | 2.74 | 37900 | 3.4431 | 1.0 |
| 3.4329 | 2.75 | 38000 | 3.4374 | 1.0 |
| 3.4607 | 2.76 | 38100 | 3.4447 | 1.0 |
| 3.2394 | 2.77 | 38200 | 3.4476 | 1.0 |
| 3.7795 | 2.77 | 38300 | 3.4380 | 1.0 |
| 3.4419 | 2.78 | 38400 | 3.4526 | 1.0 |
| 3.6452 | 2.79 | 38500 | 3.4428 | 1.0 |
| 3.3474 | 2.79 | 38600 | 3.4424 | 1.0 |
| 3.4645 | 2.8 | 38700 | 3.4479 | 1.0 |
| 4.1143 | 2.81 | 38800 | 3.4580 | 1.0 |
| 4.6453 | 2.82 | 38900 | 3.4585 | 1.0 |
| 4.022 | 2.82 | 39000 | 3.4567 | 1.0 |
| 4.3049 | 2.83 | 39100 | 3.4377 | 1.0 |
| 3.3382 | 2.84 | 39200 | 3.4413 | 1.0 |
| 3.6022 | 2.85 | 39300 | 3.4548 | 1.0 |
| 4.4217 | 2.85 | 39400 | 3.4411 | 1.0 |
| 3.5139 | 2.86 | 39500 | 3.4552 | 1.0 |
| 3.1215 | 2.87 | 39600 | 3.4471 | 1.0 |
| 3.4514 | 2.87 | 39700 | 3.4378 | 1.0 |
| 4.822 | 2.88 | 39800 | 3.4605 | 1.0 |
| 5.6699 | 2.89 | 39900 | 3.4489 | 1.0 |
| 3.4183 | 2.9 | 40000 | 3.4644 | 1.0 |
| 5.7492 | 2.9 | 40100 | 3.4514 | 1.0 |
| 3.2879 | 2.91 | 40200 | 3.4543 | 1.0 |
| 3.3076 | 2.92 | 40300 | 3.4450 | 1.0 |
| 5.2845 | 2.92 | 40400 | 3.4459 | 1.0 |
| 3.7927 | 2.93 | 40500 | 3.4481 | 1.0 |
| 7.1549 | 2.94 | 40600 | 3.4554 | 1.0 |
| 3.4544 | 2.95 | 40700 | 3.4486 | 1.0 |
| 3.2332 | 2.95 | 40800 | 3.4415 | 1.0 |
| 3.3714 | 2.96 | 40900 | 3.4521 | 1.0 |
| 3.5205 | 2.97 | 41000 | 3.4395 | 1.0 |
| 4.6267 | 2.98 | 41100 | 3.4622 | 1.0 |
| 6.7747 | 2.98 | 41200 | 3.4407 | 1.0 |
| 3.3091 | 2.99 | 41300 | 3.4422 | 1.0 |
| 3.7135 | 3.0 | 41400 | 3.4383 | 1.0 |
| 3.6261 | 3.0 | 41500 | 3.4482 | 1.0 |
| 3.3323 | 3.01 | 41600 | 3.4366 | 1.0 |
| 3.4544 | 3.02 | 41700 | 3.4376 | 1.0 |
| 3.6486 | 3.03 | 41800 | 3.4511 | 1.0 |
| 3.3333 | 3.03 | 41900 | 3.4397 | 1.0 |
| 3.35 | 3.04 | 42000 | 3.4486 | 1.0 |
| 3.3522 | 3.05 | 42100 | 3.4626 | 1.0 |
| 3.4359 | 3.06 | 42200 | 3.4462 | 1.0 |
| 3.4548 | 3.06 | 42300 | 3.4435 | 1.0 |
| 3.2711 | 3.07 | 42400 | 3.4450 | 1.0 |
| 3.2679 | 3.08 | 42500 | 3.4394 | 1.0 |
| 3.3703 | 3.08 | 42600 | 3.4539 | 1.0 |
| 3.3846 | 3.09 | 42700 | 3.4443 | 1.0 |
| 3.334 | 3.1 | 42800 | 3.4384 | 1.0 |
| 3.3429 | 3.11 | 42900 | 3.4625 | 1.0 |
| 3.282 | 3.11 | 43000 | 3.4419 | 1.0 |
| 3.3503 | 3.12 | 43100 | 3.4653 | 1.0 |
| 3.4923 | 3.13 | 43200 | 3.4380 | 1.0 |
| 3.4309 | 3.13 | 43300 | 3.4534 | 1.0 |
| 3.3292 | 3.14 | 43400 | 3.4448 | 1.0 |
| 3.4219 | 3.15 | 43500 | 3.4665 | 1.0 |
| 3.3848 | 3.16 | 43600 | 3.4473 | 1.0 |
| 3.3004 | 3.16 | 43700 | 3.4509 | 1.0 |
| 3.2002 | 3.17 | 43800 | 3.4493 | 1.0 |
| 3.2654 | 3.18 | 43900 | 3.4384 | 1.0 |
| 3.3394 | 3.19 | 44000 | 3.4388 | 1.0 |
| 3.2365 | 3.19 | 44100 | 3.4491 | 1.0 |
| 3.2846 | 3.2 | 44200 | 3.4404 | 1.0 |
| 3.3973 | 3.21 | 44300 | 3.4426 | 1.0 |
| 3.3367 | 3.21 | 44400 | 3.4690 | 1.0 |
| 3.2747 | 3.22 | 44500 | 3.4378 | 1.0 |
| 3.4307 | 3.23 | 44600 | 3.4395 | 1.0 |
| 3.3685 | 3.24 | 44700 | 3.4431 | 1.0 |
| 3.321 | 3.24 | 44800 | 3.4557 | 1.0 |
| 3.3541 | 3.25 | 44900 | 3.4489 | 1.0 |
| 3.2282 | 3.26 | 45000 | 3.4393 | 1.0 |
| 3.3811 | 3.27 | 45100 | 3.4463 | 1.0 |
| 3.3014 | 3.27 | 45200 | 3.4505 | 1.0 |
| 3.3617 | 3.28 | 45300 | 3.4475 | 1.0 |
| 3.3953 | 3.29 | 45400 | 3.4430 | 1.0 |
| 3.3999 | 3.29 | 45500 | 3.4417 | 1.0 |
| 3.4098 | 3.3 | 45600 | 3.4503 | 1.0 |
| 3.1994 | 3.31 | 45700 | 3.4414 | 1.0 |
| 3.2185 | 3.32 | 45800 | 3.4485 | 1.0 |
| 3.2554 | 3.32 | 45900 | 3.4477 | 1.0 |
| 3.4302 | 3.33 | 46000 | 3.4508 | 1.0 |
| 3.366 | 3.34 | 46100 | 3.4440 | 1.0 |
| 3.4143 | 3.34 | 46200 | 3.4382 | 1.0 |
| 4.318 | 3.35 | 46300 | 3.4524 | 1.0 |
| 3.4233 | 3.36 | 46400 | 3.4451 | 1.0 |
| 3.3492 | 3.37 | 46500 | 3.4526 | 1.0 |
| 3.2399 | 3.37 | 46600 | 3.4462 | 1.0 |
| 3.421 | 3.38 | 46700 | 3.4432 | 1.0 |
| 3.2847 | 3.39 | 46800 | 3.4419 | 1.0 |
| 3.4062 | 3.4 | 46900 | 3.4405 | 1.0 |
| 3.3822 | 3.4 | 47000 | 3.4434 | 1.0 |
| 3.2789 | 3.41 | 47100 | 3.4444 | 1.0 |
| 3.2508 | 3.42 | 47200 | 3.4501 | 1.0 |
| 3.3867 | 3.42 | 47300 | 3.4498 | 1.0 |
| 3.3275 | 3.43 | 47400 | 3.4505 | 1.0 |
| 3.424 | 3.44 | 47500 | 3.4448 | 1.0 |
| 3.2418 | 3.45 | 47600 | 3.4450 | 1.0 |
| 3.3037 | 3.45 | 47700 | 3.4493 | 1.0 |
| 3.2562 | 3.46 | 47800 | 3.4466 | 1.0 |
| 3.3241 | 3.47 | 47900 | 3.4385 | 1.0 |
| 3.5569 | 3.47 | 48000 | 3.4427 | 1.0 |
| 3.298 | 3.48 | 48100 | 3.4667 | 1.0 |
| 3.3401 | 3.49 | 48200 | 3.4440 | 1.0 |
| 3.2824 | 3.5 | 48300 | 3.4427 | 1.0 |
| 3.3829 | 3.5 | 48400 | 3.4398 | 1.0 |
| 3.3595 | 3.51 | 48500 | 3.4421 | 1.0 |
| 3.286 | 3.52 | 48600 | 3.4517 | 1.0 |
| 3.3494 | 3.53 | 48700 | 3.4429 | 1.0 |
| 3.3507 | 3.53 | 48800 | 3.4422 | 1.0 |
| 3.3598 | 3.54 | 48900 | 3.4439 | 1.0 |
| 3.3141 | 3.55 | 49000 | 3.4544 | 1.0 |
| 3.4548 | 3.55 | 49100 | 3.4415 | 1.0 |
| 3.3278 | 3.56 | 49200 | 3.4474 | 1.0 |
| 3.4088 | 3.57 | 49300 | 3.4498 | 1.0 |
| 3.4046 | 3.58 | 49400 | 3.4554 | 1.0 |
| 3.2847 | 3.58 | 49500 | 3.4393 | 1.0 |
| 3.3162 | 3.59 | 49600 | 3.4594 | 1.0 |
| 3.2493 | 3.6 | 49700 | 3.4514 | 1.0 |
| 3.3466 | 3.61 | 49800 | 3.4514 | 1.0 |
| 3.3279 | 3.61 | 49900 | 3.4462 | 1.0 |
| 3.29 | 3.62 | 50000 | 3.4466 | 1.0 |
| 3.2374 | 3.63 | 50100 | 3.4575 | 1.0 |
| 3.3499 | 3.63 | 50200 | 3.4392 | 1.0 |
| 3.251 | 3.64 | 50300 | 3.4556 | 1.0 |
| 3.3692 | 3.65 | 50400 | 3.4498 | 1.0 |
| 3.3743 | 3.66 | 50500 | 3.4569 | 1.0 |
| 3.3662 | 3.66 | 50600 | 3.4463 | 1.0 |
| 3.302 | 3.67 | 50700 | 3.4445 | 1.0 |
| 3.2863 | 3.68 | 50800 | 3.4475 | 1.0 |
| 3.4266 | 3.68 | 50900 | 3.4370 | 1.0 |
| 3.2988 | 3.69 | 51000 | 3.4476 | 1.0 |
| 3.9581 | 3.7 | 51100 | 3.4382 | 1.0 |
| 3.4516 | 3.71 | 51200 | 3.4526 | 1.0 |
| 3.4259 | 3.71 | 51300 | 3.4414 | 1.0 |
| 3.3913 | 3.72 | 51400 | 3.4386 | 1.0 |
| 3.3606 | 3.73 | 51500 | 3.4458 | 1.0 |
| 3.4698 | 3.74 | 51600 | 3.4450 | 1.0 |
| 3.4285 | 3.74 | 51700 | 3.4493 | 1.0 |
| 3.265 | 3.75 | 51800 | 3.4369 | 1.0 |
| 3.4819 | 3.76 | 51900 | 3.4472 | 1.0 |
| 3.2869 | 3.76 | 52000 | 3.4580 | 1.0 |
| 3.2663 | 3.77 | 52100 | 3.4469 | 1.0 |
| 3.4325 | 3.78 | 52200 | 3.4423 | 1.0 |
| 3.3355 | 3.79 | 52300 | 3.4411 | 1.0 |
| 3.4324 | 3.79 | 52400 | 3.4456 | 1.0 |
| 3.3105 | 3.8 | 52500 | 3.4389 | 1.0 |
| 3.3588 | 3.81 | 52600 | 3.4403 | 1.0 |
| 3.3524 | 3.82 | 52700 | 3.4458 | 1.0 |
| 3.2466 | 3.82 | 52800 | 3.4447 | 1.0 |
| 3.2375 | 3.83 | 52900 | 3.4448 | 1.0 |
| 3.4006 | 3.84 | 53000 | 3.4456 | 1.0 |
| 3.3572 | 3.84 | 53100 | 3.4427 | 1.0 |
| 3.6162 | 3.85 | 53200 | 3.4379 | 1.0 |
| 3.3351 | 3.86 | 53300 | 3.4482 | 1.0 |
| 3.7101 | 3.87 | 53400 | 3.4393 | 1.0 |
| 3.3836 | 3.87 | 53500 | 3.4474 | 1.0 |
| 3.3357 | 3.88 | 53600 | 3.4573 | 1.0 |
| 3.3434 | 3.89 | 53700 | 3.4475 | 1.0 |
| 3.3349 | 3.89 | 53800 | 3.4659 | 1.0 |
| 3.3474 | 3.9 | 53900 | 3.4411 | 1.0 |
| 3.4007 | 3.91 | 54000 | 3.4446 | 1.0 |
| 3.4218 | 3.92 | 54100 | 3.4406 | 1.0 |
| 3.2115 | 3.92 | 54200 | 3.4422 | 1.0 |
| 3.2726 | 3.93 | 54300 | 3.4383 | 1.0 |
| 3.2999 | 3.94 | 54400 | 3.4423 | 1.0 |
| 3.3657 | 3.95 | 54500 | 3.4377 | 1.0 |
| 3.4015 | 3.95 | 54600 | 3.4433 | 1.0 |
| 3.3373 | 3.96 | 54700 | 3.4457 | 1.0 |
| 4.9872 | 3.97 | 54800 | 3.4420 | 1.0 |
| 3.3221 | 3.97 | 54900 | 3.4501 | 1.0 |
| 3.8059 | 3.98 | 55000 | 3.4501 | 1.0 |
| 3.2628 | 3.99 | 55100 | 3.4511 | 1.0 |
| 3.3822 | 4.0 | 55200 | 3.4409 | 1.0 |
| 3.5464 | 4.0 | 55300 | 3.4527 | 1.0 |
| 3.3661 | 4.01 | 55400 | 3.4436 | 1.0 |
| 3.4146 | 4.02 | 55500 | 3.4458 | 1.0 |
| 3.5756 | 4.03 | 55600 | 3.4409 | 1.0 |
| 3.3945 | 4.03 | 55700 | 3.4378 | 1.0 |
| 4.5275 | 4.04 | 55800 | 3.4558 | 1.0 |
| 3.7913 | 4.05 | 55900 | 3.4523 | 1.0 |
| 3.4445 | 4.05 | 56000 | 3.4446 | 1.0 |
| 3.51 | 4.06 | 56100 | 3.4488 | 1.0 |
| 6.5935 | 4.07 | 56200 | 3.4497 | 1.0 |
| 3.3548 | 4.08 | 56300 | 3.4443 | 1.0 |
| 3.4544 | 4.08 | 56400 | 3.4547 | 1.0 |
| 3.4206 | 4.09 | 56500 | 3.4476 | 1.0 |
| 3.3979 | 4.1 | 56600 | 3.4459 | 1.0 |
| 3.296 | 4.1 | 56700 | 3.4461 | 1.0 |
| 3.7186 | 4.11 | 56800 | 3.4407 | 1.0 |
| 3.8726 | 4.12 | 56900 | 3.4498 | 1.0 |
| 3.6704 | 4.13 | 57000 | 3.4535 | 1.0 |
| 3.4735 | 4.13 | 57100 | 3.4470 | 1.0 |
| 3.399 | 4.14 | 57200 | 3.4461 | 1.0 |
| 3.3507 | 4.15 | 57300 | 3.4405 | 1.0 |
| 3.3948 | 4.16 | 57400 | 3.4582 | 1.0 |
| 3.613 | 4.16 | 57500 | 3.4462 | 1.0 |
| 3.3553 | 4.17 | 57600 | 3.4507 | 1.0 |
| 3.5798 | 4.18 | 57700 | 3.4476 | 1.0 |
| 7.6315 | 4.18 | 57800 | 3.4412 | 1.0 |
| 3.4873 | 4.19 | 57900 | 3.4605 | 1.0 |
| 3.3193 | 4.2 | 58000 | 3.4458 | 1.0 |
| 3.4065 | 4.21 | 58100 | 3.4368 | 1.0 |
| 3.4813 | 4.21 | 58200 | 3.4464 | 1.0 |
| 3.2523 | 4.22 | 58300 | 3.4601 | 1.0 |
| 3.3384 | 4.23 | 58400 | 3.4449 | 1.0 |
| 3.2839 | 4.24 | 58500 | 3.4544 | 1.0 |
| 3.4564 | 4.24 | 58600 | 3.4412 | 1.0 |
| 3.3995 | 4.25 | 58700 | 3.4408 | 1.0 |
| 3.2107 | 4.26 | 58800 | 3.4463 | 1.0 |
| 4.0565 | 4.26 | 58900 | 3.4402 | 1.0 |
| 3.6744 | 4.27 | 59000 | 3.4537 | 1.0 |
| 3.3658 | 4.28 | 59100 | 3.4435 | 1.0 |
| 3.8134 | 4.29 | 59200 | 3.4491 | 1.0 |
| 3.3783 | 4.29 | 59300 | 3.4480 | 1.0 |
| 3.6206 | 4.3 | 59400 | 3.4403 | 1.0 |
| 3.4018 | 4.31 | 59500 | 3.4433 | 1.0 |
| 3.2325 | 4.31 | 59600 | 3.4419 | 1.0 |
| 3.3935 | 4.32 | 59700 | 3.4420 | 1.0 |
| 3.9773 | 4.33 | 59800 | 3.4477 | 1.0 |
| 3.3477 | 4.34 | 59900 | 3.4557 | 1.0 |
| 3.4817 | 4.34 | 60000 | 3.4421 | 1.0 |
| 3.8685 | 4.35 | 60100 | 3.4470 | 1.0 |
| 3.679 | 4.36 | 60200 | 3.4457 | 1.0 |
| 5.3659 | 4.37 | 60300 | 3.4416 | 1.0 |
| 3.2615 | 4.37 | 60400 | 3.4415 | 1.0 |
| 3.6087 | 4.38 | 60500 | 3.4398 | 1.0 |
| 4.1801 | 4.39 | 60600 | 3.4532 | 1.0 |
| 5.013 | 4.39 | 60700 | 3.4465 | 1.0 |
| 3.333 | 4.4 | 60800 | 3.4498 | 1.0 |
| 3.4247 | 4.41 | 60900 | 3.4542 | 1.0 |
| 3.424 | 4.42 | 61000 | 3.4436 | 1.0 |
| 3.317 | 4.42 | 61100 | 3.4405 | 1.0 |
| 3.4018 | 4.43 | 61200 | 3.4467 | 1.0 |
| 7.2156 | 4.44 | 61300 | 3.4436 | 1.0 |
| 3.3726 | 4.45 | 61400 | 3.4473 | 1.0 |
| 3.2895 | 4.45 | 61500 | 3.4400 | 1.0 |
| 3.2293 | 4.46 | 61600 | 3.4536 | 1.0 |
| 3.8397 | 4.47 | 61700 | 3.4489 | 1.0 |
| 3.3358 | 4.47 | 61800 | 3.4443 | 1.0 |
| 3.4085 | 4.48 | 61900 | 3.4472 | 1.0 |
| 3.4413 | 4.49 | 62000 | 3.4421 | 1.0 |
| 3.4222 | 4.5 | 62100 | 3.4480 | 1.0 |
| 3.4665 | 4.5 | 62200 | 3.4435 | 1.0 |
| 3.4058 | 4.51 | 62300 | 3.4399 | 1.0 |
| 3.4228 | 4.52 | 62400 | 3.4457 | 1.0 |
| 3.3362 | 4.52 | 62500 | 3.4453 | 1.0 |
| 4.3383 | 4.53 | 62600 | 3.4564 | 1.0 |
| 3.2802 | 4.54 | 62700 | 3.4392 | 1.0 |
| 5.0224 | 4.55 | 62800 | 3.4491 | 1.0 |
| 4.1092 | 4.55 | 62900 | 3.4400 | 1.0 |
| 3.6467 | 4.56 | 63000 | 3.4454 | 1.0 |
| 3.4197 | 4.57 | 63100 | 3.4411 | 1.0 |
| 3.4549 | 4.58 | 63200 | 3.4464 | 1.0 |
| 3.2333 | 4.58 | 63300 | 3.4454 | 1.0 |
| 3.3108 | 4.59 | 63400 | 3.4437 | 1.0 |
| 3.3897 | 4.6 | 63500 | 3.4382 | 1.0 |
| 3.2956 | 4.6 | 63600 | 3.4478 | 1.0 |
| 3.4244 | 4.61 | 63700 | 3.4439 | 1.0 |
| 4.3236 | 4.62 | 63800 | 3.4400 | 1.0 |
| 3.263 | 4.63 | 63900 | 3.4542 | 1.0 |
| 3.5322 | 4.63 | 64000 | 3.4548 | 1.0 |
| 3.613 | 4.64 | 64100 | 3.4442 | 1.0 |
| 3.7147 | 4.65 | 64200 | 3.4396 | 1.0 |
| 3.6781 | 4.66 | 64300 | 3.4444 | 1.0 |
| 3.1597 | 4.66 | 64400 | 3.4642 | 1.0 |
| 4.8173 | 4.67 | 64500 | 3.4397 | 1.0 |
| 3.7878 | 4.68 | 64600 | 3.4529 | 1.0 |
| 3.3288 | 4.68 | 64700 | 3.4423 | 1.0 |
| 3.3931 | 4.69 | 64800 | 3.4376 | 1.0 |
| 5.6842 | 4.7 | 64900 | 3.4396 | 1.0 |
| 3.62 | 4.71 | 65000 | 3.4419 | 1.0 |
| 3.3742 | 4.71 | 65100 | 3.4419 | 1.0 |
| 3.3207 | 4.72 | 65200 | 3.4392 | 1.0 |
| 3.6216 | 4.73 | 65300 | 3.4369 | 1.0 |
| 3.2954 | 4.73 | 65400 | 3.4461 | 1.0 |
| 3.3943 | 4.74 | 65500 | 3.4442 | 1.0 |
| 3.5041 | 4.75 | 65600 | 3.4433 | 1.0 |
| 3.5168 | 4.76 | 65700 | 3.4529 | 1.0 |
| 3.3715 | 4.76 | 65800 | 3.4446 | 1.0 |
| 3.3734 | 4.77 | 65900 | 3.4507 | 1.0 |
| 10.6923 | 4.78 | 66000 | 3.4468 | 1.0 |
| 3.4432 | 4.79 | 66100 | 3.4400 | 1.0 |
| 3.5521 | 4.79 | 66200 | 3.4573 | 1.0 |
| 4.9372 | 4.8 | 66300 | 3.4400 | 1.0 |
| 3.48 | 4.81 | 66400 | 3.4374 | 1.0 |
| 3.1794 | 4.81 | 66500 | 3.4379 | 1.0 |
| 3.4121 | 4.82 | 66600 | 3.4364 | 1.0 |
| 3.581 | 4.83 | 66700 | 3.4444 | 1.0 |
| 3.1135 | 4.84 | 66800 | 3.4380 | 1.0 |
| 3.4506 | 4.84 | 66900 | 3.4595 | 1.0 |
| 3.3243 | 4.85 | 67000 | 3.4433 | 1.0 |
| 3.3814 | 4.86 | 67100 | 3.4550 | 1.0 |
| 3.3557 | 4.86 | 67200 | 3.4374 | 1.0 |
| 3.2991 | 4.87 | 67300 | 3.4423 | 1.0 |
| 3.8854 | 4.88 | 67400 | 3.4398 | 1.0 |
| 3.7073 | 4.89 | 67500 | 3.4425 | 1.0 |
| 3.3739 | 4.89 | 67600 | 3.4492 | 1.0 |
| 3.435 | 4.9 | 67700 | 3.4512 | 1.0 |
| 10.5515 | 4.91 | 67800 | 3.4512 | 1.0 |
| 3.5227 | 4.92 | 67900 | 3.4493 | 1.0 |
| 3.2475 | 4.92 | 68000 | 3.4413 | 1.0 |
| 3.3387 | 4.93 | 68100 | 3.4474 | 1.0 |
| 3.365 | 4.94 | 68200 | 3.4426 | 1.0 |
| 4.1377 | 4.94 | 68300 | 3.4457 | 1.0 |
| 3.9188 | 4.95 | 68400 | 3.4437 | 1.0 |
| 3.5646 | 4.96 | 68500 | 3.4438 | 1.0 |
| 3.3686 | 4.97 | 68600 | 3.4477 | 1.0 |
| 3.1943 | 4.97 | 68700 | 3.4508 | 1.0 |
| 3.3747 | 4.98 | 68800 | 3.4453 | 1.0 |
| 3.8971 | 4.99 | 68900 | 3.4560 | 1.0 |
| 3.9434 | 5.0 | 69000 | 3.4457 | 1.0 |
| 3.3862 | 5.0 | 69100 | 3.4575 | 1.0 |
| 3.2693 | 5.01 | 69200 | 3.4436 | 1.0 |
| 3.2971 | 5.02 | 69300 | 3.4494 | 1.0 |
| 3.3175 | 5.02 | 69400 | 3.4432 | 1.0 |
| 3.3889 | 5.03 | 69500 | 3.4371 | 1.0 |
| 3.382 | 5.04 | 69600 | 3.4426 | 1.0 |
| 3.3396 | 5.05 | 69700 | 3.4383 | 1.0 |
| 3.5613 | 5.05 | 69800 | 3.4472 | 1.0 |
| 3.4392 | 5.06 | 69900 | 3.4437 | 1.0 |
| 3.2599 | 5.07 | 70000 | 3.4544 | 1.0 |
| 3.2819 | 5.07 | 70100 | 3.4459 | 1.0 |
| 3.3131 | 5.08 | 70200 | 3.4552 | 1.0 |
| 3.3471 | 5.09 | 70300 | 3.4513 | 1.0 |
| 3.4194 | 5.1 | 70400 | 3.4446 | 1.0 |
| 3.3565 | 5.1 | 70500 | 3.4424 | 1.0 |
| 3.3411 | 5.11 | 70600 | 3.4482 | 1.0 |
| 3.3473 | 5.12 | 70700 | 3.4514 | 1.0 |
| 3.3197 | 5.13 | 70800 | 3.4491 | 1.0 |
| 3.3466 | 5.13 | 70900 | 3.4573 | 1.0 |
| 3.3856 | 5.14 | 71000 | 3.4420 | 1.0 |
| 3.1905 | 5.15 | 71100 | 3.4469 | 1.0 |
| 3.3756 | 5.15 | 71200 | 3.4467 | 1.0 |
| 3.3498 | 5.16 | 71300 | 3.4479 | 1.0 |
| 3.3914 | 5.17 | 71400 | 3.4426 | 1.0 |
| 3.3885 | 5.18 | 71500 | 3.4419 | 1.0 |
| 3.4713 | 5.18 | 71600 | 3.4434 | 1.0 |
| 3.4077 | 5.19 | 71700 | 3.4472 | 1.0 |
| 3.3633 | 5.2 | 71800 | 3.4443 | 1.0 |
| 3.3677 | 5.21 | 71900 | 3.4413 | 1.0 |
| 3.3545 | 5.21 | 72000 | 3.4491 | 1.0 |
| 3.3415 | 5.22 | 72100 | 3.4423 | 1.0 |
| 3.3796 | 5.23 | 72200 | 3.4420 | 1.0 |
| 3.4989 | 5.23 | 72300 | 3.4415 | 1.0 |
| 3.3875 | 5.24 | 72400 | 3.4453 | 1.0 |
| 3.3728 | 5.25 | 72500 | 3.4534 | 1.0 |
| 3.3134 | 5.26 | 72600 | 3.4396 | 1.0 |
| 3.3634 | 5.26 | 72700 | 3.4472 | 1.0 |
| 3.2482 | 5.27 | 72800 | 3.4448 | 1.0 |
| 3.299 | 5.28 | 72900 | 3.4571 | 1.0 |
| 3.3579 | 5.28 | 73000 | 3.4440 | 1.0 |
| 3.6011 | 5.29 | 73100 | 3.4507 | 1.0 |
| 3.2451 | 5.3 | 73200 | 3.4430 | 1.0 |
| 3.399 | 5.31 | 73300 | 3.4443 | 1.0 |
| 3.3605 | 5.31 | 73400 | 3.4525 | 1.0 |
| 3.3511 | 5.32 | 73500 | 3.4520 | 1.0 |
| 3.3946 | 5.33 | 73600 | 3.4402 | 1.0 |
| 3.3602 | 5.34 | 73700 | 3.4383 | 1.0 |
| 3.3105 | 5.34 | 73800 | 3.4492 | 1.0 |
| 3.3346 | 5.35 | 73900 | 3.4428 | 1.0 |
| 3.4219 | 5.36 | 74000 | 3.4534 | 1.0 |
| 3.3491 | 5.36 | 74100 | 3.4603 | 1.0 |
| 3.4207 | 5.37 | 74200 | 3.4512 | 1.0 |
| 3.2418 | 5.38 | 74300 | 3.4474 | 1.0 |
| 3.2637 | 5.39 | 74400 | 3.4402 | 1.0 |
| 3.4331 | 5.39 | 74500 | 3.4576 | 1.0 |
| 3.3483 | 5.4 | 74600 | 3.4518 | 1.0 |
| 3.2825 | 5.41 | 74700 | 3.4526 | 1.0 |
| 3.5443 | 5.42 | 74800 | 3.4380 | 1.0 |
| 3.3637 | 5.42 | 74900 | 3.4525 | 1.0 |
| 3.2016 | 5.43 | 75000 | 3.4483 | 1.0 |
| 3.3641 | 5.44 | 75100 | 3.4389 | 1.0 |
| 3.3869 | 5.44 | 75200 | 3.4511 | 1.0 |
| 3.2595 | 5.45 | 75300 | 3.4498 | 1.0 |
| 3.401 | 5.46 | 75400 | 3.4496 | 1.0 |
| 3.4416 | 5.47 | 75500 | 3.4502 | 1.0 |
| 3.3949 | 5.47 | 75600 | 3.4400 | 1.0 |
| 3.279 | 5.48 | 75700 | 3.4461 | 1.0 |
| 3.343 | 5.49 | 75800 | 3.4419 | 1.0 |
| 3.3848 | 5.49 | 75900 | 3.4470 | 1.0 |
| 3.3605 | 5.5 | 76000 | 3.4430 | 1.0 |
| 3.2786 | 5.51 | 76100 | 3.4479 | 1.0 |
| 3.4013 | 5.52 | 76200 | 3.4469 | 1.0 |
| 3.2064 | 5.52 | 76300 | 3.4420 | 1.0 |
| 3.5022 | 5.53 | 76400 | 3.4475 | 1.0 |
| 3.3093 | 5.54 | 76500 | 3.4431 | 1.0 |
| 3.3647 | 5.55 | 76600 | 3.4392 | 1.0 |
| 3.3971 | 5.55 | 76700 | 3.4434 | 1.0 |
| 3.3352 | 5.56 | 76800 | 3.4485 | 1.0 |
| 3.3756 | 5.57 | 76900 | 3.4453 | 1.0 |
| 3.2675 | 5.57 | 77000 | 3.4456 | 1.0 |
| 3.3187 | 5.58 | 77100 | 3.4471 | 1.0 |
| 3.3915 | 5.59 | 77200 | 3.4434 | 1.0 |
| 3.522 | 5.6 | 77300 | 3.4579 | 1.0 |
| 3.3715 | 5.6 | 77400 | 3.4459 | 1.0 |
| 3.2879 | 5.61 | 77500 | 3.4450 | 1.0 |
| 3.4566 | 5.62 | 77600 | 3.4446 | 1.0 |
| 3.3802 | 5.63 | 77700 | 3.4458 | 1.0 |
| 3.3286 | 5.63 | 77800 | 3.4417 | 1.0 |
| 3.3506 | 5.64 | 77900 | 3.4582 | 1.0 |
| 3.3646 | 5.65 | 78000 | 3.4382 | 1.0 |
| 3.3679 | 5.65 | 78100 | 3.4399 | 1.0 |
| 3.2344 | 5.66 | 78200 | 3.4389 | 1.0 |
| 3.362 | 5.67 | 78300 | 3.4528 | 1.0 |
| 3.3598 | 5.68 | 78400 | 3.4411 | 1.0 |
| 3.4368 | 5.68 | 78500 | 3.4416 | 1.0 |
| 3.3668 | 5.69 | 78600 | 3.4501 | 1.0 |
| 3.4889 | 5.7 | 78700 | 3.4469 | 1.0 |
| 3.5421 | 5.7 | 78800 | 3.4499 | 1.0 |
| 3.4562 | 5.71 | 78900 | 3.4489 | 1.0 |
| 3.4175 | 5.72 | 79000 | 3.4456 | 1.0 |
| 3.3624 | 5.73 | 79100 | 3.4457 | 1.0 |
| 3.338 | 5.73 | 79200 | 3.4480 | 1.0 |
| 3.2783 | 5.74 | 79300 | 3.4398 | 1.0 |
| 3.3664 | 5.75 | 79400 | 3.4454 | 1.0 |
| 3.3883 | 5.76 | 79500 | 3.4511 | 1.0 |
| 3.3578 | 5.76 | 79600 | 3.4480 | 1.0 |
| 3.2831 | 5.77 | 79700 | 3.4425 | 1.0 |
| 3.5258 | 5.78 | 79800 | 3.4522 | 1.0 |
| 3.2697 | 5.78 | 79900 | 3.4398 | 1.0 |
| 3.291 | 5.79 | 80000 | 3.4395 | 1.0 |
| 3.3994 | 5.8 | 80100 | 3.4401 | 1.0 |
| 3.3379 | 5.81 | 80200 | 3.4414 | 1.0 |
| 3.334 | 5.81 | 80300 | 3.4576 | 1.0 |
| 3.4343 | 5.82 | 80400 | 3.4524 | 1.0 |
| 3.3857 | 5.83 | 80500 | 3.4445 | 1.0 |
| 3.3657 | 5.84 | 80600 | 3.4437 | 1.0 |
| 3.3229 | 5.84 | 80700 | 3.4539 | 1.0 |
| 3.2913 | 5.85 | 80800 | 3.4466 | 1.0 |
| 3.2929 | 5.86 | 80900 | 3.4471 | 1.0 |
| 3.4581 | 5.86 | 81000 | 3.4367 | 1.0 |
| 3.3521 | 5.87 | 81100 | 3.4395 | 1.0 |
| 3.6423 | 5.88 | 81200 | 3.4395 | 1.0 |
| 3.3993 | 5.89 | 81300 | 3.4488 | 1.0 |
| 3.3382 | 5.89 | 81400 | 3.4626 | 1.0 |
| 3.2858 | 5.9 | 81500 | 3.4393 | 1.0 |
| 3.3802 | 5.91 | 81600 | 3.4430 | 1.0 |
| 3.4808 | 5.91 | 81700 | 3.4421 | 1.0 |
| 3.2911 | 5.92 | 81800 | 3.4458 | 1.0 |
| 3.199 | 5.93 | 81900 | 3.4411 | 1.0 |
| 3.7089 | 5.94 | 82000 | 3.4402 | 1.0 |
| 3.32 | 5.94 | 82100 | 3.4524 | 1.0 |
| 3.2283 | 5.95 | 82200 | 3.4465 | 1.0 |
| 3.3001 | 5.96 | 82300 | 3.4429 | 1.0 |
| 3.33 | 5.97 | 82400 | 3.4535 | 1.0 |
| 3.3269 | 5.97 | 82500 | 3.4445 | 1.0 |
| 3.3572 | 5.98 | 82600 | 3.4459 | 1.0 |
| 3.2905 | 5.99 | 82700 | 3.4475 | 1.0 |
| 3.4236 | 5.99 | 82800 | 3.4455 | 1.0 |
| 4.1378 | 6.0 | 82900 | 3.4454 | 1.0 |
| 3.4648 | 6.01 | 83000 | 3.4569 | 1.0 |
| 3.2289 | 6.02 | 83100 | 3.4562 | 1.0 |
| 3.511 | 6.02 | 83200 | 3.4452 | 1.0 |
| 5.6152 | 6.03 | 83300 | 3.4684 | 1.0 |
| 3.2102 | 6.04 | 83400 | 3.4555 | 1.0 |
| 3.389 | 6.05 | 83500 | 3.4429 | 1.0 |
| 3.773 | 6.05 | 83600 | 3.4436 | 1.0 |
| 3.3612 | 6.06 | 83700 | 3.4383 | 1.0 |
| 3.316 | 6.07 | 83800 | 3.4421 | 1.0 |
| 3.4754 | 6.07 | 83900 | 3.4444 | 1.0 |
| 3.4536 | 6.08 | 84000 | 3.4461 | 1.0 |
| 3.4987 | 6.09 | 84100 | 3.4441 | 1.0 |
| 3.5025 | 6.1 | 84200 | 3.4423 | 1.0 |
| 3.167 | 6.1 | 84300 | 3.4381 | 1.0 |
| 3.3875 | 6.11 | 84400 | 3.4458 | 1.0 |
| 3.3446 | 6.12 | 84500 | 3.4491 | 1.0 |
| 3.4824 | 6.12 | 84600 | 3.4476 | 1.0 |
| 3.4264 | 6.13 | 84700 | 3.4443 | 1.0 |
| 3.3786 | 6.14 | 84800 | 3.4391 | 1.0 |
| 3.3554 | 6.15 | 84900 | 3.4447 | 1.0 |
| 3.2566 | 6.15 | 85000 | 3.4410 | 1.0 |
| 3.7839 | 6.16 | 85100 | 3.4471 | 1.0 |
| 10.7563 | 6.17 | 85200 | 3.4516 | 1.0 |
| 3.501 | 6.18 | 85300 | 3.4458 | 1.0 |
| 3.3805 | 6.18 | 85400 | 3.4441 | 1.0 |
| 3.3758 | 6.19 | 85500 | 3.4384 | 1.0 |
| 3.4565 | 6.2 | 85600 | 3.4457 | 1.0 |
| 3.3889 | 6.2 | 85700 | 3.4542 | 1.0 |
| 3.6664 | 6.21 | 85800 | 3.4572 | 1.0 |
| 3.4372 | 6.22 | 85900 | 3.4442 | 1.0 |
| 3.3461 | 6.23 | 86000 | 3.4430 | 1.0 |
| 3.3446 | 6.23 | 86100 | 3.4410 | 1.0 |
| 4.1477 | 6.24 | 86200 | 3.4521 | 1.0 |
| 3.2528 | 6.25 | 86300 | 3.4441 | 1.0 |
| 5.4615 | 6.25 | 86400 | 3.4386 | 1.0 |
| 3.3977 | 6.26 | 86500 | 3.4507 | 1.0 |
| 3.3648 | 6.27 | 86600 | 3.4488 | 1.0 |
| 3.875 | 6.28 | 86700 | 3.4477 | 1.0 |
| 3.8437 | 6.28 | 86800 | 3.4421 | 1.0 |
| 3.2904 | 6.29 | 86900 | 3.4458 | 1.0 |
| 3.6029 | 6.3 | 87000 | 3.4536 | 1.0 |
| 3.2774 | 6.31 | 87100 | 3.4452 | 1.0 |
| 3.3557 | 6.31 | 87200 | 3.4491 | 1.0 |
| 3.344 | 6.32 | 87300 | 3.4550 | 1.0 |
| 3.1771 | 6.33 | 87400 | 3.4414 | 1.0 |
| 3.2468 | 6.33 | 87500 | 3.4407 | 1.0 |
| 3.3878 | 6.34 | 87600 | 3.4409 | 1.0 |
| 3.3175 | 6.35 | 87700 | 3.4402 | 1.0 |
| 3.3398 | 6.36 | 87800 | 3.4422 | 1.0 |
| 3.3925 | 6.36 | 87900 | 3.4480 | 1.0 |
| 3.2327 | 6.37 | 88000 | 3.4380 | 1.0 |
| 3.5039 | 6.38 | 88100 | 3.4449 | 1.0 |
| 4.6598 | 6.39 | 88200 | 3.4443 | 1.0 |
| 3.2816 | 6.39 | 88300 | 3.4471 | 1.0 |
| 3.2072 | 6.4 | 88400 | 3.4370 | 1.0 |
| 3.2164 | 6.41 | 88500 | 3.4455 | 1.0 |
| 3.1742 | 6.41 | 88600 | 3.4416 | 1.0 |
| 3.298 | 6.42 | 88700 | 3.4424 | 1.0 |
| 4.2488 | 6.43 | 88800 | 3.4485 | 1.0 |
| 3.3554 | 6.44 | 88900 | 3.4421 | 1.0 |
| 3.469 | 6.44 | 89000 | 3.4442 | 1.0 |
| 3.7796 | 6.45 | 89100 | 3.4478 | 1.0 |
| 3.357 | 6.46 | 89200 | 3.4493 | 1.0 |
| 3.3099 | 6.46 | 89300 | 3.4422 | 1.0 |
| 3.343 | 6.47 | 89400 | 3.4484 | 1.0 |
| 3.1808 | 6.48 | 89500 | 3.4493 | 1.0 |
| 3.3544 | 6.49 | 89600 | 3.4404 | 1.0 |
| 3.2563 | 6.49 | 89700 | 3.4427 | 1.0 |
| 4.8257 | 6.5 | 89800 | 3.4409 | 1.0 |
| 3.3544 | 6.51 | 89900 | 3.4435 | 1.0 |
| 3.3013 | 6.52 | 90000 | 3.4442 | 1.0 |
| 3.4374 | 6.52 | 90100 | 3.4389 | 1.0 |
| 3.3702 | 6.53 | 90200 | 3.4461 | 1.0 |
| 3.8491 | 6.54 | 90300 | 3.4469 | 1.0 |
| 3.3713 | 6.54 | 90400 | 3.4456 | 1.0 |
| 3.36 | 6.55 | 90500 | 3.4600 | 1.0 |
| 3.4559 | 6.56 | 90600 | 3.4541 | 1.0 |
| 3.9838 | 6.57 | 90700 | 3.4411 | 1.0 |
| 3.3675 | 6.57 | 90800 | 3.4448 | 1.0 |
| 3.3384 | 6.58 | 90900 | 3.4437 | 1.0 |
| 3.3098 | 6.59 | 91000 | 3.4401 | 1.0 |
| 3.344 | 6.6 | 91100 | 3.4412 | 1.0 |
| 3.3974 | 6.6 | 91200 | 3.4383 | 1.0 |
| 3.3255 | 6.61 | 91300 | 3.4468 | 1.0 |
| 3.3193 | 6.62 | 91400 | 3.4410 | 1.0 |
| 3.3432 | 6.62 | 91500 | 3.4429 | 1.0 |
| 3.5861 | 6.63 | 91600 | 3.4501 | 1.0 |
| 3.4078 | 6.64 | 91700 | 3.4466 | 1.0 |
| 3.4045 | 6.65 | 91800 | 3.4507 | 1.0 |
| 3.2148 | 6.65 | 91900 | 3.4440 | 1.0 |
| 3.446 | 6.66 | 92000 | 3.4431 | 1.0 |
| 3.2581 | 6.67 | 92100 | 3.4421 | 1.0 |
| 3.4569 | 6.67 | 92200 | 3.4477 | 1.0 |
| 3.3271 | 6.68 | 92300 | 3.4384 | 1.0 |
| 3.3428 | 6.69 | 92400 | 3.4379 | 1.0 |
| 5.7004 | 6.7 | 92500 | 3.4444 | 1.0 |
| 3.3441 | 6.7 | 92600 | 3.4525 | 1.0 |
| 3.4577 | 6.71 | 92700 | 3.4529 | 1.0 |
| 3.2188 | 6.72 | 92800 | 3.4386 | 1.0 |
| 3.3738 | 6.73 | 92900 | 3.4421 | 1.0 |
| 3.309 | 6.73 | 93000 | 3.4421 | 1.0 |
| 3.6994 | 6.74 | 93100 | 3.4476 | 1.0 |
| 3.4694 | 6.75 | 93200 | 3.4479 | 1.0 |
| 3.6629 | 6.75 | 93300 | 3.4433 | 1.0 |
| 3.2603 | 6.76 | 93400 | 3.4455 | 1.0 |
| 3.5258 | 6.77 | 93500 | 3.4466 | 1.0 |
| 3.3443 | 6.78 | 93600 | 3.4444 | 1.0 |
| 3.3363 | 6.78 | 93700 | 3.4389 | 1.0 |
| 3.8168 | 6.79 | 93800 | 3.4411 | 1.0 |
| 3.4222 | 6.8 | 93900 | 3.4447 | 1.0 |
| 3.6458 | 6.81 | 94000 | 3.4432 | 1.0 |
| 3.246 | 6.81 | 94100 | 3.4473 | 1.0 |
| 3.5288 | 6.82 | 94200 | 3.4468 | 1.0 |
| 3.4141 | 6.83 | 94300 | 3.4379 | 1.0 |
| 3.3348 | 6.83 | 94400 | 3.4394 | 1.0 |
| 3.3027 | 6.84 | 94500 | 3.4433 | 1.0 |
| 3.7383 | 6.85 | 94600 | 3.4431 | 1.0 |
| 3.2835 | 6.86 | 94700 | 3.4385 | 1.0 |
| 3.3132 | 6.86 | 94800 | 3.4435 | 1.0 |
| 3.5486 | 6.87 | 94900 | 3.4457 | 1.0 |
| 3.2407 | 6.88 | 95000 | 3.4401 | 1.0 |
| 5.9865 | 6.88 | 95100 | 3.4526 | 1.0 |
| 3.7244 | 6.89 | 95200 | 3.4456 | 1.0 |
| 3.4583 | 6.9 | 95300 | 3.4419 | 1.0 |
| 3.3585 | 6.91 | 95400 | 3.4406 | 1.0 |
| 3.3433 | 6.91 | 95500 | 3.4582 | 1.0 |
| 3.3487 | 6.92 | 95600 | 3.4446 | 1.0 |
| 3.2941 | 6.93 | 95700 | 3.4538 | 1.0 |
| 3.4637 | 6.94 | 95800 | 3.4380 | 1.0 |
| 3.6811 | 6.94 | 95900 | 3.4385 | 1.0 |
| 3.3364 | 6.95 | 96000 | 3.4476 | 1.0 |
| 3.3127 | 6.96 | 96100 | 3.4376 | 1.0 |
| 3.301 | 6.96 | 96200 | 3.4442 | 1.0 |
| 3.407 | 6.97 | 96300 | 3.4419 | 1.0 |
| 3.3103 | 6.98 | 96400 | 3.4444 | 1.0 |
| 3.514 | 6.99 | 96500 | 3.4496 | 1.0 |
| 3.257 | 6.99 | 96600 | 3.4499 | 1.0 |
| 3.4131 | 7.0 | 96700 | 3.4408 | 1.0 |
| 3.3395 | 7.01 | 96800 | 3.4395 | 1.0 |
| 3.3651 | 7.02 | 96900 | 3.4373 | 1.0 |
| 3.4559 | 7.02 | 97000 | 3.4431 | 1.0 |
| 3.8799 | 7.03 | 97100 | 3.4419 | 1.0 |
| 3.4603 | 7.04 | 97200 | 3.4411 | 1.0 |
| 3.3208 | 7.04 | 97300 | 3.4413 | 1.0 |
| 3.3491 | 7.05 | 97400 | 3.4389 | 1.0 |
| 3.3667 | 7.06 | 97500 | 3.4447 | 1.0 |
| 3.3628 | 7.07 | 97600 | 3.4418 | 1.0 |
| 3.322 | 7.07 | 97700 | 3.4448 | 1.0 |
| 3.4562 | 7.08 | 97800 | 3.4479 | 1.0 |
| 3.2331 | 7.09 | 97900 | 3.4522 | 1.0 |
| 3.4535 | 7.09 | 98000 | 3.4465 | 1.0 |
| 3.3035 | 7.1 | 98100 | 3.4444 | 1.0 |
| 3.3541 | 7.11 | 98200 | 3.4380 | 1.0 |
| 3.2874 | 7.12 | 98300 | 3.4413 | 1.0 |
| 3.4224 | 7.12 | 98400 | 3.4519 | 1.0 |
| 3.4403 | 7.13 | 98500 | 3.4447 | 1.0 |
| 3.2964 | 7.14 | 98600 | 3.4424 | 1.0 |
| 3.297 | 7.15 | 98700 | 3.4403 | 1.0 |
| 3.3279 | 7.15 | 98800 | 3.4469 | 1.0 |
| 3.3393 | 7.16 | 98900 | 3.4477 | 1.0 |
| 3.3377 | 7.17 | 99000 | 3.4437 | 1.0 |
| 3.3256 | 7.17 | 99100 | 3.4376 | 1.0 |
| 3.383 | 7.18 | 99200 | 3.4397 | 1.0 |
| 3.3298 | 7.19 | 99300 | 3.4414 | 1.0 |
| 5.1176 | 7.2 | 99400 | 3.4438 | 1.0 |
| 3.2854 | 7.2 | 99500 | 3.4463 | 1.0 |
| 3.3177 | 7.21 | 99600 | 3.4558 | 1.0 |
| 3.3946 | 7.22 | 99700 | 3.4420 | 1.0 |
| 3.3175 | 7.23 | 99800 | 3.4485 | 1.0 |
| 3.3535 | 7.23 | 99900 | 3.4416 | 1.0 |
| 3.332 | 7.24 | 100000 | 3.4375 | 1.0 |
| 3.2779 | 7.25 | 100100 | 3.4437 | 1.0 |
| 3.2977 | 7.25 | 100200 | 3.4438 | 1.0 |
| 3.3777 | 7.26 | 100300 | 3.4448 | 1.0 |
| 3.3096 | 7.27 | 100400 | 3.4414 | 1.0 |
| 3.3538 | 7.28 | 100500 | 3.4464 | 1.0 |
| 3.3164 | 7.28 | 100600 | 3.4456 | 1.0 |
| 3.4028 | 7.29 | 100700 | 3.4494 | 1.0 |
| 3.4322 | 7.3 | 100800 | 3.4554 | 1.0 |
| 3.2851 | 7.3 | 100900 | 3.4499 | 1.0 |
| 3.3666 | 7.31 | 101000 | 3.4394 | 1.0 |
| 3.2821 | 7.32 | 101100 | 3.4396 | 1.0 |
| 3.3335 | 7.33 | 101200 | 3.4454 | 1.0 |
| 3.3327 | 7.33 | 101300 | 3.4484 | 1.0 |
| 3.2771 | 7.34 | 101400 | 3.4416 | 1.0 |
| 3.2928 | 7.35 | 101500 | 3.4433 | 1.0 |
| 3.3341 | 7.36 | 101600 | 3.4482 | 1.0 |
| 3.2928 | 7.36 | 101700 | 3.4420 | 1.0 |
| 3.2428 | 7.37 | 101800 | 3.4428 | 1.0 |
| 3.3266 | 7.38 | 101900 | 3.4455 | 1.0 |
| 3.3004 | 7.38 | 102000 | 3.4481 | 1.0 |
| 3.3588 | 7.39 | 102100 | 3.4414 | 1.0 |
| 3.3312 | 7.4 | 102200 | 3.4510 | 1.0 |
| 3.4165 | 7.41 | 102300 | 3.4375 | 1.0 |
| 3.3087 | 7.41 | 102400 | 3.4522 | 1.0 |
| 3.353 | 7.42 | 102500 | 3.4400 | 1.0 |
| 3.1741 | 7.43 | 102600 | 3.4413 | 1.0 |
| 3.2123 | 7.44 | 102700 | 3.4472 | 1.0 |
| 3.1993 | 7.44 | 102800 | 3.4452 | 1.0 |
| 3.239 | 7.45 | 102900 | 3.4418 | 1.0 |
| 3.3241 | 7.46 | 103000 | 3.4496 | 1.0 |
| 3.2586 | 7.46 | 103100 | 3.4498 | 1.0 |
| 3.5903 | 7.47 | 103200 | 3.4465 | 1.0 |
| 3.3286 | 7.48 | 103300 | 3.4488 | 1.0 |
| 3.4615 | 7.49 | 103400 | 3.4486 | 1.0 |
| 3.3855 | 7.49 | 103500 | 3.4440 | 1.0 |
| 3.3819 | 7.5 | 103600 | 3.4534 | 1.0 |
| 3.3003 | 7.51 | 103700 | 3.4502 | 1.0 |
| 3.4232 | 7.51 | 103800 | 3.4429 | 1.0 |
| 3.2926 | 7.52 | 103900 | 3.4442 | 1.0 |
| 3.7337 | 7.53 | 104000 | 3.4516 | 1.0 |
| 3.3338 | 7.54 | 104100 | 3.4469 | 1.0 |
| 3.32 | 7.54 | 104200 | 3.4545 | 1.0 |
| 3.6807 | 7.55 | 104300 | 3.4449 | 1.0 |
| 3.3397 | 7.56 | 104400 | 3.4479 | 1.0 |
| 3.2993 | 7.57 | 104500 | 3.4424 | 1.0 |
| 3.3652 | 7.57 | 104600 | 3.4507 | 1.0 |
| 3.2885 | 7.58 | 104700 | 3.4437 | 1.0 |
| 3.4006 | 7.59 | 104800 | 3.4403 | 1.0 |
| 3.3361 | 7.59 | 104900 | 3.4432 | 1.0 |
| 3.4084 | 7.6 | 105000 | 3.4423 | 1.0 |
| 3.3251 | 7.61 | 105100 | 3.4418 | 1.0 |
| 3.3079 | 7.62 | 105200 | 3.4398 | 1.0 |
| 3.4738 | 7.62 | 105300 | 3.4497 | 1.0 |
| 3.5048 | 7.63 | 105400 | 3.4429 | 1.0 |
| 3.4189 | 7.64 | 105500 | 3.4410 | 1.0 |
| 3.3132 | 7.64 | 105600 | 3.4437 | 1.0 |
| 3.2738 | 7.65 | 105700 | 3.4457 | 1.0 |
| 3.2876 | 7.66 | 105800 | 3.4404 | 1.0 |
| 3.3413 | 7.67 | 105900 | 3.4458 | 1.0 |
| 3.3014 | 7.67 | 106000 | 3.4535 | 1.0 |
| 3.2244 | 7.68 | 106100 | 3.4436 | 1.0 |
| 3.2715 | 7.69 | 106200 | 3.4470 | 1.0 |
| 3.3593 | 7.7 | 106300 | 3.4410 | 1.0 |
| 3.334 | 7.7 | 106400 | 3.4525 | 1.0 |
| 3.3547 | 7.71 | 106500 | 3.4513 | 1.0 |
| 3.9896 | 7.72 | 106600 | 3.4381 | 1.0 |
| 3.4202 | 7.72 | 106700 | 3.4395 | 1.0 |
| 3.34 | 7.73 | 106800 | 3.4426 | 1.0 |
| 3.3778 | 7.74 | 106900 | 3.4508 | 1.0 |
| 3.3374 | 7.75 | 107000 | 3.4464 | 1.0 |
| 3.4008 | 7.75 | 107100 | 3.4365 | 1.0 |
| 3.2595 | 7.76 | 107200 | 3.4496 | 1.0 |
| 3.3261 | 7.77 | 107300 | 3.4543 | 1.0 |
| 3.2551 | 7.78 | 107400 | 3.4490 | 1.0 |
| 3.2967 | 7.78 | 107500 | 3.4404 | 1.0 |
| 3.4232 | 7.79 | 107600 | 3.4492 | 1.0 |
| 3.3992 | 7.8 | 107700 | 3.4448 | 1.0 |
| 3.3268 | 7.8 | 107800 | 3.4465 | 1.0 |
| 3.283 | 7.81 | 107900 | 3.4424 | 1.0 |
| 3.3488 | 7.82 | 108000 | 3.4446 | 1.0 |
| 3.3232 | 7.83 | 108100 | 3.4432 | 1.0 |
| 3.5081 | 7.83 | 108200 | 3.4460 | 1.0 |
| 3.2686 | 7.84 | 108300 | 3.4499 | 1.0 |
| 3.2465 | 7.85 | 108400 | 3.4429 | 1.0 |
| 3.5602 | 7.85 | 108500 | 3.4398 | 1.0 |
| 3.299 | 7.86 | 108600 | 3.4376 | 1.0 |
| 3.3437 | 7.87 | 108700 | 3.4428 | 1.0 |
| 3.3221 | 7.88 | 108800 | 3.4492 | 1.0 |
| 3.5462 | 7.88 | 108900 | 3.4414 | 1.0 |
| 3.3901 | 7.89 | 109000 | 3.4506 | 1.0 |
| 3.3598 | 7.9 | 109100 | 3.4421 | 1.0 |
| 3.3946 | 7.91 | 109200 | 3.4389 | 1.0 |
| 3.3013 | 7.91 | 109300 | 3.4444 | 1.0 |
| 3.3094 | 7.92 | 109400 | 3.4464 | 1.0 |
| 3.4829 | 7.93 | 109500 | 3.4379 | 1.0 |
| 3.2769 | 7.93 | 109600 | 3.4401 | 1.0 |
| 3.3359 | 7.94 | 109700 | 3.4437 | 1.0 |
| 3.3079 | 7.95 | 109800 | 3.4455 | 1.0 |
| 3.3623 | 7.96 | 109900 | 3.4447 | 1.0 |
| 3.3439 | 7.96 | 110000 | 3.4404 | 1.0 |
| 3.3045 | 7.97 | 110100 | 3.4520 | 1.0 |
| 3.2657 | 7.98 | 110200 | 3.4409 | 1.0 |
| 3.3187 | 7.99 | 110300 | 3.4430 | 1.0 |
| 3.349 | 7.99 | 110400 | 3.4430 | 1.0 |
| 3.3262 | 8.0 | 110500 | 3.4412 | 1.0 |
| 3.2603 | 8.01 | 110600 | 3.4440 | 1.0 |
| 3.4284 | 8.01 | 110700 | 3.4456 | 1.0 |
| 3.5993 | 8.02 | 110800 | 3.4518 | 1.0 |
| 5.6854 | 8.03 | 110900 | 3.4411 | 1.0 |
| 3.3856 | 8.04 | 111000 | 3.4430 | 1.0 |
| 3.5339 | 8.04 | 111100 | 3.4394 | 1.0 |
| 3.2691 | 8.05 | 111200 | 3.4425 | 1.0 |
| 3.3462 | 8.06 | 111300 | 3.4422 | 1.0 |
| 3.3469 | 8.06 | 111400 | 3.4458 | 1.0 |
| 3.3598 | 8.07 | 111500 | 3.4429 | 1.0 |
| 3.554 | 8.08 | 111600 | 3.4438 | 1.0 |
| 3.3207 | 8.09 | 111700 | 3.4480 | 1.0 |
| 3.2963 | 8.09 | 111800 | 3.4434 | 1.0 |
| 3.4644 | 8.1 | 111900 | 3.4417 | 1.0 |
| 3.4265 | 8.11 | 112000 | 3.4404 | 1.0 |
| 3.3026 | 8.12 | 112100 | 3.4442 | 1.0 |
| 3.2747 | 8.12 | 112200 | 3.4433 | 1.0 |
| 7.3735 | 8.13 | 112300 | 3.4403 | 1.0 |
| 3.4803 | 8.14 | 112400 | 3.4464 | 1.0 |
| 4.9879 | 8.14 | 112500 | 3.4454 | 1.0 |
| 3.4249 | 8.15 | 112600 | 3.4421 | 1.0 |
| 3.3493 | 8.16 | 112700 | 3.4403 | 1.0 |
| 3.3514 | 8.17 | 112800 | 3.4445 | 1.0 |
| 3.262 | 8.17 | 112900 | 3.4457 | 1.0 |
| 3.3517 | 8.18 | 113000 | 3.4479 | 1.0 |
| 3.2408 | 8.19 | 113100 | 3.4413 | 1.0 |
| 3.2346 | 8.2 | 113200 | 3.4415 | 1.0 |
| 3.2397 | 8.2 | 113300 | 3.4414 | 1.0 |
| 3.3794 | 8.21 | 113400 | 3.4502 | 1.0 |
| 3.516 | 8.22 | 113500 | 3.4507 | 1.0 |
| 3.4129 | 8.22 | 113600 | 3.4455 | 1.0 |
| 3.3381 | 8.23 | 113700 | 3.4540 | 1.0 |
| 3.3172 | 8.24 | 113800 | 3.4473 | 1.0 |
| 3.5307 | 8.25 | 113900 | 3.4431 | 1.0 |
| 3.3424 | 8.25 | 114000 | 3.4511 | 1.0 |
| 3.4004 | 8.26 | 114100 | 3.4434 | 1.0 |
| 3.4061 | 8.27 | 114200 | 3.4435 | 1.0 |
| 3.5333 | 8.27 | 114300 | 3.4415 | 1.0 |
| 3.2974 | 8.28 | 114400 | 3.4472 | 1.0 |
| 3.3827 | 8.29 | 114500 | 3.4469 | 1.0 |
| 3.5697 | 8.3 | 114600 | 3.4427 | 1.0 |
| 3.4561 | 8.3 | 114700 | 3.4433 | 1.0 |
| 3.5205 | 8.31 | 114800 | 3.4474 | 1.0 |
| 3.2541 | 8.32 | 114900 | 3.4475 | 1.0 |
| 3.4251 | 8.33 | 115000 | 3.4394 | 1.0 |
| 3.2477 | 8.33 | 115100 | 3.4524 | 1.0 |
| 3.4003 | 8.34 | 115200 | 3.4438 | 1.0 |
| 3.3378 | 8.35 | 115300 | 3.4447 | 1.0 |
| 3.2828 | 8.35 | 115400 | 3.4493 | 1.0 |
| 3.6974 | 8.36 | 115500 | 3.4507 | 1.0 |
| 3.3466 | 8.37 | 115600 | 3.4384 | 1.0 |
| 3.2601 | 8.38 | 115700 | 3.4538 | 1.0 |
| 3.8384 | 8.38 | 115800 | 3.4408 | 1.0 |
| 3.5255 | 8.39 | 115900 | 3.4446 | 1.0 |
| 3.3517 | 8.4 | 116000 | 3.4445 | 1.0 |
| 3.37 | 8.41 | 116100 | 3.4530 | 1.0 |
| 3.4486 | 8.41 | 116200 | 3.4446 | 1.0 |
| 3.4104 | 8.42 | 116300 | 3.4447 | 1.0 |
| 3.5267 | 8.43 | 116400 | 3.4410 | 1.0 |
| 3.4422 | 8.43 | 116500 | 3.4546 | 1.0 |
| 3.1616 | 8.44 | 116600 | 3.4400 | 1.0 |
| 3.3557 | 8.45 | 116700 | 3.4458 | 1.0 |
| 3.4674 | 8.46 | 116800 | 3.4443 | 1.0 |
| 3.3114 | 8.46 | 116900 | 3.4390 | 1.0 |
| 3.4986 | 8.47 | 117000 | 3.4405 | 1.0 |
| 3.4579 | 8.48 | 117100 | 3.4459 | 1.0 |
| 3.3369 | 8.48 | 117200 | 3.4403 | 1.0 |
| 3.4802 | 8.49 | 117300 | 3.4480 | 1.0 |
| 3.3244 | 8.5 | 117400 | 3.4447 | 1.0 |
| 3.3096 | 8.51 | 117500 | 3.4525 | 1.0 |
| 3.3415 | 8.51 | 117600 | 3.4516 | 1.0 |
| 3.416 | 8.52 | 117700 | 3.4396 | 1.0 |
| 3.3363 | 8.53 | 117800 | 3.4510 | 1.0 |
| 3.2588 | 8.54 | 117900 | 3.4439 | 1.0 |
| 3.4127 | 8.54 | 118000 | 3.4370 | 1.0 |
| 3.4268 | 8.55 | 118100 | 3.4472 | 1.0 |
| 3.3877 | 8.56 | 118200 | 3.4437 | 1.0 |
| 3.386 | 8.56 | 118300 | 3.4448 | 1.0 |
| 3.9643 | 8.57 | 118400 | 3.4500 | 1.0 |
| 3.2205 | 8.58 | 118500 | 3.4410 | 1.0 |
| 3.3372 | 8.59 | 118600 | 3.4486 | 1.0 |
| 3.3919 | 8.59 | 118700 | 3.4485 | 1.0 |
| 3.3279 | 8.6 | 118800 | 3.4408 | 1.0 |
| 3.3251 | 8.61 | 118900 | 3.4379 | 1.0 |
| 3.2832 | 8.62 | 119000 | 3.4388 | 1.0 |
| 3.2708 | 8.62 | 119100 | 3.4522 | 1.0 |
| 4.0701 | 8.63 | 119200 | 3.4436 | 1.0 |
| 3.5261 | 8.64 | 119300 | 3.4475 | 1.0 |
| 3.2695 | 8.64 | 119400 | 3.4411 | 1.0 |
| 3.4095 | 8.65 | 119500 | 3.4451 | 1.0 |
| 3.2641 | 8.66 | 119600 | 3.4527 | 1.0 |
| 3.6962 | 8.67 | 119700 | 3.4495 | 1.0 |
| 3.407 | 8.67 | 119800 | 3.4523 | 1.0 |
| 3.5073 | 8.68 | 119900 | 3.4612 | 1.0 |
| 3.4697 | 8.69 | 120000 | 3.4491 | 1.0 |
| 3.4643 | 8.69 | 120100 | 3.4427 | 1.0 |
| 3.5253 | 8.7 | 120200 | 3.4457 | 1.0 |
| 3.2562 | 8.71 | 120300 | 3.4545 | 1.0 |
| 3.2946 | 8.72 | 120400 | 3.4570 | 1.0 |
| 3.393 | 8.72 | 120500 | 3.4432 | 1.0 |
| 3.2528 | 8.73 | 120600 | 3.4391 | 1.0 |
| 3.4529 | 8.74 | 120700 | 3.4530 | 1.0 |
| 3.506 | 8.75 | 120800 | 3.4425 | 1.0 |
| 3.3464 | 8.75 | 120900 | 3.4420 | 1.0 |
| 3.3287 | 8.76 | 121000 | 3.4463 | 1.0 |
| 3.3165 | 8.77 | 121100 | 3.4509 | 1.0 |
| 3.3102 | 8.77 | 121200 | 3.4418 | 1.0 |
| 3.4206 | 8.78 | 121300 | 3.4495 | 1.0 |
| 3.5963 | 8.79 | 121400 | 3.4432 | 1.0 |
| 3.2621 | 8.8 | 121500 | 3.4455 | 1.0 |
| 3.3275 | 8.8 | 121600 | 3.4483 | 1.0 |
| 3.3654 | 8.81 | 121700 | 3.4476 | 1.0 |
| 3.4913 | 8.82 | 121800 | 3.4525 | 1.0 |
| 3.4162 | 8.83 | 121900 | 3.4409 | 1.0 |
| 3.221 | 8.83 | 122000 | 3.4415 | 1.0 |
| 3.3024 | 8.84 | 122100 | 3.4385 | 1.0 |
| 3.3451 | 8.85 | 122200 | 3.4428 | 1.0 |
| 3.3909 | 8.85 | 122300 | 3.4417 | 1.0 |
| 3.3237 | 8.86 | 122400 | 3.4472 | 1.0 |
| 3.2992 | 8.87 | 122500 | 3.4406 | 1.0 |
| 3.2422 | 8.88 | 122600 | 3.4492 | 1.0 |
| 3.3713 | 8.88 | 122700 | 3.4411 | 1.0 |
| 3.4062 | 8.89 | 122800 | 3.4412 | 1.0 |
| 3.3616 | 8.9 | 122900 | 3.4464 | 1.0 |
| 3.3811 | 8.9 | 123000 | 3.4382 | 1.0 |
| 3.3592 | 8.91 | 123100 | 3.4442 | 1.0 |
| 3.8331 | 8.92 | 123200 | 3.4423 | 1.0 |
| 3.3764 | 8.93 | 123300 | 3.4492 | 1.0 |
| 3.3964 | 8.93 | 123400 | 3.4390 | 1.0 |
| 3.5063 | 8.94 | 123500 | 3.4411 | 1.0 |
| 3.3627 | 8.95 | 123600 | 3.4467 | 1.0 |
| 4.1315 | 8.96 | 123700 | 3.4409 | 1.0 |
| 3.7114 | 8.96 | 123800 | 3.4456 | 1.0 |
| 3.3446 | 8.97 | 123900 | 3.4413 | 1.0 |
| 3.3777 | 8.98 | 124000 | 3.4464 | 1.0 |
| 3.6232 | 8.98 | 124100 | 3.4478 | 1.0 |
| 3.3275 | 8.99 | 124200 | 3.4474 | 1.0 |
| 3.5736 | 9.0 | 124300 | 3.4427 | 1.0 |
| 3.2052 | 9.01 | 124400 | 3.4455 | 1.0 |
| 3.3101 | 9.01 | 124500 | 3.4485 | 1.0 |
| 3.3523 | 9.02 | 124600 | 3.4389 | 1.0 |
| 3.3095 | 9.03 | 124700 | 3.4433 | 1.0 |
| 3.3152 | 9.03 | 124800 | 3.4402 | 1.0 |
| 3.2351 | 9.04 | 124900 | 3.4452 | 1.0 |
| 3.5137 | 9.05 | 125000 | 3.4458 | 1.0 |
| 3.3489 | 9.06 | 125100 | 3.4431 | 1.0 |
| 3.3822 | 9.06 | 125200 | 3.4370 | 1.0 |
| 3.3842 | 9.07 | 125300 | 3.4359 | 1.0 |
| 3.306 | 9.08 | 125400 | 3.4439 | 1.0 |
| 3.3784 | 9.09 | 125500 | 3.4538 | 1.0 |
| 3.3313 | 9.09 | 125600 | 3.4410 | 1.0 |
| 3.2891 | 9.1 | 125700 | 3.4397 | 1.0 |
| 3.321 | 9.11 | 125800 | 3.4457 | 1.0 |
| 3.2479 | 9.11 | 125900 | 3.4448 | 1.0 |
| 3.3723 | 9.12 | 126000 | 3.4409 | 1.0 |
| 3.3203 | 9.13 | 126100 | 3.4439 | 1.0 |
| 3.2906 | 9.14 | 126200 | 3.4388 | 1.0 |
| 3.2164 | 9.14 | 126300 | 3.4427 | 1.0 |
| 3.2608 | 9.15 | 126400 | 3.4396 | 1.0 |
| 3.3739 | 9.16 | 126500 | 3.4536 | 1.0 |
| 3.3479 | 9.17 | 126600 | 3.4533 | 1.0 |
| 3.4664 | 9.17 | 126700 | 3.4491 | 1.0 |
| 3.326 | 9.18 | 126800 | 3.4402 | 1.0 |
| 3.3056 | 9.19 | 126900 | 3.4398 | 1.0 |
| 3.3528 | 9.19 | 127000 | 3.4424 | 1.0 |
| 3.2717 | 9.2 | 127100 | 3.4409 | 1.0 |
| 3.3564 | 9.21 | 127200 | 3.4497 | 1.0 |
| 3.4015 | 9.22 | 127300 | 3.4435 | 1.0 |
| 3.3325 | 9.22 | 127400 | 3.4478 | 1.0 |
| 3.4459 | 9.23 | 127500 | 3.4479 | 1.0 |
| 3.2151 | 9.24 | 127600 | 3.4519 | 1.0 |
| 3.2456 | 9.24 | 127700 | 3.4408 | 1.0 |
| 3.3108 | 9.25 | 127800 | 3.4430 | 1.0 |
| 3.3965 | 9.26 | 127900 | 3.4427 | 1.0 |
| 3.4911 | 9.27 | 128000 | 3.4430 | 1.0 |
| 3.3996 | 9.27 | 128100 | 3.4458 | 1.0 |
| 3.3408 | 9.28 | 128200 | 3.4435 | 1.0 |
| 3.353 | 9.29 | 128300 | 3.4468 | 1.0 |
| 3.5449 | 9.3 | 128400 | 3.4401 | 1.0 |
| 3.3564 | 9.3 | 128500 | 3.4481 | 1.0 |
| 3.4768 | 9.31 | 128600 | 3.4450 | 1.0 |
| 3.3972 | 9.32 | 128700 | 3.4467 | 1.0 |
| 3.3295 | 9.32 | 128800 | 3.4385 | 1.0 |
| 3.3181 | 9.33 | 128900 | 3.4435 | 1.0 |
| 3.3224 | 9.34 | 129000 | 3.4467 | 1.0 |
| 3.3471 | 9.35 | 129100 | 3.4415 | 1.0 |
| 3.3379 | 9.35 | 129200 | 3.4458 | 1.0 |
| 3.3991 | 9.36 | 129300 | 3.4420 | 1.0 |
| 3.4037 | 9.37 | 129400 | 3.4433 | 1.0 |
| 3.3157 | 9.38 | 129500 | 3.4450 | 1.0 |
| 3.3739 | 9.38 | 129600 | 3.4426 | 1.0 |
| 3.2556 | 9.39 | 129700 | 3.4473 | 1.0 |
| 3.3451 | 9.4 | 129800 | 3.4413 | 1.0 |
| 3.3694 | 9.4 | 129900 | 3.4462 | 1.0 |
| 3.343 | 9.41 | 130000 | 3.4408 | 1.0 |
| 3.4286 | 9.42 | 130100 | 3.4495 | 1.0 |
| 3.4468 | 9.43 | 130200 | 3.4450 | 1.0 |
| 3.3417 | 9.43 | 130300 | 3.4457 | 1.0 |
| 3.4661 | 9.44 | 130400 | 3.4409 | 1.0 |
| 3.2859 | 9.45 | 130500 | 3.4412 | 1.0 |
| 3.3164 | 9.45 | 130600 | 3.4495 | 1.0 |
| 3.3542 | 9.46 | 130700 | 3.4428 | 1.0 |
| 3.2783 | 9.47 | 130800 | 3.4398 | 1.0 |
| 3.421 | 9.48 | 130900 | 3.4408 | 1.0 |
| 3.3765 | 9.48 | 131000 | 3.4443 | 1.0 |
| 3.3822 | 9.49 | 131100 | 3.4458 | 1.0 |
| 3.2261 | 9.5 | 131200 | 3.4437 | 1.0 |
| 3.362 | 9.51 | 131300 | 3.4388 | 1.0 |
| 3.3203 | 9.51 | 131400 | 3.4498 | 1.0 |
| 3.2326 | 9.52 | 131500 | 3.4415 | 1.0 |
| 3.3897 | 9.53 | 131600 | 3.4556 | 1.0 |
| 3.3434 | 9.53 | 131700 | 3.4421 | 1.0 |
| 3.3297 | 9.54 | 131800 | 3.4394 | 1.0 |
| 3.4889 | 9.55 | 131900 | 3.4420 | 1.0 |
| 3.3502 | 9.56 | 132000 | 3.4425 | 1.0 |
| 3.4079 | 9.56 | 132100 | 3.4370 | 1.0 |
| 3.213 | 9.57 | 132200 | 3.4479 | 1.0 |
| 3.3935 | 9.58 | 132300 | 3.4433 | 1.0 |
| 3.2598 | 9.59 | 132400 | 3.4431 | 1.0 |
| 3.3968 | 9.59 | 132500 | 3.4442 | 1.0 |
| 3.338 | 9.6 | 132600 | 3.4433 | 1.0 |
| 3.3268 | 9.61 | 132700 | 3.4447 | 1.0 |
| 3.3656 | 9.61 | 132800 | 3.4394 | 1.0 |
| 3.3782 | 9.62 | 132900 | 3.4397 | 1.0 |
| 3.3787 | 9.63 | 133000 | 3.4440 | 1.0 |
| 5.5557 | 9.64 | 133100 | 3.4396 | 1.0 |
| 3.4011 | 9.64 | 133200 | 3.4448 | 1.0 |
| 3.7319 | 9.65 | 133300 | 3.4447 | 1.0 |
| 3.5717 | 9.66 | 133400 | 3.4387 | 1.0 |
| 3.3051 | 9.66 | 133500 | 3.4460 | 1.0 |
| 3.3485 | 9.67 | 133600 | 3.4513 | 1.0 |
| 3.4845 | 9.68 | 133700 | 3.4506 | 1.0 |
| 3.335 | 9.69 | 133800 | 3.4415 | 1.0 |
| 3.2942 | 9.69 | 133900 | 3.4439 | 1.0 |
| 3.2748 | 9.7 | 134000 | 3.4390 | 1.0 |
| 3.392 | 9.71 | 134100 | 3.4490 | 1.0 |
| 3.3396 | 9.72 | 134200 | 3.4463 | 1.0 |
| 3.3097 | 9.72 | 134300 | 3.4440 | 1.0 |
| 3.3421 | 9.73 | 134400 | 3.4498 | 1.0 |
| 3.5204 | 9.74 | 134500 | 3.4514 | 1.0 |
| 3.8217 | 9.74 | 134600 | 3.4463 | 1.0 |
| 3.3094 | 9.75 | 134700 | 3.4402 | 1.0 |
| 3.3267 | 9.76 | 134800 | 3.4425 | 1.0 |
| 3.3396 | 9.77 | 134900 | 3.4429 | 1.0 |
| 3.3117 | 9.77 | 135000 | 3.4415 | 1.0 |
| 3.4302 | 9.78 | 135100 | 3.4406 | 1.0 |
| 3.2691 | 9.79 | 135200 | 3.4405 | 1.0 |
| 3.337 | 9.8 | 135300 | 3.4416 | 1.0 |
| 3.3437 | 9.8 | 135400 | 3.4427 | 1.0 |
| 3.3744 | 9.81 | 135500 | 3.4477 | 1.0 |
| 3.3151 | 9.82 | 135600 | 3.4388 | 1.0 |
| 3.3742 | 9.82 | 135700 | 3.4448 | 1.0 |
| 3.3093 | 9.83 | 135800 | 3.4462 | 1.0 |
| 3.4145 | 9.84 | 135900 | 3.4413 | 1.0 |
| 3.3858 | 9.85 | 136000 | 3.4459 | 1.0 |
| 3.3464 | 9.85 | 136100 | 3.4432 | 1.0 |
| 3.3831 | 9.86 | 136200 | 3.4467 | 1.0 |
| 3.2715 | 9.87 | 136300 | 3.4442 | 1.0 |
| 3.3594 | 9.87 | 136400 | 3.4444 | 1.0 |
| 3.3679 | 9.88 | 136500 | 3.4498 | 1.0 |
| 3.346 | 9.89 | 136600 | 3.4380 | 1.0 |
| 3.3156 | 9.9 | 136700 | 3.4501 | 1.0 |
| 3.3689 | 9.9 | 136800 | 3.4403 | 1.0 |
| 3.3157 | 9.91 | 136900 | 3.4461 | 1.0 |
| 3.2955 | 9.92 | 137000 | 3.4460 | 1.0 |
| 3.2288 | 9.93 | 137100 | 3.4429 | 1.0 |
| 3.3068 | 9.93 | 137200 | 3.4442 | 1.0 |
| 3.3965 | 9.94 | 137300 | 3.4400 | 1.0 |
| 3.3238 | 9.95 | 137400 | 3.4464 | 1.0 |
| 3.3469 | 9.95 | 137500 | 3.4496 | 1.0 |
| 3.3818 | 9.96 | 137600 | 3.4446 | 1.0 |
| 3.3677 | 9.97 | 137700 | 3.4487 | 1.0 |
| 3.4811 | 9.98 | 137800 | 3.4441 | 1.0 |
| 3.3636 | 9.98 | 137900 | 3.4456 | 1.0 |
| 3.3305 | 9.99 | 138000 | 3.4417 | 1.0 |
| 3.4025 | 10.0 | 138100 | 3.4401 | 1.0 |
| 3.4951 | 10.01 | 138200 | 3.4392 | 1.0 |
| 3.2803 | 10.01 | 138300 | 3.4411 | 1.0 |
| 4.6095 | 10.02 | 138400 | 3.4446 | 1.0 |
| 3.3677 | 10.03 | 138500 | 3.4465 | 1.0 |
| 3.4183 | 10.03 | 138600 | 3.4434 | 1.0 |
| 3.3482 | 10.04 | 138700 | 3.4430 | 1.0 |
| 3.2795 | 10.05 | 138800 | 3.4449 | 1.0 |
| 3.282 | 10.06 | 138900 | 3.4455 | 1.0 |
| 3.2617 | 10.06 | 139000 | 3.4442 | 1.0 |
| 3.5404 | 10.07 | 139100 | 3.4375 | 1.0 |
| 3.3432 | 10.08 | 139200 | 3.4447 | 1.0 |
| 3.3643 | 10.08 | 139300 | 3.4429 | 1.0 |
| 3.3022 | 10.09 | 139400 | 3.4415 | 1.0 |
| 3.4062 | 10.1 | 139500 | 3.4415 | 1.0 |
| 3.374 | 10.11 | 139600 | 3.4405 | 1.0 |
| 3.2843 | 10.11 | 139700 | 3.4435 | 1.0 |
| 3.6033 | 10.12 | 139800 | 3.4473 | 1.0 |
| 3.3374 | 10.13 | 139900 | 3.4428 | 1.0 |
| 3.3877 | 10.14 | 140000 | 3.4513 | 1.0 |
| 3.3533 | 10.14 | 140100 | 3.4484 | 1.0 |
| 3.3678 | 10.15 | 140200 | 3.4481 | 1.0 |
| 3.276 | 10.16 | 140300 | 3.4416 | 1.0 |
| 3.3052 | 10.16 | 140400 | 3.4483 | 1.0 |
| 3.4821 | 10.17 | 140500 | 3.4390 | 1.0 |
| 3.2748 | 10.18 | 140600 | 3.4389 | 1.0 |
| 3.2742 | 10.19 | 140700 | 3.4482 | 1.0 |
| 3.2824 | 10.19 | 140800 | 3.4416 | 1.0 |
| 3.37 | 10.2 | 140900 | 3.4435 | 1.0 |
| 3.3768 | 10.21 | 141000 | 3.4458 | 1.0 |
| 3.2652 | 10.22 | 141100 | 3.4454 | 1.0 |
| 3.4041 | 10.22 | 141200 | 3.4425 | 1.0 |
| 3.4062 | 10.23 | 141300 | 3.4465 | 1.0 |
| 3.2338 | 10.24 | 141400 | 3.4438 | 1.0 |
| 3.4214 | 10.24 | 141500 | 3.4425 | 1.0 |
| 3.3741 | 10.25 | 141600 | 3.4389 | 1.0 |
| 3.3156 | 10.26 | 141700 | 3.4468 | 1.0 |
| 3.43 | 10.27 | 141800 | 3.4430 | 1.0 |
| 3.3447 | 10.27 | 141900 | 3.4456 | 1.0 |
| 3.2682 | 10.28 | 142000 | 3.4517 | 1.0 |
| 3.3296 | 10.29 | 142100 | 3.4484 | 1.0 |
| 3.2508 | 10.29 | 142200 | 3.4420 | 1.0 |
| 3.3328 | 10.3 | 142300 | 3.4472 | 1.0 |
| 3.2838 | 10.31 | 142400 | 3.4439 | 1.0 |
| 3.3274 | 10.32 | 142500 | 3.4408 | 1.0 |
| 3.4848 | 10.32 | 142600 | 3.4448 | 1.0 |
| 3.5383 | 10.33 | 142700 | 3.4423 | 1.0 |
| 3.231 | 10.34 | 142800 | 3.4463 | 1.0 |
| 3.1536 | 10.35 | 142900 | 3.4437 | 1.0 |
| 3.281 | 10.35 | 143000 | 3.4436 | 1.0 |
| 3.2452 | 10.36 | 143100 | 3.4393 | 1.0 |
| 3.5728 | 10.37 | 143200 | 3.4406 | 1.0 |
| 3.3216 | 10.37 | 143300 | 3.4403 | 1.0 |
| 3.3496 | 10.38 | 143400 | 3.4397 | 1.0 |
| 3.3177 | 10.39 | 143500 | 3.4559 | 1.0 |
| 3.3153 | 10.4 | 143600 | 3.4460 | 1.0 |
| 3.4076 | 10.4 | 143700 | 3.4441 | 1.0 |
| 3.4137 | 10.41 | 143800 | 3.4397 | 1.0 |
| 3.3806 | 10.42 | 143900 | 3.4488 | 1.0 |
| 3.366 | 10.42 | 144000 | 3.4462 | 1.0 |
| 3.4151 | 10.43 | 144100 | 3.4446 | 1.0 |
| 3.3399 | 10.44 | 144200 | 3.4447 | 1.0 |
| 3.3705 | 10.45 | 144300 | 3.4392 | 1.0 |
| 3.5029 | 10.45 | 144400 | 3.4513 | 1.0 |
| 3.3149 | 10.46 | 144500 | 3.4458 | 1.0 |
| 3.3677 | 10.47 | 144600 | 3.4442 | 1.0 |
| 3.408 | 10.48 | 144700 | 3.4403 | 1.0 |
| 3.3738 | 10.48 | 144800 | 3.4405 | 1.0 |
| 3.2886 | 10.49 | 144900 | 3.4447 | 1.0 |
| 3.3321 | 10.5 | 145000 | 3.4455 | 1.0 |
| 3.4341 | 10.5 | 145100 | 3.4476 | 1.0 |
| 3.4789 | 10.51 | 145200 | 3.4436 | 1.0 |
| 3.4361 | 10.52 | 145300 | 3.4488 | 1.0 |
| 3.3073 | 10.53 | 145400 | 3.4495 | 1.0 |
| 3.3372 | 10.53 | 145500 | 3.4461 | 1.0 |
| 3.31 | 10.54 | 145600 | 3.4512 | 1.0 |
| 3.4571 | 10.55 | 145700 | 3.4473 | 1.0 |
| 3.3517 | 10.56 | 145800 | 3.4435 | 1.0 |
| 3.4304 | 10.56 | 145900 | 3.4428 | 1.0 |
| 3.4364 | 10.57 | 146000 | 3.4369 | 1.0 |
| 3.5522 | 10.58 | 146100 | 3.4431 | 1.0 |
| 3.421 | 10.58 | 146200 | 3.4426 | 1.0 |
| 3.3087 | 10.59 | 146300 | 3.4436 | 1.0 |
| 3.2905 | 10.6 | 146400 | 3.4417 | 1.0 |
| 3.4746 | 10.61 | 146500 | 3.4419 | 1.0 |
| 3.3347 | 10.61 | 146600 | 3.4396 | 1.0 |
| 3.2969 | 10.62 | 146700 | 3.4471 | 1.0 |
| 3.3403 | 10.63 | 146800 | 3.4453 | 1.0 |
| 3.8747 | 10.63 | 146900 | 3.4447 | 1.0 |
| 3.3049 | 10.64 | 147000 | 3.4458 | 1.0 |
| 3.3451 | 10.65 | 147100 | 3.4441 | 1.0 |
| 3.4467 | 10.66 | 147200 | 3.4439 | 1.0 |
| 3.3037 | 10.66 | 147300 | 3.4425 | 1.0 |
| 3.3891 | 10.67 | 147400 | 3.4427 | 1.0 |
| 3.2158 | 10.68 | 147500 | 3.4436 | 1.0 |
| 3.3726 | 10.69 | 147600 | 3.4438 | 1.0 |
| 3.3391 | 10.69 | 147700 | 3.4548 | 1.0 |
| 3.2352 | 10.7 | 147800 | 3.4414 | 1.0 |
| 3.3604 | 10.71 | 147900 | 3.4408 | 1.0 |
| 3.3056 | 10.71 | 148000 | 3.4407 | 1.0 |
| 3.3201 | 10.72 | 148100 | 3.4404 | 1.0 |
| 3.4137 | 10.73 | 148200 | 3.4423 | 1.0 |
| 3.3336 | 10.74 | 148300 | 3.4455 | 1.0 |
| 3.317 | 10.74 | 148400 | 3.4426 | 1.0 |
| 3.2644 | 10.75 | 148500 | 3.4427 | 1.0 |
| 3.4462 | 10.76 | 148600 | 3.4429 | 1.0 |
| 3.448 | 10.77 | 148700 | 3.4479 | 1.0 |
| 3.8269 | 10.77 | 148800 | 3.4428 | 1.0 |
| 3.2383 | 10.78 | 148900 | 3.4400 | 1.0 |
| 3.4066 | 10.79 | 149000 | 3.4412 | 1.0 |
| 3.2348 | 10.79 | 149100 | 3.4491 | 1.0 |
| 3.2971 | 10.8 | 149200 | 3.4464 | 1.0 |
| 3.2493 | 10.81 | 149300 | 3.4509 | 1.0 |
| 3.4274 | 10.82 | 149400 | 3.4420 | 1.0 |
| 3.4327 | 10.82 | 149500 | 3.4441 | 1.0 |
| 3.7189 | 10.83 | 149600 | 3.4377 | 1.0 |
| 3.3102 | 10.84 | 149700 | 3.4484 | 1.0 |
| 3.4991 | 10.84 | 149800 | 3.4460 | 1.0 |
| 3.2776 | 10.85 | 149900 | 3.4428 | 1.0 |
| 3.4605 | 10.86 | 150000 | 3.4469 | 1.0 |
| 3.8307 | 10.87 | 150100 | 3.4500 | 1.0 |
| 3.3874 | 10.87 | 150200 | 3.4454 | 1.0 |
| 3.3007 | 10.88 | 150300 | 3.4433 | 1.0 |
| 3.4145 | 10.89 | 150400 | 3.4434 | 1.0 |
| 3.1793 | 10.9 | 150500 | 3.4401 | 1.0 |
| 3.27 | 10.9 | 150600 | 3.4459 | 1.0 |
| 3.3434 | 10.91 | 150700 | 3.4400 | 1.0 |
| 3.3301 | 10.92 | 150800 | 3.4389 | 1.0 |
| 3.622 | 10.92 | 150900 | 3.4451 | 1.0 |
| 3.2369 | 10.93 | 151000 | 3.4417 | 1.0 |
| 3.4093 | 10.94 | 151100 | 3.4520 | 1.0 |
| 3.3885 | 10.95 | 151200 | 3.4448 | 1.0 |
| 3.4032 | 10.95 | 151300 | 3.4453 | 1.0 |
| 3.4659 | 10.96 | 151400 | 3.4445 | 1.0 |
| 5.0434 | 10.97 | 151500 | 3.4457 | 1.0 |
| 3.5397 | 10.98 | 151600 | 3.4409 | 1.0 |
| 3.4057 | 10.98 | 151700 | 3.4426 | 1.0 |
| 3.2813 | 10.99 | 151800 | 3.4471 | 1.0 |
| 3.2432 | 11.0 | 151900 | 3.4465 | 1.0 |
| 3.3493 | 11.0 | 152000 | 3.4466 | 1.0 |
| 3.4295 | 11.01 | 152100 | 3.4379 | 1.0 |
| 3.2836 | 11.02 | 152200 | 3.4421 | 1.0 |
| 3.3436 | 11.03 | 152300 | 3.4429 | 1.0 |
| 3.2982 | 11.03 | 152400 | 3.4473 | 1.0 |
| 3.3687 | 11.04 | 152500 | 3.4428 | 1.0 |
| 3.362 | 11.05 | 152600 | 3.4387 | 1.0 |
| 3.3621 | 11.05 | 152700 | 3.4410 | 1.0 |
| 3.4442 | 11.06 | 152800 | 3.4392 | 1.0 |
| 3.247 | 11.07 | 152900 | 3.4536 | 1.0 |
| 3.3843 | 11.08 | 153000 | 3.4479 | 1.0 |
| 3.3548 | 11.08 | 153100 | 3.4425 | 1.0 |
| 3.376 | 11.09 | 153200 | 3.4394 | 1.0 |
| 3.3866 | 11.1 | 153300 | 3.4389 | 1.0 |
| 3.3348 | 11.11 | 153400 | 3.4484 | 1.0 |
| 3.3206 | 11.11 | 153500 | 3.4468 | 1.0 |
| 3.4335 | 11.12 | 153600 | 3.4445 | 1.0 |
| 3.3921 | 11.13 | 153700 | 3.4456 | 1.0 |
| 3.434 | 11.13 | 153800 | 3.4422 | 1.0 |
| 3.3742 | 11.14 | 153900 | 3.4434 | 1.0 |
| 3.3157 | 11.15 | 154000 | 3.4444 | 1.0 |
| 3.4209 | 11.16 | 154100 | 3.4411 | 1.0 |
| 3.3413 | 11.16 | 154200 | 3.4457 | 1.0 |
| 3.3626 | 11.17 | 154300 | 3.4451 | 1.0 |
| 3.3541 | 11.18 | 154400 | 3.4391 | 1.0 |
| 3.2927 | 11.19 | 154500 | 3.4515 | 1.0 |
| 3.3222 | 11.19 | 154600 | 3.4498 | 1.0 |
| 3.2971 | 11.2 | 154700 | 3.4521 | 1.0 |
| 3.3817 | 11.21 | 154800 | 3.4482 | 1.0 |
| 3.3806 | 11.21 | 154900 | 3.4467 | 1.0 |
| 3.2959 | 11.22 | 155000 | 3.4417 | 1.0 |
| 3.4212 | 11.23 | 155100 | 3.4438 | 1.0 |
| 3.3606 | 11.24 | 155200 | 3.4382 | 1.0 |
| 3.3119 | 11.24 | 155300 | 3.4381 | 1.0 |
| 3.4004 | 11.25 | 155400 | 3.4403 | 1.0 |
| 3.2865 | 11.26 | 155500 | 3.4469 | 1.0 |
| 3.3606 | 11.26 | 155600 | 3.4492 | 1.0 |
| 3.2771 | 11.27 | 155700 | 3.4407 | 1.0 |
| 3.3281 | 11.28 | 155800 | 3.4461 | 1.0 |
| 3.3006 | 11.29 | 155900 | 3.4505 | 1.0 |
| 3.3116 | 11.29 | 156000 | 3.4440 | 1.0 |
| 3.4326 | 11.3 | 156100 | 3.4475 | 1.0 |
| 3.2976 | 11.31 | 156200 | 3.4517 | 1.0 |
| 3.3424 | 11.32 | 156300 | 3.4429 | 1.0 |
| 3.5005 | 11.32 | 156400 | 3.4398 | 1.0 |
| 3.2623 | 11.33 | 156500 | 3.4382 | 1.0 |
| 3.331 | 11.34 | 156600 | 3.4472 | 1.0 |
| 3.3657 | 11.34 | 156700 | 3.4413 | 1.0 |
| 3.3101 | 11.35 | 156800 | 3.4496 | 1.0 |
| 3.3516 | 11.36 | 156900 | 3.4465 | 1.0 |
| 3.752 | 11.37 | 157000 | 3.4419 | 1.0 |
| 3.2446 | 11.37 | 157100 | 3.4416 | 1.0 |
| 3.2753 | 11.38 | 157200 | 3.4406 | 1.0 |
| 3.2386 | 11.39 | 157300 | 3.4420 | 1.0 |
| 3.3541 | 11.4 | 157400 | 3.4409 | 1.0 |
| 3.4276 | 11.4 | 157500 | 3.4430 | 1.0 |
| 3.2635 | 11.41 | 157600 | 3.4442 | 1.0 |
| 3.4478 | 11.42 | 157700 | 3.4413 | 1.0 |
| 3.3043 | 11.42 | 157800 | 3.4491 | 1.0 |
| 3.3014 | 11.43 | 157900 | 3.4413 | 1.0 |
| 3.3542 | 11.44 | 158000 | 3.4436 | 1.0 |
| 3.3745 | 11.45 | 158100 | 3.4465 | 1.0 |
| 3.3318 | 11.45 | 158200 | 3.4463 | 1.0 |
| 3.3373 | 11.46 | 158300 | 3.4444 | 1.0 |
| 3.4279 | 11.47 | 158400 | 3.4386 | 1.0 |
| 3.3588 | 11.47 | 158500 | 3.4449 | 1.0 |
| 3.338 | 11.48 | 158600 | 3.4399 | 1.0 |
| 3.4119 | 11.49 | 158700 | 3.4376 | 1.0 |
| 3.2989 | 11.5 | 158800 | 3.4462 | 1.0 |
| 3.1883 | 11.5 | 158900 | 3.4398 | 1.0 |
| 3.277 | 11.51 | 159000 | 3.4457 | 1.0 |
| 3.2838 | 11.52 | 159100 | 3.4481 | 1.0 |
| 3.3205 | 11.53 | 159200 | 3.4496 | 1.0 |
| 3.2713 | 11.53 | 159300 | 3.4435 | 1.0 |
| 3.3927 | 11.54 | 159400 | 3.4441 | 1.0 |
| 3.5806 | 11.55 | 159500 | 3.4466 | 1.0 |
| 3.3704 | 11.55 | 159600 | 3.4462 | 1.0 |
| 3.3217 | 11.56 | 159700 | 3.4444 | 1.0 |
| 3.2637 | 11.57 | 159800 | 3.4481 | 1.0 |
| 3.2525 | 11.58 | 159900 | 3.4456 | 1.0 |
| 3.3364 | 11.58 | 160000 | 3.4445 | 1.0 |
| 3.3219 | 11.59 | 160100 | 3.4431 | 1.0 |
| 3.3982 | 11.6 | 160200 | 3.4489 | 1.0 |
| 3.2253 | 11.61 | 160300 | 3.4409 | 1.0 |
| 3.2497 | 11.61 | 160400 | 3.4427 | 1.0 |
| 3.3137 | 11.62 | 160500 | 3.4454 | 1.0 |
| 3.566 | 11.63 | 160600 | 3.4419 | 1.0 |
| 3.3203 | 11.63 | 160700 | 3.4460 | 1.0 |
| 3.3048 | 11.64 | 160800 | 3.4439 | 1.0 |
| 3.371 | 11.65 | 160900 | 3.4432 | 1.0 |
| 3.249 | 11.66 | 161000 | 3.4412 | 1.0 |
| 3.2731 | 11.66 | 161100 | 3.4430 | 1.0 |
| 3.3787 | 11.67 | 161200 | 3.4426 | 1.0 |
| 3.2696 | 11.68 | 161300 | 3.4479 | 1.0 |
| 3.7056 | 11.68 | 161400 | 3.4417 | 1.0 |
| 3.3999 | 11.69 | 161500 | 3.4455 | 1.0 |
| 3.292 | 11.7 | 161600 | 3.4458 | 1.0 |
| 3.2673 | 11.71 | 161700 | 3.4398 | 1.0 |
| 3.4488 | 11.71 | 161800 | 3.4445 | 1.0 |
| 3.2858 | 11.72 | 161900 | 3.4422 | 1.0 |
| 3.4464 | 11.73 | 162000 | 3.4466 | 1.0 |
| 3.2651 | 11.74 | 162100 | 3.4460 | 1.0 |
| 3.2518 | 11.74 | 162200 | 3.4520 | 1.0 |
| 3.4483 | 11.75 | 162300 | 3.4447 | 1.0 |
| 3.2609 | 11.76 | 162400 | 3.4373 | 1.0 |
| 3.398 | 11.76 | 162500 | 3.4432 | 1.0 |
| 3.5529 | 11.77 | 162600 | 3.4435 | 1.0 |
| 3.3348 | 11.78 | 162700 | 3.4452 | 1.0 |
| 3.398 | 11.79 | 162800 | 3.4393 | 1.0 |
| 3.5933 | 11.79 | 162900 | 3.4418 | 1.0 |
| 3.3373 | 11.8 | 163000 | 3.4434 | 1.0 |
| 3.3553 | 11.81 | 163100 | 3.4463 | 1.0 |
| 3.3234 | 11.81 | 163200 | 3.4421 | 1.0 |
| 3.3678 | 11.82 | 163300 | 3.4417 | 1.0 |
| 3.2942 | 11.83 | 163400 | 3.4454 | 1.0 |
| 3.5065 | 11.84 | 163500 | 3.4490 | 1.0 |
| 3.2952 | 11.84 | 163600 | 3.4468 | 1.0 |
| 3.7354 | 11.85 | 163700 | 3.4450 | 1.0 |
| 3.3021 | 11.86 | 163800 | 3.4439 | 1.0 |
| 3.3754 | 11.87 | 163900 | 3.4455 | 1.0 |
| 3.2568 | 11.87 | 164000 | 3.4400 | 1.0 |
| 3.3191 | 11.88 | 164100 | 3.4391 | 1.0 |
| 3.379 | 11.89 | 164200 | 3.4435 | 1.0 |
| 3.3221 | 11.89 | 164300 | 3.4440 | 1.0 |
| 3.3765 | 11.9 | 164400 | 3.4452 | 1.0 |
| 3.2364 | 11.91 | 164500 | 3.4445 | 1.0 |
| 3.6366 | 11.92 | 164600 | 3.4424 | 1.0 |
| 3.3871 | 11.92 | 164700 | 3.4398 | 1.0 |
| 3.3257 | 11.93 | 164800 | 3.4414 | 1.0 |
| 3.298 | 11.94 | 164900 | 3.4388 | 1.0 |
| 3.2322 | 11.95 | 165000 | 3.4410 | 1.0 |
| 3.4019 | 11.95 | 165100 | 3.4453 | 1.0 |
| 3.5989 | 11.96 | 165200 | 3.4435 | 1.0 |
| 3.3113 | 11.97 | 165300 | 3.4439 | 1.0 |
| 3.3364 | 11.97 | 165400 | 3.4416 | 1.0 |
| 3.3256 | 11.98 | 165500 | 3.4465 | 1.0 |
| 3.3355 | 11.99 | 165600 | 3.4434 | 1.0 |
| 3.3243 | 12.0 | 165700 | 3.4420 | 1.0 |
| 3.277 | 12.0 | 165800 | 3.4429 | 1.0 |
| 3.3413 | 12.01 | 165900 | 3.4418 | 1.0 |
| 3.3576 | 12.02 | 166000 | 3.4432 | 1.0 |
| 3.2624 | 12.02 | 166100 | 3.4493 | 1.0 |
| 3.4131 | 12.03 | 166200 | 3.4429 | 1.0 |
| 3.3717 | 12.04 | 166300 | 3.4460 | 1.0 |
| 3.4403 | 12.05 | 166400 | 3.4413 | 1.0 |
| 3.3418 | 12.05 | 166500 | 3.4425 | 1.0 |
| 3.2016 | 12.06 | 166600 | 3.4429 | 1.0 |
| 3.2851 | 12.07 | 166700 | 3.4427 | 1.0 |
| 3.3627 | 12.08 | 166800 | 3.4436 | 1.0 |
| 3.176 | 12.08 | 166900 | 3.4473 | 1.0 |
| 3.3159 | 12.09 | 167000 | 3.4431 | 1.0 |
| 3.335 | 12.1 | 167100 | 3.4425 | 1.0 |
| 3.2585 | 12.1 | 167200 | 3.4438 | 1.0 |
| 3.311 | 12.11 | 167300 | 3.4420 | 1.0 |
| 3.2594 | 12.12 | 167400 | 3.4402 | 1.0 |
| 3.3877 | 12.13 | 167500 | 3.4427 | 1.0 |
| 3.3837 | 12.13 | 167600 | 3.4468 | 1.0 |
| 3.4012 | 12.14 | 167700 | 3.4431 | 1.0 |
| 3.3258 | 12.15 | 167800 | 3.4405 | 1.0 |
| 3.5918 | 12.16 | 167900 | 3.4420 | 1.0 |
| 3.1809 | 12.16 | 168000 | 3.4487 | 1.0 |
| 3.2878 | 12.17 | 168100 | 3.4453 | 1.0 |
| 3.3626 | 12.18 | 168200 | 3.4469 | 1.0 |
| 3.3128 | 12.18 | 168300 | 3.4452 | 1.0 |
| 3.3257 | 12.19 | 168400 | 3.4466 | 1.0 |
| 3.3226 | 12.2 | 168500 | 3.4416 | 1.0 |
| 3.5412 | 12.21 | 168600 | 3.4479 | 1.0 |
| 3.2933 | 12.21 | 168700 | 3.4476 | 1.0 |
| 3.5552 | 12.22 | 168800 | 3.4431 | 1.0 |
| 3.3288 | 12.23 | 168900 | 3.4424 | 1.0 |
| 3.4587 | 12.23 | 169000 | 3.4423 | 1.0 |
| 3.3286 | 12.24 | 169100 | 3.4449 | 1.0 |
| 3.2894 | 12.25 | 169200 | 3.4432 | 1.0 |
| 4.5148 | 12.26 | 169300 | 3.4424 | 1.0 |
| 3.3809 | 12.26 | 169400 | 3.4472 | 1.0 |
| 3.2641 | 12.27 | 169500 | 3.4456 | 1.0 |
| 3.3429 | 12.28 | 169600 | 3.4443 | 1.0 |
| 3.2988 | 12.29 | 169700 | 3.4423 | 1.0 |
| 3.3795 | 12.29 | 169800 | 3.4408 | 1.0 |
| 3.2812 | 12.3 | 169900 | 3.4468 | 1.0 |
| 3.2393 | 12.31 | 170000 | 3.4415 | 1.0 |
| 3.3997 | 12.31 | 170100 | 3.4426 | 1.0 |
| 3.3112 | 12.32 | 170200 | 3.4424 | 1.0 |
| 3.4299 | 12.33 | 170300 | 3.4434 | 1.0 |
| 3.486 | 12.34 | 170400 | 3.4454 | 1.0 |
| 3.2899 | 12.34 | 170500 | 3.4451 | 1.0 |
| 3.4311 | 12.35 | 170600 | 3.4456 | 1.0 |
| 3.2727 | 12.36 | 170700 | 3.4472 | 1.0 |
| 3.3182 | 12.37 | 170800 | 3.4409 | 1.0 |
| 3.5047 | 12.37 | 170900 | 3.4412 | 1.0 |
| 3.3801 | 12.38 | 171000 | 3.4403 | 1.0 |
| 3.3643 | 12.39 | 171100 | 3.4400 | 1.0 |
| 3.3132 | 12.39 | 171200 | 3.4417 | 1.0 |
| 3.3558 | 12.4 | 171300 | 3.4440 | 1.0 |
| 3.4187 | 12.41 | 171400 | 3.4470 | 1.0 |
| 3.3376 | 12.42 | 171500 | 3.4450 | 1.0 |
| 3.3095 | 12.42 | 171600 | 3.4456 | 1.0 |
| 3.3304 | 12.43 | 171700 | 3.4465 | 1.0 |
| 3.4092 | 12.44 | 171800 | 3.4500 | 1.0 |
| 3.4149 | 12.44 | 171900 | 3.4459 | 1.0 |
| 5.8155 | 12.45 | 172000 | 3.4422 | 1.0 |
| 3.3086 | 12.46 | 172100 | 3.4405 | 1.0 |
| 3.2699 | 12.47 | 172200 | 3.4439 | 1.0 |
| 3.2727 | 12.47 | 172300 | 3.4465 | 1.0 |
| 3.4084 | 12.48 | 172400 | 3.4495 | 1.0 |
| 3.3246 | 12.49 | 172500 | 3.4451 | 1.0 |
| 3.4584 | 12.5 | 172600 | 3.4404 | 1.0 |
| 3.3491 | 12.5 | 172700 | 3.4407 | 1.0 |
| 3.3103 | 12.51 | 172800 | 3.4417 | 1.0 |
| 3.3413 | 12.52 | 172900 | 3.4452 | 1.0 |
| 3.3625 | 12.52 | 173000 | 3.4437 | 1.0 |
| 3.3988 | 12.53 | 173100 | 3.4452 | 1.0 |
| 3.3915 | 12.54 | 173200 | 3.4428 | 1.0 |
| 3.2812 | 12.55 | 173300 | 3.4445 | 1.0 |
| 3.2952 | 12.55 | 173400 | 3.4450 | 1.0 |
| 3.4923 | 12.56 | 173500 | 3.4419 | 1.0 |
| 3.4275 | 12.57 | 173600 | 3.4420 | 1.0 |
| 3.8005 | 12.58 | 173700 | 3.4465 | 1.0 |
| 3.5748 | 12.58 | 173800 | 3.4437 | 1.0 |
| 3.283 | 12.59 | 173900 | 3.4441 | 1.0 |
| 3.3727 | 12.6 | 174000 | 3.4444 | 1.0 |
| 3.285 | 12.6 | 174100 | 3.4443 | 1.0 |
| 3.4836 | 12.61 | 174200 | 3.4422 | 1.0 |
| 3.5803 | 12.62 | 174300 | 3.4426 | 1.0 |
| 3.2655 | 12.63 | 174400 | 3.4420 | 1.0 |
| 3.3653 | 12.63 | 174500 | 3.4463 | 1.0 |
| 3.3581 | 12.64 | 174600 | 3.4464 | 1.0 |
| 3.2738 | 12.65 | 174700 | 3.4435 | 1.0 |
| 3.3552 | 12.65 | 174800 | 3.4409 | 1.0 |
| 3.3571 | 12.66 | 174900 | 3.4467 | 1.0 |
| 3.3093 | 12.67 | 175000 | 3.4423 | 1.0 |
| 3.6147 | 12.68 | 175100 | 3.4444 | 1.0 |
| 3.2892 | 12.68 | 175200 | 3.4420 | 1.0 |
| 3.4071 | 12.69 | 175300 | 3.4455 | 1.0 |
| 3.3201 | 12.7 | 175400 | 3.4502 | 1.0 |
| 3.308 | 12.71 | 175500 | 3.4428 | 1.0 |
| 3.3885 | 12.71 | 175600 | 3.4452 | 1.0 |
| 3.3285 | 12.72 | 175700 | 3.4418 | 1.0 |
| 3.3647 | 12.73 | 175800 | 3.4446 | 1.0 |
| 3.2559 | 12.73 | 175900 | 3.4433 | 1.0 |
| 3.4547 | 12.74 | 176000 | 3.4484 | 1.0 |
| 3.395 | 12.75 | 176100 | 3.4464 | 1.0 |
| 3.4244 | 12.76 | 176200 | 3.4468 | 1.0 |
| 3.4961 | 12.76 | 176300 | 3.4441 | 1.0 |
| 3.4281 | 12.77 | 176400 | 3.4419 | 1.0 |
| 3.4241 | 12.78 | 176500 | 3.4407 | 1.0 |
| 3.2563 | 12.79 | 176600 | 3.4430 | 1.0 |
| 3.3779 | 12.79 | 176700 | 3.4437 | 1.0 |
| 3.3268 | 12.8 | 176800 | 3.4457 | 1.0 |
| 3.4255 | 12.81 | 176900 | 3.4437 | 1.0 |
| 3.3086 | 12.81 | 177000 | 3.4422 | 1.0 |
| 3.3619 | 12.82 | 177100 | 3.4447 | 1.0 |
| 3.2334 | 12.83 | 177200 | 3.4457 | 1.0 |
| 3.4318 | 12.84 | 177300 | 3.4413 | 1.0 |
| 3.2553 | 12.84 | 177400 | 3.4425 | 1.0 |
| 3.225 | 12.85 | 177500 | 3.4435 | 1.0 |
| 3.3984 | 12.86 | 177600 | 3.4518 | 1.0 |
| 3.5566 | 12.86 | 177700 | 3.4481 | 1.0 |
| 4.3006 | 12.87 | 177800 | 3.4463 | 1.0 |
| 3.2232 | 12.88 | 177900 | 3.4454 | 1.0 |
| 3.2224 | 12.89 | 178000 | 3.4452 | 1.0 |
| 3.3974 | 12.89 | 178100 | 3.4430 | 1.0 |
| 3.4688 | 12.9 | 178200 | 3.4441 | 1.0 |
| 3.293 | 12.91 | 178300 | 3.4422 | 1.0 |
| 3.7722 | 12.92 | 178400 | 3.4459 | 1.0 |
| 3.3155 | 12.92 | 178500 | 3.4451 | 1.0 |
| 3.3955 | 12.93 | 178600 | 3.4438 | 1.0 |
| 3.2985 | 12.94 | 178700 | 3.4411 | 1.0 |
| 3.3729 | 12.94 | 178800 | 3.4415 | 1.0 |
| 3.3966 | 12.95 | 178900 | 3.4433 | 1.0 |
| 3.2917 | 12.96 | 179000 | 3.4422 | 1.0 |
| 3.3772 | 12.97 | 179100 | 3.4426 | 1.0 |
| 3.2921 | 12.97 | 179200 | 3.4458 | 1.0 |
| 3.2751 | 12.98 | 179300 | 3.4429 | 1.0 |
| 3.4227 | 12.99 | 179400 | 3.4429 | 1.0 |
| 3.3031 | 13.0 | 179500 | 3.4463 | 1.0 |
| 3.3257 | 13.0 | 179600 | 3.4496 | 1.0 |
| 3.3472 | 13.01 | 179700 | 3.4436 | 1.0 |
| 3.4014 | 13.02 | 179800 | 3.4484 | 1.0 |
| 3.4494 | 13.02 | 179900 | 3.4418 | 1.0 |
| 3.559 | 13.03 | 180000 | 3.4425 | 1.0 |
| 3.3253 | 13.04 | 180100 | 3.4412 | 1.0 |
| 3.2797 | 13.05 | 180200 | 3.4464 | 1.0 |
| 3.3854 | 13.05 | 180300 | 3.4484 | 1.0 |
| 3.24 | 13.06 | 180400 | 3.4446 | 1.0 |
| 3.2406 | 13.07 | 180500 | 3.4453 | 1.0 |
| 3.3609 | 13.07 | 180600 | 3.4425 | 1.0 |
| 3.3496 | 13.08 | 180700 | 3.4465 | 1.0 |
| 3.2963 | 13.09 | 180800 | 3.4437 | 1.0 |
| 3.2781 | 13.1 | 180900 | 3.4481 | 1.0 |
| 3.1707 | 13.1 | 181000 | 3.4465 | 1.0 |
| 3.5305 | 13.11 | 181100 | 3.4460 | 1.0 |
| 3.3498 | 13.12 | 181200 | 3.4423 | 1.0 |
| 3.276 | 13.13 | 181300 | 3.4402 | 1.0 |
| 3.2264 | 13.13 | 181400 | 3.4432 | 1.0 |
| 3.2517 | 13.14 | 181500 | 3.4408 | 1.0 |
| 3.3312 | 13.15 | 181600 | 3.4455 | 1.0 |
| 3.4057 | 13.15 | 181700 | 3.4476 | 1.0 |
| 3.34 | 13.16 | 181800 | 3.4415 | 1.0 |
| 3.2458 | 13.17 | 181900 | 3.4409 | 1.0 |
| 3.3949 | 13.18 | 182000 | 3.4405 | 1.0 |
| 3.289 | 13.18 | 182100 | 3.4431 | 1.0 |
| 3.4016 | 13.19 | 182200 | 3.4393 | 1.0 |
| 3.256 | 13.2 | 182300 | 3.4410 | 1.0 |
| 3.2597 | 13.2 | 182400 | 3.4391 | 1.0 |
| 3.2483 | 13.21 | 182500 | 3.4387 | 1.0 |
| 3.3637 | 13.22 | 182600 | 3.4409 | 1.0 |
| 3.2936 | 13.23 | 182700 | 3.4399 | 1.0 |
| 3.2666 | 13.23 | 182800 | 3.4458 | 1.0 |
| 3.3675 | 13.24 | 182900 | 3.4494 | 1.0 |
| 3.3538 | 13.25 | 183000 | 3.4430 | 1.0 |
| 3.3276 | 13.26 | 183100 | 3.4442 | 1.0 |
| 3.3851 | 13.26 | 183200 | 3.4425 | 1.0 |
| 3.3579 | 13.27 | 183300 | 3.4410 | 1.0 |
| 3.2882 | 13.28 | 183400 | 3.4400 | 1.0 |
| 3.3541 | 13.28 | 183500 | 3.4436 | 1.0 |
| 3.392 | 13.29 | 183600 | 3.4445 | 1.0 |
| 3.3857 | 13.3 | 183700 | 3.4477 | 1.0 |
| 3.3084 | 13.31 | 183800 | 3.4463 | 1.0 |
| 3.327 | 13.31 | 183900 | 3.4451 | 1.0 |
| 3.3967 | 13.32 | 184000 | 3.4483 | 1.0 |
| 3.3657 | 13.33 | 184100 | 3.4471 | 1.0 |
| 3.3732 | 13.34 | 184200 | 3.4465 | 1.0 |
| 3.366 | 13.34 | 184300 | 3.4459 | 1.0 |
| 3.2545 | 13.35 | 184400 | 3.4451 | 1.0 |
| 4.2873 | 13.36 | 184500 | 3.4425 | 1.0 |
| 3.6525 | 13.36 | 184600 | 3.4432 | 1.0 |
| 3.2921 | 13.37 | 184700 | 3.4437 | 1.0 |
| 3.273 | 13.38 | 184800 | 3.4420 | 1.0 |
| 3.267 | 13.39 | 184900 | 3.4445 | 1.0 |
| 3.3585 | 13.39 | 185000 | 3.4459 | 1.0 |
| 3.3271 | 13.4 | 185100 | 3.4424 | 1.0 |
| 3.3752 | 13.41 | 185200 | 3.4406 | 1.0 |
| 3.2715 | 13.41 | 185300 | 3.4424 | 1.0 |
| 3.2668 | 13.42 | 185400 | 3.4440 | 1.0 |
| 3.4546 | 13.43 | 185500 | 3.4464 | 1.0 |
| 3.2931 | 13.44 | 185600 | 3.4444 | 1.0 |
| 3.4428 | 13.44 | 185700 | 3.4443 | 1.0 |
| 3.4004 | 13.45 | 185800 | 3.4475 | 1.0 |
| 3.3416 | 13.46 | 185900 | 3.4447 | 1.0 |
| 3.3598 | 13.47 | 186000 | 3.4458 | 1.0 |
| 3.3348 | 13.47 | 186100 | 3.4420 | 1.0 |
| 3.2879 | 13.48 | 186200 | 3.4410 | 1.0 |
| 3.3791 | 13.49 | 186300 | 3.4481 | 1.0 |
| 3.3066 | 13.49 | 186400 | 3.4440 | 1.0 |
| 3.2824 | 13.5 | 186500 | 3.4447 | 1.0 |
| 3.4092 | 13.51 | 186600 | 3.4447 | 1.0 |
| 3.2679 | 13.52 | 186700 | 3.4443 | 1.0 |
| 3.3921 | 13.52 | 186800 | 3.4447 | 1.0 |
| 3.3348 | 13.53 | 186900 | 3.4424 | 1.0 |
| 3.2365 | 13.54 | 187000 | 3.4392 | 1.0 |
| 3.3355 | 13.55 | 187100 | 3.4387 | 1.0 |
| 3.2654 | 13.55 | 187200 | 3.4393 | 1.0 |
| 3.3085 | 13.56 | 187300 | 3.4404 | 1.0 |
| 3.3127 | 13.57 | 187400 | 3.4400 | 1.0 |
| 3.219 | 13.57 | 187500 | 3.4422 | 1.0 |
| 3.3733 | 13.58 | 187600 | 3.4391 | 1.0 |
| 3.2622 | 13.59 | 187700 | 3.4420 | 1.0 |
| 3.2188 | 13.6 | 187800 | 3.4445 | 1.0 |
| 3.2977 | 13.6 | 187900 | 3.4437 | 1.0 |
| 3.2994 | 13.61 | 188000 | 3.4463 | 1.0 |
| 3.2897 | 13.62 | 188100 | 3.4438 | 1.0 |
| 3.3194 | 13.62 | 188200 | 3.4452 | 1.0 |
| 3.3566 | 13.63 | 188300 | 3.4446 | 1.0 |
| 3.3442 | 13.64 | 188400 | 3.4509 | 1.0 |
| 3.58 | 13.65 | 188500 | 3.4509 | 1.0 |
| 3.4537 | 13.65 | 188600 | 3.4479 | 1.0 |
| 3.342 | 13.66 | 188700 | 3.4428 | 1.0 |
| 3.2765 | 13.67 | 188800 | 3.4410 | 1.0 |
| 3.2765 | 13.68 | 188900 | 3.4422 | 1.0 |
| 3.3381 | 13.68 | 189000 | 3.4400 | 1.0 |
| 3.2883 | 13.69 | 189100 | 3.4411 | 1.0 |
| 3.2861 | 13.7 | 189200 | 3.4417 | 1.0 |
| 3.3049 | 13.7 | 189300 | 3.4431 | 1.0 |
| 3.7184 | 13.71 | 189400 | 3.4446 | 1.0 |
| 3.3307 | 13.72 | 189500 | 3.4449 | 1.0 |
| 3.3274 | 13.73 | 189600 | 3.4456 | 1.0 |
| 3.3481 | 13.73 | 189700 | 3.4417 | 1.0 |
| 3.3763 | 13.74 | 189800 | 3.4439 | 1.0 |
| 3.3005 | 13.75 | 189900 | 3.4442 | 1.0 |
| 3.3775 | 13.76 | 190000 | 3.4458 | 1.0 |
| 3.284 | 13.76 | 190100 | 3.4427 | 1.0 |
| 3.2496 | 13.77 | 190200 | 3.4465 | 1.0 |
| 3.4141 | 13.78 | 190300 | 3.4422 | 1.0 |
| 3.3689 | 13.78 | 190400 | 3.4441 | 1.0 |
| 3.2925 | 13.79 | 190500 | 3.4446 | 1.0 |
| 3.334 | 13.8 | 190600 | 3.4447 | 1.0 |
| 3.4054 | 13.81 | 190700 | 3.4442 | 1.0 |
| 3.5985 | 13.81 | 190800 | 3.4418 | 1.0 |
| 3.307 | 13.82 | 190900 | 3.4437 | 1.0 |
| 3.2475 | 13.83 | 191000 | 3.4418 | 1.0 |
| 3.4217 | 13.83 | 191100 | 3.4429 | 1.0 |
| 3.2629 | 13.84 | 191200 | 3.4417 | 1.0 |
| 3.4471 | 13.85 | 191300 | 3.4420 | 1.0 |
| 3.3174 | 13.86 | 191400 | 3.4400 | 1.0 |
| 3.3505 | 13.86 | 191500 | 3.4430 | 1.0 |
| 3.4601 | 13.87 | 191600 | 3.4409 | 1.0 |
| 3.2617 | 13.88 | 191700 | 3.4439 | 1.0 |
| 3.4259 | 13.89 | 191800 | 3.4451 | 1.0 |
| 3.4135 | 13.89 | 191900 | 3.4424 | 1.0 |
| 3.2713 | 13.9 | 192000 | 3.4425 | 1.0 |
| 3.3399 | 13.91 | 192100 | 3.4450 | 1.0 |
| 3.375 | 13.91 | 192200 | 3.4440 | 1.0 |
| 3.2318 | 13.92 | 192300 | 3.4449 | 1.0 |
| 3.2925 | 13.93 | 192400 | 3.4430 | 1.0 |
| 3.416 | 13.94 | 192500 | 3.4440 | 1.0 |
| 3.283 | 13.94 | 192600 | 3.4441 | 1.0 |
| 3.249 | 13.95 | 192700 | 3.4436 | 1.0 |
| 3.3415 | 13.96 | 192800 | 3.4435 | 1.0 |
| 3.3123 | 13.97 | 192900 | 3.4427 | 1.0 |
| 3.3019 | 13.97 | 193000 | 3.4414 | 1.0 |
| 3.3949 | 13.98 | 193100 | 3.4409 | 1.0 |
| 3.3118 | 13.99 | 193200 | 3.4413 | 1.0 |
| 3.4302 | 13.99 | 193300 | 3.4431 | 1.0 |
| 3.382 | 14.0 | 193400 | 3.4439 | 1.0 |
| 3.4496 | 14.01 | 193500 | 3.4429 | 1.0 |
| 3.2643 | 14.02 | 193600 | 3.4454 | 1.0 |
| 3.2298 | 14.02 | 193700 | 3.4439 | 1.0 |
| 3.3804 | 14.03 | 193800 | 3.4429 | 1.0 |
| 3.2049 | 14.04 | 193900 | 3.4429 | 1.0 |
| 3.3818 | 14.04 | 194000 | 3.4420 | 1.0 |
| 3.2901 | 14.05 | 194100 | 3.4433 | 1.0 |
| 3.2989 | 14.06 | 194200 | 3.4419 | 1.0 |
| 3.2548 | 14.07 | 194300 | 3.4434 | 1.0 |
| 3.454 | 14.07 | 194400 | 3.4432 | 1.0 |
| 3.3365 | 14.08 | 194500 | 3.4433 | 1.0 |
| 3.3799 | 14.09 | 194600 | 3.4443 | 1.0 |
| 3.3536 | 14.1 | 194700 | 3.4438 | 1.0 |
| 3.5929 | 14.1 | 194800 | 3.4441 | 1.0 |
| 4.2116 | 14.11 | 194900 | 3.4433 | 1.0 |
| 3.4121 | 14.12 | 195000 | 3.4437 | 1.0 |
| 3.3715 | 14.12 | 195100 | 3.4442 | 1.0 |
| 3.4325 | 14.13 | 195200 | 3.4467 | 1.0 |
| 3.3585 | 14.14 | 195300 | 3.4450 | 1.0 |
| 3.3374 | 14.15 | 195400 | 3.4421 | 1.0 |
| 3.3519 | 14.15 | 195500 | 3.4421 | 1.0 |
| 3.4128 | 14.16 | 195600 | 3.4416 | 1.0 |
| 3.3448 | 14.17 | 195700 | 3.4412 | 1.0 |
| 3.4239 | 14.18 | 195800 | 3.4418 | 1.0 |
| 3.6124 | 14.18 | 195900 | 3.4440 | 1.0 |
| 3.3607 | 14.19 | 196000 | 3.4444 | 1.0 |
| 3.3141 | 14.2 | 196100 | 3.4433 | 1.0 |
| 3.4431 | 14.2 | 196200 | 3.4432 | 1.0 |
| 3.4539 | 14.21 | 196300 | 3.4426 | 1.0 |
| 3.3409 | 14.22 | 196400 | 3.4418 | 1.0 |
| 3.2736 | 14.23 | 196500 | 3.4422 | 1.0 |
| 3.8002 | 14.23 | 196600 | 3.4431 | 1.0 |
| 3.501 | 14.24 | 196700 | 3.4421 | 1.0 |
| 3.3537 | 14.25 | 196800 | 3.4420 | 1.0 |
| 3.4373 | 14.25 | 196900 | 3.4412 | 1.0 |
| 3.359 | 14.26 | 197000 | 3.4412 | 1.0 |
| 3.302 | 14.27 | 197100 | 3.4425 | 1.0 |
| 3.3282 | 14.28 | 197200 | 3.4424 | 1.0 |
| 3.3941 | 14.28 | 197300 | 3.4424 | 1.0 |
| 4.4183 | 14.29 | 197400 | 3.4435 | 1.0 |
| 3.4406 | 14.3 | 197500 | 3.4432 | 1.0 |
| 3.285 | 14.31 | 197600 | 3.4432 | 1.0 |
| 3.3289 | 14.31 | 197700 | 3.4442 | 1.0 |
| 3.3085 | 14.32 | 197800 | 3.4426 | 1.0 |
| 3.2033 | 14.33 | 197900 | 3.4446 | 1.0 |
| 3.3691 | 14.33 | 198000 | 3.4448 | 1.0 |
| 3.3715 | 14.34 | 198100 | 3.4448 | 1.0 |
| 4.5572 | 14.35 | 198200 | 3.4432 | 1.0 |
| 3.3509 | 14.36 | 198300 | 3.4431 | 1.0 |
| 3.3179 | 14.36 | 198400 | 3.4426 | 1.0 |
| 3.2891 | 14.37 | 198500 | 3.4436 | 1.0 |
| 3.3872 | 14.38 | 198600 | 3.4436 | 1.0 |
| 3.3177 | 14.38 | 198700 | 3.4442 | 1.0 |
| 3.4302 | 14.39 | 198800 | 3.4446 | 1.0 |
| 3.3834 | 14.4 | 198900 | 3.4441 | 1.0 |
| 3.4318 | 14.41 | 199000 | 3.4430 | 1.0 |
| 3.4176 | 14.41 | 199100 | 3.4431 | 1.0 |
| 4.6882 | 14.42 | 199200 | 3.4431 | 1.0 |
| 3.2657 | 14.43 | 199300 | 3.4436 | 1.0 |
| 3.3929 | 14.44 | 199400 | 3.4436 | 1.0 |
| 5.337 | 14.44 | 199500 | 3.4432 | 1.0 |
| 3.4289 | 14.45 | 199600 | 3.4432 | 1.0 |
| 3.2498 | 14.46 | 199700 | 3.4435 | 1.0 |
| 3.3635 | 14.46 | 199800 | 3.4432 | 1.0 |
| 5.4355 | 14.47 | 199900 | 3.4418 | 1.0 |
| 3.2158 | 14.48 | 200000 | 3.4427 | 1.0 |
| 3.4885 | 14.49 | 200100 | 3.4435 | 1.0 |
| 3.3739 | 14.49 | 200200 | 3.4430 | 1.0 |
| 3.4712 | 14.5 | 200300 | 3.4434 | 1.0 |
| 3.3742 | 14.51 | 200400 | 3.4444 | 1.0 |
| 3.3465 | 14.52 | 200500 | 3.4429 | 1.0 |
| 3.3277 | 14.52 | 200600 | 3.4430 | 1.0 |
| 3.3073 | 14.53 | 200700 | 3.4431 | 1.0 |
| 3.33 | 14.54 | 200800 | 3.4432 | 1.0 |
| 3.3857 | 14.54 | 200900 | 3.4436 | 1.0 |
| 3.4481 | 14.55 | 201000 | 3.4430 | 1.0 |
| 3.546 | 14.56 | 201100 | 3.4416 | 1.0 |
| 3.4435 | 14.57 | 201200 | 3.4404 | 1.0 |
| 3.3237 | 14.57 | 201300 | 3.4408 | 1.0 |
| 3.3347 | 14.58 | 201400 | 3.4420 | 1.0 |
| 4.5461 | 14.59 | 201500 | 3.4420 | 1.0 |
| 3.3307 | 14.59 | 201600 | 3.4430 | 1.0 |
| 3.3899 | 14.6 | 201700 | 3.4439 | 1.0 |
| 3.2613 | 14.61 | 201800 | 3.4435 | 1.0 |
| 3.2693 | 14.62 | 201900 | 3.4426 | 1.0 |
| 3.3621 | 14.62 | 202000 | 3.4430 | 1.0 |
| 3.4383 | 14.63 | 202100 | 3.4434 | 1.0 |
| 3.5096 | 14.64 | 202200 | 3.4444 | 1.0 |
| 3.3962 | 14.65 | 202300 | 3.4445 | 1.0 |
| 3.3854 | 14.65 | 202400 | 3.4441 | 1.0 |
| 3.3116 | 14.66 | 202500 | 3.4445 | 1.0 |
| 3.3691 | 14.67 | 202600 | 3.4445 | 1.0 |
| 3.3821 | 14.67 | 202700 | 3.4440 | 1.0 |
| 3.2872 | 14.68 | 202800 | 3.4431 | 1.0 |
| 3.3575 | 14.69 | 202900 | 3.4431 | 1.0 |
| 3.2881 | 14.7 | 203000 | 3.4435 | 1.0 |
| 3.4115 | 14.7 | 203100 | 3.4440 | 1.0 |
| 3.3814 | 14.71 | 203200 | 3.4439 | 1.0 |
| 3.3609 | 14.72 | 203300 | 3.4435 | 1.0 |
| 3.3261 | 14.73 | 203400 | 3.4430 | 1.0 |
| 3.2983 | 14.73 | 203500 | 3.4435 | 1.0 |
| 3.3094 | 14.74 | 203600 | 3.4431 | 1.0 |
| 3.2582 | 14.75 | 203700 | 3.4431 | 1.0 |
| 3.2963 | 14.75 | 203800 | 3.4435 | 1.0 |
| 3.361 | 14.76 | 203900 | 3.4435 | 1.0 |
| 3.2636 | 14.77 | 204000 | 3.4440 | 1.0 |
| 3.2908 | 14.78 | 204100 | 3.4439 | 1.0 |
| 3.4743 | 14.78 | 204200 | 3.4445 | 1.0 |
| 3.2633 | 14.79 | 204300 | 3.4444 | 1.0 |
| 3.6696 | 14.8 | 204400 | 3.4440 | 1.0 |
| 3.4295 | 14.8 | 204500 | 3.4439 | 1.0 |
| 3.2838 | 14.81 | 204600 | 3.4439 | 1.0 |
| 3.285 | 14.82 | 204700 | 3.4439 | 1.0 |
| 3.2501 | 14.83 | 204800 | 3.4443 | 1.0 |
| 3.2872 | 14.83 | 204900 | 3.4443 | 1.0 |
| 3.3486 | 14.84 | 205000 | 3.4443 | 1.0 |
| 3.2943 | 14.85 | 205100 | 3.4443 | 1.0 |
| 3.2908 | 14.86 | 205200 | 3.4438 | 1.0 |
| 4.0962 | 14.86 | 205300 | 3.4443 | 1.0 |
| 3.2306 | 14.87 | 205400 | 3.4433 | 1.0 |
| 3.4682 | 14.88 | 205500 | 3.4433 | 1.0 |
| 3.2785 | 14.88 | 205600 | 3.4438 | 1.0 |
| 3.4161 | 14.89 | 205700 | 3.4438 | 1.0 |
| 3.299 | 14.9 | 205800 | 3.4438 | 1.0 |
| 3.3116 | 14.91 | 205900 | 3.4438 | 1.0 |
| 3.3456 | 14.91 | 206000 | 3.4439 | 1.0 |
| 3.263 | 14.92 | 206100 | 3.4439 | 1.0 |
| 3.4408 | 14.93 | 206200 | 3.4444 | 1.0 |
| 3.3478 | 14.94 | 206300 | 3.4443 | 1.0 |
| 3.1718 | 14.94 | 206400 | 3.4438 | 1.0 |
| 3.2811 | 14.95 | 206500 | 3.4439 | 1.0 |
| 3.4132 | 14.96 | 206600 | 3.4439 | 1.0 |
| 3.2337 | 14.96 | 206700 | 3.4439 | 1.0 |
| 3.3859 | 14.97 | 206800 | 3.4439 | 1.0 |
| 3.3501 | 14.98 | 206900 | 3.4439 | 1.0 |
| 3.5111 | 14.99 | 207000 | 3.4439 | 1.0 |
| 3.5375 | 14.99 | 207100 | 3.4439 | 1.0 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
phueb/BabyBERTa-3 | 23c0b95e4e6acc39a388b711e0797893dba809e8 | 2022-01-18T14:41:25.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:CHILDES",
"transformers",
"BabyBERTa",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | phueb | null | phueb/BabyBERTa-3 | 1 | null | transformers | 30,140 | ---
language: en
tags:
- BabyBERTa
license: mit
datasets:
- CHILDES
widget:
- text: "Look here. What is that <mask> ?"
- text: "Do you like your <mask> ?"
---
## BabyBERTa
### Overview
BabyBERTa is a lightweight version of RoBERTa trained on 5M words of American-English child-directed input.
It is intended for language acquisition research on a single desktop with a single GPU - no high-performance computing infrastructure needed.
The three provided models are randomly selected from 10 that were trained and reported in the paper.
## Loading the tokenizer
BabyBERTa was trained with `add_prefix_space=True`, so it will not work properly with the tokenizer defaults.
For instance, the tokenizer for BabyBERTa-1 should be loaded as follows:
```python
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-1", add_prefix_space=True)
```
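With the tokenizer loaded this way, the checkpoint can be queried through a fill-mask pipeline. The snippet below is a usage sketch for this model; the example sentence is the one shown in the widget above.
```python
from transformers import RobertaForMaskedLM, RobertaTokenizerFast, pipeline

tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-3", add_prefix_space=True)
model = RobertaForMaskedLM.from_pretrained("phueb/BabyBERTa-3")

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Print the top predictions for the masked position
for prediction in fill_mask("Look here. What is that <mask> ?"):
    print(prediction["token_str"], round(prediction["score"], 3))
```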
### Hyper-Parameters
See the paper for details.
All provided models were trained for 400K steps with a batch size of 16.
Importantly, BabyBERTa never predicts unmasked tokens during training - `unmask_prob` is set to zero.
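To make the `unmask_prob` setting concrete, the sketch below illustrates masking where every selected position is replaced by `<mask>`, so the model never has to predict a token that is still visible in the input. This is an illustration only, not the authors' training code, and the 15% masking rate is an assumption.
```python
import torch
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-3", add_prefix_space=True)

def mask_tokens(input_ids: torch.Tensor, mask_prob: float = 0.15):
    """Mask a fraction of tokens; every chosen position becomes <mask> (unmask_prob = 0)."""
    labels = input_ids.clone()
    # Never mask special tokens such as <s>, </s>, <pad>
    special = torch.tensor(
        [tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True) for ids in labels.tolist()],
        dtype=torch.bool,
    )
    probs = torch.full(labels.shape, mask_prob)
    probs.masked_fill_(special, 0.0)
    masked = torch.bernoulli(probs).bool()
    labels[~masked] = -100                       # loss is computed only on masked positions
    input_ids[masked] = tokenizer.mask_token_id  # always <mask>, never a random or unchanged token
    return input_ids, labels
```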
### Performance
BabyBERTa was developed for learning grammatical knowledge from child-directed input.
Its grammatical knowledge was evaluated using the [Zorro](https://github.com/phueb/Zorro) test suite.
The best model achieves an overall accuracy of 80.3,
comparable to RoBERTa-base, which achieves an overall accuracy of 82.6 on the latest version of Zorro (as of October, 2021).
Both values differ slightly from those reported in the [CoNLL 2021 paper](https://aclanthology.org/2021.conll-1.49/).
There are two reasons for this:
1. The performance of RoBERTa-base is slightly higher because the authors previously lower-cased all words in Zorro before evaluation.
Lower-casing of proper nouns is detrimental to RoBERTa-base because RoBERTa-base has likely been trained on proper nouns that are primarily title-cased.
In contrast, because BabyBERTa is not case-sensitive, its performance is not influenced by this change.
2. The latest version of Zorro no longer contains ambiguous content words such as "Spanish" which can be both a noun and an adjective.
This resulted in a small reduction in the performance of BabyBERTa.
Overall Accuracy on Zorro:
| Model Name | Accuracy (holistic scoring) | Accuracy (MLM-scoring) |
|----------------------------------------|------------------------------|------------|
| [BabyBERTa-1][link-BabyBERTa-1] | 80.3 | 79.9 |
| [BabyBERTa-2][link-BabyBERTa-2] | 78.6 | 78.2 |
| [BabyBERTa-3][link-BabyBERTa-3] | 74.5 | 78.1 |
### Additional Information
This model was trained by [Philip Huebner](https://philhuebner.com), currently at the [UIUC Language and Learning Lab](http://www.learninglanguagelab.org).
More info can be found [here](https://github.com/phueb/BabyBERTa).
[link-BabyBERTa-1]: https://huggingface.co/phueb/BabyBERTa-1
[link-BabyBERTa-2]: https://huggingface.co/phueb/BabyBERTa-2
[link-BabyBERTa-3]: https://huggingface.co/phueb/BabyBERTa-3
|
pi3ni0/pubmedqa-scibert-special | d906c26acdf58d4fd55aa51702266905385486f7 | 2021-05-20T02:38:41.000Z | [
"pytorch",
"jax",
"bert",
"pretraining",
"transformers"
] | null | false | pi3ni0 | null | pi3ni0/pubmedqa-scibert-special | 1 | null | transformers | 30,141 | Entry not found |
pinecone/mpnet-retriever-discourse | 1a1c51b83dff2da55fbab83718443ddb64fa4dd3 | 2022-01-30T07:23:58.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"question-answering"
] | sentence-similarity | false | pinecone | null | pinecone/mpnet-retriever-discourse | 1 | null | sentence-transformers | 30,142 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- question-answering
---
# MPNet Retriever (Discourse)
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used as a retriever model in open-domain question-answering tasks.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pinecone/mpnet-retriever-discourse')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pinecone/mpnet-retriever-discourse')
model = AutoModel.from_pretrained('pinecone/mpnet-retriever-discourse')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Training
The model was fine-tuned on question-answer pairs scraped from several ML-focused Discourse forums \[HuggingFace, PyTorch, Streamlit, TensorFlow\].
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 105 with parameters:
```
{'batch_size': 12}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
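Putting these pieces together, a minimal sketch of the fine-tuning setup could look as follows. The base checkpoint and the two example question-answer pairs are placeholders for illustration; they are not taken from the original training run, which used roughly 1,260 pairs (105 batches of 12).
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Assumption: any sentence-transformers checkpoint with 768-dim output can serve as the starting point
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Hypothetical (question, answer) pairs scraped from Discourse threads; a real run needs many more
train_examples = [
    InputExample(texts=["How do I save a fine-tuned model?", "Call model.save('output/dir') after training."]),
    InputExample(texts=["Why is my loss NaN?", "Lower the learning rate and check for empty inputs."]),
]

train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=12)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```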
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Fine-tuned by [James Briggs](https://www.youtube.com/c/jamesbriggs) at [Pinecone](https://www.pinecone.io). Learn more about the [fine-tuning process here](https://www.pinecone.io/learn/retriever-models/). |
plum/distilbert-base-cased | 32a5153090004cc633b3179223582fdc543ff1a4 | 2022-01-05T05:31:03.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | plum | null | plum/distilbert-base-cased | 1 | null | transformers | 30,143 | Entry not found |
plum/roberta-large | 0291e390b7a516dcc6e958246a874b70fd73aa6e | 2022-01-05T03:01:14.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | plum | null | plum/roberta-large | 1 | null | transformers | 30,144 | Entry not found |
polodealvarado/xls-r-300m-es | fe42b9da4eeff40d78dfa834a41420d50e137359 | 2022-03-23T18:34:06.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"common_voice_8_0",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | polodealvarado | null | polodealvarado/xls-r-300m-es | 1 | 2 | transformers | 30,145 | ---
license: apache-2.0
language:
- es
tags:
- common_voice_8_0
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wave2vec-xls-r-300m-es
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_8_0 es
type: mozilla-foundation/common_voice_8_0
args: es
metrics:
- name: Test WER
type: wer
value: 14.6
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: es
metrics:
- name: Test WER
type: wer
value: 28.63
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: es
metrics:
- name: Test WER
type: wer
value: 29.72
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2-XLSR-300m-es
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Spanish common_voice dataset, thanks to the GPU credits generously provided by OVHcloud for the Speech Recognition challenge.
It achieves the following results on the evaluation set:
Without LM:
- Loss: 0.1900
- WER: 0.146
With 5-gram:
- WER: 0.109
- CER: 0.036
### Usage with 5-gram.
The model can be used with the n-gram (n=5) language model included in the processor as follows:
```python
import re
import torch
from datasets import Audio, load_dataset
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM

# Loading model and processor (the processor bundles the 5-gram language model)
processor = Wav2Vec2ProcessorWithLM.from_pretrained("polodealvarado/xls-r-300m-es")
model = AutoModelForCTC.from_pretrained("polodealvarado/xls-r-300m-es")

# Cleaning characters that are not part of the vocabulary
chars_to_ignore_regex = '[^a-záéíóúñ ]'

def remove_extra_chars(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"].lower())
    return batch

# Preparing dataset: extract input values and encode the targets
def prepare_dataset(batch):
    audio = batch["audio"]
    batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt", padding=True).input_values[0]
    with processor.as_target_processor():
        batch["labels"] = processor(batch["sentence"]).input_ids
    return batch

common_voice_test = load_dataset("mozilla-foundation/common_voice_8_0", "es", split="test", use_auth_token=True)
common_voice_test = common_voice_test.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"])
common_voice_test = common_voice_test.cast_column("audio", Audio(sampling_rate=16_000))
common_voice_test = common_voice_test.map(remove_extra_chars)
common_voice_test = common_voice_test.map(prepare_dataset)

# Testing first sample
inputs = torch.tensor(common_voice_test[0]["input_values"]).unsqueeze(0)
with torch.no_grad():
    logits = model(inputs).logits

text = processor.batch_decode(logits.numpy()).text
print(text[0])  # 'bien y qué regalo vas a abrir primero'
```
Alternatively, you can run the eval.py script for evaluation:
```bash
# To use GPU: --device 0
$ python eval.py --model_id polodealvarado/xls-r-300m-es --dataset mozilla-foundation/common_voice_8_0 --config es --device 0 --split test
```
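If you would rather compute the metric in-process, a minimal sketch using the `wer` metric from `datasets` (this reuses `model`, `processor` and the prepared `common_voice_test` from the snippet above, and needs `pip install jiwer`):
```python
import torch
from datasets import load_metric

wer_metric = load_metric("wer")

def map_to_pred(batch):
    inputs = torch.tensor(batch["input_values"]).unsqueeze(0)
    with torch.no_grad():
        logits = model(inputs).logits
    # The LM-boosted decoder works directly on the logits
    batch["pred"] = processor.batch_decode(logits.numpy()).text[0]
    return batch

results = common_voice_test.map(map_to_pred)
print("WER: {:.2%}".format(wer_metric.compute(predictions=results["pred"], references=results["sentence"])))
```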
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6747 | 0.3 | 400 | 0.6535 | 0.5926 |
| 0.4439 | 0.6 | 800 | 0.3753 | 0.3193 |
| 0.3291 | 0.9 | 1200 | 0.3267 | 0.2721 |
| 0.2644 | 1.2 | 1600 | 0.2816 | 0.2311 |
| 0.24 | 1.5 | 2000 | 0.2647 | 0.2179 |
| 0.2265 | 1.79 | 2400 | 0.2406 | 0.2048 |
| 0.1994 | 2.09 | 2800 | 0.2357 | 0.1869 |
| 0.1613 | 2.39 | 3200 | 0.2242 | 0.1821 |
| 0.1546 | 2.69 | 3600 | 0.2123 | 0.1707 |
| 0.1441 | 2.99 | 4000 | 0.2067 | 0.1619 |
| 0.1138 | 3.29 | 4400 | 0.2044 | 0.1519 |
| 0.1072 | 3.59 | 4800 | 0.1917 | 0.1457 |
| 0.0992 | 3.89 | 5200 | 0.1900 | 0.1438 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
pourzare/wav2vec2-base-timit-demo-colab | 5e068ef7d2b48a59a5e2cb7caa661c9a6c60fb44 | 2021-11-09T09:53:55.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | pourzare | null | pourzare/wav2vec2-base-timit-demo-colab | 1 | null | transformers | 30,146 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3821
- Wer: 0.3841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7018 | 2.01 | 500 | 1.9216 | 0.9924 |
| 1.0211 | 4.02 | 1000 | 0.5051 | 0.5095 |
| 0.4293 | 6.02 | 1500 | 0.4209 | 0.4282 |
| 0.2513 | 8.03 | 2000 | 0.3821 | 0.3841 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
prajjwal1/ctrl_discovery_12 | 10243b66e6ae364a223c8ad9105ac0e0924e93b0 | 2021-05-26T18:53:23.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_12 | 1 | null | transformers | 30,147 | Entry not found |
prajjwal1/ctrl_discovery_14 | b289c42ad37cbf644e0e14838cc0d6aff3eb3ded | 2021-06-06T21:46:59.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_14 | 1 | null | transformers | 30,148 | Entry not found |
prajjwal1/ctrl_discovery_flipped_2 | c5e145f6aeca04f86efcee053310c983c262568b | 2021-03-07T17:49:29.000Z | [
"pytorch",
"ctrl",
"text-generation",
"transformers"
] | text-generation | false | prajjwal1 | null | prajjwal1/ctrl_discovery_flipped_2 | 1 | null | transformers | 30,149 | Entry not found |
prajjwal1/roberta_new | abd17371d4f661ae4bae5bda1a750440ef06a912 | 2021-05-28T21:47:53.000Z | [
"pytorch",
"roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | prajjwal1 | null | prajjwal1/roberta_new | 1 | null | transformers | 30,150 | Entry not found |
prajwalcr/poetry-trust_gpt2 | 2df6e5be2c460b8a78be41041bdd52609904640e | 2021-08-03T10:44:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | prajwalcr | null | prajwalcr/poetry-trust_gpt2 | 1 | null | transformers | 30,151 | Entry not found |
preetham18/xls-r-hi-300m-8 | 3918346e0c15d39faa3ca21513fa5f0de541ac61 | 2022-02-06T20:40:28.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | preetham18 | null | preetham18/xls-r-hi-300m-8 | 1 | null | transformers | 30,152 | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5258
- Wer: 1.0073
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.917 | 16.13 | 500 | 4.8963 | 1.0 |
| 3.3585 | 32.25 | 1000 | 3.3069 | 1.0000 |
| 1.5873 | 48.38 | 1500 | 0.8274 | 1.0061 |
| 1.2654 | 64.51 | 2000 | 0.6250 | 1.0076 |
| 1.0917 | 80.64 | 2500 | 0.5460 | 1.0056 |
| 1.0001 | 96.76 | 3000 | 0.5304 | 1.0083 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
princebansal42/distilbert-base-uncased-finetuned-squad | f25639049d7be98bfd7ffcc8e1618a35360f5e55 | 2021-12-19T10:27:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | princebansal42 | null | princebansal42/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 30,153 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3993 | 1.0 | 2051 | 1.8058 |
| 1.0467 | 2.0 | 4102 | 1.9564 |
| 0.8304 | 3.0 | 6153 | 2.6623 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
princeton-nlp/datamux-mnli-20 | 629b6b4276dc6d5d2b17c86d3184118ee8c05467 | 2022-02-16T16:55:01.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-mnli-20 | 1 | null | transformers | 30,154 | Entry not found |
princeton-nlp/datamux-qnli-10 | ea1cb5d87cfac0a594ffdcc663801e6d7f10dd51 | 2022-02-16T16:58:49.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-qnli-10 | 1 | null | transformers | 30,155 | Entry not found |
princeton-nlp/datamux-qnli-2 | e721baf46bd0586f64b6d4cc5df6db8b70b6539d | 2022-02-16T16:56:56.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-qnli-2 | 1 | null | transformers | 30,156 | Entry not found |
princeton-nlp/datamux-qnli-20 | 6d566bff6fb287d0426262b57f139171ee93501f | 2022-02-16T17:00:42.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-qnli-20 | 1 | null | transformers | 30,157 | Entry not found |
princeton-nlp/datamux-qnli-5 | 59bc778b1b35fffeb94c795bb24de357cec65950 | 2022-02-16T16:57:53.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-qnli-5 | 1 | null | transformers | 30,158 | Entry not found |
princeton-nlp/datamux-qqp-10 | aa6c8737f5acef68ae965952a4928d53860947f9 | 2022-02-16T17:04:26.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-qqp-10 | 1 | null | transformers | 30,159 | Entry not found |
princeton-nlp/datamux-qqp-40 | 69b629d206b138720b9a9c98a5a197eeddcc0c29 | 2022-02-16T17:06:48.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-qqp-40 | 1 | null | transformers | 30,160 | Entry not found |
princeton-nlp/datamux-retrieval-5 | 0a6c2ff9df3c5c90d359e3b708917b5c6c310738 | 2022-02-18T03:51:35.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-retrieval-5 | 1 | null | transformers | 30,161 | Entry not found |
princeton-nlp/datamux-sst2-10 | 39b5c635317f9eae379c2c7cc5fa89190ca4c38e | 2022-02-16T19:59:06.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-sst2-10 | 1 | null | transformers | 30,162 | Entry not found |
princeton-nlp/datamux-sst2-2 | 5da9e3960c88806217a7a6aecdb940ee205fabe6 | 2022-02-16T17:07:27.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-sst2-2 | 1 | null | transformers | 30,163 | Entry not found |
princeton-nlp/datamux-sst2-20 | 92bea30661e78b0bf2d339c32e0d3e1a7c3c43fa | 2022-02-16T20:00:02.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-sst2-20 | 1 | null | transformers | 30,164 | Entry not found |
princeton-nlp/datamux-sst2-40 | 40a19a915cd1434368500f9b2f14e9688132df2b | 2022-02-16T20:01:22.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/datamux-sst2-40 | 1 | null | transformers | 30,165 | Entry not found |
princeton-nlp/densephrases-multi-query-ay2 | 8316ff2eea40e45fc08aa9ab84d2d6d453e1aab4 | 2021-09-23T18:51:32.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/densephrases-multi-query-ay2 | 1 | null | transformers | 30,166 | Entry not found |
princeton-nlp/densephrases-multi-query-trec | a4533e3e44c8802824189358c3f17050d607f283 | 2021-09-20T21:45:57.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | princeton-nlp | null | princeton-nlp/densephrases-multi-query-trec | 1 | null | transformers | 30,167 | Entry not found |
pritoms/distilroberta-base-YTTranscript23 | 32ebff52f77807a7b1356e210a8b24600964d0b5 | 2022-02-03T05:52:25.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | pritoms | null | pritoms/distilroberta-base-YTTranscript23 | 1 | null | transformers | 30,168 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-YTTranscript23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-YTTranscript23
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 70 | 2.9007 |
| No log | 2.0 | 140 | 2.9651 |
| No log | 3.0 | 210 | 2.9374 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
pritoms/distilroberta-base-finetuned-wikitext2 | 323eb5c9ae760148cfe173a3494b5a12a1d4d5d8 | 2021-09-25T11:50:19.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | pritoms | null | pritoms/distilroberta-base-finetuned-wikitext2 | 1 | null | transformers | 30,169 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: distilroberta-base-finetuned-wikitext2
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 13 | 2.7886 |
| No log | 2.0 | 26 | 2.7917 |
| No log | 3.0 | 39 | 2.7255 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
pritoms/gpt-neo-125M-Byethon | a269109ac1b4d06f4ec6b587a824f2d5b9e0001f | 2021-09-12T11:14:38.000Z | [
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | pritoms | null | pritoms/gpt-neo-125M-Byethon | 1 | null | transformers | 30,170 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: gpt-neo-125M-Byethon
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125M-Byethon
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 237 | 0.8348 |
| No log | 2.0 | 474 | 0.6931 |
| 0.8151 | 3.0 | 711 | 0.6609 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
pritoms/gpt2-finetuned-python2 | dbeb29d28245ebfe631f0ef6a9ec8ccb406e96c3 | 2021-10-26T23:15:08.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | pritoms | null | pritoms/gpt2-finetuned-python2 | 1 | null | transformers | 30,171 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-python2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-python2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 25 | 2.0135 |
| No log | 2.0 | 50 | 1.9618 |
| No log | 3.0 | 75 | 1.9454 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
project2you/wav2vec2-large-xlsr-53-demo-colab | 151537426f042db8eeaeb47d6bcf5e271f4639a2 | 2021-12-02T11:58:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | project2you | null | project2you/wav2vec2-large-xlsr-53-demo-colab | 1 | null | transformers | 30,172 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6901
- Wer: 1.6299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.5034 | 3.42 | 400 | 3.5852 | 1.0 |
| 1.7853 | 6.83 | 800 | 0.7430 | 1.6774 |
| 0.5675 | 10.26 | 1200 | 0.6513 | 1.6330 |
| 0.3761 | 13.67 | 1600 | 0.6208 | 1.6081 |
| 0.2776 | 17.09 | 2000 | 0.6401 | 1.6081 |
| 0.2266 | 20.51 | 2400 | 0.6410 | 1.6295 |
| 0.1949 | 23.93 | 2800 | 0.6910 | 1.6287 |
| 0.1672 | 27.35 | 3200 | 0.6901 | 1.6299 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
proxyht/mdsister-news | c0ded08fc18052af2beaa132f566eaf1c6489ab4 | 2021-06-29T10:05:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | proxyht | null | proxyht/mdsister-news | 1 | null | transformers | 30,173 | Entry not found |
psblade/DialoGPT-medium-PotterBot | 1f612afe13f6de8a6f4c6acbbe57be0d883a30a7 | 2021-08-28T07:11:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | psblade | null | psblade/DialoGPT-medium-PotterBot | 1 | null | transformers | 30,174 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
pszemraj/gpt2-medium-vaguely-human-dialogue | 68b7ae8ad26546b06ea6962944160a42564b8bc1 | 2022-02-01T19:30:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:natural questions",
"transformers",
"gpt",
"license:mit"
] | text-generation | false | pszemraj | null | pszemraj/gpt2-medium-vaguely-human-dialogue | 1 | null | transformers | 30,175 | ---
language:
- en
tags:
- text-generation
- gpt2
- gpt
license: mit
datasets:
- natural questions
widget:
- text: "Do you like my new haircut?\nperson beta:\n\n"
example_title: "haircut"
- text: "I love to learn new things.. are you willing to teach me something?\nperson beta:\n\n"
example_title: "teaching"
- text: "What's your favorite animal? Mine is the dog? \nperson beta:\n\n"
example_title: "favorite"
- text: "how much does it cost?\nperson beta:\n\n"
example_title: "money"
inference:
parameters:
min_length: 2
max_length: 64
length_penalty: 0.6
no_repeat_ngram_size: 3
do_sample: True
top_p: 0.85
top_k: 10
repetition_penalty: 2.1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pszemraj/gpt2-medium-vaguely-human-dialogue
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on a parsed version of Wizard of Wikipedia. Because the batch size was so large, it learned a general understanding of words that makes sense together but does not specifically respond to anything - sort of like an alien learning to imitate human words to convince others that it is human.
It achieves the following results on the evaluation set:
- Loss: 4.3281
## Model description
- a decent example of what happens when your batch size is too large and the global optima does not reflect specific prompts / use cases.
## Intended uses & limitations
- there are no intended uses
## Training and evaluation data
- a parsed version of the wizard of Wikipedia dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 34.991 | 1.0 | 837 | 14.8359 |
| 12.2881 | 2.0 | 1674 | 9.375 |
| 8.5071 | 3.0 | 2511 | 7.2148 |
| 7.6031 | 4.0 | 3348 | 6.1758 |
| 6.4808 | 5.0 | 4185 | 5.5820 |
| 5.8562 | 6.0 | 5022 | 5.0977 |
| 5.6094 | 7.0 | 5859 | 4.8203 |
| 5.2591 | 8.0 | 6696 | 4.5977 |
| 5.0031 | 9.0 | 7533 | 4.4219 |
| 4.8837 | 10.0 | 8370 | 4.3281 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0
|
pszemraj/t5e-xl-lexical-3E | 561dae69543967e7e4cc21721aadd6d69d44851f | 2022-02-22T19:54:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | pszemraj | null | pszemraj/t5e-xl-lexical-3E | 1 | null | transformers | 30,176 | Entry not found |
ptnv-s/biobert_squad2_cased-finetuned-squad | 7dcc81638f949ad77770dc9b9c29aebb95e9afc7 | 2022-01-03T08:56:44.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | ptnv-s | null | ptnv-s/biobert_squad2_cased-finetuned-squad | 1 | null | transformers | 30,177 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: biobert_squad2_cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert_squad2_cased-finetuned-squad
This model is a fine-tuned version of [clagator/biobert_squad2_cased](https://huggingface.co/clagator/biobert_squad2_cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
qarib/bert-base-qarib_far_8280k | a3d4344f6954c948abcba727b9c8436c6102d412 | 2021-04-21T13:40:36.000Z | [
"pytorch",
"ar",
"dataset:arabic_billion_words",
"dataset:open_subtitles",
"dataset:twitter",
"dataset:Farasa",
"arxiv:2102.10684",
"transformers",
"tf",
"QARiB",
"qarib"
] | null | false | qarib | null | qarib/bert-base-qarib_far_8280k | 1 | null | transformers | 30,178 | ---
language: ar
tags:
- pytorch
- tf
- QARiB
- qarib
datasets:
- arabic_billion_words
- open_subtitles
- twitter
- Farasa
metrics:
- f1
widget:
- text: "و+قام ال+مدير [MASK]"
---
# QARiB: QCRI Arabic and Dialectal BERT
## About QARiB Farasa
The QCRI Arabic and Dialectal BERT (QARiB) model was trained on a collection of ~420 million tweets and ~180 million sentences of text.
The tweets were collected using the Twitter API with a language filter (`lang:ar`). The text data was a combination of
[Arabic GigaWord](url), [Abulkhair Arabic Corpus]() and [OPUS](http://opus.nlpl.eu/).
QARiB is the Arabic word for "boat".
## Model and Parameters:
- Data size: 14B tokens
- Vocabulary: 64k
- Iterations: 10M
- Number of Layers: 12
## Training QARiB
See details in [Training QARiB](https://github.com/qcri/QARIB/Training_QARiB.md)
## Using QARiB
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. For more details, see [Using QARiB](https://github.com/qcri/QARIB/Using_QARiB.md)
This model expects the data to be segmented. You may use [Farasa Segmenter](https://farasa-api.qcri.org/segmentation/) API.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> fill_mask = pipeline("fill-mask", model="qarib/bert-base-qarib_far_8280k")
>>> fill_mask("و+قام ال+مدير [MASK]")
[
]
>>> fill_mask("و+قام+ت ال+مدير+ة [MASK]")
[
]
>>> fill_mask("قللي وشفيييك يرحم [MASK]")
[
]
```
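Since the checkpoint is mainly intended to be fine-tuned on downstream tasks, a minimal sketch of loading it for sequence classification follows (the two-label head is an assumption for illustration, e.g. for sentiment analysis as in the evaluations below):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("qarib/bert-base-qarib_far_8280k")
model = AutoModelForSequenceClassification.from_pretrained(
    "qarib/bert-base-qarib_far_8280k", num_labels=2
)

# Remember that inputs must be pre-segmented (e.g. with the Farasa segmenter)
text = "و+قام ال+مدير"
inputs = tokenizer(text, return_tensors="pt")
logits = model(**inputs).logits
print(logits)
```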
## Evaluations:
|**Experiment** |**mBERT**|**AraBERT0.1**|**AraBERT1.0**|**ArabicBERT**|**QARiB**|
|---------------|---------|--------------|--------------|--------------|---------|
|Dialect Identification | 6.06% | 59.92% | 59.85% | 61.70% | **65.21%** |
|Emotion Detection | 27.90% | 43.89% | 42.37% | 41.65% | **44.35%** |
|Named-Entity Recognition (NER) | 49.38% | 64.97% | **66.63%** | 64.04% | 61.62% |
|Offensive Language Detection | 83.14% | 88.07% | 88.97% | 88.19% | **91.94%** |
|Sentiment Analysis | 86.61% | 90.80% | **93.58%** | 83.27% | 93.31% |
## Model Weights and Vocab Download
From Huggingface site: https://huggingface.co/qarib/bert-base-qarib_far
## Contacts
Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish and Younes Samih
## Reference
```
@article{abdelali2021pretraining,
title={Pre-Training BERT on Arabic Tweets: Practical Considerations},
author={Ahmed Abdelali and Sabit Hassan and Hamdy Mubarak and Kareem Darwish and Younes Samih},
year={2021},
eprint={2102.10684},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
qingtao/wav2vec2-common_voice-tr-demo-dist | 4577a87917d22e5f208d4c1ac559a27128c5bf60 | 2021-11-10T08:29:37.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | qingtao | null | qingtao/wav2vec2-common_voice-tr-demo-dist | 1 | null | transformers | 30,179 | Entry not found |
qqhann/w2v_hf_jsut_xlsr53 | 5f807adb2cbc71d2ab18cf6fcb418bddb92a75b4 | 2021-04-01T14:49:39.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:common_voice",
"dataset:jsut",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | qqhann | null | qqhann/w2v_hf_jsut_xlsr53 | 1 | null | transformers | 30,180 | ---
language: ja
datasets:
- common_voice
- jsut
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Japanese XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ja
type: common_voice
args: ja
metrics:
- name: Test WER
type: wer
value: 51.72
- name: Test CER
type: cer
value: 24.89
---
# Wav2Vec2-Large-XLSR-53-Japanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice) and JSUT datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ja", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("qqhann/w2v_hf_jsut_xlsr53")
model = Wav2Vec2ForCTC.from_pretrained("qqhann/w2v_hf_jsut_xlsr53")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Japanese test data of Common Voice.
```python
!pip install torchaudio
!pip install datasets transformers
!pip install jiwer
!pip install mecab-python3
!pip install unidic-lite
!python -m unidic download
!pip install jaconv
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import MeCab
from jaconv import kata2hira
from typing import List
# Japanese preprocessing
tagger = MeCab.Tagger("-Owakati")
chars_to_ignore_regex = '[\。\、\「\」\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
def text2kata(text):
node = tagger.parseToNode(text)
word_class = []
while node:
word = node.surface
wclass = node.feature.split(',')
if wclass[0] != u'BOS/EOS':
if len(wclass) <= 6:
word_class.append((word))
elif wclass[6] == None:
word_class.append((word))
else:
word_class.append((wclass[6]))
node = node.next
return ' '.join(word_class)
def hiragana(text):
return kata2hira(text2kata(text))
test_dataset = load_dataset("common_voice", "ja", split="test")
wer = load_metric("wer")
resampler = torchaudio.transforms.Resample(48_000, 16_000) # JSUT is already 16kHz
# resampler = torchaudio.transforms.Resample(16_000, 16_000) # JSUT is already 16kHz
processor = Wav2Vec2Processor.from_pretrained("qqhann/w2v_hf_jsut_xlsr53")
model = Wav2Vec2ForCTC.from_pretrained("qqhann/w2v_hf_jsut_xlsr53")
model.to("cuda")
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = hiragana(batch["sentence"]).strip()
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
def cer_compute(predictions: List[str], references: List[str]):
p = [" ".join(list(" " + pred.replace(" ", ""))).strip() for pred in predictions]
r = [" ".join(list(" " + ref.replace(" ", ""))).strip() for ref in references]
return wer.compute(predictions=p, references=r)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * cer_compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 51.72 %
## Training
<!-- The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training. -->
The privately collected JSUT Japanese dataset was used for training.
<!-- The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here. -->
|
quantresearch/tst_t2_reweight_10_0 | 161e6c1647cdc7e055feb5593c173bd14f176f15 | 2021-09-16T09:33:01.000Z | [
"pytorch",
"transformers"
] | null | false | quantresearch | null | quantresearch/tst_t2_reweight_10_0 | 1 | null | transformers | 30,181 | Entry not found |
quincyqiang/chtesla3 | bf8e361d5c46e64fba5dd582ebbb1d1e279c29b7 | 2021-05-20T03:51:07.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | quincyqiang | null | quincyqiang/chtesla3 | 1 | null | transformers | 30,182 | Entry not found |
qwerty/DialoGPT-small-rick | 720c5f38b1dc7e2a51bb9e86ec8c55798c213040 | 2022-01-12T10:06:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | qwerty | null | qwerty/DialoGPT-small-rick | 1 | null | transformers | 30,183 | ---
tags:
- conversational
---
# DialoGPT Small Rick
|
r3cdhummingbird/DialoGPT-medium-joshua | b6f6ebae7576852b3e12f802f6bd791cad7dada5 | 2021-09-26T15:01:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | r3cdhummingbird | null | r3cdhummingbird/DialoGPT-medium-joshua | 1 | null | transformers | 30,184 | ---
thumbnail: https://raw.githubusercontent.com/RuolinZheng08/twewy-discord-chatbot/main/gif-demo/icon.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/T3879/Joshua-Bot_Model/tree/main/twewy-discord-chatbot-main)
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("r3cdhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3cdhummingbird/DialoGPT-medium-joshua")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last ouput tokens from bot
print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
rachelcorey/DialoGPT-medium-niles | f2f659c4c7fc4889611bef8370d8973fded8dc03 | 2022-01-11T15:13:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rachelcorey | null | rachelcorey/DialoGPT-medium-niles | 1 | null | transformers | 30,185 | ---
tags:
- conversational
---
# A chatbot based on Niles Crane |
radhakri119/wav2vec2-base-timit-demo-colab | 9336842aac6ad9041cd64330c1f0c497125efb86 | 2022-01-20T16:09:09.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | radhakri119 | null | radhakri119/wav2vec2-base-timit-demo-colab | 1 | null | transformers | 30,186 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4780
- Wer: 0.3403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5299 | 4.0 | 500 | 1.5195 | 0.9991 |
| 0.6229 | 8.0 | 1000 | 0.4447 | 0.4282 |
| 0.2136 | 12.0 | 1500 | 0.4154 | 0.3764 |
| 0.1196 | 16.0 | 2000 | 0.4394 | 0.3597 |
| 0.0834 | 20.0 | 2500 | 0.4891 | 0.3619 |
| 0.0591 | 24.0 | 3000 | 0.4535 | 0.3439 |
| 0.0448 | 28.0 | 3500 | 0.4780 | 0.3403 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
rafakat/Botsuana-rick | c5e4a9d031b0217fe597bd733409112d08c22e5c | 2021-08-28T17:00:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rafakat | null | rafakat/Botsuana-rick | 1 | null | transformers | 30,187 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
rafiulrumy/wav2vec2-base-timit-demo-colab | 337ae88eb61d90ae3d5f6982a54c775ce4eda429 | 2021-12-11T21:02:58.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | rafiulrumy | null | rafiulrumy/wav2vec2-base-timit-demo-colab | 1 | null | transformers | 30,188 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0755
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 8.0894 | 0.34 | 50 | 3.8065 | 1.0 |
| 3.2971 | 0.69 | 100 | 3.0704 | 1.0 |
| 3.1262 | 1.03 | 150 | 3.0153 | 1.0 |
| 2.9925 | 1.38 | 200 | 3.0094 | 1.0 |
| 3.2159 | 1.72 | 250 | 3.0755 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
rafiulrumy/wav2vec2-large-xlsr-hindi-demo-colab_2 | 67c4023dcdd5d04b628eac4688d7e05f60a6e1b1 | 2021-12-08T09:51:42.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | rafiulrumy | null | rafiulrumy/wav2vec2-large-xlsr-hindi-demo-colab_2 | 1 | null | transformers | 30,189 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-hindi-demo-colab_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-hindi-demo-colab_2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8793
- Wer: 1.1357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 22.381 | 1.11 | 20 | 22.1964 | 1.0 |
| 7.6212 | 2.22 | 40 | 4.0591 | 1.0 |
| 3.6951 | 3.32 | 60 | 3.6782 | 1.0 |
| 3.5574 | 4.43 | 80 | 3.6776 | 1.0 |
| 3.5374 | 5.54 | 100 | 3.5649 | 1.0 |
| 3.5512 | 6.65 | 120 | 3.5266 | 1.0 |
| 3.5075 | 7.76 | 140 | 3.6860 | 1.0 |
| 3.5097 | 8.86 | 160 | 3.4941 | 1.0 |
| 3.481 | 9.97 | 180 | 3.4659 | 1.0 |
| 3.5623 | 11.11 | 200 | 3.7254 | 1.0 |
| 3.4404 | 12.22 | 220 | 3.5225 | 1.0 |
| 3.432 | 13.32 | 240 | 3.5706 | 1.0 |
| 3.4177 | 14.43 | 260 | 3.3833 | 1.0 |
| 3.3735 | 15.54 | 280 | 3.4140 | 1.0 |
| 3.31 | 16.65 | 300 | 3.2702 | 1.0 |
| 3.2256 | 17.76 | 320 | 3.2405 | 1.0 |
| 3.0546 | 18.86 | 340 | 3.1644 | 1.0 |
| 2.7233 | 19.97 | 360 | 2.9753 | 1.0 |
| 2.2822 | 21.11 | 380 | 3.1119 | 1.1183 |
| 1.8027 | 22.22 | 400 | 3.0035 | 1.2378 |
| 1.5274 | 23.32 | 420 | 2.8536 | 1.2227 |
| 1.2313 | 24.43 | 440 | 2.9544 | 1.0951 |
| 1.0956 | 25.54 | 460 | 2.8814 | 1.0661 |
| 0.9456 | 26.65 | 480 | 3.1192 | 1.1589 |
| 0.7893 | 27.76 | 500 | 3.2919 | 1.1833 |
| 0.7256 | 28.86 | 520 | 3.0864 | 1.0951 |
| 0.6051 | 29.97 | 540 | 3.5888 | 1.1821 |
| 0.6087 | 31.11 | 560 | 3.4579 | 1.1392 |
| 0.5529 | 32.22 | 580 | 3.1998 | 1.0708 |
| 0.5211 | 33.32 | 600 | 3.4655 | 1.1311 |
| 0.4506 | 34.43 | 620 | 3.4338 | 1.1694 |
| 0.4101 | 35.54 | 640 | 3.5189 | 1.1450 |
| 0.4484 | 36.65 | 660 | 3.6585 | 1.1601 |
| 0.4038 | 37.76 | 680 | 3.6314 | 1.1497 |
| 0.3539 | 38.86 | 700 | 3.6955 | 1.1485 |
| 0.3898 | 39.97 | 720 | 3.5738 | 1.1148 |
| 0.35 | 41.11 | 740 | 3.6594 | 1.1195 |
| 0.3328 | 42.22 | 760 | 3.6894 | 1.1299 |
| 0.3264 | 43.32 | 780 | 3.7290 | 1.1021 |
| 0.3364 | 44.43 | 800 | 3.7256 | 1.1543 |
| 0.3071 | 45.54 | 820 | 3.8834 | 1.1415 |
| 0.3074 | 46.65 | 840 | 3.8077 | 1.1450 |
| 0.3064 | 47.76 | 860 | 3.8733 | 1.1346 |
| 0.3223 | 48.86 | 880 | 3.8780 | 1.1323 |
| 0.275 | 49.97 | 900 | 3.8793 | 1.1357 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
rahul26/DialoGPT-small-RaMScript | 7ef9a4f9edcf6749bb5403ac93aa56f9ad92fd17 | 2021-10-20T15:53:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | rahul26 | null | rahul26/DialoGPT-small-RaMScript | 1 | null | transformers | 30,190 | Entry not found |
rahulchakwate/albert-xlarge-finetuned-squad | 8ddbe701885f3f9c7bb87b1d07971aa7b9ed1de8 | 2021-12-13T03:05:20.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rahulchakwate | null | rahulchakwate/albert-xlarge-finetuned-squad | 1 | null | transformers | 30,191 | Entry not found |
raphaelmerx/distilbert-base-uncased-finetuned-imdb | 3c993644587fa9dab9aa6acc6053b1599bed713e | 2021-12-01T07:54:16.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | raphaelmerx | null | raphaelmerx/distilbert-base-uncased-finetuned-imdb | 1 | null | transformers | 30,192 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7117 | 1.0 | 157 | 2.4977 |
| 2.5783 | 2.0 | 314 | 2.4241 |
| 2.5375 | 3.0 | 471 | 2.4358 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
raphaelmerx/marian-finetuned-en-map | f9f913ebb372ceb3515f1916644e3a0d39134e04 | 2021-12-15T12:54:46.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | raphaelmerx | null | raphaelmerx/marian-finetuned-en-map | 1 | null | transformers | 30,193 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: marian-finetuned-en-map
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-en-map
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-map](https://huggingface.co/Helsinki-NLP/opus-mt-en-map) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.0542
- eval_bleu: 30.0673
- eval_runtime: 870.8596
- eval_samples_per_second: 14.467
- eval_steps_per_second: 0.226
- epoch: 2.29
- step: 17104
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
raruidol/PlayerANchess | a6b120bde0c12ef0535292d59107016fa47a93bf | 2021-09-16T08:57:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | raruidol | null | raruidol/PlayerANchess | 1 | null | transformers | 30,194 | A GPT-2 model of algebraic-notation (AN) chess move sequences, trained on the moves made by a single player across their games.
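A minimal generation sketch (this assumes the games were serialized as space-separated algebraic-notation moves; the exact serialization is not documented in the card):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("raruidol/PlayerANchess")
model = AutoModelForCausalLM.from_pretrained("raruidol/PlayerANchess")

# Prompt with the opening moves of a game in algebraic notation
prompt = "e4 e5 Nf3 Nc6 Bb5"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sample a plausible continuation of the move sequence
output = model.generate(input_ids, max_length=48, do_sample=True, top_k=50, top_p=0.95, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
|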
reach-vb/wav2vec2-large-xls-r-1B-common_voice-sl-ft | 095eff565c6a9ae069bb90a7e8d6a29b0b401b6c | 2022-03-23T18:29:30.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sl",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | reach-vb | null | reach-vb/wav2vec2-large-xls-r-1B-common_voice-sl-ft | 1 | 1 | transformers | 30,195 | ---
license: apache-2.0
language:
- sl
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-1B-common_voice-sl-ft
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: lv
metrics:
- name: Test WER
type: wer
value: 23.26
- name: Test CER
type: cer
value: 7.95
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: sl
metrics:
- name: Test WER
type: wer
value: 13.59
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sl
metrics:
- name: Test WER
type: wer
value: 62.71
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: sl
metrics:
- name: Test WER
type: wer
value: 62.34
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1B-common_voice-sl-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2112
- Wer: 0.1404
## Model description
More information needed
## Intended uses & limitations
More information needed
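Usage is not documented, but as a minimal sketch the checkpoint can be loaded with the `automatic-speech-recognition` pipeline; XLS-R models expect 16 kHz audio, and the file name below is a placeholder:
```python
from transformers import pipeline
# Load the fine-tuned XLS-R checkpoint named in this card.
asr = pipeline(
    "automatic-speech-recognition",
    model="reach-vb/wav2vec2-large-xls-r-1B-common_voice-sl-ft",
)
# "sample.wav" stands in for a 16 kHz mono Slovenian recording.
print(asr("sample.wav"))
```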
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.8291 | 12.2 | 500 | 0.5674 | 0.7611 |
| 0.0416 | 24.39 | 1000 | 0.3093 | 0.2964 |
| 0.0256 | 36.59 | 1500 | 0.2224 | 0.2072 |
| 0.0179 | 48.78 | 2000 | 0.2274 | 0.1960 |
| 0.0113 | 60.98 | 2500 | 0.2078 | 0.1582 |
| 0.0086 | 73.17 | 3000 | 0.1898 | 0.1552 |
| 0.0059 | 85.37 | 3500 | 0.2054 | 0.1446 |
| 0.0044 | 97.56 | 4000 | 0.2112 | 0.1404 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
|
remotejob/tweetsT5_small_sum_fi | d3076f1431cb51c4694e68ad71c3f975ea6911c3 | 2021-07-02T01:47:21.000Z | [
"pytorch",
"rust",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | remotejob | null | remotejob/tweetsT5_small_sum_fi | 1 | null | transformers | 30,196 | A small T5 (t5-small) model for summarization.
|
ricardo-filho/BERT-pt-inf-corpus-v.1 | c67d8dc486c837af80e87491ad9dd679595b0c2b | 2021-07-24T01:42:31.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ricardo-filho | null | ricardo-filho/BERT-pt-inf-corpus-v.1 | 1 | null | transformers | 30,197 | Entry not found |
ricardo-filho/BERT-pt-institutional | 39985421ad4217ac9d088c54a4e1df05bdaf6336 | 2021-07-22T13:49:32.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ricardo-filho | null | ricardo-filho/BERT-pt-institutional | 1 | null | transformers | 30,198 | hello
|
ricardo-filho/bert-base-portuguese-cased-nli-assin | 02c67f7e7d3bb9224d72b4678c8ce282f8c068ea | 2021-08-04T01:52:07.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ricardo-filho | null | ricardo-filho/bert-base-portuguese-cased-nli-assin | 1 | null | sentence-transformers | 30,199 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 295 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
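(A hedged note, derived from the figures above rather than stated by the author: a DataLoader of length 295 with `batch_size` 16 implies roughly 295 × 16 ≈ 4,720 training pairs.)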
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 30,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |