modelId (string) | author (string) | last_modified (timestamp[us, UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
aymanelotfi/ppo-LunarLander-v2 | aymanelotfi | 2024-02-15T14:13:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-15T14:05:01Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 42.17 +/- 91.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption following the usual `<algo>-<env>.zip` naming and may differ; check the repository's Files tab):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename assumed, not verified)
checkpoint = load_from_hub(
    repo_id="aymanelotfi/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
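The `mean_reward` in the metadata above (42.17 +/- 91.91) follows the usual stable-baselines3 evaluation convention: the mean and (population) standard deviation of total reward over a number of evaluation episodes. A minimal sketch of that accounting, with hypothetical episode returns for illustration:

```python
import statistics

def summarize_rewards(episode_rewards):
    """Mean and population standard deviation of per-episode total rewards."""
    mean = statistics.mean(episode_rewards)
    std = statistics.pstdev(episode_rewards)
    return mean, std

# Hypothetical episode returns, for illustration only
print(summarize_rewards([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # -> (5.0, 2.0)
```

The large standard deviation relative to the mean indicates highly variable episode outcomes, which is common for a partially trained LunarLander agent.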
|
Wajid333/poca-SoccerTwos | Wajid333 | 2024-02-15T14:03:16Z | 73 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2024-02-15T04:25:28Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: Wajid333/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
PG-AGI/ai-interviewer | PG-AGI | 2024-02-15T14:02:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-15T14:02:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Cuphadi/a2c-PandaReachDense-v3 | Cuphadi | 2024-02-15T13:54:57Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-15T13:50:54Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption following the usual `<algo>-<env>.zip` naming and may differ; check the repository's Files tab):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub (filename assumed, not verified)
checkpoint = load_from_hub(
    repo_id="Cuphadi/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```
|
Jingya/tiny-stable-diffusion-lora-64 | Jingya | 2024-02-15T13:53:13Z | 0 | 0 | null | [
"tensorboard",
"license:apache-2.0",
"region:us"
]
| null | 2024-02-15T13:46:44Z | ---
license: apache-2.0
---
Tiny LoRA trained with the [pokemon text-to-image example](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image#training-with-lora) for `hf-internal-testing/tiny-stable-diffusion-torch`. [FOR CI TESTS ONLY]
|
manimaranpa07/my_Ws_extraction_model | manimaranpa07 | 2024-02-15T13:48:36Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-02-13T16:16:58Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_Ws_extraction_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_Ws_extraction_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2355
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9570
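The zero precision/recall/F1 alongside 0.9570 accuracy is characteristic of a token classifier that predicts no entity spans at all: with no predictions there are no true or false positives, so all three entity metrics collapse to zero, while plain token accuracy stays high on O-dominated data. A minimal sketch of that arithmetic (the counts are hypothetical):

```python
def precision_recall_f1(tp, fp, fn):
    """Entity-level metrics from true-positive, false-positive, false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# A model that tags every token "O": no predictions, 12 missed gold entities
print(precision_recall_f1(tp=0, fp=0, fn=12))  # -> (0.0, 0.0, 0.0)
```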
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 3 | 1.4172 | 0.0526 | 0.0833 | 0.0645 | 0.9083 |
| No log | 2.0 | 6 | 1.2355 | 0.0 | 0.0 | 0.0 | 0.9570 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu118
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Norod78/SDXL-Fairy-Form-LoRA | Norod78 | 2024-02-15T13:47:21Z | 12 | 6 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2024-02-15T13:47:06Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: A FairyForm Snoop dog holding a smoking wand
parameters:
negative_prompt: >-
cartoon, drawing, painting, illustration, blurry, grainy, unfocused, nsfw,
nude, naked, bad hands, mutilated limbs, detached limbs, extra fingers
output:
url: >-
images/01138-7780-A FairyForm Snoop dog holding a smoking wand
_lora_SDXL_Fairy_Form_LoRA_0.8_.jpg
- text: A painting of The Mona Lisa FairyForm
parameters:
negative_prompt: >-
blurry, grainy, unfocused, nsfw, nude, naked, bad hands, mutilated limbs,
detached limbs, extra fingers
output:
url: >-
images/01101-7778-A painting of The Mona Lisa FairyForm
_lora_SDXL_Fairy_Form_LoRA_0.8_.jpg
- text: A professional studio photo of Godzilla FairyForm in a ruined city
parameters:
negative_prompt: blurry, grainy, unfocused, cartoon, illustration, drawing
output:
url: >-
images/01064-7777-A professional studio photo of Godzilla FairyForm in a
ruined city _lora_SDXL_Fairy_Form_LoRA_0.8_.jpg
- text: A cinematic photo of a FairyForm wonderwoman in a field of pink flowers
parameters:
    negative_prompt: >-
      blurry, grainy, unfocused, cartoon, illustration, drawing, nsfw, nude,
      naked, bad hands, mutilated limbs, detached limbs, extra fingers
output:
url: >-
images/01075-7780-A cinematic photo of a FairyForm wonderwoman in a field
of pink flowers _lora_SDXL_Fairy_Form_LoRA_0.8_.jpg
- text: >-
A cinematic photo of a FairyForm Cthulhu rising from the sea in a great
sparkle storm
parameters:
negative_prompt: >-
blurry, grainy, unfocused, nsfw, nude, naked, bad hands, mutilated limbs,
detached limbs, extra fingers
output:
url: >-
images/01078-7779-A cinematic photo of a FairyForm Cthulhu rising from the
sea in a great sparkle storm _lora_SDXL_Fairy_Form_LoRA_0.8_.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: FairyForm
---
# SDXL Fairy Form LoRA
<Gallery />
## Model description
Turn things into their Fairy Form.
Use *FairyForm* in your prompts.
[CivitAI link](https://civitai.com/models/306810/sdxl-fairy-form-lora)
[The dataset](https://civitai.com/api/download/training-data/344394)
## Trigger words
You should use `FairyForm` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Norod78/SDXL-Fairy-Form-LoRA/tree/main) them in the Files & versions tab.
|
Yuss68/HAR_model | Yuss68 | 2024-02-15T13:40:28Z | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-02-15T13:39:07Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: HAR_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HAR_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5524
- Rouge1: 0.3529
- Rouge2: 0.1071
- Rougel: 0.2263
- Rougelsum: 0.2263
- Gen Len: 86.0
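The ROUGE scores above are n-gram overlap measures between generated and reference text. As a rough illustration, ROUGE-1 F1 can be computed from unigram counts — this is a simplified sketch, without the stemming and tokenization the real `rouge_score` package applies:

```python
from collections import Counter

def rouge1_f1(reference, candidate):
    """Simplified ROUGE-1 F1: clipped unigram overlap between two strings."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat lay on the mat"))  # -> 0.833...
```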
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 1 | 2.9579 | 0.312 | 0.0738 | 0.2003 | 0.2003 | 75.0 |
| No log | 2.0 | 2 | 2.8855 | 0.312 | 0.0738 | 0.2003 | 0.2003 | 75.0 |
| No log | 3.0 | 3 | 2.8381 | 0.3376 | 0.0808 | 0.205 | 0.205 | 77.5 |
| No log | 4.0 | 4 | 2.7929 | 0.3383 | 0.0903 | 0.2018 | 0.2018 | 74.5 |
| No log | 5.0 | 5 | 2.7389 | 0.3383 | 0.0903 | 0.2018 | 0.2018 | 74.5 |
| No log | 6.0 | 6 | 2.6640 | 0.3383 | 0.0903 | 0.2018 | 0.2018 | 74.5 |
| No log | 7.0 | 7 | 2.6333 | 0.3422 | 0.0916 | 0.1961 | 0.1961 | 72.0 |
| No log | 8.0 | 8 | 2.6110 | 0.3383 | 0.0903 | 0.2018 | 0.2018 | 74.5 |
| No log | 9.0 | 9 | 2.5951 | 0.3529 | 0.1071 | 0.2263 | 0.2263 | 86.0 |
| No log | 10.0 | 10 | 2.5826 | 0.3529 | 0.1071 | 0.2263 | 0.2263 | 86.0 |
| No log | 11.0 | 11 | 2.5732 | 0.3529 | 0.1071 | 0.2263 | 0.2263 | 86.0 |
| No log | 12.0 | 12 | 2.5632 | 0.3529 | 0.1071 | 0.2263 | 0.2263 | 86.0 |
| No log | 13.0 | 13 | 2.5632 | 0.3529 | 0.1071 | 0.2263 | 0.2263 | 86.0 |
| No log | 14.0 | 14 | 2.5562 | 0.3529 | 0.1071 | 0.2263 | 0.2263 | 86.0 |
| No log | 15.0 | 15 | 2.5524 | 0.3529 | 0.1071 | 0.2263 | 0.2263 | 86.0 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
hiendang7613/xlmr-lstm-crf-resume-ner4 | hiendang7613 | 2024-02-15T13:38:27Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:fjd_dataset",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-02-15T10:11:34Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- fjd_dataset
model-index:
- name: xlmr-lstm-crf-resume-ner4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-lstm-crf-resume-ner4
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the fjd_dataset dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1764
- eval_precision: 0.5811
- eval_recall: 0.5602
- eval_f1: 0.5705
- eval_accuracy: 0.9501
- eval_runtime: 52.6822
- eval_samples_per_second: 94.415
- eval_steps_per_second: 2.961
- epoch: 4.0
- step: 3680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
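The `total_train_batch_size` above is derived from the per-device batch size and the gradient-accumulation steps: gradients from 4 successive batches of 32 are accumulated before each optimizer step, giving an effective batch of 128:

```python
train_batch_size = 32
gradient_accumulation_steps = 4

# Effective batch size per optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # -> 128
```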
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
ChayanM/Image_Captioner_Mimic | ChayanM | 2024-02-15T13:36:01Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2024-02-11T07:33:57Z | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: Image_Captioner_Mimic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Image_Captioner_Mimic
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0963
- Rouge1: 32.528
- Rouge2: 19.9922
- Rougel: 31.403
- Rougelsum: 31.9372
- Gen Len: 12.5584
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.0597 | 1.0 | 24457 | 0.0567 | 37.8657 | 27.8087 | 37.4596 | 37.752 | 9.9527 |
| 0.0533 | 2.0 | 48914 | 0.0526 | 39.2211 | 28.2036 | 38.5786 | 38.9976 | 10.7079 |
| 0.0507 | 3.0 | 73371 | 0.0499 | 39.3449 | 28.3875 | 38.7151 | 39.0449 | 10.2091 |
| 0.0457 | 4.0 | 97828 | 0.0479 | 39.8753 | 28.5 | 39.127 | 39.6178 | 11.2407 |
| 0.0419 | 5.0 | 122285 | 0.0461 | 40.0478 | 28.797 | 39.3201 | 39.7468 | 10.3153 |
| 0.0406 | 6.0 | 146742 | 0.0445 | 39.7923 | 28.4281 | 39.0583 | 39.4523 | 10.4186 |
| 0.0373 | 7.0 | 171199 | 0.0429 | 39.954 | 28.535 | 39.2226 | 39.6457 | 10.6640 |
| 0.0347 | 8.0 | 195656 | 0.0419 | 39.4329 | 28.0336 | 38.6815 | 39.0968 | 10.7775 |
| 0.031 | 9.0 | 220113 | 0.0411 | 39.4524 | 28.1057 | 38.6998 | 39.0906 | 10.8397 |
| 0.0286 | 10.0 | 244570 | 0.0407 | 39.1493 | 27.639 | 38.3784 | 38.8085 | 10.9530 |
| 0.0261 | 11.0 | 269027 | 0.0408 | 38.8083 | 27.2206 | 37.9679 | 38.422 | 11.2390 |
| 0.0249 | 12.0 | 293484 | 0.0412 | 38.3972 | 26.7316 | 37.5838 | 38.0409 | 11.4510 |
| 0.0214 | 13.0 | 317941 | 0.0424 | 37.785 | 26.3302 | 36.9553 | 37.3764 | 11.4482 |
| 0.0188 | 14.0 | 342398 | 0.0438 | 36.9552 | 25.3108 | 36.0278 | 36.4965 | 11.6232 |
| 0.0174 | 15.0 | 366855 | 0.0458 | 35.6476 | 23.9574 | 34.6526 | 35.1259 | 11.6605 |
| 0.0153 | 16.0 | 391312 | 0.0487 | 34.657 | 22.8337 | 33.5891 | 34.1343 | 12.2395 |
| 0.013 | 17.0 | 415769 | 0.0518 | 33.5548 | 21.1569 | 32.4899 | 33.0394 | 12.2604 |
| 0.0114 | 18.0 | 440226 | 0.0559 | 34.3809 | 22.0108 | 33.2698 | 33.8578 | 12.0861 |
| 0.01 | 19.0 | 464683 | 0.0601 | 32.9062 | 20.3145 | 31.8147 | 32.3802 | 12.5176 |
| 0.0081 | 20.0 | 489140 | 0.0651 | 32.9482 | 20.3862 | 31.865 | 32.3837 | 12.4577 |
| 0.0069 | 21.0 | 513597 | 0.0698 | 32.3054 | 19.764 | 31.2178 | 31.7592 | 12.4939 |
| 0.0057 | 22.0 | 538054 | 0.0751 | 31.7627 | 19.0106 | 30.6263 | 31.175 | 12.7530 |
| 0.0048 | 23.0 | 562511 | 0.0793 | 31.8295 | 19.255 | 30.6958 | 31.2314 | 12.6077 |
| 0.0041 | 24.0 | 586968 | 0.0834 | 32.1523 | 19.2017 | 30.9774 | 31.5383 | 12.7461 |
| 0.0032 | 25.0 | 611425 | 0.0870 | 32.5379 | 20.0041 | 31.3903 | 31.9037 | 12.6848 |
| 0.0025 | 26.0 | 635882 | 0.0903 | 32.6757 | 20.1388 | 31.5495 | 32.0827 | 12.5950 |
| 0.0023 | 27.0 | 660339 | 0.0927 | 32.0874 | 19.3546 | 30.9125 | 31.4675 | 12.6290 |
| 0.0019 | 28.0 | 684796 | 0.0947 | 32.6988 | 20.1847 | 31.5643 | 32.1143 | 12.5412 |
| 0.0017 | 29.0 | 709253 | 0.0958 | 32.4574 | 19.7702 | 31.2955 | 31.8608 | 12.5558 |
| 0.0014 | 30.0 | 733710 | 0.0963 | 32.528 | 19.9922 | 31.403 | 31.9372 | 12.5584 |
### Framework versions
- Transformers 4.37.1
- Pytorch 1.13.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.1
|
hewonty/bert-ner-finetuned-pii | hewonty | 2024-02-15T13:25:44Z | 98 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-02-13T12:09:46Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-ner-finetuned-pii
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-ner-finetuned-pii
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0076
- Precision: 0.9427
- Recall: 0.9727
- F1: 0.9575
- Accuracy: 0.9982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0105 | 1.0 | 1324 | 0.0132 | 0.8641 | 0.9464 | 0.9033 | 0.9960 |
| 0.0056 | 2.0 | 2648 | 0.0080 | 0.9298 | 0.9643 | 0.9467 | 0.9978 |
| 0.0047 | 3.0 | 3972 | 0.0076 | 0.9427 | 0.9727 | 0.9575 | 0.9982 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
logicker/SkkuDS-DPO-72B-v4 | logicker | 2024-02-15T13:23:59Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"pretrained, dpo",
"conversational",
"en",
"arxiv:2309.16609",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T10:05:07Z | ---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-72B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- pretrained, dpo
---
# Qwen1.5-72B
## DPO Tuning
- Dataset: Intel/orca_dpo_pairs
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes;
* No need for `trust_remote_code`.
For more details, please refer to [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
## Model Details
Qwen1.5 is a language-model series that includes decoder language models of different sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding-window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, we have temporarily not included GQA or the mixture of SWA and full attention.
## Requirements
The code for Qwen1.5 has been merged into the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'.
```
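A minimal version guard (a hypothetical helper, not part of the Qwen code) to detect this situation before loading:

```python
def transformers_too_old(installed: str, required: str = "4.37.0") -> bool:
    """Compare dotted version strings numerically (simplified; no pre-release handling)."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) < as_tuple(required)

print(transformers_too_old("4.36.2"))  # -> True: such versions raise KeyError: 'qwen2'
print(transformers_too_old("4.37.0"))  # -> False
```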
## Citation
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
|
jhovitor98/pormas_gpt2 | jhovitor98 | 2024-02-15T13:23:50Z | 0 | 0 | null | [
"text-generation",
"pt",
"license:other",
"region:us"
]
| text-generation | 2024-02-15T13:15:20Z | ---
license: other
language:
- pt
pipeline_tag: text-generation
--- |
Anguuuuus/laryngitis-phrase | Anguuuuus | 2024-02-15T13:13:15Z | 146 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2024-02-15T13:13:00Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: laryngitis-phrase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# laryngitis-phrase
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4868
- Accuracy: 0.8636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
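With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`, the learning-rate multiplier ramps linearly from 0 to 1 over the first 10% of steps and then decays linearly back to 0. A sketch of that schedule (a hypothetical helper mirroring the usual `transformers` behavior):

```python
def linear_schedule_with_warmup(step, total_steps, warmup_ratio=0.1):
    """Multiplier applied to the base learning rate (3e-05 above) at a given step."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return step / max(1, warmup_steps)  # linear ramp 0 -> 1
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# 10 epochs x 6 steps/epoch = 60 total steps for this run
print(linear_schedule_with_warmup(3, 60))   # mid-warmup -> 0.5
print(linear_schedule_with_warmup(60, 60))  # end of training -> 0.0
```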
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6956 | 1.0 | 6 | 0.6940 | 0.4545 |
| 0.6829 | 2.0 | 12 | 0.7607 | 0.1818 |
| 0.6688 | 3.0 | 18 | 0.7834 | 0.1818 |
| 0.6342 | 4.0 | 24 | 0.7330 | 0.2727 |
| 0.5927 | 5.0 | 30 | 0.6679 | 0.6818 |
| 0.5485 | 6.0 | 36 | 0.6057 | 0.7273 |
| 0.5085 | 7.0 | 42 | 0.5197 | 0.8636 |
| 0.4655 | 8.0 | 48 | 0.4943 | 0.8636 |
| 0.4122 | 9.0 | 54 | 0.5054 | 0.8636 |
| 0.3926 | 10.0 | 60 | 0.4868 | 0.8636 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
CatBarks/bertES_PosWeighted1_model | CatBarks | 2024-02-15T13:04:27Z | 194 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-15T13:03:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LarryAIDraw/Ranger-8 | LarryAIDraw | 2024-02-15T13:02:47Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2024-02-15T13:00:27Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/285085/kantai-collection-ranger |
LarryAIDraw/buzhihuo_v0_5 | LarryAIDraw | 2024-02-15T13:01:16Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2024-02-15T12:58:15Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/101276/realistic-and-animegame-lessonmyojigreater-buzhihuo-cosplay-cosplay |
LarryAIDraw/buzhihuo_V1 | LarryAIDraw | 2024-02-15T13:01:04Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2024-02-15T12:57:44Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/47056/onmyojishiranui-buzhihuo |
minuva/MiniLMv2-toxic-jigsaw-lite | minuva | 2024-02-15T12:56:27Z | 101 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"toxic",
"toxicity",
"hate speech",
"offensive language",
"multi-class-classification",
"multi-label-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-06T17:47:07Z | ---
language:
- en
tags:
- toxic
- toxicity
- hate speech
- offensive language
- multi-class-classification
- multi-label-classification
license: apache-2.0
---
# Text Classification Toxicity
This model is a fine-tuned version of [MiniLMv2-L6-H384](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-BERT-Large) on the [Jigsaw 1st Kaggle competition](https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge) dataset, using [unitary/toxic-bert](https://huggingface.co/unitary/toxic-bert) as the teacher model.
The quantized version in ONNX format can be found [here](https://huggingface.co/minuva/MiniLMv2-toxic-jigaw-lite-onnx).
The model predicts only two labels (toxicity and severe toxicity). For the model with all labels, refer to this [page](https://huggingface.co/minuva/MiniLMv2-toxic-jijgsaw)
# Load the Model
```py
from transformers import pipeline
pipe = pipeline(model='minuva/MiniLMv2-toxic-jigsaw-lite', task='text-classification')
pipe("This is pure trash")
# [{'label': 'toxic', 'score': 0.887}]
```
# Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 48
- eval_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- warmup_ratio: 0.1
# Metrics (comparison with teacher model)
| Teacher (params) | Student (params) | Set (metric) | Score (teacher) | Score (student) |
|--------------------|-------------|----------|--------| --------|
| unitary/toxic-bert (110M) | MiniLMv2-toxic-jigsaw-lite (23M) | Test (ROC_AUC) | 0.982677 | 0.9815 |
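The Test (ROC_AUC) column above is the usual ranking metric: the probability that a randomly chosen toxic comment receives a higher score than a randomly chosen non-toxic one (ties counted as half). As an illustration only — the function and the toy data below are ours, not part of the evaluation code — a minimal pure-Python sketch:

```python
def roc_auc(labels, scores):
    """ROC AUC as the probability that a random positive example
    is scored above a random negative one (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: the classifier mostly scores toxic comments higher.
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.8, 0.2, 0.1]
print(roc_auc(labels, scores))  # 5/6 ≈ 0.833 — 5 of 6 pairs ranked correctly
```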
# Deployment
Check our [fast-nlp-text-toxicity repository](https://github.com/minuva/fast-nlp-text-toxicity) for a FastAPI- and ONNX-based server to deploy this model on CPU devices.
|
CatBarks/GPT2ES_ClassWeighted001_tokenizer | CatBarks | 2024-02-15T12:52:43Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-15T12:52:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hugo-massonnat/Reinforce-PixelCopter | hugo-massonnat | 2024-02-15T12:47:23Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-13T14:54:34Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 19.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
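The `mean_reward: 19.00 +/- 0.00` figure above is the mean and standard deviation of the per-episode returns collected during evaluation. As a minimal sketch of that computation (the episode returns here are toy values, not actual rollouts of this agent):

```python
import math

def evaluate(returns):
    """Mean and population standard deviation of per-episode returns."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / len(returns)
    return mean, math.sqrt(var)

episode_returns = [19.0] * 10  # every toy evaluation episode scored 19
mean, std = evaluate(episode_returns)
print(f"{mean:.2f} +/- {std:.2f}")  # prints "19.00 +/- 0.00"
```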
|
Oysiyl/speecht5_tts_common_voice_nl | Oysiyl | 2024-02-15T12:45:12Z | 83 | 1 | transformers | [
"transformers",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"nl",
"dataset:common_voice_16_1",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-to-speech | 2024-02-15T11:15:08Z | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- common_voice_16_1
model-index:
- name: speecht5_tts_common_voice_nl
results: []
language:
- nl
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_tts_common_voice_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_16_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
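With `lr_scheduler_type: linear` and `lr_scheduler_warmup_steps: 500`, the learning rate ramps linearly from 0 to the base rate over the warmup steps, then decays linearly back to 0 over the remaining optimization steps (10 epochs × 441 steps/epoch = 4410 total). A minimal sketch of that schedule — the function name and formula are illustrative, not the Trainer's internal code:

```python
def linear_schedule_with_warmup(step, warmup_steps, total_steps, base_lr):
    """Linear warmup from 0 to base_lr, then linear decay back to 0."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# Illustrative values for this run: base_lr=1e-05, 500 warmup steps, 4410 total.
for step in (0, 250, 500, 4410):
    print(step, linear_schedule_with_warmup(step, 500, 4410, 1e-05))
```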
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7187 | 1.0 | 441 | 0.4533 |
| 0.4947 | 2.0 | 882 | 0.4243 |
| 0.4648 | 3.0 | 1323 | 0.4131 |
| 0.4468 | 4.0 | 1764 | 0.4062 |
| 0.4384 | 5.0 | 2205 | 0.4016 |
| 0.4362 | 6.0 | 2646 | 0.3982 |
| 0.4309 | 7.0 | 3087 | 0.3964 |
| 0.4317 | 8.0 | 3528 | 0.3959 |
| 0.427 | 9.0 | 3969 | 0.3939 |
| 0.424 | 10.0 | 4410 | 0.3938 |
### Framework versions
- Transformers 4.37.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.15.2 |
SmartComponents/bge-micro-v2 | SmartComponents | 2024-02-15T12:38:51Z | 278 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"onnx",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-02-15T12:19:16Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: bge_micro
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 67.76119402985074
- type: ap
value: 29.637849284211114
- type: f1
value: 61.31181187111905
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 79.7547
- type: ap
value: 74.21401629809145
- type: f1
value: 79.65319615433783
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.452000000000005
- type: f1
value: 37.0245198854966
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.152
- type: map_at_10
value: 46.702
- type: map_at_100
value: 47.563
- type: map_at_1000
value: 47.567
- type: map_at_3
value: 42.058
- type: map_at_5
value: 44.608
- type: mrr_at_1
value: 32.006
- type: mrr_at_10
value: 47.064
- type: mrr_at_100
value: 47.910000000000004
- type: mrr_at_1000
value: 47.915
- type: mrr_at_3
value: 42.283
- type: mrr_at_5
value: 44.968
- type: ndcg_at_1
value: 31.152
- type: ndcg_at_10
value: 55.308
- type: ndcg_at_100
value: 58.965
- type: ndcg_at_1000
value: 59.067
- type: ndcg_at_3
value: 45.698
- type: ndcg_at_5
value: 50.296
- type: precision_at_1
value: 31.152
- type: precision_at_10
value: 8.279
- type: precision_at_100
value: 0.987
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.753
- type: precision_at_5
value: 13.485
- type: recall_at_1
value: 31.152
- type: recall_at_10
value: 82.788
- type: recall_at_100
value: 98.72
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 56.259
- type: recall_at_5
value: 67.425
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 44.52692241938116
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 33.245710292773595
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.08493637155168
- type: mrr
value: 71.94378490084861
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.1602804378326
- type: cos_sim_spearman
value: 82.92478106365587
- type: euclidean_pearson
value: 82.27930167277077
- type: euclidean_spearman
value: 82.18560759458093
- type: manhattan_pearson
value: 82.34277425888187
- type: manhattan_spearman
value: 81.72776583704467
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 81.17207792207792
- type: f1
value: 81.09893836310513
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 36.109308463095516
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 28.06048212317168
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.233999999999998
- type: map_at_10
value: 38.092999999999996
- type: map_at_100
value: 39.473
- type: map_at_1000
value: 39.614
- type: map_at_3
value: 34.839
- type: map_at_5
value: 36.523
- type: mrr_at_1
value: 35.193000000000005
- type: mrr_at_10
value: 44.089
- type: mrr_at_100
value: 44.927
- type: mrr_at_1000
value: 44.988
- type: mrr_at_3
value: 41.559000000000005
- type: mrr_at_5
value: 43.162
- type: ndcg_at_1
value: 35.193000000000005
- type: ndcg_at_10
value: 44.04
- type: ndcg_at_100
value: 49.262
- type: ndcg_at_1000
value: 51.847
- type: ndcg_at_3
value: 39.248
- type: ndcg_at_5
value: 41.298
- type: precision_at_1
value: 35.193000000000005
- type: precision_at_10
value: 8.555
- type: precision_at_100
value: 1.3820000000000001
- type: precision_at_1000
value: 0.189
- type: precision_at_3
value: 19.123
- type: precision_at_5
value: 13.648
- type: recall_at_1
value: 28.233999999999998
- type: recall_at_10
value: 55.094
- type: recall_at_100
value: 76.85300000000001
- type: recall_at_1000
value: 94.163
- type: recall_at_3
value: 40.782000000000004
- type: recall_at_5
value: 46.796
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.538
- type: map_at_10
value: 28.449
- type: map_at_100
value: 29.471000000000004
- type: map_at_1000
value: 29.599999999999998
- type: map_at_3
value: 26.371
- type: map_at_5
value: 27.58
- type: mrr_at_1
value: 26.815
- type: mrr_at_10
value: 33.331
- type: mrr_at_100
value: 34.114
- type: mrr_at_1000
value: 34.182
- type: mrr_at_3
value: 31.561
- type: mrr_at_5
value: 32.608
- type: ndcg_at_1
value: 26.815
- type: ndcg_at_10
value: 32.67
- type: ndcg_at_100
value: 37.039
- type: ndcg_at_1000
value: 39.769
- type: ndcg_at_3
value: 29.523
- type: ndcg_at_5
value: 31.048
- type: precision_at_1
value: 26.815
- type: precision_at_10
value: 5.955
- type: precision_at_100
value: 1.02
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 14.033999999999999
- type: precision_at_5
value: 9.911
- type: recall_at_1
value: 21.538
- type: recall_at_10
value: 40.186
- type: recall_at_100
value: 58.948
- type: recall_at_1000
value: 77.158
- type: recall_at_3
value: 30.951
- type: recall_at_5
value: 35.276
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.211999999999996
- type: map_at_10
value: 46.562
- type: map_at_100
value: 47.579
- type: map_at_1000
value: 47.646
- type: map_at_3
value: 43.485
- type: map_at_5
value: 45.206
- type: mrr_at_1
value: 40.627
- type: mrr_at_10
value: 49.928
- type: mrr_at_100
value: 50.647
- type: mrr_at_1000
value: 50.685
- type: mrr_at_3
value: 47.513
- type: mrr_at_5
value: 48.958
- type: ndcg_at_1
value: 40.627
- type: ndcg_at_10
value: 52.217
- type: ndcg_at_100
value: 56.423
- type: ndcg_at_1000
value: 57.821999999999996
- type: ndcg_at_3
value: 46.949000000000005
- type: ndcg_at_5
value: 49.534
- type: precision_at_1
value: 40.627
- type: precision_at_10
value: 8.476
- type: precision_at_100
value: 1.15
- type: precision_at_1000
value: 0.132
- type: precision_at_3
value: 21.003
- type: precision_at_5
value: 14.469999999999999
- type: recall_at_1
value: 35.211999999999996
- type: recall_at_10
value: 65.692
- type: recall_at_100
value: 84.011
- type: recall_at_1000
value: 94.03099999999999
- type: recall_at_3
value: 51.404
- type: recall_at_5
value: 57.882
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.09
- type: map_at_10
value: 29.516
- type: map_at_100
value: 30.462
- type: map_at_1000
value: 30.56
- type: map_at_3
value: 26.945000000000004
- type: map_at_5
value: 28.421999999999997
- type: mrr_at_1
value: 23.616
- type: mrr_at_10
value: 31.221
- type: mrr_at_100
value: 32.057
- type: mrr_at_1000
value: 32.137
- type: mrr_at_3
value: 28.738000000000003
- type: mrr_at_5
value: 30.156
- type: ndcg_at_1
value: 23.616
- type: ndcg_at_10
value: 33.97
- type: ndcg_at_100
value: 38.806000000000004
- type: ndcg_at_1000
value: 41.393
- type: ndcg_at_3
value: 28.908
- type: ndcg_at_5
value: 31.433
- type: precision_at_1
value: 23.616
- type: precision_at_10
value: 5.299
- type: precision_at_100
value: 0.812
- type: precision_at_1000
value: 0.107
- type: precision_at_3
value: 12.015
- type: precision_at_5
value: 8.701
- type: recall_at_1
value: 22.09
- type: recall_at_10
value: 46.089999999999996
- type: recall_at_100
value: 68.729
- type: recall_at_1000
value: 88.435
- type: recall_at_3
value: 32.584999999999994
- type: recall_at_5
value: 38.550000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.469
- type: map_at_10
value: 22.436
- type: map_at_100
value: 23.465
- type: map_at_1000
value: 23.608999999999998
- type: map_at_3
value: 19.716
- type: map_at_5
value: 21.182000000000002
- type: mrr_at_1
value: 18.905
- type: mrr_at_10
value: 26.55
- type: mrr_at_100
value: 27.46
- type: mrr_at_1000
value: 27.553
- type: mrr_at_3
value: 23.921999999999997
- type: mrr_at_5
value: 25.302999999999997
- type: ndcg_at_1
value: 18.905
- type: ndcg_at_10
value: 27.437
- type: ndcg_at_100
value: 32.555
- type: ndcg_at_1000
value: 35.885
- type: ndcg_at_3
value: 22.439
- type: ndcg_at_5
value: 24.666
- type: precision_at_1
value: 18.905
- type: precision_at_10
value: 5.2490000000000006
- type: precision_at_100
value: 0.889
- type: precision_at_1000
value: 0.131
- type: precision_at_3
value: 10.862
- type: precision_at_5
value: 8.085
- type: recall_at_1
value: 15.469
- type: recall_at_10
value: 38.706
- type: recall_at_100
value: 61.242
- type: recall_at_1000
value: 84.84
- type: recall_at_3
value: 24.973
- type: recall_at_5
value: 30.603
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.918000000000003
- type: map_at_10
value: 34.296
- type: map_at_100
value: 35.632000000000005
- type: map_at_1000
value: 35.748999999999995
- type: map_at_3
value: 31.304
- type: map_at_5
value: 33.166000000000004
- type: mrr_at_1
value: 30.703000000000003
- type: mrr_at_10
value: 39.655
- type: mrr_at_100
value: 40.569
- type: mrr_at_1000
value: 40.621
- type: mrr_at_3
value: 37.023
- type: mrr_at_5
value: 38.664
- type: ndcg_at_1
value: 30.703000000000003
- type: ndcg_at_10
value: 39.897
- type: ndcg_at_100
value: 45.777
- type: ndcg_at_1000
value: 48.082
- type: ndcg_at_3
value: 35.122
- type: ndcg_at_5
value: 37.691
- type: precision_at_1
value: 30.703000000000003
- type: precision_at_10
value: 7.305000000000001
- type: precision_at_100
value: 1.208
- type: precision_at_1000
value: 0.159
- type: precision_at_3
value: 16.811
- type: precision_at_5
value: 12.203999999999999
- type: recall_at_1
value: 24.918000000000003
- type: recall_at_10
value: 51.31
- type: recall_at_100
value: 76.534
- type: recall_at_1000
value: 91.911
- type: recall_at_3
value: 37.855
- type: recall_at_5
value: 44.493
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.416
- type: map_at_10
value: 30.474
- type: map_at_100
value: 31.759999999999998
- type: map_at_1000
value: 31.891000000000002
- type: map_at_3
value: 27.728
- type: map_at_5
value: 29.247
- type: mrr_at_1
value: 28.881
- type: mrr_at_10
value: 36.418
- type: mrr_at_100
value: 37.347
- type: mrr_at_1000
value: 37.415
- type: mrr_at_3
value: 33.942
- type: mrr_at_5
value: 35.386
- type: ndcg_at_1
value: 28.881
- type: ndcg_at_10
value: 35.812
- type: ndcg_at_100
value: 41.574
- type: ndcg_at_1000
value: 44.289
- type: ndcg_at_3
value: 31.239
- type: ndcg_at_5
value: 33.302
- type: precision_at_1
value: 28.881
- type: precision_at_10
value: 6.598
- type: precision_at_100
value: 1.1079999999999999
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 14.954
- type: precision_at_5
value: 10.776
- type: recall_at_1
value: 22.416
- type: recall_at_10
value: 46.243
- type: recall_at_100
value: 71.352
- type: recall_at_1000
value: 90.034
- type: recall_at_3
value: 32.873000000000005
- type: recall_at_5
value: 38.632
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.528166666666667
- type: map_at_10
value: 30.317833333333333
- type: map_at_100
value: 31.44108333333333
- type: map_at_1000
value: 31.566666666666666
- type: map_at_3
value: 27.84425
- type: map_at_5
value: 29.233333333333334
- type: mrr_at_1
value: 26.75733333333333
- type: mrr_at_10
value: 34.24425
- type: mrr_at_100
value: 35.11375
- type: mrr_at_1000
value: 35.184333333333335
- type: mrr_at_3
value: 32.01225
- type: mrr_at_5
value: 33.31225
- type: ndcg_at_1
value: 26.75733333333333
- type: ndcg_at_10
value: 35.072583333333334
- type: ndcg_at_100
value: 40.13358333333334
- type: ndcg_at_1000
value: 42.81825
- type: ndcg_at_3
value: 30.79275000000001
- type: ndcg_at_5
value: 32.822
- type: precision_at_1
value: 26.75733333333333
- type: precision_at_10
value: 6.128083333333334
- type: precision_at_100
value: 1.019
- type: precision_at_1000
value: 0.14391666666666664
- type: precision_at_3
value: 14.129916666666665
- type: precision_at_5
value: 10.087416666666668
- type: recall_at_1
value: 22.528166666666667
- type: recall_at_10
value: 45.38341666666667
- type: recall_at_100
value: 67.81791666666668
- type: recall_at_1000
value: 86.71716666666666
- type: recall_at_3
value: 33.38741666666667
- type: recall_at_5
value: 38.62041666666667
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.975
- type: map_at_10
value: 28.144999999999996
- type: map_at_100
value: 28.994999999999997
- type: map_at_1000
value: 29.086000000000002
- type: map_at_3
value: 25.968999999999998
- type: map_at_5
value: 27.321
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 30.822
- type: mrr_at_100
value: 31.647
- type: mrr_at_1000
value: 31.712
- type: mrr_at_3
value: 28.860000000000003
- type: mrr_at_5
value: 30.041
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 31.929999999999996
- type: ndcg_at_100
value: 36.258
- type: ndcg_at_1000
value: 38.682
- type: ndcg_at_3
value: 27.972
- type: ndcg_at_5
value: 30.089
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 4.923
- type: precision_at_100
value: 0.767
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 11.860999999999999
- type: precision_at_5
value: 8.466
- type: recall_at_1
value: 21.975
- type: recall_at_10
value: 41.102
- type: recall_at_100
value: 60.866
- type: recall_at_1000
value: 78.781
- type: recall_at_3
value: 30.268
- type: recall_at_5
value: 35.552
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.845999999999998
- type: map_at_10
value: 21.861
- type: map_at_100
value: 22.798
- type: map_at_1000
value: 22.925
- type: map_at_3
value: 19.922
- type: map_at_5
value: 21.054000000000002
- type: mrr_at_1
value: 19.098000000000003
- type: mrr_at_10
value: 25.397
- type: mrr_at_100
value: 26.246000000000002
- type: mrr_at_1000
value: 26.33
- type: mrr_at_3
value: 23.469
- type: mrr_at_5
value: 24.646
- type: ndcg_at_1
value: 19.098000000000003
- type: ndcg_at_10
value: 25.807999999999996
- type: ndcg_at_100
value: 30.445
- type: ndcg_at_1000
value: 33.666000000000004
- type: ndcg_at_3
value: 22.292
- type: ndcg_at_5
value: 24.075
- type: precision_at_1
value: 19.098000000000003
- type: precision_at_10
value: 4.58
- type: precision_at_100
value: 0.8099999999999999
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 10.346
- type: precision_at_5
value: 7.542999999999999
- type: recall_at_1
value: 15.845999999999998
- type: recall_at_10
value: 34.172999999999995
- type: recall_at_100
value: 55.24099999999999
- type: recall_at_1000
value: 78.644
- type: recall_at_3
value: 24.401
- type: recall_at_5
value: 28.938000000000002
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.974
- type: map_at_10
value: 30.108
- type: map_at_100
value: 31.208000000000002
- type: map_at_1000
value: 31.330999999999996
- type: map_at_3
value: 27.889999999999997
- type: map_at_5
value: 29.023
- type: mrr_at_1
value: 26.493
- type: mrr_at_10
value: 33.726
- type: mrr_at_100
value: 34.622
- type: mrr_at_1000
value: 34.703
- type: mrr_at_3
value: 31.575999999999997
- type: mrr_at_5
value: 32.690999999999995
- type: ndcg_at_1
value: 26.493
- type: ndcg_at_10
value: 34.664
- type: ndcg_at_100
value: 39.725
- type: ndcg_at_1000
value: 42.648
- type: ndcg_at_3
value: 30.447999999999997
- type: ndcg_at_5
value: 32.145
- type: precision_at_1
value: 26.493
- type: precision_at_10
value: 5.7090000000000005
- type: precision_at_100
value: 0.9199999999999999
- type: precision_at_1000
value: 0.129
- type: precision_at_3
value: 13.464
- type: precision_at_5
value: 9.384
- type: recall_at_1
value: 22.974
- type: recall_at_10
value: 45.097
- type: recall_at_100
value: 66.908
- type: recall_at_1000
value: 87.495
- type: recall_at_3
value: 33.338
- type: recall_at_5
value: 37.499
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.408
- type: map_at_10
value: 29.580000000000002
- type: map_at_100
value: 31.145
- type: map_at_1000
value: 31.369000000000003
- type: map_at_3
value: 27.634999999999998
- type: map_at_5
value: 28.766000000000002
- type: mrr_at_1
value: 27.272999999999996
- type: mrr_at_10
value: 33.93
- type: mrr_at_100
value: 34.963
- type: mrr_at_1000
value: 35.031
- type: mrr_at_3
value: 32.016
- type: mrr_at_5
value: 33.221000000000004
- type: ndcg_at_1
value: 27.272999999999996
- type: ndcg_at_10
value: 33.993
- type: ndcg_at_100
value: 40.333999999999996
- type: ndcg_at_1000
value: 43.361
- type: ndcg_at_3
value: 30.918
- type: ndcg_at_5
value: 32.552
- type: precision_at_1
value: 27.272999999999996
- type: precision_at_10
value: 6.285
- type: precision_at_100
value: 1.389
- type: precision_at_1000
value: 0.232
- type: precision_at_3
value: 14.427000000000001
- type: precision_at_5
value: 10.356
- type: recall_at_1
value: 22.408
- type: recall_at_10
value: 41.318
- type: recall_at_100
value: 70.539
- type: recall_at_1000
value: 90.197
- type: recall_at_3
value: 32.513
- type: recall_at_5
value: 37.0
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.258000000000003
- type: map_at_10
value: 24.294
- type: map_at_100
value: 25.305
- type: map_at_1000
value: 25.419999999999998
- type: map_at_3
value: 22.326999999999998
- type: map_at_5
value: 23.31
- type: mrr_at_1
value: 18.484
- type: mrr_at_10
value: 25.863999999999997
- type: mrr_at_100
value: 26.766000000000002
- type: mrr_at_1000
value: 26.855
- type: mrr_at_3
value: 23.968
- type: mrr_at_5
value: 24.911
- type: ndcg_at_1
value: 18.484
- type: ndcg_at_10
value: 28.433000000000003
- type: ndcg_at_100
value: 33.405
- type: ndcg_at_1000
value: 36.375
- type: ndcg_at_3
value: 24.455
- type: ndcg_at_5
value: 26.031
- type: precision_at_1
value: 18.484
- type: precision_at_10
value: 4.603
- type: precision_at_100
value: 0.773
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 10.659
- type: precision_at_5
value: 7.505000000000001
- type: recall_at_1
value: 17.258000000000003
- type: recall_at_10
value: 39.589999999999996
- type: recall_at_100
value: 62.592000000000006
- type: recall_at_1000
value: 84.917
- type: recall_at_3
value: 28.706
- type: recall_at_5
value: 32.224000000000004
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.578999999999999
- type: map_at_10
value: 17.642
- type: map_at_100
value: 19.451
- type: map_at_1000
value: 19.647000000000002
- type: map_at_3
value: 14.618
- type: map_at_5
value: 16.145
- type: mrr_at_1
value: 23.322000000000003
- type: mrr_at_10
value: 34.204
- type: mrr_at_100
value: 35.185
- type: mrr_at_1000
value: 35.235
- type: mrr_at_3
value: 30.847
- type: mrr_at_5
value: 32.824
- type: ndcg_at_1
value: 23.322000000000003
- type: ndcg_at_10
value: 25.352999999999998
- type: ndcg_at_100
value: 32.574
- type: ndcg_at_1000
value: 36.073
- type: ndcg_at_3
value: 20.318
- type: ndcg_at_5
value: 22.111
- type: precision_at_1
value: 23.322000000000003
- type: precision_at_10
value: 8.02
- type: precision_at_100
value: 1.5730000000000002
- type: precision_at_1000
value: 0.22200000000000003
- type: precision_at_3
value: 15.049000000000001
- type: precision_at_5
value: 11.87
- type: recall_at_1
value: 10.578999999999999
- type: recall_at_10
value: 30.964999999999996
- type: recall_at_100
value: 55.986000000000004
- type: recall_at_1000
value: 75.565
- type: recall_at_3
value: 18.686
- type: recall_at_5
value: 23.629
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.327
- type: map_at_10
value: 14.904
- type: map_at_100
value: 20.29
- type: map_at_1000
value: 21.42
- type: map_at_3
value: 10.911
- type: map_at_5
value: 12.791
- type: mrr_at_1
value: 57.25
- type: mrr_at_10
value: 66.62700000000001
- type: mrr_at_100
value: 67.035
- type: mrr_at_1000
value: 67.052
- type: mrr_at_3
value: 64.833
- type: mrr_at_5
value: 65.908
- type: ndcg_at_1
value: 43.75
- type: ndcg_at_10
value: 32.246
- type: ndcg_at_100
value: 35.774
- type: ndcg_at_1000
value: 42.872
- type: ndcg_at_3
value: 36.64
- type: ndcg_at_5
value: 34.487
- type: precision_at_1
value: 57.25
- type: precision_at_10
value: 25.924999999999997
- type: precision_at_100
value: 7.670000000000001
- type: precision_at_1000
value: 1.599
- type: precision_at_3
value: 41.167
- type: precision_at_5
value: 34.65
- type: recall_at_1
value: 7.327
- type: recall_at_10
value: 19.625
- type: recall_at_100
value: 41.601
- type: recall_at_1000
value: 65.117
- type: recall_at_3
value: 12.308
- type: recall_at_5
value: 15.437999999999999
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 44.53
- type: f1
value: 39.39884255816736
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 58.913000000000004
- type: map_at_10
value: 69.592
- type: map_at_100
value: 69.95599999999999
- type: map_at_1000
value: 69.973
- type: map_at_3
value: 67.716
- type: map_at_5
value: 68.899
- type: mrr_at_1
value: 63.561
- type: mrr_at_10
value: 74.2
- type: mrr_at_100
value: 74.468
- type: mrr_at_1000
value: 74.47500000000001
- type: mrr_at_3
value: 72.442
- type: mrr_at_5
value: 73.58
- type: ndcg_at_1
value: 63.561
- type: ndcg_at_10
value: 74.988
- type: ndcg_at_100
value: 76.52799999999999
- type: ndcg_at_1000
value: 76.88000000000001
- type: ndcg_at_3
value: 71.455
- type: ndcg_at_5
value: 73.42699999999999
- type: precision_at_1
value: 63.561
- type: precision_at_10
value: 9.547
- type: precision_at_100
value: 1.044
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 28.143
- type: precision_at_5
value: 18.008
- type: recall_at_1
value: 58.913000000000004
- type: recall_at_10
value: 87.18
- type: recall_at_100
value: 93.852
- type: recall_at_1000
value: 96.256
- type: recall_at_3
value: 77.55199999999999
- type: recall_at_5
value: 82.42399999999999
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.761000000000001
- type: map_at_10
value: 19.564999999999998
- type: map_at_100
value: 21.099
- type: map_at_1000
value: 21.288999999999998
- type: map_at_3
value: 16.683999999999997
- type: map_at_5
value: 18.307000000000002
- type: mrr_at_1
value: 23.302
- type: mrr_at_10
value: 30.979
- type: mrr_at_100
value: 32.121
- type: mrr_at_1000
value: 32.186
- type: mrr_at_3
value: 28.549000000000003
- type: mrr_at_5
value: 30.038999999999998
- type: ndcg_at_1
value: 23.302
- type: ndcg_at_10
value: 25.592
- type: ndcg_at_100
value: 32.416
- type: ndcg_at_1000
value: 36.277
- type: ndcg_at_3
value: 22.151
- type: ndcg_at_5
value: 23.483999999999998
- type: precision_at_1
value: 23.302
- type: precision_at_10
value: 7.377000000000001
- type: precision_at_100
value: 1.415
- type: precision_at_1000
value: 0.212
- type: precision_at_3
value: 14.712
- type: precision_at_5
value: 11.358
- type: recall_at_1
value: 11.761000000000001
- type: recall_at_10
value: 31.696
- type: recall_at_100
value: 58.01500000000001
- type: recall_at_1000
value: 81.572
- type: recall_at_3
value: 20.742
- type: recall_at_5
value: 25.707
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.275
- type: map_at_10
value: 44.712
- type: map_at_100
value: 45.621
- type: map_at_1000
value: 45.698
- type: map_at_3
value: 42.016999999999996
- type: map_at_5
value: 43.659
- type: mrr_at_1
value: 64.551
- type: mrr_at_10
value: 71.58099999999999
- type: mrr_at_100
value: 71.952
- type: mrr_at_1000
value: 71.96900000000001
- type: mrr_at_3
value: 70.236
- type: mrr_at_5
value: 71.051
- type: ndcg_at_1
value: 64.551
- type: ndcg_at_10
value: 53.913999999999994
- type: ndcg_at_100
value: 57.421
- type: ndcg_at_1000
value: 59.06
- type: ndcg_at_3
value: 49.716
- type: ndcg_at_5
value: 51.971999999999994
- type: precision_at_1
value: 64.551
- type: precision_at_10
value: 11.110000000000001
- type: precision_at_100
value: 1.388
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 30.822
- type: precision_at_5
value: 20.273
- type: recall_at_1
value: 32.275
- type: recall_at_10
value: 55.55
- type: recall_at_100
value: 69.38600000000001
- type: recall_at_1000
value: 80.35799999999999
- type: recall_at_3
value: 46.232
- type: recall_at_5
value: 50.682
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 76.4604
- type: ap
value: 70.40498168422701
- type: f1
value: 76.38572688476046
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 15.065999999999999
- type: map_at_10
value: 25.058000000000003
- type: map_at_100
value: 26.268
- type: map_at_1000
value: 26.344
- type: map_at_3
value: 21.626
- type: map_at_5
value: 23.513
- type: mrr_at_1
value: 15.501000000000001
- type: mrr_at_10
value: 25.548
- type: mrr_at_100
value: 26.723000000000003
- type: mrr_at_1000
value: 26.793
- type: mrr_at_3
value: 22.142
- type: mrr_at_5
value: 24.024
- type: ndcg_at_1
value: 15.501000000000001
- type: ndcg_at_10
value: 31.008000000000003
- type: ndcg_at_100
value: 37.08
- type: ndcg_at_1000
value: 39.102
- type: ndcg_at_3
value: 23.921999999999997
- type: ndcg_at_5
value: 27.307
- type: precision_at_1
value: 15.501000000000001
- type: precision_at_10
value: 5.155
- type: precision_at_100
value: 0.822
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 10.363
- type: precision_at_5
value: 7.917000000000001
- type: recall_at_1
value: 15.065999999999999
- type: recall_at_10
value: 49.507
- type: recall_at_100
value: 78.118
- type: recall_at_1000
value: 93.881
- type: recall_at_3
value: 30.075000000000003
- type: recall_at_5
value: 38.222
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.6703146374829
- type: f1
value: 90.1258004293966
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 68.29229366165072
- type: f1
value: 50.016194478997875
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.57767316745124
- type: f1
value: 67.16194062146954
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.92064559515804
- type: f1
value: 73.6680729569968
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.56335607367883
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.131807833734268
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.07390328719844
- type: mrr
value: 32.117370992867905
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.274
- type: map_at_10
value: 11.489
- type: map_at_100
value: 14.518
- type: map_at_1000
value: 15.914
- type: map_at_3
value: 8.399
- type: map_at_5
value: 9.889000000000001
- type: mrr_at_1
value: 42.724000000000004
- type: mrr_at_10
value: 51.486
- type: mrr_at_100
value: 51.941
- type: mrr_at_1000
value: 51.99
- type: mrr_at_3
value: 49.278
- type: mrr_at_5
value: 50.485
- type: ndcg_at_1
value: 39.938
- type: ndcg_at_10
value: 31.862000000000002
- type: ndcg_at_100
value: 29.235
- type: ndcg_at_1000
value: 37.802
- type: ndcg_at_3
value: 35.754999999999995
- type: ndcg_at_5
value: 34.447
- type: precision_at_1
value: 42.105
- type: precision_at_10
value: 23.901
- type: precision_at_100
value: 7.715
- type: precision_at_1000
value: 2.045
- type: precision_at_3
value: 33.437
- type: precision_at_5
value: 29.782999999999998
- type: recall_at_1
value: 5.274
- type: recall_at_10
value: 15.351
- type: recall_at_100
value: 29.791
- type: recall_at_1000
value: 60.722
- type: recall_at_3
value: 9.411
- type: recall_at_5
value: 12.171999999999999
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.099
- type: map_at_10
value: 27.913
- type: map_at_100
value: 29.281000000000002
- type: map_at_1000
value: 29.343999999999998
- type: map_at_3
value: 23.791
- type: map_at_5
value: 26.049
- type: mrr_at_1
value: 18.337
- type: mrr_at_10
value: 29.953999999999997
- type: mrr_at_100
value: 31.080999999999996
- type: mrr_at_1000
value: 31.130000000000003
- type: mrr_at_3
value: 26.168000000000003
- type: mrr_at_5
value: 28.277
- type: ndcg_at_1
value: 18.308
- type: ndcg_at_10
value: 34.938
- type: ndcg_at_100
value: 41.125
- type: ndcg_at_1000
value: 42.708
- type: ndcg_at_3
value: 26.805
- type: ndcg_at_5
value: 30.686999999999998
- type: precision_at_1
value: 18.308
- type: precision_at_10
value: 6.476999999999999
- type: precision_at_100
value: 0.9939999999999999
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 12.784999999999998
- type: precision_at_5
value: 9.878
- type: recall_at_1
value: 16.099
- type: recall_at_10
value: 54.63
- type: recall_at_100
value: 82.24900000000001
- type: recall_at_1000
value: 94.242
- type: recall_at_3
value: 33.174
- type: recall_at_5
value: 42.164
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 67.947
- type: map_at_10
value: 81.499
- type: map_at_100
value: 82.17
- type: map_at_1000
value: 82.194
- type: map_at_3
value: 78.567
- type: map_at_5
value: 80.34400000000001
- type: mrr_at_1
value: 78.18
- type: mrr_at_10
value: 85.05
- type: mrr_at_100
value: 85.179
- type: mrr_at_1000
value: 85.181
- type: mrr_at_3
value: 83.91
- type: mrr_at_5
value: 84.638
- type: ndcg_at_1
value: 78.2
- type: ndcg_at_10
value: 85.715
- type: ndcg_at_100
value: 87.2
- type: ndcg_at_1000
value: 87.39
- type: ndcg_at_3
value: 82.572
- type: ndcg_at_5
value: 84.176
- type: precision_at_1
value: 78.2
- type: precision_at_10
value: 12.973
- type: precision_at_100
value: 1.5010000000000001
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.949999999999996
- type: precision_at_5
value: 23.62
- type: recall_at_1
value: 67.947
- type: recall_at_10
value: 93.804
- type: recall_at_100
value: 98.971
- type: recall_at_1000
value: 99.91600000000001
- type: recall_at_3
value: 84.75399999999999
- type: recall_at_5
value: 89.32
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 45.457201684255104
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 55.162226937477875
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.173
- type: map_at_10
value: 10.463000000000001
- type: map_at_100
value: 12.278
- type: map_at_1000
value: 12.572
- type: map_at_3
value: 7.528
- type: map_at_5
value: 8.863
- type: mrr_at_1
value: 20.599999999999998
- type: mrr_at_10
value: 30.422
- type: mrr_at_100
value: 31.6
- type: mrr_at_1000
value: 31.663000000000004
- type: mrr_at_3
value: 27.400000000000002
- type: mrr_at_5
value: 29.065
- type: ndcg_at_1
value: 20.599999999999998
- type: ndcg_at_10
value: 17.687
- type: ndcg_at_100
value: 25.172
- type: ndcg_at_1000
value: 30.617
- type: ndcg_at_3
value: 16.81
- type: ndcg_at_5
value: 14.499
- type: precision_at_1
value: 20.599999999999998
- type: precision_at_10
value: 9.17
- type: precision_at_100
value: 2.004
- type: precision_at_1000
value: 0.332
- type: precision_at_3
value: 15.6
- type: precision_at_5
value: 12.58
- type: recall_at_1
value: 4.173
- type: recall_at_10
value: 18.575
- type: recall_at_100
value: 40.692
- type: recall_at_1000
value: 67.467
- type: recall_at_3
value: 9.488000000000001
- type: recall_at_5
value: 12.738
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 81.12603499315416
- type: cos_sim_spearman
value: 73.62060290948378
- type: euclidean_pearson
value: 78.14083565781135
- type: euclidean_spearman
value: 73.16840437541543
- type: manhattan_pearson
value: 77.92017261109734
- type: manhattan_spearman
value: 72.8805059949965
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 79.75955377133172
- type: cos_sim_spearman
value: 71.8872633964069
- type: euclidean_pearson
value: 76.31922068538256
- type: euclidean_spearman
value: 70.86449661855376
- type: manhattan_pearson
value: 76.47852229730407
- type: manhattan_spearman
value: 70.99367421984789
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 78.80762722908158
- type: cos_sim_spearman
value: 79.84588978756372
- type: euclidean_pearson
value: 79.8216849781164
- type: euclidean_spearman
value: 80.22647061695481
- type: manhattan_pearson
value: 79.56604194112572
- type: manhattan_spearman
value: 79.96495189862462
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 80.1012718092742
- type: cos_sim_spearman
value: 76.86011381793661
- type: euclidean_pearson
value: 79.94426039862019
- type: euclidean_spearman
value: 77.36751135465131
- type: manhattan_pearson
value: 79.87959373304288
- type: manhattan_spearman
value: 77.37717129004746
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 83.90618420346104
- type: cos_sim_spearman
value: 84.77290791243722
- type: euclidean_pearson
value: 84.64732258073293
- type: euclidean_spearman
value: 85.21053649543357
- type: manhattan_pearson
value: 84.61616883522647
- type: manhattan_spearman
value: 85.19803126766931
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 80.52192114059063
- type: cos_sim_spearman
value: 81.9103244827937
- type: euclidean_pearson
value: 80.99375176138985
- type: euclidean_spearman
value: 81.540250641079
- type: manhattan_pearson
value: 80.84979573396426
- type: manhattan_spearman
value: 81.3742591621492
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.82166001234197
- type: cos_sim_spearman
value: 86.81857495659123
- type: euclidean_pearson
value: 85.72798403202849
- type: euclidean_spearman
value: 85.70482438950965
- type: manhattan_pearson
value: 85.51579093130357
- type: manhattan_spearman
value: 85.41233705379751
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.48071151079803
- type: cos_sim_spearman
value: 65.37838108084044
- type: euclidean_pearson
value: 64.67378947096257
- type: euclidean_spearman
value: 65.39187147219869
- type: manhattan_pearson
value: 65.35487466133208
- type: manhattan_spearman
value: 65.51328499442272
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 82.64702367823314
- type: cos_sim_spearman
value: 82.49732953181818
- type: euclidean_pearson
value: 83.05996062475664
- type: euclidean_spearman
value: 82.28159546751176
- type: manhattan_pearson
value: 82.98305503664952
- type: manhattan_spearman
value: 82.18405771943928
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 78.5744649318696
- type: mrr
value: 93.35386291268645
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 52.093999999999994
- type: map_at_10
value: 61.646
- type: map_at_100
value: 62.197
- type: map_at_1000
value: 62.22800000000001
- type: map_at_3
value: 58.411
- type: map_at_5
value: 60.585
- type: mrr_at_1
value: 55.00000000000001
- type: mrr_at_10
value: 62.690999999999995
- type: mrr_at_100
value: 63.139
- type: mrr_at_1000
value: 63.166999999999994
- type: mrr_at_3
value: 60.111000000000004
- type: mrr_at_5
value: 61.778
- type: ndcg_at_1
value: 55.00000000000001
- type: ndcg_at_10
value: 66.271
- type: ndcg_at_100
value: 68.879
- type: ndcg_at_1000
value: 69.722
- type: ndcg_at_3
value: 60.672000000000004
- type: ndcg_at_5
value: 63.929
- type: precision_at_1
value: 55.00000000000001
- type: precision_at_10
value: 9.0
- type: precision_at_100
value: 1.043
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 23.555999999999997
- type: precision_at_5
value: 16.2
- type: recall_at_1
value: 52.093999999999994
- type: recall_at_10
value: 79.567
- type: recall_at_100
value: 91.60000000000001
- type: recall_at_1000
value: 98.333
- type: recall_at_3
value: 64.633
- type: recall_at_5
value: 72.68299999999999
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.83267326732673
- type: cos_sim_ap
value: 95.77995366495178
- type: cos_sim_f1
value: 91.51180311401306
- type: cos_sim_precision
value: 91.92734611503532
- type: cos_sim_recall
value: 91.10000000000001
- type: dot_accuracy
value: 99.63366336633663
- type: dot_ap
value: 88.53996286967461
- type: dot_f1
value: 81.06537530266343
- type: dot_precision
value: 78.59154929577464
- type: dot_recall
value: 83.7
- type: euclidean_accuracy
value: 99.82376237623762
- type: euclidean_ap
value: 95.53192209281187
- type: euclidean_f1
value: 91.19683481701286
- type: euclidean_precision
value: 90.21526418786692
- type: euclidean_recall
value: 92.2
- type: manhattan_accuracy
value: 99.82376237623762
- type: manhattan_ap
value: 95.55642082191741
- type: manhattan_f1
value: 91.16186693147964
- type: manhattan_precision
value: 90.53254437869822
- type: manhattan_recall
value: 91.8
- type: max_accuracy
value: 99.83267326732673
- type: max_ap
value: 95.77995366495178
- type: max_f1
value: 91.51180311401306
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 54.508462134213474
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.06549765184959
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.43129549466616
- type: mrr
value: 50.20613169510227
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.069516173193044
- type: cos_sim_spearman
value: 29.872498354017353
- type: dot_pearson
value: 28.80761257516063
- type: dot_spearman
value: 28.397422678527708
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.169
- type: map_at_10
value: 1.208
- type: map_at_100
value: 5.925
- type: map_at_1000
value: 14.427000000000001
- type: map_at_3
value: 0.457
- type: map_at_5
value: 0.716
- type: mrr_at_1
value: 64.0
- type: mrr_at_10
value: 74.075
- type: mrr_at_100
value: 74.303
- type: mrr_at_1000
value: 74.303
- type: mrr_at_3
value: 71.0
- type: mrr_at_5
value: 72.89999999999999
- type: ndcg_at_1
value: 57.99999999999999
- type: ndcg_at_10
value: 50.376
- type: ndcg_at_100
value: 38.582
- type: ndcg_at_1000
value: 35.663
- type: ndcg_at_3
value: 55.592
- type: ndcg_at_5
value: 53.647999999999996
- type: precision_at_1
value: 64.0
- type: precision_at_10
value: 53.2
- type: precision_at_100
value: 39.6
- type: precision_at_1000
value: 16.218
- type: precision_at_3
value: 59.333000000000006
- type: precision_at_5
value: 57.599999999999994
- type: recall_at_1
value: 0.169
- type: recall_at_10
value: 1.423
- type: recall_at_100
value: 9.049999999999999
- type: recall_at_1000
value: 34.056999999999995
- type: recall_at_3
value: 0.48700000000000004
- type: recall_at_5
value: 0.792
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.319
- type: map_at_10
value: 7.112
- type: map_at_100
value: 12.588
- type: map_at_1000
value: 14.056
- type: map_at_3
value: 2.8049999999999997
- type: map_at_5
value: 4.68
- type: mrr_at_1
value: 18.367
- type: mrr_at_10
value: 33.94
- type: mrr_at_100
value: 35.193000000000005
- type: mrr_at_1000
value: 35.193000000000005
- type: mrr_at_3
value: 29.932
- type: mrr_at_5
value: 32.279
- type: ndcg_at_1
value: 15.306000000000001
- type: ndcg_at_10
value: 18.096
- type: ndcg_at_100
value: 30.512
- type: ndcg_at_1000
value: 42.148
- type: ndcg_at_3
value: 17.034
- type: ndcg_at_5
value: 18.509
- type: precision_at_1
value: 18.367
- type: precision_at_10
value: 18.776
- type: precision_at_100
value: 7.02
- type: precision_at_1000
value: 1.467
- type: precision_at_3
value: 19.048000000000002
- type: precision_at_5
value: 22.041
- type: recall_at_1
value: 1.319
- type: recall_at_10
value: 13.748
- type: recall_at_100
value: 43.972
- type: recall_at_1000
value: 79.557
- type: recall_at_3
value: 4.042
- type: recall_at_5
value: 7.742
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.2282
- type: ap
value: 13.995763859570426
- type: f1
value: 54.08126256731344
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 57.64006791171477
- type: f1
value: 57.95841320748957
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 40.19267841788564
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.96614412588663
- type: cos_sim_ap
value: 67.75985678572738
- type: cos_sim_f1
value: 64.04661542276222
- type: cos_sim_precision
value: 60.406922357343305
- type: cos_sim_recall
value: 68.15303430079156
- type: dot_accuracy
value: 79.5732252488526
- type: dot_ap
value: 51.30562107572645
- type: dot_f1
value: 53.120759837177744
- type: dot_precision
value: 46.478037198258804
- type: dot_recall
value: 61.97889182058047
- type: euclidean_accuracy
value: 84.00786791440663
- type: euclidean_ap
value: 67.58930214486998
- type: euclidean_f1
value: 64.424821579775
- type: euclidean_precision
value: 59.4817958454322
- type: euclidean_recall
value: 70.26385224274406
- type: manhattan_accuracy
value: 83.87673600762949
- type: manhattan_ap
value: 67.4250981523309
- type: manhattan_f1
value: 64.10286658015808
- type: manhattan_precision
value: 57.96885001066781
- type: manhattan_recall
value: 71.68865435356201
- type: max_accuracy
value: 84.00786791440663
- type: max_ap
value: 67.75985678572738
- type: max_f1
value: 64.424821579775
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.41347459929368
- type: cos_sim_ap
value: 84.89261930113058
- type: cos_sim_f1
value: 77.13677607258877
- type: cos_sim_precision
value: 74.88581164358733
- type: cos_sim_recall
value: 79.52725592854944
- type: dot_accuracy
value: 86.32359219156285
- type: dot_ap
value: 79.29794992131094
- type: dot_f1
value: 72.84356337679777
- type: dot_precision
value: 67.31761478675462
- type: dot_recall
value: 79.35786880197105
- type: euclidean_accuracy
value: 88.33585593976791
- type: euclidean_ap
value: 84.73257641312746
- type: euclidean_f1
value: 76.83529582788195
- type: euclidean_precision
value: 72.76294052863436
- type: euclidean_recall
value: 81.3905143209116
- type: manhattan_accuracy
value: 88.3086894089339
- type: manhattan_ap
value: 84.66304891729399
- type: manhattan_f1
value: 76.8181650632165
- type: manhattan_precision
value: 73.6864436744219
- type: manhattan_recall
value: 80.22790267939637
- type: max_accuracy
value: 88.41347459929368
- type: max_ap
value: 84.89261930113058
- type: max_f1
value: 77.13677607258877
---
# bge-micro-v2
> Forked from https://huggingface.co/TaylorAI/bge-micro-v2 purely to ensure it remains available. See also [license](LICENSE).
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
Distilled in a 2-step training process (bge-micro was step 1) from `BAAI/bge-small-en-v1.5`.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
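The embeddings above are typically compared with cosine similarity for clustering or semantic search. A minimal, dependency-free sketch of that comparison (toy vectors stand in for real model output):

```python
import math

# Cosine similarity between two embedding vectors (toy values, not real model output)
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

emb1 = [0.1, 0.3, 0.5]
emb2 = [0.2, 0.1, 0.4]
score = cosine_similarity(emb1, emb2)
print(round(score, 4))
```

In practice you would pass the vectors returned by `model.encode(...)` instead of the toy lists.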
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
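The masked averaging performed by `mean_pooling` can be checked with a pure-Python sketch (toy 2-dimensional token embeddings; the padded third token is excluded by the attention mask):

```python
# Pure-Python equivalent of the masked mean pooling above (toy numbers, no torch)
def mean_pool(token_embeddings, attention_mask):
    dim = len(token_embeddings[0])
    summed = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:  # only real (non-padded) tokens contribute
            for i in range(dim):
                summed[i] += vec[i]
            count += 1
    return [s / max(count, 1) for s in summed]

# Two real tokens plus one padded token that the mask removes
pooled = mean_pool([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]], [1, 1, 0])
print(pooled)  # [2.0, 3.0]
```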
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
NBA55/llama2-7B-without-grade-epoch-04-new | NBA55 | 2024-02-15T12:36:10Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2024-02-15T12:35:54Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
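
Expressed as code, the settings above correspond roughly to the following `BitsAndBytesConfig` (a sketch — the exact `transformers` version used for training is not recorded here):

```python
import torch
from transformers import BitsAndBytesConfig

# Sketch of the quantization config listed above (nf4 4-bit, fp16 compute)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
```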
### Framework versions
- PEFT 0.4.0
|
btemirov/distill-whisper-jargon | btemirov | 2024-02-15T12:28:15Z | 60 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:distil-whisper/distil-small.en",
"base_model:finetune:distil-whisper/distil-small.en",
"license:mit",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-01-11T04:25:09Z | ---
license: mit
base_model: distil-whisper/distil-small.en
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: distill-whisper-jargon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distill-whisper-jargon
This model is a fine-tuned version of [distil-whisper/distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) on the [btemirov/fin-terms](https://huggingface.co/datasets/btemirov/fin-terms) dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4314
- Wer: 78.4173
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 500
- mixed_precision_training: Native AMP
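
As a sanity check, the total train batch size listed above is the per-device batch size multiplied by the gradient-accumulation steps:

```python
train_batch_size = 4
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 16
```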
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 3.8199 | 22.22 | 100 | 3.5227 | 79.3525 |
| 2.3504 | 44.44 | 200 | 3.7073 | 77.3022 |
| 1.4612 | 66.67 | 300 | 4.1042 | 78.6691 |
| 0.9713 | 88.89 | 400 | 4.3164 | 77.7698 |
| 0.7453 | 111.11 | 500 | 4.4314 | 78.4173 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
CatBarks/bertES_PosWeighted001_tokenizer | CatBarks | 2024-02-15T12:26:11Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-15T12:26:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CatBarks/bertES_PosWeighted001_model | CatBarks | 2024-02-15T12:26:10Z | 176 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-15T12:25:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IndiaBuild/GGUF_Navarna_v0_1_OpenHermes_Hindi | IndiaBuild | 2024-02-15T12:24:32Z | 3 | 0 | null | [
"gguf",
"llama.cpp",
"hindi",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-02-13T22:08:29Z | ---
tags:
- gguf
- llama.cpp
- hindi
---
## Navarna 7B GGUF version. Here is the original model: [TokenBender/Navarna_v0_1_OpenHermes_Hindi](https://huggingface.co/TokenBender/Navarna_v0_1_OpenHermes_Hindi) |
Meli101/sentence-classifier | Meli101 | 2024-02-15T12:21:41Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dmis-lab/biobert-v1.1",
"base_model:finetune:dmis-lab/biobert-v1.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-15T12:21:21Z | ---
base_model: dmis-lab/biobert-v1.1
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
model-index:
- name: sentence-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence-classifier
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3291
- Precision: 0.9236
- Recall: 0.9217
- Accuracy: 0.9219
- F1: 0.9221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:------:|
| No log | 1.0 | 154 | 0.3536 | 0.8783 | 0.8745 | 0.8747 | 0.8753 |
| No log | 2.0 | 308 | 0.2784 | 0.9132 | 0.9105 | 0.9105 | 0.9109 |
| No log | 3.0 | 462 | 0.2928 | 0.9189 | 0.9160 | 0.9162 | 0.9165 |
| 0.3402 | 4.0 | 616 | 0.3098 | 0.9239 | 0.9223 | 0.9227 | 0.9228 |
| 0.3402 | 5.0 | 770 | 0.3291 | 0.9236 | 0.9217 | 0.9219 | 0.9221 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
sravaniayyagari/new-model | sravaniayyagari | 2024-02-15T12:19:31Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-15T12:19:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LegoClipStars/Priscilla_Perez_RH | LegoClipStars | 2024-02-15T12:18:41Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:adapter:cagliostrolab/animagine-xl-3.0",
"license:cc-by-4.0",
"region:us"
]
| text-to-image | 2024-02-15T12:18:06Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: NEFT
parameters:
negative_prompt: High school student
output:
url: images/Priscilla_Perez_Main_Outfit.jpg
base_model: cagliostrolab/animagine-xl-3.0
instance_prompt: Please spare me
license: cc-by-4.0
---
# Priscilla_Perez_RH
<Gallery />
## Model description
Here's my RVC voice model of Priscilla Perez from Rainbow High season 4.
## Trigger words
You should use `Please spare me` to trigger the image generation.
## Download model
[Download](/LegoClipStars/Priscilla_Perez_RH/tree/main) them in the Files & versions tab.
|
nabilayumnan/emotion_classification | nabilayumnan | 2024-02-15T12:03:53Z | 179 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-02-15T11:27:39Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: emotion_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train[:800]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2936
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.5449 | 0.4562 |
| No log | 2.0 | 80 | 1.5041 | 0.4188 |
| No log | 3.0 | 120 | 1.3526 | 0.5375 |
| No log | 4.0 | 160 | 1.3390 | 0.5125 |
| No log | 5.0 | 200 | 1.2977 | 0.4875 |
| No log | 6.0 | 240 | 1.2655 | 0.525 |
| No log | 7.0 | 280 | 1.2572 | 0.5437 |
| No log | 8.0 | 320 | 1.2862 | 0.4875 |
| No log | 9.0 | 360 | 1.2907 | 0.5375 |
| No log | 10.0 | 400 | 1.2621 | 0.5125 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
mlath123/flan-t5-base-samsum | mlath123 | 2024-02-15T11:48:28Z | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-02-15T11:47:29Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-base-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-samsum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3707
- Rouge1: 47.3426
- Rouge2: 23.8703
- Rougel: 40.0537
- Rougelsum: 43.5879
- Gen Len: 17.2063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4525 | 1.0 | 1842 | 1.3837 | 46.3005 | 22.8797 | 39.0659 | 42.773 | 17.2149 |
| 1.3436 | 2.0 | 3684 | 1.3725 | 47.0672 | 23.547 | 39.8291 | 43.3576 | 17.1954 |
| 1.2821 | 3.0 | 5526 | 1.3708 | 47.2477 | 23.6592 | 39.7661 | 43.4389 | 17.2295 |
| 1.2307 | 4.0 | 7368 | 1.3707 | 47.3426 | 23.8703 | 40.0537 | 43.5879 | 17.2063 |
| 1.1985 | 5.0 | 9210 | 1.3762 | 47.4705 | 23.9801 | 40.0948 | 43.7244 | 17.2833 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
AntoineGourru/Mistral_qlora_drome_Rplusplus | AntoineGourru | 2024-02-15T11:47:35Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
]
| null | 2024-02-15T11:47:28Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
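For readers less familiar with this list format, the same settings can be written as the `quantization_config` dict that `transformers` stores in a model's `config.json`. This is a purely illustrative sketch; the exact serialization can vary between library versions.

```python
# Illustrative only: the bitsandbytes settings above as a plain dict,
# mirroring the "quantization_config" block transformers writes to config.json.
quantization_config = {
    "quant_method": "bitsandbytes",
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "bfloat16",
}

print(quantization_config["bnb_4bit_quant_type"])
```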
### Framework versions
- PEFT 0.7.0 |
aisuko/sft-microsoft-phi2-on-dialogsum | aisuko | 2024-02-15T11:46:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
]
| null | 2024-02-15T11:05:11Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: sft-microsoft-phi2-on-dialogsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-microsoft-phi2-on-dialogsum
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 100
- mixed_precision_training: Native AMP
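As a quick sanity check (an illustration, not part of the original card), the total train batch size listed above is simply the per-device batch size multiplied by the gradient accumulation steps:

```python
# Illustrative arithmetic for the hyperparameters listed above.
train_batch_size = 2
gradient_accumulation_steps = 5
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 10
```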
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4203 | 5.0 | 50 | 1.3966 |
| 1.2814 | 10.0 | 100 | 1.3639 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.1 |
DiptiPawar/t5_recommendation_sports_equipment_english | DiptiPawar | 2024-02-15T11:45:33Z | 91 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-02-15T09:53:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5_recommendation_sports_equipment_english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_recommendation_sports_equipment_english
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3614
- Rouge1: 63.8331
- Rouge2: 0.0
- Rougel: 63.8135
- Rougelsum: 63.8922
- Gen Len: 3.0177
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 0.96 | 6 | 7.0341 | 41.3666 | 0.0 | 41.2761 | 41.3863 | 3.4923 |
| No log | 1.96 | 12 | 2.9883 | 40.7910 | 0.0 | 40.6533 | 40.7615 | 3.0248 |
| No log | 2.96 | 18 | 0.7740 | 40.7320 | 0.0 | 40.6139 | 40.7320 | 3.0094 |
| No log | 3.96 | 24 | 0.6257 | 59.8583 | 0.0 | 59.8583 | 59.8583 | 3.0 |
| No log | 4.96 | 30 | 0.6243 | 59.8583 | 0.0 | 59.8583 | 59.8583 | 3.0 |
| No log | 5.96 | 36 | 0.4635 | 60.0945 | 0.0 | 59.9764 | 60.0945 | 3.0035 |
| No log | 6.96 | 42 | 0.3732 | 58.2841 | 0.0 | 58.1267 | 58.3038 | 3.1606 |
| No log | 7.96 | 48 | 0.3615 | 60.6749 | 0.0 | 60.5667 | 60.6848 | 3.0767 |
| No log | 8.96 | 54 | 0.3673 | 61.3144 | 0.0 | 61.1177 | 61.2948 | 3.0260 |
| No log | 9.96 | 60 | 0.3614 | 63.8331 | 0.0 | 63.8135 | 63.8922 | 3.0177 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.1.0+cu121
- Datasets 2.8.0
- Tokenizers 0.13.3
|
musiclang/musiclang-4k | musiclang | 2024-02-15T11:34:20Z | 94 | 16 | transformers | [
"transformers",
"onnx",
"safetensors",
"gpt2",
"text-generation",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-23T14:34:08Z | ---
license: gpl-3.0
widget:
- text: "CHORD_CHANGE"
example_title: "Predict from scratch"
---
MusicLang Predict model
=======================

MusicLang Predict is a generative AI model for creating original MIDI soundtracks.
It can be used for several use cases:
- Predict a new song from scratch (a fixed number of bars)
- Continue a song from a prompt
- Predict a new song from a template (see examples below)
- Continue a song from a prompt and a template
For template-based generation,
we provide an interface to create a template from an existing MIDI file.
To run predictions, we provide an inference package: [MusicLang Predict](https://github.com/MusicLang/musiclang_predict),
which is based on the MusicLang language: [MusicLang](https://github.com/MusicLang/musiclang).
Installation
------------
Install the musiclang-predict package with pip:
```bash
pip install musiclang-predict
```
How to use?
------------
1. Create a new two-bar song from scratch:
```python
from musiclang_predict import predict, MusicLangTokenizer
from transformers import GPT2LMHeadModel
# Load model and tokenizer
model = GPT2LMHeadModel.from_pretrained('musiclang/musiclang-4k')
tokenizer = MusicLangTokenizer('musiclang/musiclang-4k')
soundtrack = predict(model, tokenizer, chord_duration=4, nb_chords=2)
soundtrack.to_midi('song.mid', tempo=120, time_signature=(4, 4))
```
2. Use an existing MIDI song as a song-structure template:
```python
from musiclang_predict import midi_file_to_template, predict_with_template, MusicLangTokenizer
from transformers import GPT2LMHeadModel
# Load model and tokenizer
model = GPT2LMHeadModel.from_pretrained('musiclang/musiclang-4k')
tokenizer = MusicLangTokenizer('musiclang/musiclang-4k')
template = midi_file_to_template('my_song.mid')
soundtrack = predict_with_template(template, model, tokenizer)
soundtrack.to_midi('song.mid', tempo=template['tempo'], time_signature=template['time_signature'])
```
See [MusicLang templates](https://discovered-scabiosa-ea3.notion.site/Create-a-song-template-with-MusicLang-dfd8cad0a14b464fb3475c7fa19c1a82)
for a full description of our template format.
A template is simply a dictionary containing information for each chord of the song plus some metadata, such as the tempo.
You can even create your own without using a base MIDI file!
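For instance, a minimal hand-written template might look like the sketch below. The key names here are illustrative assumptions; consult the template documentation linked above for the actual schema.

```python
# Hypothetical template sketch -- key names are illustrative assumptions,
# not the confirmed MusicLang schema.
template = {
    "tempo": 120,                 # metadata: beats per minute
    "time_signature": (4, 4),     # metadata: time signature
    "chords": [
        {"duration": 4},          # one entry per chord of the song
        {"duration": 4},
    ],
}

# Such a dict could then be passed to predict_with_template(template, ...).
print(len(template["chords"]))
```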
3. Use both a prompt and a template to create a song:
```python
from musiclang_predict import midi_file_to_template, predict_with_template, MusicLangTokenizer
from transformers import GPT2LMHeadModel
from musiclang import Score
# Load model and tokenizer
model = GPT2LMHeadModel.from_pretrained('musiclang/musiclang-4k')
tokenizer = MusicLangTokenizer('musiclang/musiclang-4k')
template = midi_file_to_template('my_song.mid')
# Take the first chord of the template as a prompt
prompt = Score.from_midi('my_prompt.mid', chord_range=(0, 4))
soundtrack = predict_with_template(template, model, tokenizer,
prompt=prompt, # Prompt the model with a musiclang score
prompt_included_in_template=True # To say the prompt score is included in the template
)
soundtrack.to_midi('song.mid', tempo=template['tempo'], time_signature=template['time_signature'])
```
Contact us
----------
If you want to help shape the future of open-source music generation,
please contact [us](mailto:[email protected]).
License
-------
The MusicLang Predict package (this package) and its associated models are licensed under the GPL-3.0 License.
The MusicLang base language (musiclang package) is licensed under the BSD 3-Clause License. |
sunwooooong/distilbert-base-uncased-finetuned-emotion | sunwooooong | 2024-02-15T11:17:40Z | 95 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-01-31T15:16:38Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.926984518712486
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2213
- Accuracy: 0.927
- F1: 0.9270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.845 | 1.0 | 250 | 0.3299 | 0.9025 | 0.9003 |
| 0.2539 | 2.0 | 500 | 0.2213 | 0.927 | 0.9270 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
LuisCe/tobacco-multi-classification | LuisCe | 2024-02-15T11:15:53Z | 0 | 0 | fastai | [
"fastai",
"region:us"
]
| null | 2024-02-15T11:15:50Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
ferrazzipietro/Mistral-7B-Instruct-v0.2_adapters_en.layer1_8_16_32_0.05_2_0.0002 | ferrazzipietro | 2024-02-15T11:07:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-15T11:07:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vincevas/coze-stablelm-2-1_6b | vincevas | 2024-02-15T11:06:07Z | 9 | 0 | null | [
"gguf",
"base_model:stabilityai/stablelm-2-zephyr-1_6b",
"base_model:quantized:stabilityai/stablelm-2-zephyr-1_6b",
"region:us"
]
| null | 2024-02-15T10:56:03Z | ---
base_model: "stabilityai/stablelm-2-zephyr-1_6b"
---
This is a quantized version of the Stable LM 2 Zephyr 1.6B model; see the
[model card](https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b)
for a model description and license.
This quantized version has been generated from the
[model.safetensors](https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b/tree/main) weights file
using the [`Candle tensor-tools`](https://github.com/huggingface/candle/blob/main/candle-core/examples/tensor-tools.rs) application:
```shell
tensor-tools quantize --quantization q4_1 --out-file stablelm-2-zephyr-1_6b-Q4_1.gguf model.safetensors
```
|
Manish055/whisper.cpp | Manish055 | 2024-02-15T11:05:52Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2024-02-15T10:52:50Z | ---
license: mit
---
# OpenAI's Whisper models converted to ggml format
[Available models](https://huggingface.co/Manish055/whisper.cpp/tree/main)
| Model | Disk | Mem | SHA |
| ------- | ------ | ------- | ------------------------------------------ |
| tiny | 75 MB | ~390 MB | `bd577a113a864445d4c299885e0cb97d4ba92b5f` |
| tiny.en | 75 MB | ~390 MB | `c78c86eb1a8faa21b369bcd33207cc90d64ae9df` |
| base | 142 MB | ~500 MB | `465707469ff3a37a2b9b8d8f89f2f99de7299dac` |
| base.en | 142 MB | ~500 MB | `137c40403d78fd54d454da0f9bd998f78703390c` |
| small | 466 MB | ~1.0 GB | `55356645c2b361a969dfd0ef2c5a50d530afd8d5` |
|
haihuynh/ppo-LunarLanderv2 | haihuynh | 2024-02-15T11:00:03Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-15T10:59:42Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.19 +/- 21.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ferrazzipietro/Mistral-7B-Instruct-v0.2_adapters_en.layer1_8_16_32_0.05_2_0.0002_versionebfloat16 | ferrazzipietro | 2024-02-15T10:53:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-15T10:53:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tsavage68/chat_1000STEPS_1e6_03beta_DPO | tsavage68 | 2024-02-15T10:42:57Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T10:39:14Z | ---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: chat_1000STEPS_1e6_03beta_DPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chat_1000STEPS_1e6_03beta_DPO
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6804
- Rewards/chosen: -0.5183
- Rewards/rejected: -0.7327
- Rewards/accuracies: 0.5363
- Rewards/margins: 0.2144
- Logps/rejected: -21.2336
- Logps/chosen: -18.4723
- Logits/rejected: -0.6767
- Logits/chosen: -0.6766
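For context (an illustrative sketch, not the card's own code): in DPO, the reported rewards are beta times the difference between the policy and reference log-probabilities, with beta = 0.3 for this run. The log-probability values below are made up for demonstration.

```python
# Illustrative DPO reward arithmetic (beta = 0.3 as in this run);
# log-probability values are invented for demonstration only.
beta = 0.3

def dpo_reward(policy_logp, ref_logp):
    # reward = beta * (log pi_policy - log pi_ref)
    return beta * (policy_logp - ref_logp)

reward_chosen = dpo_reward(-18.47, -16.74)
reward_rejected = dpo_reward(-21.23, -18.79)
margin = reward_chosen - reward_rejected  # rewards/margins analogue
```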
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6885 | 0.2 | 100 | 0.6933 | -0.2467 | -0.2660 | 0.4637 | 0.0193 | -19.6779 | -17.5670 | -0.6067 | -0.6066 |
| 0.683 | 0.39 | 200 | 0.6859 | 0.0215 | -0.0664 | 0.4923 | 0.0879 | -19.0127 | -16.6730 | -0.6150 | -0.6148 |
| 0.6033 | 0.59 | 300 | 0.6999 | -0.1969 | -0.2977 | 0.4791 | 0.1009 | -19.7837 | -17.4008 | -0.6311 | -0.6309 |
| 0.6812 | 0.78 | 400 | 0.6942 | -0.0785 | -0.2126 | 0.4813 | 0.1340 | -19.4998 | -17.0064 | -0.6041 | -0.6039 |
| 0.6633 | 0.98 | 500 | 0.6789 | -0.1266 | -0.2799 | 0.5077 | 0.1533 | -19.7242 | -17.1665 | -0.5557 | -0.5555 |
| 0.2615 | 1.17 | 600 | 0.6788 | -0.4082 | -0.6084 | 0.5253 | 0.2002 | -20.8192 | -18.1052 | -0.6281 | -0.6279 |
| 0.3175 | 1.37 | 700 | 0.6809 | -0.4980 | -0.7087 | 0.5297 | 0.2107 | -21.1536 | -18.4046 | -0.6655 | -0.6653 |
| 0.2805 | 1.56 | 800 | 0.6794 | -0.5125 | -0.7293 | 0.5341 | 0.2169 | -21.2224 | -18.4529 | -0.6754 | -0.6753 |
| 0.3255 | 1.76 | 900 | 0.6807 | -0.5148 | -0.7297 | 0.5385 | 0.2149 | -21.2235 | -18.4605 | -0.6768 | -0.6766 |
| 0.2966 | 1.95 | 1000 | 0.6804 | -0.5183 | -0.7327 | 0.5363 | 0.2144 | -21.2336 | -18.4723 | -0.6767 | -0.6766 |
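The reward and margin columns above are the standard DPO quantities: beta-scaled log-probability ratios between the policy and the frozen reference model. A minimal sketch of how they relate (the `beta=0.3` default below is an assumption read off the model name, not stated in the card):

```python
import math

def dpo_stats(pi_logp_chosen, pi_logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.3):
    # "Rewards" logged during DPO training are the beta-scaled log-prob
    # ratios of the policy against the reference model.
    reward_chosen = beta * (pi_logp_chosen - ref_logp_chosen)
    reward_rejected = beta * (pi_logp_rejected - ref_logp_rejected)
    margin = reward_chosen - reward_rejected
    # DPO loss for one pair: -log(sigmoid(margin))
    loss = -math.log(1.0 / (1.0 + math.exp(-margin)))
    return reward_chosen, reward_rejected, margin, loss
```

A positive margin means the policy prefers the chosen response more strongly than the reference does, which drives the loss below `log(2)`.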
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.0+cu117
- Datasets 2.17.0
- Tokenizers 0.15.2
|
warmestman/whisper-larger-v3-mn-2000steps | warmestman | 2024-02-15T10:41:51Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"hf-asr-leaderboard",
"mn",
"dataset:mozilla-foundation/common_voice_16_1",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-02-15T09:49:29Z | ---
library_name: transformers
tags:
- whisper-event
- hf-asr-leaderboard
license: mit
datasets:
- mozilla-foundation/common_voice_16_1
language:
- mn
pipeline_tag: automatic-speech-recognition
---
# Model Card for Model ID
GPU - A100-80GB
## Model Details
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the mozilla-foundation/common_voice_16_1 dataset.
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Ankhbayasgalan Davaadorj
- **Model type:** Whisper
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model:** openai/whisper-large-v3
#### Training Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-03
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
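A rough sketch of the `linear` scheduler with warmup listed above (ramp to the peak rate over 500 steps, then linear decay to zero at step 2000); this mirrors the shape of transformers' linear schedule, though exact endpoint handling may differ:

```python
def linear_lr(step, max_lr=1e-3, warmup_steps=500, total_steps=2000):
    # Linear warmup from 0 to max_lr, then linear decay to 0 at total_steps.
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    return max_lr * max(0, total_steps - step) / (total_steps - warmup_steps)
```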
### Results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4856 | 1.97 | 1000 | 0.496397 |
| 0.1312 | 3.94 | 2000 | 0.395565 |
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** A100 80GB
- **Hours used:** 1:07:08 hours
## Model Card Authors
@Ankhbayasgalan davaadorj
## Model Card Contact
[email protected] |
leftyjoy/my-luk-dog | leftyjoy | 2024-02-15T10:40:48Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-02-15T10:31:42Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Luk-Dog Dreambooth model trained by leftyjoy following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: JPCE-083
Sample pictures of this concept:


|
gabrielbenabou/ppo-LunarLander-v2 | gabrielbenabou | 2024-02-15T10:38:44Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-15T07:52:47Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.11 +/- 20.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
EvaKlimentova/knots_protbertBFD_alphafold | EvaKlimentova | 2024-02-15T10:38:12Z | 98 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"dataset:EvaKlimentova/knots_AF",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-08T09:46:55Z | ---
datasets:
- EvaKlimentova/knots_AF
---
# M1 - finetuned ProtBert-BFD
The model is trained on [knots_AF dataset](https://huggingface.co/datasets/EvaKlimentova/knots_AF)
The accuracy on the test set is ~ 0.9848
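The TPR/TNR columns below are the usual confusion-matrix rates, assuming knotted proteins are the positive class; a quick reference sketch:

```python
def rates(tp, fp, tn, fn):
    # TPR (sensitivity): fraction of knotted proteins correctly flagged.
    # TNR (specificity): fraction of unknotted proteins correctly flagged.
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    return accuracy, tpr, tnr
```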
| M1 ProtBert BFD | Dataset size | Unknotted set size | Accuracy | TPR | TNR |
|:----------------------------:|:------------:|:------------------:|:--------:|:------:|:-------:|
| All | 39412 | 19718 | 0.9848 | 0.9871 | 0.9826 |
| SPOUT | 7371 | 550 | 0.9905 | 0.9963 | 0.9182 |
| TDD | 612 | 24 | 0.9918 | 0.9966 | 0.8750 |
| DUF | 736 | 429 | 0.97905 | 0.9826 | 0.9767 |
| AdoMet synthase | 1794 | 240 | 0.9939 | 0.9968 | 0.9750 |
| Carbonic anhydrase | 1531 | 539 | 0.9556 | 0.9718 | 0.9258 |
| UCH | 477 | 125 | 0.9099 | 0.9631 | 0.7600 |
| ATCase/OTCase | 3799 | 3352 | 0.9992 | 0.9955 | 0.9997 |
| ribosomal-mitochondrial | 147 | 41 | 0.8912 | 0.9906 | 0.63412 |
| membrane | 8309 | 1577 | 0.9791 | 0.9895 | 0.9347 |
| VIT | 14347 | 12639 | 0.9873 | 0.9415 | 0.9935 |
| biosynthesis of lantibiotics | 392 | 286 | 0.9719 | 0.9811 | 0.9685 |
| PGluconate dehydrogenase | 1 | 0 | 1.0 | 1.0 | — | |
kouki13/facebook4 | kouki13 | 2024-02-15T10:29:16Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"autotrain",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T10:25:38Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
kouki13/facebook2 | kouki13 | 2024-02-15T10:27:27Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"autotrain",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T09:52:27Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
kouki13/facebook3 | kouki13 | 2024-02-15T10:23:33Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"autotrain",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T10:14:17Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
cataluna84/pixel_peft_model-new | cataluna84 | 2024-02-15T10:20:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-15T10:20:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DT12the/Math-Mixtral-7B | DT12the | 2024-02-15T10:12:19Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:meta-math/MetaMath-Mistral-7B",
"base_model:merge:meta-math/MetaMath-Mistral-7B",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:merge:mistralai/Mistral-7B-Instruct-v0.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T10:09:14Z | ---
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- meta-math/MetaMath-Mistral-7B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as a base.
### Models Merged
The following models were included in the merge:
* [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-Instruct-v0.2
# No additional parameters needed for the base model
- model: meta-math/MetaMath-Mistral-7B
parameters:
density: 0.7 # A higher density for MetaMath to prioritize its parameters for math questions
weight: 0.7 # Higher weight to MetaMath to ensure its influence on math-related answers is strong
merge_method: ties
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
normalize: true
dtype: float16
```
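In TIES, `density` controls how much of each task vector survives trimming and `weight` rescales what remains before it is added back onto the base model. A toy single-donor sketch on flat parameter lists (with only one donor model there is no sign-election step to resolve):

```python
def ties_merge_1d(base, tuned, density=0.7, weight=0.7):
    # Task vector: parameter delta between the fine-tuned and base model.
    tau = [t - b for t, b in zip(tuned, base)]
    # Trim: keep only the top-`density` fraction of entries by magnitude.
    k = max(1, int(density * len(tau)))
    threshold = sorted((abs(x) for x in tau), reverse=True)[k - 1]
    trimmed = [x if abs(x) >= threshold else 0.0 for x in tau]
    # Rescale by `weight` and add back onto the base parameters.
    return [b + weight * x for b, x in zip(base, trimmed)]
```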
|
hugo-massonnat/poca-SoccerTwos | hugo-massonnat | 2024-02-15T10:09:23Z | 0 | 0 | ml-agents | [
"ml-agents",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2024-02-15T10:09:01Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: hugo-massonnat/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
aghorbani/bank-tx-cat-opt-125m | aghorbani | 2024-02-15T09:58:01Z | 94 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-15T09:57:53Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [facebook/opt-125m](https://huggingface.co/facebook/opt-125m)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.36.1
```
Also make sure you are providing your Hugging Face token if the model is in a private repo.
- You can log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
You will also need to download the classification head, either manually, or by running the following code:
```python
from huggingface_hub import hf_hub_download
model_name = "aghorbani/bank-tx-cat-opt-125m" # either local folder or huggingface model name
hf_hub_download(repo_id=model_name, filename="classification_head.pth", local_dir="./")
```
You can make classification predictions by following the example below:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "aghorbani/bank-tx-cat-opt-125m" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "How are you?"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
).cuda().eval()
head_weights = torch.load("classification_head.pth", map_location="cuda")
# settings can be arbitrary here as we overwrite with saved weights
head = torch.nn.Linear(1, 1, bias=False).to("cuda")
head.weight.data = head_weights
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
out = model(**inputs).logits
logits = head(out[:,-1])
print(logits)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```.
## Model Architecture
```
OPTForCausalLM(
(model): OPTModel(
(decoder): OPTDecoder(
(embed_tokens): Embedding(50272, 768, padding_idx=1)
(embed_positions): OPTLearnedPositionalEmbedding(2050, 768)
(final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(layers): ModuleList(
(0-11): 12 x OPTDecoderLayer(
(self_attn): OPTAttention(
(k_proj): Linear(in_features=768, out_features=768, bias=True)
(v_proj): Linear(in_features=768, out_features=768, bias=True)
(q_proj): Linear(in_features=768, out_features=768, bias=True)
(out_proj): Linear(in_features=768, out_features=768, bias=True)
)
(activation_fn): ReLU()
(self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(fc1): Linear(in_features=768, out_features=3072, bias=True)
(fc2): Linear(in_features=3072, out_features=768, bias=True)
(final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
)
)
)
)
(lm_head): Linear(in_features=768, out_features=50272, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
Doniaa/distilroberta-base-finetuned-wikitext2 | Doniaa | 2024-02-15T09:57:10Z | 33 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T09:49:17Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 425 | 0.0087 |
| 0.5768 | 2.0 | 850 | 0.0035 |
| 0.0087 | 3.0 | 1275 | 0.0020 |
| 0.0039 | 4.0 | 1700 | 0.0006 |
| 0.0023 | 5.0 | 2125 | 0.0005 |
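Since the eval loss is a mean token cross-entropy (in nats), it converts directly to perplexity via `exp(loss)`; the final loss of 0.0005 corresponds to a perplexity of about 1.0005:

```python
import math

def perplexity(cross_entropy_loss):
    # For a language model, eval loss is mean token cross-entropy,
    # so perplexity is simply its exponential.
    return math.exp(cross_entropy_loss)
```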
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
konz00/EvilxEchidna-7b-GGUF | konz00 | 2024-02-15T09:43:26Z | 25 | 1 | transformers | [
"transformers",
"gguf",
"text-generation",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T07:34:38Z | ---
library_name: transformers
pipeline_tag: text-generation
---
GGUF version for [Test157t/EvilxEchidna-7b](https://huggingface.co/Test157t/EvilxEchidna-7b) |
Doniaa/trial512 | Doniaa | 2024-02-15T09:42:42Z | 33 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T09:34:23Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 425 | 0.0087 |
| 0.5768 | 2.0 | 850 | 0.0035 |
| 0.0087 | 3.0 | 1275 | 0.0020 |
| 0.0039 | 4.0 | 1700 | 0.0006 |
| 0.0023 | 5.0 | 2125 | 0.0005 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
zidnikh000/Belajar | zidnikh000 | 2024-02-15T09:36:37Z | 0 | 0 | null | [
"text-classification",
"id",
"dataset:teknium/OpenHermes-2.5",
"region:us"
]
| text-classification | 2024-02-15T09:35:21Z | ---
datasets:
- teknium/OpenHermes-2.5
language:
- id
metrics:
- accuracy
pipeline_tag: text-classification
--- |
aghanim1/arttherapy | aghanim1 | 2024-02-15T09:33:26Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2024-02-15T08:11:47Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: <lora:art_therapy_v1:1> art_therapy, monochrome, lineart, flowers
parameters:
negative_prompt: color
output:
url: images/IMG_0044.PNG
- text: <lora:art_therapy_v1:0.8> flying falcon, detailed, monochrome, lineart
parameters:
negative_prompt: bad art, bad quality
output:
url: images/Falcon.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Lineart
---
# Art Therapy
<Gallery />
## Model description
This model is based on art therapy coloring images. Euler a sampling method produces the best results with this model.
## Trigger words
You should use `Lineart` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/aghanim1/arttherapy/tree/main) them in the Files & versions tab.
|
hiendang7613/xlmr-lstm-crf-resume-ner3 | hiendang7613 | 2024-02-15T09:28:17Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:fcv_dataset",
"base_model:hiendang7613/xlmr-lstm-crf-resume-ner3",
"base_model:finetune:hiendang7613/xlmr-lstm-crf-resume-ner3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-02-15T02:53:17Z | ---
license: mit
base_model: hiendang7613/xlmr-lstm-crf-resume-ner3
tags:
- generated_from_trainer
datasets:
- fcv_dataset
model-index:
- name: xlmr-lstm-crf-resume-ner3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-lstm-crf-resume-ner3
This model is a fine-tuned version of [hiendang7613/xlmr-lstm-crf-resume-ner3](https://huggingface.co/hiendang7613/xlmr-lstm-crf-resume-ner3) on the fcv_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
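The `total_train_batch_size: 128` above is derived rather than set directly; a minimal sketch of the relationship (assuming single-device training, since no device count is reported):

```python
def total_train_batch_size(per_device_batch, grad_accum_steps, num_devices=1):
    # Gradients are accumulated over `grad_accum_steps` micro-batches before
    # each optimizer step, so the effective batch size is the product below.
    return per_device_batch * grad_accum_steps * num_devices
```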
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
Lalith16/Zephyr-Largedataset-2Epoch-CCApp | Lalith16 | 2024-02-15T09:24:16Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:finetune:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
]
| null | 2024-02-15T09:23:32Z | ---
license: mit
base_model: HuggingFaceH4/zephyr-7b-beta
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6622
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5641 | 0.14 | 100 | 1.4610 |
| 1.1695 | 0.28 | 200 | 1.0388 |
| 1.0319 | 0.42 | 300 | 0.9440 |
| 0.905 | 0.56 | 400 | 0.8829 |
| 0.8655 | 0.7 | 500 | 0.8225 |
| 0.8329 | 0.85 | 600 | 0.8042 |
| 0.85 | 0.99 | 700 | 0.7728 |
| 0.7348 | 1.13 | 800 | 0.7426 |
| 0.6723 | 1.27 | 900 | 0.7197 |
| 0.6791 | 1.41 | 1000 | 0.6933 |
| 0.6576 | 1.55 | 1100 | 0.6864 |
| 0.6863 | 1.69 | 1200 | 0.6731 |
| 0.6328 | 1.83 | 1300 | 0.6652 |
| 0.6264 | 1.97 | 1400 | 0.6622 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
Eunju2834/aicomment_kogpt2 | Eunju2834 | 2024-02-15T09:14:50Z | 93 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"kogpt2",
"comment generation",
"ko",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T08:56:32Z | ---
language:
- ko
tags:
- kogpt2
- comment generation
--- |
Balu94pratap/my_awesome_distil_huner_model | Balu94pratap | 2024-02-15T09:05:24Z | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:transformer_dataset_ner_kaggle",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-02-14T09:16:35Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- transformer_dataset_ner_kaggle
model-index:
- name: my_awesome_distil_huner_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_distil_huner_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the transformer_dataset_ner_kaggle dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.17.0
- Tokenizers 0.15.1
|
itsyasin2002ai/Yaseen-finetuned-kde4-en-to-fr | itsyasin2002ai | 2024-02-15T08:52:47Z | 124 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2024-02-15T07:27:28Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: Yaseen-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Yaseen-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
sravaniayyagari/new-finetuned-model | sravaniayyagari | 2024-02-15T08:52:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-02-14T10:36:07Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Deci/DeciDiffusion-v1-0 | Deci | 2024-02-15T08:50:19Z | 42 | 139 | diffusers | [
"diffusers",
"safetensors",
"Deci AI",
"DeciDiffusion",
"text-to-image",
"en",
"dataset:laion/laion-art",
"dataset:laion/laion2B-en",
"arxiv:2202.00512",
"arxiv:2305.08891",
"arxiv:2102.09672",
"arxiv:2303.09556",
"arxiv:1904.00962",
"arxiv:1803.07474",
"arxiv:2307.01952",
"arxiv:1911.07023",
"arxiv:2001.03653",
"arxiv:2206.10789",
"license:openrail++",
"diffusers:DeciDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-13T12:08:18Z | ---
pipeline_tag: text-to-image
inference: true
license: openrail++
language:
- en
tags:
- Deci AI
- DeciDiffusion
datasets:
- laion/laion-art
- laion/laion2B-en
---
# DeciDiffusion 1.0
DeciDiffusion 1.0 is an 820 million parameter text-to-image latent diffusion model trained on the LAION-v2 dataset and fine-tuned on the LAION-ART dataset. Advanced training techniques were used to speed up training, improve training performance, and achieve better inference quality.
## Model Details
- **Developed by:** Deci
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s) (NLP):** English
- **Code License:** The code in this repository is released under the [Apache 2.0 License](https://huggingface.co/Deci/DeciDiffusion-v1-0/blob/main/LICENSE-MODEL.md)
- **Weights License:** The weights are released under the [CreativeML Open RAIL++-M License](https://huggingface.co/Deci/DeciDiffusion-v1-0/blob/main/LICENSE-WEIGHTS.md)
### Model Sources
- **Blog:** [A technical overview and comparison to Stable Diffusion 1.5](https://deci.ai/blog/decidiffusion-1-0-3x-faster-than-stable-diffusion-same-quality/?utm_campaign=repos&utm_source=hugging-face&utm_medium=model-card&utm_content=decidiffusion-v1)
- **Demo:** [Experience DeciDiffusion in action](https://huggingface.co/spaces/Deci/DeciDiffusion-v1-0)
## Model Architecture
DeciDiffusion 1.0 is a diffusion-based text-to-image generation model. While it maintains foundational architecture elements from Stable Diffusion, such as the Variational Autoencoder (VAE) and CLIP's pre-trained Text Encoder, DeciDiffusion introduces significant enhancements. The primary innovation is the substitution of U-Net with the more efficient U-Net-NAS, a design pioneered by Deci. This novel component streamlines the model by reducing the number of parameters, leading to superior computational efficiency.
## Training Details
### Training Procedure
The model was trained in 4 phases:
- **Phase 1:** Trained from scratch 1.28 million steps at resolution 256x256 on a 320 million sample subset of LAION-v2.
- **Phase 2:** Trained for 870k steps at resolution 512x512 on the same dataset to learn more fine-detailed information.
- **Phase 3:** Trained 65k steps with EMA, a different learning rate scheduler, and more "qualitative" data.
- **Phase 4:** Fine-tuning on a 2M sample subset of LAION-ART.
### Training Techniques
DeciDiffusion 1.0 was trained to be sample efficient, i.e. to produce high-quality results using fewer diffusion timesteps during inference.
The following training techniques were used to that end:
- **[V-prediction](https://arxiv.org/pdf/2202.00512.pdf)**
- **[Enforcing zero terminal SNR during training](https://arxiv.org/pdf/2305.08891.pdf)**
- **[Employing a cosine variance schedule](https://arxiv.org/pdf/2102.09672.pdf)**
- **[Using a Min-SNR loss weighting strategy](https://arxiv.org/abs/2303.09556)**
- **[Employing Rescale Classifier-Free Guidance during inference](https://arxiv.org/pdf/2305.08891.pdf)**
- **[Sampling from the last timestep](https://arxiv.org/pdf/2305.08891.pdf)**
- **[Utilizing LAMB optimizer with large batch](https://arxiv.org/abs/1904.00962)**
The following techniques were used to shorten training time:
- **Using precomputed VAE and CLIP latents**
- **Using EMA only in the last phase of training**
### Additional Details
#### Phase 1
- **Hardware:** 8 x 8 x A100 (80gb)
- **Optimizer:** AdamW
- **Batch:** 8192
- **Learning rate:** 1e-4
#### Phases 2-4
- **Hardware:** 8 x 8 x H100 (80gb)
- **Optimizer:** LAMB
- **Batch:** 6144
- **Learning rate:** 5e-3
## Evaluation
On average, DeciDiffusion’s generated images after 30 iterations achieve comparable Frechet Inception Distance (FID) scores to those generated by Stable Diffusion 1.5 after 50 iterations.
However, many recent articles question the reliability of FID scores, warning that FID results [tend to be fragile](https://huggingface.co/docs/diffusers/conceptual/evaluation), that they are [inconsistent with human judgments on MNIST](https://arxiv.org/pdf/1803.07474.pdf) and [subjective evaluation](https://arxiv.org/pdf/2307.01952.pdf), that they are [statistically biased](https://arxiv.org/pdf/1911.07023.pdf), and that they [give better scores](https://arxiv.org/pdf/2001.03653.pdf) to memorization of the dataset than to generalization beyond it.
Given this skepticism about FID’s reliability, we chose to assess DeciDiffusion 1.0's sample efficiency by performing a user study against Stable Diffusion 1.5. Our source for image captions was the [PartiPrompts](https://arxiv.org/pdf/2206.10789.pdf) benchmark, which was introduced to compare large text-to-image models on various challenging prompts.
For our study we chose 10 random prompts and for each prompt generated 3 images
by Stable Diffusion 1.5 configured to run for 50 iterations and 3 images by DeciDiffusion configured to run for 30 iterations.
We then presented 30 side by side comparisons to a group of professionals, who voted based on adherence to the prompt and aesthetic value.
According to the results, DeciDiffusion at 30 iterations exhibits an edge in aesthetics, but when it comes to prompt alignment, it’s on par with Stable Diffusion at 50 iterations.
The following table summarizes our survey results:
|Answer| Better image aesthetics | Better prompt alignment |
|:----------|:----------|:----------|
| DeciDiffusion 1.0 30 Iterations | 41.1% | 20.8% |
| StableDiffusion v1.5 50 Iterations | 30.5% |18.8% |
| On Par | 26.3% |39.1% |
| Neither | 2.1% | 11.4%|
## Runtime Benchmarks
The following tables provide an image latency comparison between DeciDiffusion 1.0 and Stable Diffusion v1.5.
DeciDiffusion 1.0 vs. Stable Diffusion v1.5 at FP16 precision
|Inference Tool + Iterations| DeciDiffusion 1.0 on A10 (seconds/image) | Stable Diffusion v1.5 on A10 (seconds/image) |
|:----------|:----------|:----------|
| Pytorch 50 Iterations | 2.11 | 2.95 |
| Infery 50 Iterations | 1.55 |2.08 |
| Pytorch 35 Iterations | 1.52 |- |
| Infery 35 Iterations | 1.07 | -|
| Pytorch 30 Iterations | 1.29 | -|
| Infery 30 Iterations | 0.98 | - |
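The often-quoted "3x faster than Stable Diffusion" figure can be read off this table by comparing Stable Diffusion v1.5 at 50 PyTorch iterations with DeciDiffusion at 30 Infery iterations (the pairing used throughout this card):

```python
# Seconds/image values taken directly from the latency table above.
sd_v15_pytorch_50 = 2.95         # Stable Diffusion v1.5, PyTorch, 50 iterations
decidiffusion_infery_30 = 0.98   # DeciDiffusion 1.0, Infery, 30 iterations

speedup = sd_v15_pytorch_50 / decidiffusion_infery_30
print(round(speedup, 2))  # 3.01, i.e. roughly 3x
```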
## How to Use
```python
# pip install diffusers transformers torch
from diffusers import StableDiffusionPipeline
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
checkpoint = "Deci/DeciDiffusion-v1-0"
pipeline = StableDiffusionPipeline.from_pretrained(checkpoint, custom_pipeline=checkpoint, torch_dtype=torch.float16)
pipeline.unet = pipeline.unet.from_pretrained(checkpoint, subfolder='flexible_unet', torch_dtype=torch.float16)
pipeline = pipeline.to(device)
img = pipeline(prompt=['A photo of an astronaut riding a horse on Mars']).images[0]
```
# Uses
### Misuse, Malicious Use, and Out-of-Scope Use
The model must not be employed to deliberately produce or spread images that foster hostile or unwelcoming settings for individuals. This encompasses generating visuals that might be predictably upsetting, distressing, or inappropriate, as well as content that perpetuates existing or historical biases.
#### Out-of-Scope Use
The model isn't designed to produce accurate or truthful depictions of people or events. Thus, using it for such purposes exceeds its intended capabilities.
#### Misuse and Malicious Use
Misusing the model to produce content that harms or maligns individuals is strictly discouraged. Such misuses include, but aren't limited to:
- Creating offensive, degrading, or damaging portrayals of individuals, their cultures, religions, or surroundings.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Deliberately endorsing or disseminating prejudiced content or harmful stereotypes.
- Posing as someone else without their agreement.
- Generating explicit content without the knowledge or agreement of potential viewers.
- Distributing copyrighted or licensed content against its usage terms.
- Sharing modified versions of copyrighted or licensed content in breach of its usage guidelines.
## Limitations and Bias
### Limitations
The model has certain limitations and may not function optimally in the following scenarios:
- It doesn't produce completely photorealistic images.
- Rendering legible text is beyond its capability.
- Complex compositions, like visualizing “A green sphere to the left of a blue square”, are challenging for the model.
- Generation of faces and human figures may be imprecise.
- It is primarily optimized for English captions and might not be as effective with other languages.
- The autoencoding component of the model is lossy.
### Bias
The remarkable abilities of image generation models can unintentionally amplify societal biases. DeciDiffusion was mainly trained on subsets of LAION-v2, focused on English descriptions. Consequently, non-English communities and cultures might be underrepresented, leading to a bias towards white and western norms. Outputs from non-English prompts are notably less accurate. Given these biases, users should approach DeciDiffusion with discretion, regardless of input.
## How to Cite
Please cite this model using this format.
```bibtex
@misc{DeciFoundationModels,
title = {DeciDiffusion 1.0},
author = {DeciAI Research Team},
year = {2023}
url={[https://huggingface.co/deci/decidiffusion-v1-0](https://huggingface.co/deci/decidiffusion-v1-0)},
}
``` |
Deci/DeciLM-6b-instruct | Deci | 2024-02-15T08:49:02Z | 193 | 133 | transformers | [
"transformers",
"safetensors",
"text-generation",
"Deci AI",
"DeciLM",
"Instruction",
"custom_code",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:Open-Orca/OpenOrca",
"license:llama2",
"license:other",
"model-index",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-09-13T07:21:13Z | ---
license: [llama2, other]
datasets:
- cerebras/SlimPajama-627B
- Open-Orca/OpenOrca
language:
- en
tags:
- Deci AI
- DeciLM
- Instruction
model-index:
- name: DeciLM 6B
results:
- task:
type: text-generation
dataset:
type: ai2/arc
name: ai2_arc
metrics:
- name: ARC Challenge
type: ARC Challenge
value: 43.43
verified: false
- task:
type: text-generation
dataset:
type: ai2/arc
name: ai2_arc
metrics:
- name: ARC Easy
type: ARC Easy
value: 70.58
verified: false
- task:
type: text-generation
dataset:
type: boolq
name: boolq
metrics:
- name: BoolQ
type: BoolQ
value: 77.34
verified: false
- task:
type: text-generation
dataset:
type: hellaswag
name: hellaswag
metrics:
- name: HellaSwag
type: HellaSwag
value: 74.57
verified: false
- task:
type: text-generation
dataset:
type: LAMBDA
name: OpenAI LAMBDA
metrics:
- name: LAMBDA
type: LAMBDA
value: 70.1
verified: false
- task:
type: text-generation
dataset:
type: OpenBookQA
name: openbookqa
metrics:
- name: OpenBookQA
type: OpenBookQA
value: 33
verified: false
- task:
type: text-generation
dataset:
type: PIQA
name: piqa
metrics:
- name: PIQA
type: PIQA
value: 77.52
verified: false
- task:
type: text-generation
dataset:
type: truthful_qa
name: truthful_qa
metrics:
- name: TruthfulQA
type: TruthfulQA
value: 43.89
verified: false
- task:
type: text-generation
dataset:
type: winogrande
name: winogrande
metrics:
- name: Winogrande
type: Winogrande
value: 67.64
verified: false
---
# DeciLM 6B-Instruct
DeciLM 6B-Instruct is a model for short-form instruction following. It is built by LoRA fine-tuning [DeciLM 6B](https://huggingface.co/Deci/DeciLM-6b) on a subset of the [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca).
- **Developed by:** Deci
- **Model type:** DeciLM is an auto-regressive language model using an optimized transformer decoder architecture that includes variable Grouped-Query Attention.
- **Language(s) (NLP):** English
- **License:** [Llama 2 Community License Agreement](https://huggingface.co/Deci/DeciLM-6b-instruct/blob/main/LICENSE.md) with an extension by Deci regarding hosting service providers.
### Model Sources
- **Paper:** [DeciLM 6B Technical Blog](https://deci.ai/blog/decilm-15-times-faster-than-llama2-nas-generated-llm-with-variable-gqa/?utm_campaign=repos&utm_source=hugging-face&utm_medium=model-card&utm_content=decilm-6b-instruct)
- **Demo:** [DeciLM 6B-Instruct Demo](https://huggingface.co/spaces/Deci/DeciLM-6b-instruct)
- **Notebook:** [DeciLM 6B-Instruct Notebook](https://bit.ly/decilm-instruct-nb)
## Uses
The model is intended for commercial and research use in English and can be fine-tuned for use in other languages.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# pip install -q transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "Deci/DeciLM-6b-instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, trust_remote_code=True).to(device)
inputs = tokenizer.encode("How do I make french toast? Think through it step by step", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0]))
```
## Training Details
DeciLM 6B was trained on the SlimPajama dataset using advanced proprietary methodologies that allow for fast training. DeciLM 6B was further fine-tuned on a subset of the OpenOrca dataset, giving rise to DeciLM 6B-Instruct.
## Evaluation
Below are DeciLM 6B-Instruct's evaluation results.
| Average | ARC Challenge* | ARC Easy* | BoolQ | HellaSwag* | LAMBDA OpenAI | OpenBookQA | PIQA | TruthfulQA | Winogrande |
|:----------|:----------|:----------|:----------|:----------|:----------|:----------|:----------|:----------|:----------|
| 62.01 | 43.43 | 70.58 | 77.34 | 74.57 | 70.1 | 33 | 77.52 | 43.89 | 67.64 |
Accuracy-norm score*
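The reported average is the plain mean of the nine benchmark scores (using the ARC Challenge score of 43.43 from the model-index metadata above); a quick check:

```python
# Benchmark scores: ARC Challenge, ARC Easy, BoolQ, HellaSwag, LAMBDA,
# OpenBookQA, PIQA, TruthfulQA, Winogrande.
scores = [43.43, 70.58, 77.34, 74.57, 70.1, 33.0, 77.52, 43.89, 67.64]
average = sum(scores) / len(scores)
print(round(average, 2))  # 62.01
```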
## Runtime Benchmarks
|Inference Tool/Hardware | A10 (tokens/sec) |
|:----------|:----------|
| PyTorch | 652.49 |
| Infery LLM | 2,029.6 |
- Throughput (tokens/sec) - Measured with optimal batch - PyTorch BS 64, Infery LLM BS 128
- In order to replicate the results of the PyTorch benchmark, use this [code example](https://huggingface.co/Deci/DeciLM-6b-instruct/blob/main/hf_benchmark_example.py)
## Disclaimer
DeciLM 6B-Instruct has not been aligned for safety or trained using RLHF.
## How to Cite
Please cite this model using this format.
```bibtex
@misc{DeciFoundationModels,
title = {DeciLM 6B Instruct},
author = {DeciAI Research Team},
  year = {2023},
  url = {https://huggingface.co/Deci/DeciLM-6b-instruct},
}
``` |
Deci/DeciCoder-1b | Deci | 2024-02-15T08:45:52Z | 2,571 | 246 | transformers | [
"transformers",
"safetensors",
"text-generation",
"text generation",
"Deci AI",
"DeciCoder",
"custom_code",
"dataset:bigcode/starcoderdata",
"arxiv:2305.13245",
"arxiv:2104.09864",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-08-16T14:52:10Z | ---
pipeline_tag: text-generation
license: apache-2.0
tags:
- text generation
- Deci AI
- DeciCoder
programming_language:
- Java
- JavaScript
- Python
metrics:
- code_eval
inference: true
widget:
- text: 'def print_hello_world():'
example_title: Hello world
group: Python
model-index:
- name: DeciCoder-1b
results:
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Python)
metrics:
- name: pass@1
type: pass@1
value: 0.191
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 0.184
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Java)
metrics:
- name: pass@1
type: pass@1
value: 0.166
verified: false
datasets:
- bigcode/starcoderdata
---
# Model Card for DeciCoder 1B
DeciCoder 1B is a 1 billion parameter decoder-only code completion model
trained on the Python, Java, and Javascript subsets of [Starcoder Training Dataset](https://huggingface.co/datasets/bigcode/starcoderdata).
The model uses Grouped Query Attention and has a context window of 2048
tokens. It was trained using a Fill-in-the-Middle training objective. The model's
architecture was generated by Deci's proprietary Neural Architecture
Search-based technology, AutoNAC.
## Model Details
- **Developed by:** [Deci](https://deci.ai/)
- **Model type:** DeciCoder is an auto-regressive language model based on the transformer decoder architecture, using Grouped Query Attention.
- **Language(s):** Python, Java, JavaScript
- **License:** Model checkpoints are licensed under the [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Architecture
| Parameters | Layers | Heads | Sequence Length | GQA num_key_value_heads | Hidden Size |
|:----------|:----------|:----------|:----------|:----------|:----------|
| 1.1B | 20 | 32 | 2048 | 4 | 2048 |
- **Decoder layer:** Grouped Query Attention [Ainslie et al., 2023](https://arxiv.org/abs/2305.13245)
- **Position Embeddings:** Rotary Position Embeddings [Su et al., 2021](https://arxiv.org/abs/2104.09864)
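Grouped Query Attention shrinks the KV cache relative to full multi-head attention. With the configuration in the table above, a sketch of the reduction (assuming the usual head_dim = hidden_size / num_heads):

```python
# DeciCoder 1B architecture values from the table above.
num_attention_heads = 32
num_key_value_heads = 4
hidden_size = 2048

head_dim = hidden_size // num_attention_heads  # 64, assuming head_dim = hidden/heads

# Per-token, per-layer KV-cache size is proportional to 2 * kv_heads * head_dim.
mha_kv = 2 * num_attention_heads * head_dim   # full multi-head attention baseline
gqa_kv = 2 * num_key_value_heads * head_dim   # grouped-query attention, as used here
reduction = mha_kv // gqa_kv
print(reduction)  # 8x smaller KV cache than full MHA
```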
## Uses
The model is intended to do single/multiline code completion from a
context window of up to 2048 tokens. It is *not* an instruction model
and commands like "Write a function that computes the absolute value of
an integer" won't yield the desired results. A more effective approach
is to frame instructions in the style of source code comments (e.g. #
this function calculates the absolute value of an integer) or to present
a function signature and docstring, enabling the model to complete the
function's body.
### How to Use
```python
# pip install -q transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "Deci/DeciCoder-1b"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, trust_remote_code=True).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
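Since DeciCoder is a completion model rather than an instruction model, prompts work best as code context, as described in the Uses section. A sketch of framing a request as a signature plus docstring (the function name and docstring text here are illustrative, not from the card):

```python
# Build a completion-style prompt: the model is asked to continue the
# function body rather than follow a natural-language instruction.
prompt = (
    "def absolute_value(n: int) -> int:\n"
    '    """Return the absolute value of an integer."""\n'
)

# This string would be passed to tokenizer.encode(...) exactly as in the
# usage example above; the model then completes the body of the function.
print(prompt)
```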
### Attribution
DeciCoder was trained on StarCoder Training Dataset, filtered for
Python, Java, and Javascript code. For additional information, please
refer to [https://huggingface.co/datasets/bigcode/starcoderdata](https://huggingface.co/datasets/bigcode/starcoderdata).
### Limitations
The model has undergone training with source code from Python, Java, and
JavaScript. While the primary language in the source is English, it does
contain other languages. Therefore, the model can produce code snippets
given some context. However, there's no assurance that the resulting
code will function as expected. It might be suboptimal, contain bugs, or
even exploits.
## Training Details
### Training Data
DeciCoder was trained on the Python, Java, and Javascript subsets of [Starcoder Training Dataset](https://huggingface.co/datasets/bigcode/starcoderdata)
### Training Procedure
- **Warm-Up Steps**: 9000
- **Total Training Steps**: 284k
- **Total Tokens**: 446B
- **Global Batch Size**: 768
- **Optimizer**: AdamW
- **Optimizer Parameters**: beta1=0.9, beta2=0.95
- **Weight Decay**: 0.1
- **Learning Rate**: 4e-4
- **Learning Rate Schedule**: cosine
## Evaluation
Below are DeciCoder's pass@1 on MultiPL HumanEval scores
| Python | JavaScript | Java |
|:----------|:----------|:----------|
| 19.1% | 18.4% | 16.6% |
### Runtime Benchmarks
|Inference Tool/Hardware | A10 (tokens/sec) |A100 (tokens/sec) |
|:----------|:----------|:----------|
| PyTorch | 1,364.2 | 3,244.4 |
| Infery LLM | 3,889.3 | 11,676.8 |
- Throughput (tokens/sec) - Measured with optimal batch size per hardware - A10 on BS 128, A100 on BS 512
- Infery-LLM, Deci's optimization and inference SDK's features a suite of optimization techniques, including selective quantization, optimized beam search, continuous batching, and custom CUDA kernels. To explore the full capabilities of Infery-LLM, we invite you to [book a demo](https://deci.ai/infery-llm-book-a-demo/?utm_campaign=repos&utm_source=hugging-face&utm_medium=model-card&utm_content=decicoder-1b) with our experts.
## Documentation
- [Notebook](https://colab.research.google.com/drive/1JCxvBsWCZKHfIcHSMVf7GZCs3ClMQPjs)
- Blog post: [Introducing DeciCoder: The New Gold Standard in Efficient and Accurate Code Generation](https://deci.ai/blog/decicoder-efficient-and-accurate-code-generation-llm/?utm_campaign=repos&utm_source=hugging-face&utm_medium=model-card&utm_content=decicoder-1b)
- Questions: Feel free to contact us via our [Discord Community!](https://discord.com/invite/p9ecgRhDR8/)
## How to Cite
Please cite this model using this format.
```bibtex
@misc{DeciFoundationModels,
title = {DeciCoder},
author = {DeciAI Research Team},
year = {2023},
url = {https://huggingface.co/deci/decicoder-1b},
}
``` |
KeiMura/QueAnsModel | KeiMura | 2024-02-15T08:09:49Z | 99 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2024-02-15T08:03:46Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: QueAnsModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QueAnsModel
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.1018 |
| 2.7104 | 2.0 | 500 | 1.6047 |
| 2.7104 | 3.0 | 750 | 1.5344 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
aghanim1/sadu | aghanim1 | 2024-02-15T08:05:03Z | 5 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2024-02-15T08:03:52Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: sadu, pattern, red, knitted, embroidery, repeating, palm trees, camel
parameters:
negative_prompt: Person, 1person
output:
url: images/IMG_0070.PNG
- text: '-'
output:
url: images/IMG_0067.PNG
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Sadu, pattern, red, embroidery, repeating
---
# Sadu Traditional UAE Knitting Pattern
<Gallery />
## Model description
This model is trained on traditional Emirati (UAE) Sadu embroidery, which is inscribed on UNESCO's List of Intangible Cultural Heritage in Need of Urgent Safeguarding.
## Trigger words
You should use `Sadu` to trigger the image generation.
You should use `pattern` to trigger the image generation.
You should use `red` to trigger the image generation.
You should use `embroidery` to trigger the image generation.
You should use `repeating` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/aghanim1/sadu/tree/main) them in the Files & versions tab.
|
hellomyoh/mistral_7b_ft_ko-en_v0.1 | hellomyoh | 2024-02-15T07:59:28Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T07:54:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
maridze/Saiga_2_13b_fine_tune_custom_data | maridze | 2024-02-15T07:49:37Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2024-02-14T14:28:45Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0
|
ahmetkca/trendyol-7B-v1.0-f32-gguf | ahmetkca | 2024-02-15T07:48:53Z | 0 | 0 | null | [
"gguf",
"turkish",
"tr",
"trendyol",
"llama",
"llama.cpp",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-15T07:34:53Z | ---
language:
- tr
tags:
- turkish
- tr
- trendyol
- llama
- gguf
- llama.cpp
--- |
RansikaC99/llama2-qlora-finetunined-4-bit-1500-3epoch | RansikaC99 | 2024-02-15T07:46:00Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2024-02-15T07:45:53Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
ahmetkca/trendyol-7B-v1.0-f16-gguf | ahmetkca | 2024-02-15T07:39:54Z | 4 | 0 | null | [
"gguf",
"turkish",
"tr",
"trendyol",
"llama",
"llama.cpp",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-15T07:20:10Z | ---
language:
- tr
tags:
- turkish
- tr
- trendyol
- gguf
- llama
- llama.cpp
--- |
hustvl/Vim-tiny-midclstok | hustvl | 2024-02-15T07:39:04Z | 0 | 5 | null | [
"arxiv:2401.09417",
"license:apache-2.0",
"region:us"
]
| null | 2024-02-10T14:40:18Z | ---
license: apache-2.0
---
<br>
# Vim Model Card
## Model Details
Vision Mamba (Vim) is a generic backbone trained on the ImageNet-1K dataset for vision tasks.
- **Developed by:** [HUST](https://english.hust.edu.cn/), [Horizon Robotics](https://en.horizon.cc/), [BAAI](https://www.baai.ac.cn/english.html)
- **Model type:** A generic vision backbone based on the bidirectional state space model (SSM) architecture.
- **License:** Non-commercial license
### Model Sources
- **Repository:** https://github.com/hustvl/Vim
- **Paper:** https://arxiv.org/abs/2401.09417
## Uses
The primary use of Vim is research on vision tasks, e.g., classification, segmentation, detection, and instance segmentation, with an SSM-based backbone.
The primary intended users of the model are researchers and hobbyists in computer vision, machine learning, and artificial intelligence.
## How to Get Started with the Model
- You can replace the backbone for vision tasks with the proposed Vim: https://github.com/hustvl/Vim/blob/main/vim/models_mamba.py
- Then you can load this checkpoint and start training.
## Training Details
Vim is pretrained on ImageNet-1K with classification supervision.
The training data is around 1.3M images from [ImageNet-1K dataset](https://www.image-net.org/challenges/LSVRC/2012/).
See more details in this [paper](https://arxiv.org/abs/2401.09417).
## Evaluation
Vim-tiny is evaluated on the ImageNet-1K validation set and achieves 76.1% Top-1 accuracy. With further fine-tuning at finer granularity, Vim-tiny reaches 78.3% Top-1 accuracy. See more details in this [paper](https://arxiv.org/abs/2401.09417).
## Additional Information
### Citation Information
```
@article{vim,
title={Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model},
author={Lianghui Zhu and Bencheng Liao and Qian Zhang and Xinlong Wang and Wenyu Liu and Xinggang Wang},
journal={arXiv preprint arXiv:2401.09417},
year={2024}
}
```
|
ehsangharibnezhad/phi-1_5-finetuned-vicgalle-alpaca-gpt4 | ehsangharibnezhad | 2024-02-15T07:37:04Z | 38 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-01-15T21:07:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FINNUMBER/Yi-Ko-6B-Finch-SA-full | FINNUMBER | 2024-02-15T07:35:45Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T05:55:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Balexml/segformer-b0-finetuned-segments-sidewalk-2 | Balexml | 2024-02-15T07:33:05Z | 18 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2023-11-22T18:27:53Z | ---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0652
- Mean Iou: 0.9415
- Mean Accuracy: 0.9614
- Overall Accuracy: 0.9785
- Accuracy Water: 0.9290
- Accuracy Non-water: 0.9937
- Iou Water: 0.9104
- Iou Non-water: 0.9725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Water | Accuracy Non-water | Iou Water | Iou Non-water |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------:|:------------------:|:---------:|:-------------:|
| 0.2744 | 3.33 | 20 | 0.5474 | 0.7591 | 0.8996 | 0.8886 | 0.9204 | 0.8789 | 0.6602 | 0.8579 |
| 0.6419 | 6.67 | 40 | 0.4140 | 0.7124 | 0.8985 | 0.8547 | 0.9812 | 0.8158 | 0.6136 | 0.8111 |
| 0.1373 | 10.0 | 60 | 0.2129 | 0.8964 | 0.9580 | 0.9590 | 0.9561 | 0.9599 | 0.8457 | 0.9471 |
| 0.1052 | 13.33 | 80 | 0.1804 | 0.8872 | 0.9597 | 0.9545 | 0.9696 | 0.9498 | 0.8335 | 0.9410 |
| 0.2278 | 16.67 | 100 | 0.1461 | 0.9220 | 0.9639 | 0.9702 | 0.9519 | 0.9758 | 0.8825 | 0.9616 |
| 0.0835 | 20.0 | 120 | 0.1184 | 0.9289 | 0.9635 | 0.9732 | 0.9453 | 0.9818 | 0.8923 | 0.9655 |
| 0.3156 | 23.33 | 140 | 0.1160 | 0.9295 | 0.9589 | 0.9737 | 0.9309 | 0.9868 | 0.8926 | 0.9663 |
| 0.0834 | 26.67 | 160 | 0.1072 | 0.9286 | 0.9551 | 0.9735 | 0.9203 | 0.9899 | 0.8910 | 0.9662 |
| 0.0626 | 30.0 | 180 | 0.1039 | 0.9299 | 0.9551 | 0.9741 | 0.9191 | 0.9910 | 0.8929 | 0.9669 |
| 0.0658 | 33.33 | 200 | 0.0961 | 0.9235 | 0.9687 | 0.9705 | 0.9653 | 0.9722 | 0.8851 | 0.9619 |
| 0.065 | 36.67 | 220 | 0.1010 | 0.9317 | 0.9571 | 0.9747 | 0.9237 | 0.9904 | 0.8958 | 0.9677 |
| 0.0485 | 40.0 | 240 | 0.0950 | 0.9324 | 0.9555 | 0.9751 | 0.9187 | 0.9924 | 0.8966 | 0.9682 |
| 0.0563 | 43.33 | 260 | 0.0963 | 0.9226 | 0.9554 | 0.9710 | 0.9260 | 0.9848 | 0.8823 | 0.9629 |
| 0.2679 | 46.67 | 280 | 0.0929 | 0.9341 | 0.9544 | 0.9758 | 0.9140 | 0.9949 | 0.8990 | 0.9692 |
| 0.2814 | 50.0 | 300 | 0.1009 | 0.9310 | 0.9508 | 0.9748 | 0.9055 | 0.9961 | 0.8940 | 0.9679 |
| 0.0347 | 53.33 | 320 | 0.0869 | 0.9279 | 0.9564 | 0.9732 | 0.9247 | 0.9880 | 0.8901 | 0.9657 |
| 0.037 | 56.67 | 340 | 0.0838 | 0.9281 | 0.9599 | 0.9731 | 0.9352 | 0.9847 | 0.8908 | 0.9655 |
| 0.0322 | 60.0 | 360 | 0.0844 | 0.9315 | 0.9567 | 0.9746 | 0.9230 | 0.9905 | 0.8953 | 0.9676 |
| 0.042 | 63.33 | 380 | 0.0758 | 0.9303 | 0.9593 | 0.9740 | 0.9317 | 0.9870 | 0.8940 | 0.9667 |
| 0.0528 | 66.67 | 400 | 0.0872 | 0.9318 | 0.9567 | 0.9748 | 0.9226 | 0.9908 | 0.8958 | 0.9678 |
| 0.0288 | 70.0 | 420 | 0.0837 | 0.9280 | 0.9575 | 0.9731 | 0.9280 | 0.9870 | 0.8903 | 0.9656 |
| 0.2019 | 73.33 | 440 | 0.0834 | 0.9342 | 0.9570 | 0.9758 | 0.9217 | 0.9924 | 0.8994 | 0.9691 |
| 0.0386 | 76.67 | 460 | 0.0649 | 0.9377 | 0.9626 | 0.9769 | 0.9357 | 0.9896 | 0.9050 | 0.9704 |
| 0.0295 | 80.0 | 480 | 0.0703 | 0.9350 | 0.9601 | 0.9759 | 0.9301 | 0.9900 | 0.9008 | 0.9692 |
| 0.0399 | 83.33 | 500 | 0.0828 | 0.9365 | 0.9569 | 0.9767 | 0.9196 | 0.9942 | 0.9026 | 0.9703 |
| 0.025 | 86.67 | 520 | 0.0874 | 0.9343 | 0.9531 | 0.9760 | 0.9100 | 0.9963 | 0.8991 | 0.9695 |
| 0.0254 | 90.0 | 540 | 0.0669 | 0.9356 | 0.9659 | 0.9759 | 0.9472 | 0.9847 | 0.9023 | 0.9690 |
| 0.1682 | 93.33 | 560 | 0.0852 | 0.9370 | 0.9571 | 0.9769 | 0.9197 | 0.9945 | 0.9035 | 0.9705 |
| 0.0379 | 96.67 | 580 | 0.0709 | 0.9305 | 0.9588 | 0.9741 | 0.9299 | 0.9877 | 0.8942 | 0.9669 |
| 0.0346 | 100.0 | 600 | 0.0862 | 0.9291 | 0.9562 | 0.9737 | 0.9231 | 0.9893 | 0.8919 | 0.9664 |
| 0.0232 | 103.33 | 620 | 0.0750 | 0.9372 | 0.9571 | 0.9770 | 0.9195 | 0.9946 | 0.9038 | 0.9706 |
| 0.0356 | 106.67 | 640 | 0.0771 | 0.9335 | 0.9574 | 0.9755 | 0.9234 | 0.9915 | 0.8984 | 0.9686 |
| 0.0346 | 110.0 | 660 | 0.0635 | 0.9342 | 0.9618 | 0.9755 | 0.9358 | 0.9877 | 0.8998 | 0.9686 |
| 0.0365 | 113.33 | 680 | 0.0701 | 0.9344 | 0.9711 | 0.9751 | 0.9636 | 0.9786 | 0.9009 | 0.9678 |
| 0.1537 | 116.67 | 700 | 0.0762 | 0.9350 | 0.9576 | 0.9761 | 0.9226 | 0.9925 | 0.9006 | 0.9694 |
| 0.0334 | 120.0 | 720 | 0.0686 | 0.9362 | 0.9578 | 0.9765 | 0.9225 | 0.9932 | 0.9024 | 0.9700 |
| 0.3516 | 123.33 | 740 | 0.0629 | 0.9348 | 0.9603 | 0.9758 | 0.9309 | 0.9896 | 0.9005 | 0.9691 |
| 0.1583 | 126.67 | 760 | 0.0727 | 0.9355 | 0.9578 | 0.9763 | 0.9228 | 0.9927 | 0.9014 | 0.9697 |
| 0.0183 | 130.0 | 780 | 0.0652 | 0.9332 | 0.9591 | 0.9752 | 0.9287 | 0.9895 | 0.8982 | 0.9683 |
| 0.0184 | 133.33 | 800 | 0.0750 | 0.9329 | 0.9573 | 0.9752 | 0.9236 | 0.9910 | 0.8975 | 0.9683 |
| 0.0214 | 136.67 | 820 | 0.0730 | 0.9372 | 0.9560 | 0.9771 | 0.9163 | 0.9957 | 0.9037 | 0.9708 |
| 0.0212 | 140.0 | 840 | 0.0645 | 0.9358 | 0.9580 | 0.9764 | 0.9235 | 0.9926 | 0.9018 | 0.9698 |
| 0.018 | 143.33 | 860 | 0.0699 | 0.9305 | 0.9590 | 0.9741 | 0.9304 | 0.9875 | 0.8941 | 0.9669 |
| 0.0324 | 146.67 | 880 | 0.0770 | 0.9363 | 0.9577 | 0.9766 | 0.9220 | 0.9934 | 0.9026 | 0.9701 |
| 0.0337 | 150.0 | 900 | 0.0612 | 0.9386 | 0.9657 | 0.9771 | 0.9440 | 0.9873 | 0.9066 | 0.9706 |
| 0.0427 | 153.33 | 920 | 0.0546 | 0.9414 | 0.9641 | 0.9784 | 0.9371 | 0.9910 | 0.9106 | 0.9722 |
| 0.032 | 156.67 | 940 | 0.0684 | 0.9332 | 0.9583 | 0.9753 | 0.9262 | 0.9903 | 0.8980 | 0.9684 |
| 0.0413 | 160.0 | 960 | 0.0699 | 0.9346 | 0.9574 | 0.9759 | 0.9225 | 0.9923 | 0.9000 | 0.9692 |
| 0.0127 | 163.33 | 980 | 0.0706 | 0.9376 | 0.9572 | 0.9772 | 0.9195 | 0.9949 | 0.9044 | 0.9709 |
| 0.0202 | 166.67 | 1000 | 0.0768 | 0.9377 | 0.9574 | 0.9772 | 0.9202 | 0.9947 | 0.9045 | 0.9709 |
| 0.0329 | 170.0 | 1020 | 0.0663 | 0.9369 | 0.9583 | 0.9768 | 0.9233 | 0.9932 | 0.9034 | 0.9703 |
| 0.235 | 173.33 | 1040 | 0.0540 | 0.9447 | 0.9704 | 0.9794 | 0.9535 | 0.9874 | 0.9160 | 0.9735 |
| 0.016 | 176.67 | 1060 | 0.0558 | 0.9384 | 0.9617 | 0.9773 | 0.9324 | 0.9911 | 0.9060 | 0.9709 |
| 0.0112 | 180.0 | 1080 | 0.0614 | 0.9394 | 0.9665 | 0.9774 | 0.9457 | 0.9872 | 0.9079 | 0.9710 |
| 0.1565 | 183.33 | 1100 | 0.0642 | 0.9362 | 0.9620 | 0.9763 | 0.9349 | 0.9891 | 0.9028 | 0.9697 |
| 0.0154 | 186.67 | 1120 | 0.0690 | 0.9381 | 0.9578 | 0.9773 | 0.9211 | 0.9946 | 0.9051 | 0.9710 |
| 0.1415 | 190.0 | 1140 | 0.0684 | 0.9340 | 0.9579 | 0.9756 | 0.9243 | 0.9914 | 0.8992 | 0.9689 |
| 0.0505 | 193.33 | 1160 | 0.0744 | 0.9293 | 0.9575 | 0.9737 | 0.9270 | 0.9880 | 0.8922 | 0.9663 |
| 0.0101 | 196.67 | 1180 | 0.0535 | 0.9393 | 0.9610 | 0.9777 | 0.9296 | 0.9924 | 0.9073 | 0.9714 |
| 0.0126 | 200.0 | 1200 | 0.0749 | 0.9356 | 0.9576 | 0.9763 | 0.9223 | 0.9929 | 0.9015 | 0.9697 |
| 0.0194 | 203.33 | 1220 | 0.0758 | 0.9356 | 0.9568 | 0.9763 | 0.9200 | 0.9937 | 0.9013 | 0.9698 |
| 0.0155 | 206.67 | 1240 | 0.0830 | 0.9301 | 0.9570 | 0.9740 | 0.9248 | 0.9892 | 0.8934 | 0.9668 |
| 0.0195 | 210.0 | 1260 | 0.0965 | 0.9352 | 0.9574 | 0.9761 | 0.9220 | 0.9928 | 0.9008 | 0.9695 |
| 0.0399 | 213.33 | 1280 | 0.0811 | 0.9377 | 0.9572 | 0.9772 | 0.9194 | 0.9949 | 0.9045 | 0.9709 |
| 0.0191 | 216.67 | 1300 | 0.0678 | 0.9324 | 0.9588 | 0.9749 | 0.9283 | 0.9892 | 0.8969 | 0.9679 |
| 0.0303 | 220.0 | 1320 | 0.0826 | 0.9323 | 0.9569 | 0.9750 | 0.9227 | 0.9910 | 0.8965 | 0.9680 |
| 0.0085 | 223.33 | 1340 | 0.0606 | 0.9370 | 0.9635 | 0.9766 | 0.9388 | 0.9882 | 0.9040 | 0.9699 |
| 0.0095 | 226.67 | 1360 | 0.0643 | 0.9347 | 0.9588 | 0.9759 | 0.9266 | 0.9910 | 0.9003 | 0.9692 |
| 0.0141 | 230.0 | 1380 | 0.0717 | 0.9345 | 0.9575 | 0.9759 | 0.9228 | 0.9922 | 0.8999 | 0.9692 |
| 0.0083 | 233.33 | 1400 | 0.0608 | 0.9362 | 0.9593 | 0.9765 | 0.9268 | 0.9917 | 0.9025 | 0.9699 |
| 0.0082 | 236.67 | 1420 | 0.0721 | 0.9359 | 0.9577 | 0.9764 | 0.9225 | 0.9930 | 0.9019 | 0.9699 |
| 0.0338 | 240.0 | 1440 | 0.0747 | 0.9365 | 0.9575 | 0.9767 | 0.9213 | 0.9937 | 0.9028 | 0.9702 |
| 0.0145 | 243.33 | 1460 | 0.0709 | 0.9340 | 0.9574 | 0.9756 | 0.9231 | 0.9918 | 0.8991 | 0.9689 |
| 0.0186 | 246.67 | 1480 | 0.0615 | 0.9367 | 0.9592 | 0.9766 | 0.9262 | 0.9922 | 0.9032 | 0.9701 |
| 0.0375 | 250.0 | 1500 | 0.0697 | 0.9352 | 0.9581 | 0.9761 | 0.9240 | 0.9921 | 0.9010 | 0.9695 |
| 0.1679 | 253.33 | 1520 | 0.0797 | 0.9349 | 0.9579 | 0.9760 | 0.9238 | 0.9921 | 0.9005 | 0.9694 |
| 0.0142 | 256.67 | 1540 | 0.0671 | 0.9323 | 0.9580 | 0.9749 | 0.9260 | 0.9900 | 0.8967 | 0.9679 |
| 0.0296 | 260.0 | 1560 | 0.0681 | 0.9329 | 0.9585 | 0.9751 | 0.9270 | 0.9899 | 0.8976 | 0.9682 |
| 0.029 | 263.33 | 1580 | 0.0726 | 0.9354 | 0.9581 | 0.9762 | 0.9240 | 0.9922 | 0.9012 | 0.9696 |
| 0.1336 | 266.67 | 1600 | 0.0692 | 0.9365 | 0.9579 | 0.9766 | 0.9224 | 0.9933 | 0.9028 | 0.9702 |
| 0.0327 | 270.0 | 1620 | 0.0724 | 0.9359 | 0.9571 | 0.9764 | 0.9207 | 0.9936 | 0.9018 | 0.9699 |
| 0.0074 | 273.33 | 1640 | 0.0951 | 0.9335 | 0.9511 | 0.9758 | 0.9045 | 0.9977 | 0.8978 | 0.9693 |
| 0.1754 | 276.67 | 1660 | 0.0728 | 0.9373 | 0.9584 | 0.9769 | 0.9235 | 0.9934 | 0.9040 | 0.9705 |
| 0.0137 | 280.0 | 1680 | 0.0573 | 0.9395 | 0.9607 | 0.9778 | 0.9285 | 0.9929 | 0.9075 | 0.9715 |
| 0.0132 | 283.33 | 1700 | 0.0724 | 0.9326 | 0.9576 | 0.9750 | 0.9246 | 0.9905 | 0.8970 | 0.9681 |
| 0.0133 | 286.67 | 1720 | 0.0847 | 0.9362 | 0.9582 | 0.9765 | 0.9236 | 0.9928 | 0.9024 | 0.9700 |
| 0.0065 | 290.0 | 1740 | 0.0688 | 0.9324 | 0.9589 | 0.9749 | 0.9288 | 0.9891 | 0.8969 | 0.9679 |
| 0.1475 | 293.33 | 1760 | 0.0751 | 0.9386 | 0.9582 | 0.9775 | 0.9217 | 0.9946 | 0.9059 | 0.9712 |
| 0.0143 | 296.67 | 1780 | 0.0785 | 0.9378 | 0.9572 | 0.9772 | 0.9194 | 0.9950 | 0.9047 | 0.9709 |
| 0.006 | 300.0 | 1800 | 0.0714 | 0.9369 | 0.9584 | 0.9768 | 0.9236 | 0.9932 | 0.9035 | 0.9704 |
| 0.0183 | 303.33 | 1820 | 0.0900 | 0.9376 | 0.9585 | 0.9771 | 0.9234 | 0.9935 | 0.9044 | 0.9707 |
| 0.0185 | 306.67 | 1840 | 0.0756 | 0.9368 | 0.9583 | 0.9767 | 0.9235 | 0.9931 | 0.9032 | 0.9703 |
| 0.0127 | 310.0 | 1860 | 0.0741 | 0.9320 | 0.9577 | 0.9748 | 0.9256 | 0.9899 | 0.8962 | 0.9678 |
| 0.0308 | 313.33 | 1880 | 0.0675 | 0.9328 | 0.9582 | 0.9751 | 0.9264 | 0.9901 | 0.8974 | 0.9682 |
| 0.0303 | 316.67 | 1900 | 0.0584 | 0.9384 | 0.9602 | 0.9773 | 0.9279 | 0.9925 | 0.9058 | 0.9710 |
| 0.1517 | 320.0 | 1920 | 0.0687 | 0.9371 | 0.9587 | 0.9769 | 0.9244 | 0.9930 | 0.9038 | 0.9704 |
| 0.02 | 323.33 | 1940 | 0.0737 | 0.9311 | 0.9577 | 0.9744 | 0.9263 | 0.9892 | 0.8949 | 0.9673 |
| 0.1405 | 326.67 | 1960 | 0.0750 | 0.9389 | 0.9580 | 0.9776 | 0.9211 | 0.9950 | 0.9063 | 0.9714 |
| 0.0133 | 330.0 | 1980 | 0.0624 | 0.9382 | 0.9589 | 0.9773 | 0.9242 | 0.9936 | 0.9054 | 0.9710 |
| 0.0352 | 333.33 | 2000 | 0.0719 | 0.9375 | 0.9581 | 0.9770 | 0.9224 | 0.9938 | 0.9042 | 0.9707 |
| 0.0053 | 336.67 | 2020 | 0.0660 | 0.9369 | 0.9587 | 0.9768 | 0.9246 | 0.9928 | 0.9034 | 0.9703 |
| 0.0374 | 340.0 | 2040 | 0.0806 | 0.9378 | 0.9581 | 0.9772 | 0.9220 | 0.9941 | 0.9047 | 0.9708 |
| 0.0188 | 343.33 | 2060 | 0.0694 | 0.9327 | 0.9581 | 0.9751 | 0.9262 | 0.9901 | 0.8973 | 0.9681 |
| 0.0129 | 346.67 | 2080 | 0.0538 | 0.9426 | 0.9662 | 0.9787 | 0.9426 | 0.9898 | 0.9125 | 0.9727 |
| 0.0195 | 350.0 | 2100 | 0.0743 | 0.9328 | 0.9576 | 0.9751 | 0.9245 | 0.9907 | 0.8973 | 0.9682 |
| 0.0179 | 353.33 | 2120 | 0.0583 | 0.9372 | 0.9595 | 0.9769 | 0.9267 | 0.9923 | 0.9040 | 0.9704 |
| 0.1339 | 356.67 | 2140 | 0.0612 | 0.9385 | 0.9589 | 0.9774 | 0.9239 | 0.9939 | 0.9059 | 0.9712 |
| 0.006 | 360.0 | 2160 | 0.0900 | 0.9377 | 0.9576 | 0.9772 | 0.9207 | 0.9945 | 0.9045 | 0.9708 |
| 0.0124 | 363.33 | 2180 | 0.0660 | 0.9372 | 0.9588 | 0.9769 | 0.9247 | 0.9929 | 0.9040 | 0.9705 |
| 0.144 | 366.67 | 2200 | 0.0671 | 0.9307 | 0.9583 | 0.9742 | 0.9282 | 0.9884 | 0.8943 | 0.9670 |
| 0.0358 | 370.0 | 2220 | 0.0739 | 0.9335 | 0.9579 | 0.9754 | 0.9247 | 0.9910 | 0.8985 | 0.9686 |
| 0.0046 | 373.33 | 2240 | 0.0692 | 0.9336 | 0.9581 | 0.9754 | 0.9253 | 0.9909 | 0.8986 | 0.9686 |
| 0.0283 | 376.67 | 2260 | 0.0680 | 0.9329 | 0.9578 | 0.9752 | 0.9252 | 0.9905 | 0.8975 | 0.9683 |
| 0.1312 | 380.0 | 2280 | 0.0742 | 0.9353 | 0.9581 | 0.9761 | 0.9241 | 0.9922 | 0.9010 | 0.9695 |
| 0.13 | 383.33 | 2300 | 0.0700 | 0.9327 | 0.9581 | 0.9751 | 0.9261 | 0.9901 | 0.8973 | 0.9681 |
| 0.0119 | 386.67 | 2320 | 0.0875 | 0.9370 | 0.9580 | 0.9768 | 0.9223 | 0.9936 | 0.9035 | 0.9704 |
| 0.127 | 390.0 | 2340 | 0.0742 | 0.9357 | 0.9579 | 0.9763 | 0.9230 | 0.9927 | 0.9016 | 0.9698 |
| 0.012 | 393.33 | 2360 | 0.0816 | 0.9368 | 0.9584 | 0.9767 | 0.9237 | 0.9930 | 0.9032 | 0.9703 |
| 0.1322 | 396.67 | 2380 | 0.0717 | 0.9333 | 0.9582 | 0.9753 | 0.9257 | 0.9906 | 0.8982 | 0.9685 |
| 0.0178 | 400.0 | 2400 | 0.0738 | 0.9358 | 0.9585 | 0.9763 | 0.9250 | 0.9921 | 0.9018 | 0.9697 |
| 0.0342 | 403.33 | 2420 | 0.0677 | 0.9353 | 0.9586 | 0.9761 | 0.9255 | 0.9917 | 0.9011 | 0.9695 |
| 0.0042 | 406.67 | 2440 | 0.0793 | 0.9315 | 0.9574 | 0.9746 | 0.9249 | 0.9899 | 0.8954 | 0.9675 |
| 0.018 | 410.0 | 2460 | 0.0748 | 0.9356 | 0.9574 | 0.9763 | 0.9217 | 0.9931 | 0.9014 | 0.9697 |
| 0.0041 | 413.33 | 2480 | 0.0717 | 0.9353 | 0.9581 | 0.9761 | 0.9241 | 0.9921 | 0.9010 | 0.9695 |
| 0.004 | 416.67 | 2500 | 0.0726 | 0.9348 | 0.9579 | 0.9759 | 0.9237 | 0.9920 | 0.9003 | 0.9693 |
| 0.0284 | 420.0 | 2520 | 0.0696 | 0.9359 | 0.9582 | 0.9764 | 0.9238 | 0.9926 | 0.9020 | 0.9699 |
| 0.0121 | 423.33 | 2540 | 0.0669 | 0.9335 | 0.9583 | 0.9754 | 0.9260 | 0.9906 | 0.8984 | 0.9685 |
| 0.0117 | 426.67 | 2560 | 0.0693 | 0.9319 | 0.9580 | 0.9747 | 0.9265 | 0.9895 | 0.8960 | 0.9677 |
| 0.0171 | 430.0 | 2580 | 0.0711 | 0.9343 | 0.9580 | 0.9757 | 0.9245 | 0.9915 | 0.8996 | 0.9690 |
| 0.1286 | 433.33 | 2600 | 0.0714 | 0.9350 | 0.9586 | 0.9760 | 0.9257 | 0.9914 | 0.9006 | 0.9693 |
| 0.0116 | 436.67 | 2620 | 0.0693 | 0.9329 | 0.9581 | 0.9752 | 0.9259 | 0.9903 | 0.8976 | 0.9682 |
| 0.0271 | 440.0 | 2640 | 0.0735 | 0.9348 | 0.9580 | 0.9760 | 0.9243 | 0.9918 | 0.9004 | 0.9693 |
| 0.0333 | 443.33 | 2660 | 0.0766 | 0.9360 | 0.9581 | 0.9764 | 0.9234 | 0.9927 | 0.9020 | 0.9699 |
| 0.1291 | 446.67 | 2680 | 0.0665 | 0.9344 | 0.9584 | 0.9757 | 0.9256 | 0.9912 | 0.8997 | 0.9690 |
| 0.0036 | 450.0 | 2700 | 0.0788 | 0.9350 | 0.9580 | 0.9760 | 0.9240 | 0.9920 | 0.9006 | 0.9693 |
| 0.0036 | 453.33 | 2720 | 0.0958 | 0.9342 | 0.9573 | 0.9757 | 0.9225 | 0.9921 | 0.8993 | 0.9690 |
| 0.0172 | 456.67 | 2740 | 0.0776 | 0.9383 | 0.9586 | 0.9774 | 0.9230 | 0.9941 | 0.9055 | 0.9711 |
| 0.0122 | 460.0 | 2760 | 0.0733 | 0.9353 | 0.9580 | 0.9762 | 0.9237 | 0.9923 | 0.9011 | 0.9695 |
| 0.0285 | 463.33 | 2780 | 0.0881 | 0.9341 | 0.9577 | 0.9757 | 0.9237 | 0.9916 | 0.8992 | 0.9689 |
| 0.0171 | 466.67 | 2800 | 0.0732 | 0.9307 | 0.9577 | 0.9743 | 0.9264 | 0.9890 | 0.8944 | 0.9671 |
| 0.0279 | 470.0 | 2820 | 0.0701 | 0.9330 | 0.9583 | 0.9752 | 0.9264 | 0.9902 | 0.8978 | 0.9683 |
| 0.1256 | 473.33 | 2840 | 0.0762 | 0.9342 | 0.9581 | 0.9757 | 0.9248 | 0.9913 | 0.8994 | 0.9689 |
| 0.0335 | 476.67 | 2860 | 0.0693 | 0.9360 | 0.9577 | 0.9765 | 0.9223 | 0.9931 | 0.9021 | 0.9700 |
| 0.0113 | 480.0 | 2880 | 0.0702 | 0.9352 | 0.9583 | 0.9761 | 0.9247 | 0.9919 | 0.9009 | 0.9694 |
| 0.0133 | 483.33 | 2900 | 0.0767 | 0.9352 | 0.9581 | 0.9761 | 0.9243 | 0.9920 | 0.9009 | 0.9694 |
| 0.0335 | 486.67 | 2920 | 0.0686 | 0.9354 | 0.9585 | 0.9762 | 0.9251 | 0.9919 | 0.9013 | 0.9696 |
| 0.0035 | 490.0 | 2940 | 0.0709 | 0.9355 | 0.9582 | 0.9762 | 0.9241 | 0.9923 | 0.9014 | 0.9696 |
| 0.0167 | 493.33 | 2960 | 0.0741 | 0.9351 | 0.9580 | 0.9761 | 0.9239 | 0.9921 | 0.9007 | 0.9694 |
| 0.0166 | 496.67 | 2980 | 0.0750 | 0.9361 | 0.9583 | 0.9765 | 0.9241 | 0.9926 | 0.9022 | 0.9699 |
| 0.0277 | 500.0 | 3000 | 0.0726 | 0.9369 | 0.9585 | 0.9768 | 0.9238 | 0.9931 | 0.9035 | 0.9704 |
| 0.0169 | 503.33 | 3020 | 0.0779 | 0.9377 | 0.9576 | 0.9771 | 0.9206 | 0.9945 | 0.9045 | 0.9708 |
| 0.0038 | 506.67 | 3040 | 0.0681 | 0.9348 | 0.9587 | 0.9759 | 0.9262 | 0.9912 | 0.9004 | 0.9692 |
| 0.0166 | 510.0 | 3060 | 0.0754 | 0.9355 | 0.9583 | 0.9762 | 0.9245 | 0.9921 | 0.9014 | 0.9696 |
| 0.0268 | 513.33 | 3080 | 0.0677 | 0.9358 | 0.9588 | 0.9763 | 0.9256 | 0.9919 | 0.9019 | 0.9698 |
| 0.0032 | 516.67 | 3100 | 0.0720 | 0.9360 | 0.9583 | 0.9764 | 0.9240 | 0.9925 | 0.9021 | 0.9699 |
| 0.1239 | 520.0 | 3120 | 0.0697 | 0.9356 | 0.9586 | 0.9763 | 0.9252 | 0.9920 | 0.9016 | 0.9697 |
| 0.0269 | 523.33 | 3140 | 0.0747 | 0.9352 | 0.9584 | 0.9761 | 0.9250 | 0.9918 | 0.9010 | 0.9695 |
| 0.0129 | 526.67 | 3160 | 0.0895 | 0.9354 | 0.9577 | 0.9762 | 0.9227 | 0.9927 | 0.9012 | 0.9696 |
| 0.1726 | 530.0 | 3180 | 0.0636 | 0.9339 | 0.9587 | 0.9755 | 0.9268 | 0.9905 | 0.8991 | 0.9687 |
| 0.0332 | 533.33 | 3200 | 0.0998 | 0.9370 | 0.9577 | 0.9769 | 0.9215 | 0.9939 | 0.9035 | 0.9705 |
| 0.0115 | 536.67 | 3220 | 0.0778 | 0.9361 | 0.9585 | 0.9765 | 0.9246 | 0.9924 | 0.9023 | 0.9699 |
| 0.0167 | 540.0 | 3240 | 0.0767 | 0.9360 | 0.9582 | 0.9764 | 0.9238 | 0.9926 | 0.9021 | 0.9699 |
| 0.0109 | 543.33 | 3260 | 0.0725 | 0.9361 | 0.9584 | 0.9764 | 0.9243 | 0.9925 | 0.9022 | 0.9699 |
| 0.0264 | 546.67 | 3280 | 0.0687 | 0.9351 | 0.9584 | 0.9761 | 0.9251 | 0.9917 | 0.9008 | 0.9694 |
| 0.1259 | 550.0 | 3300 | 0.0751 | 0.9343 | 0.9582 | 0.9757 | 0.9250 | 0.9914 | 0.8997 | 0.9690 |
| 0.0111 | 553.33 | 3320 | 0.0714 | 0.9348 | 0.9583 | 0.9759 | 0.9250 | 0.9916 | 0.9003 | 0.9692 |
| 0.1257 | 556.67 | 3340 | 0.0656 | 0.9356 | 0.9588 | 0.9762 | 0.9259 | 0.9917 | 0.9015 | 0.9696 |
| 0.0163 | 560.0 | 3360 | 0.0724 | 0.9353 | 0.9583 | 0.9761 | 0.9246 | 0.9920 | 0.9011 | 0.9695 |
| 0.0111 | 563.33 | 3380 | 0.0787 | 0.9327 | 0.9578 | 0.9751 | 0.9250 | 0.9905 | 0.8973 | 0.9682 |
| 0.011 | 566.67 | 3400 | 0.0679 | 0.9336 | 0.9582 | 0.9754 | 0.9256 | 0.9908 | 0.8986 | 0.9686 |
| 0.1369 | 570.0 | 3420 | 0.0859 | 0.9331 | 0.9579 | 0.9753 | 0.9250 | 0.9907 | 0.8979 | 0.9684 |
| 0.0325 | 573.33 | 3440 | 0.0907 | 0.9361 | 0.9573 | 0.9765 | 0.9208 | 0.9937 | 0.9022 | 0.9701 |
| 0.132 | 576.67 | 3460 | 0.0849 | 0.9330 | 0.9579 | 0.9752 | 0.9252 | 0.9906 | 0.8977 | 0.9683 |
| 0.0111 | 580.0 | 3480 | 0.0908 | 0.9341 | 0.9581 | 0.9756 | 0.9249 | 0.9913 | 0.8993 | 0.9689 |
| 0.0109 | 583.33 | 3500 | 0.0809 | 0.9349 | 0.9584 | 0.9760 | 0.9253 | 0.9915 | 0.9005 | 0.9693 |
| 0.0264 | 586.67 | 3520 | 0.0821 | 0.9360 | 0.9584 | 0.9764 | 0.9243 | 0.9924 | 0.9021 | 0.9699 |
| 0.0163 | 590.0 | 3540 | 0.0690 | 0.9351 | 0.9584 | 0.9761 | 0.9249 | 0.9918 | 0.9008 | 0.9694 |
| 0.0028 | 593.33 | 3560 | 0.0742 | 0.9346 | 0.9581 | 0.9758 | 0.9246 | 0.9916 | 0.9000 | 0.9691 |
| 0.0274 | 596.67 | 3580 | 0.0717 | 0.9347 | 0.9582 | 0.9759 | 0.9247 | 0.9916 | 0.9002 | 0.9692 |
| 0.0266 | 600.0 | 3600 | 0.0873 | 0.9345 | 0.9578 | 0.9758 | 0.9237 | 0.9919 | 0.8999 | 0.9691 |
| 0.0164 | 603.33 | 3620 | 0.0735 | 0.9348 | 0.9584 | 0.9759 | 0.9254 | 0.9915 | 0.9004 | 0.9692 |
| 0.0162 | 606.67 | 3640 | 0.0757 | 0.9347 | 0.9583 | 0.9759 | 0.9250 | 0.9915 | 0.9001 | 0.9692 |
| 0.0109 | 610.0 | 3660 | 0.0853 | 0.9334 | 0.9580 | 0.9754 | 0.9252 | 0.9908 | 0.8983 | 0.9685 |
| 0.0109 | 613.33 | 3680 | 0.0741 | 0.9354 | 0.9583 | 0.9762 | 0.9244 | 0.9921 | 0.9013 | 0.9696 |
| 0.0027 | 616.67 | 3700 | 0.0751 | 0.9355 | 0.9583 | 0.9762 | 0.9244 | 0.9921 | 0.9014 | 0.9696 |
| 0.0266 | 620.0 | 3720 | 0.0728 | 0.9341 | 0.9580 | 0.9757 | 0.9247 | 0.9913 | 0.8993 | 0.9689 |
| 0.124 | 623.33 | 3740 | 0.0727 | 0.9323 | 0.9582 | 0.9749 | 0.9268 | 0.9897 | 0.8967 | 0.9679 |
| 0.0026 | 626.67 | 3760 | 0.0912 | 0.9366 | 0.9586 | 0.9766 | 0.9246 | 0.9927 | 0.9030 | 0.9702 |
| 0.0177 | 630.0 | 3780 | 0.0825 | 0.9366 | 0.9581 | 0.9767 | 0.9230 | 0.9932 | 0.9030 | 0.9702 |
| 0.0262 | 633.33 | 3800 | 0.0758 | 0.9368 | 0.9583 | 0.9767 | 0.9236 | 0.9931 | 0.9032 | 0.9703 |
| 0.0106 | 636.67 | 3820 | 0.0851 | 0.9339 | 0.9580 | 0.9756 | 0.9249 | 0.9912 | 0.8991 | 0.9688 |
| 0.1289 | 640.0 | 3840 | 0.0711 | 0.9349 | 0.9588 | 0.9760 | 0.9265 | 0.9912 | 0.9006 | 0.9693 |
| 0.0326 | 643.33 | 3860 | 0.0808 | 0.9343 | 0.9581 | 0.9757 | 0.9247 | 0.9914 | 0.8996 | 0.9690 |
| 0.016 | 646.67 | 3880 | 0.0709 | 0.9334 | 0.9583 | 0.9754 | 0.9261 | 0.9905 | 0.8983 | 0.9685 |
| 0.0106 | 650.0 | 3900 | 0.0812 | 0.9332 | 0.9578 | 0.9753 | 0.9247 | 0.9909 | 0.8979 | 0.9684 |
| 0.0104 | 653.33 | 3920 | 0.0794 | 0.9338 | 0.9579 | 0.9755 | 0.9247 | 0.9912 | 0.8989 | 0.9687 |
| 0.0326 | 656.67 | 3940 | 0.0746 | 0.9349 | 0.9582 | 0.9760 | 0.9247 | 0.9917 | 0.9005 | 0.9693 |
| 0.0322 | 660.0 | 3960 | 0.0788 | 0.9345 | 0.9581 | 0.9758 | 0.9247 | 0.9916 | 0.9000 | 0.9691 |
| 0.0318 | 663.33 | 3980 | 0.0815 | 0.9356 | 0.9581 | 0.9763 | 0.9238 | 0.9924 | 0.9015 | 0.9697 |
| 0.0157 | 666.67 | 4000 | 0.0759 | 0.9356 | 0.9582 | 0.9763 | 0.9242 | 0.9923 | 0.9015 | 0.9697 |
| 0.0024 | 670.0 | 4020 | 0.0748 | 0.9352 | 0.9583 | 0.9761 | 0.9248 | 0.9919 | 0.9010 | 0.9695 |
| 0.0315 | 673.33 | 4040 | 0.0850 | 0.9351 | 0.9582 | 0.9760 | 0.9246 | 0.9919 | 0.9007 | 0.9694 |
| 0.0324 | 676.67 | 4060 | 0.0820 | 0.9351 | 0.9580 | 0.9761 | 0.9239 | 0.9921 | 0.9007 | 0.9694 |
| 0.0258 | 680.0 | 4080 | 0.0711 | 0.9359 | 0.9585 | 0.9764 | 0.9247 | 0.9923 | 0.9020 | 0.9698 |
| 0.0315 | 683.33 | 4100 | 0.0876 | 0.9352 | 0.9579 | 0.9761 | 0.9235 | 0.9923 | 0.9009 | 0.9695 |
| 0.026 | 686.67 | 4120 | 0.0752 | 0.9355 | 0.9583 | 0.9762 | 0.9246 | 0.9921 | 0.9013 | 0.9696 |
| 0.0258 | 690.0 | 4140 | 0.0715 | 0.9353 | 0.9584 | 0.9761 | 0.9250 | 0.9919 | 0.9011 | 0.9695 |
| 0.0258 | 693.33 | 4160 | 0.0794 | 0.9363 | 0.9584 | 0.9765 | 0.9243 | 0.9926 | 0.9025 | 0.9700 |
| 0.1233 | 696.67 | 4180 | 0.0707 | 0.9358 | 0.9586 | 0.9763 | 0.9251 | 0.9921 | 0.9019 | 0.9698 |
| 0.0116 | 700.0 | 4200 | 0.0984 | 0.9335 | 0.9580 | 0.9754 | 0.9251 | 0.9908 | 0.8984 | 0.9686 |
| 0.013 | 703.33 | 4220 | 0.0754 | 0.9373 | 0.9586 | 0.9769 | 0.9239 | 0.9933 | 0.9041 | 0.9706 |
| 0.0024 | 706.67 | 4240 | 0.0810 | 0.9344 | 0.9584 | 0.9758 | 0.9255 | 0.9912 | 0.8998 | 0.9690 |
| 0.0315 | 710.0 | 4260 | 0.0745 | 0.9351 | 0.9584 | 0.9761 | 0.9252 | 0.9917 | 0.9008 | 0.9694 |
| 0.1223 | 713.33 | 4280 | 0.0743 | 0.9343 | 0.9583 | 0.9757 | 0.9252 | 0.9913 | 0.8997 | 0.9690 |
| 0.0315 | 716.67 | 4300 | 0.0954 | 0.9353 | 0.9579 | 0.9762 | 0.9234 | 0.9924 | 0.9011 | 0.9696 |
| 0.1263 | 720.0 | 4320 | 0.0793 | 0.9365 | 0.9586 | 0.9766 | 0.9246 | 0.9926 | 0.9029 | 0.9701 |
| 0.0155 | 723.33 | 4340 | 0.0923 | 0.9348 | 0.9584 | 0.9759 | 0.9252 | 0.9915 | 0.9004 | 0.9693 |
| 0.0256 | 726.67 | 4360 | 0.0794 | 0.9368 | 0.9586 | 0.9768 | 0.9243 | 0.9929 | 0.9034 | 0.9703 |
| 0.0316 | 730.0 | 4380 | 0.0853 | 0.9379 | 0.9584 | 0.9772 | 0.9229 | 0.9939 | 0.9049 | 0.9709 |
| 0.0155 | 733.33 | 4400 | 0.0688 | 0.9363 | 0.9592 | 0.9765 | 0.9264 | 0.9919 | 0.9026 | 0.9700 |
| 0.1215 | 736.67 | 4420 | 0.0720 | 0.9376 | 0.9586 | 0.9771 | 0.9239 | 0.9934 | 0.9045 | 0.9707 |
| 0.0105 | 740.0 | 4440 | 0.0838 | 0.9362 | 0.9587 | 0.9765 | 0.9251 | 0.9923 | 0.9024 | 0.9700 |
| 0.0326 | 743.33 | 4460 | 0.0901 | 0.9381 | 0.9585 | 0.9773 | 0.9230 | 0.9939 | 0.9052 | 0.9710 |
| 0.1212 | 746.67 | 4480 | 0.0755 | 0.9367 | 0.9586 | 0.9767 | 0.9245 | 0.9927 | 0.9031 | 0.9702 |
| 0.0313 | 750.0 | 4500 | 0.0770 | 0.9358 | 0.9585 | 0.9763 | 0.9248 | 0.9922 | 0.9018 | 0.9698 |
| 0.0311 | 753.33 | 4520 | 0.0747 | 0.9370 | 0.9585 | 0.9768 | 0.9239 | 0.9931 | 0.9036 | 0.9704 |
| 0.0255 | 756.67 | 4540 | 0.0728 | 0.9378 | 0.9586 | 0.9771 | 0.9236 | 0.9936 | 0.9047 | 0.9708 |
| 0.031 | 760.0 | 4560 | 0.0722 | 0.9354 | 0.9583 | 0.9762 | 0.9247 | 0.9920 | 0.9013 | 0.9696 |
| 0.0022 | 763.33 | 4580 | 0.0692 | 0.9348 | 0.9587 | 0.9759 | 0.9261 | 0.9912 | 0.9004 | 0.9692 |
| 0.031 | 766.67 | 4600 | 0.0740 | 0.9363 | 0.9584 | 0.9765 | 0.9242 | 0.9926 | 0.9025 | 0.9700 |
| 0.0151 | 770.0 | 4620 | 0.0738 | 0.9362 | 0.9585 | 0.9765 | 0.9244 | 0.9925 | 0.9024 | 0.9700 |
| 0.015 | 773.33 | 4640 | 0.0719 | 0.9358 | 0.9585 | 0.9763 | 0.9249 | 0.9921 | 0.9018 | 0.9697 |
| 0.0153 | 776.67 | 4660 | 0.0767 | 0.9339 | 0.9579 | 0.9756 | 0.9245 | 0.9913 | 0.8990 | 0.9688 |
| 0.1215 | 780.0 | 4680 | 0.0732 | 0.9353 | 0.9583 | 0.9761 | 0.9246 | 0.9920 | 0.9011 | 0.9695 |
| 0.0022 | 783.33 | 4700 | 0.0724 | 0.9359 | 0.9583 | 0.9764 | 0.9243 | 0.9924 | 0.9019 | 0.9698 |
| 0.0022 | 786.67 | 4720 | 0.0698 | 0.9360 | 0.9587 | 0.9764 | 0.9253 | 0.9921 | 0.9021 | 0.9698 |
| 0.0022 | 790.0 | 4740 | 0.0736 | 0.9356 | 0.9583 | 0.9763 | 0.9243 | 0.9922 | 0.9015 | 0.9697 |
| 0.0322 | 793.33 | 4760 | 0.0697 | 0.9376 | 0.9583 | 0.9771 | 0.9230 | 0.9937 | 0.9044 | 0.9707 |
| 0.0132 | 796.67 | 4780 | 0.0748 | 0.9355 | 0.9582 | 0.9762 | 0.9241 | 0.9922 | 0.9014 | 0.9696 |
| 0.0024 | 800.0 | 4800 | 0.0671 | 0.9360 | 0.9588 | 0.9764 | 0.9256 | 0.9920 | 0.9022 | 0.9698 |
| 0.0106 | 803.33 | 4820 | 0.0735 | 0.9361 | 0.9584 | 0.9765 | 0.9244 | 0.9925 | 0.9023 | 0.9699 |
| 0.015 | 806.67 | 4840 | 0.0673 | 0.9333 | 0.9589 | 0.9753 | 0.9280 | 0.9898 | 0.8982 | 0.9684 |
| 0.1232 | 810.0 | 4860 | 0.0811 | 0.9312 | 0.9579 | 0.9744 | 0.9267 | 0.9891 | 0.8950 | 0.9673 |
| 0.0262 | 813.33 | 4880 | 0.0716 | 0.9365 | 0.9588 | 0.9766 | 0.9252 | 0.9924 | 0.9028 | 0.9701 |
| 0.0254 | 816.67 | 4900 | 0.0743 | 0.9364 | 0.9585 | 0.9766 | 0.9242 | 0.9927 | 0.9027 | 0.9701 |
| 0.1216 | 820.0 | 4920 | 0.0689 | 0.9360 | 0.9590 | 0.9764 | 0.9261 | 0.9918 | 0.9021 | 0.9698 |
| 0.0105 | 823.33 | 4940 | 0.0798 | 0.9358 | 0.9583 | 0.9763 | 0.9244 | 0.9923 | 0.9018 | 0.9698 |
| 0.0251 | 826.67 | 4960 | 0.0713 | 0.9367 | 0.9588 | 0.9767 | 0.9249 | 0.9926 | 0.9032 | 0.9702 |
| 0.0021 | 830.0 | 4980 | 0.0701 | 0.9365 | 0.9590 | 0.9766 | 0.9257 | 0.9922 | 0.9029 | 0.9701 |
| 0.0021 | 833.33 | 5000 | 0.0723 | 0.9350 | 0.9587 | 0.9760 | 0.9261 | 0.9914 | 0.9007 | 0.9693 |
| 0.0148 | 836.67 | 5020 | 0.0720 | 0.9363 | 0.9588 | 0.9765 | 0.9255 | 0.9922 | 0.9025 | 0.9700 |
| 0.002 | 840.0 | 5040 | 0.0728 | 0.9361 | 0.9587 | 0.9764 | 0.9253 | 0.9922 | 0.9023 | 0.9699 |
| 0.0021 | 843.33 | 5060 | 0.0745 | 0.9360 | 0.9587 | 0.9764 | 0.9254 | 0.9921 | 0.9021 | 0.9698 |
| 0.0169 | 846.67 | 5080 | 0.0810 | 0.9365 | 0.9589 | 0.9766 | 0.9254 | 0.9923 | 0.9028 | 0.9701 |
| 0.0148 | 850.0 | 5100 | 0.0766 | 0.9353 | 0.9587 | 0.9761 | 0.9259 | 0.9916 | 0.9012 | 0.9695 |
| 0.0147 | 853.33 | 5120 | 0.0806 | 0.9368 | 0.9586 | 0.9767 | 0.9243 | 0.9928 | 0.9033 | 0.9703 |
| 0.0147 | 856.67 | 5140 | 0.0718 | 0.9366 | 0.9587 | 0.9767 | 0.9248 | 0.9926 | 0.9030 | 0.9702 |
| 0.0152 | 860.0 | 5160 | 0.0682 | 0.9353 | 0.9591 | 0.9761 | 0.9272 | 0.9911 | 0.9011 | 0.9694 |
| 0.025 | 863.33 | 5180 | 0.0730 | 0.9341 | 0.9586 | 0.9756 | 0.9264 | 0.9908 | 0.8993 | 0.9688 |
| 0.031 | 866.67 | 5200 | 0.0773 | 0.9369 | 0.9588 | 0.9768 | 0.9248 | 0.9927 | 0.9034 | 0.9703 |
| 0.0252 | 870.0 | 5220 | 0.0883 | 0.9353 | 0.9584 | 0.9761 | 0.9251 | 0.9918 | 0.9011 | 0.9695 |
| 0.01 | 873.33 | 5240 | 0.0708 | 0.9365 | 0.9588 | 0.9766 | 0.9252 | 0.9924 | 0.9030 | 0.9701 |
| 0.002 | 876.67 | 5260 | 0.0780 | 0.9372 | 0.9587 | 0.9769 | 0.9242 | 0.9931 | 0.9039 | 0.9705 |
| 0.031 | 880.0 | 5280 | 0.0674 | 0.9365 | 0.9590 | 0.9766 | 0.9257 | 0.9923 | 0.9029 | 0.9701 |
| 0.0022 | 883.33 | 5300 | 0.0671 | 0.9356 | 0.9589 | 0.9762 | 0.9261 | 0.9917 | 0.9016 | 0.9696 |
| 0.0306 | 886.67 | 5320 | 0.0726 | 0.9370 | 0.9586 | 0.9768 | 0.9241 | 0.9931 | 0.9037 | 0.9704 |
| 0.0096 | 890.0 | 5340 | 0.0698 | 0.9361 | 0.9588 | 0.9764 | 0.9254 | 0.9921 | 0.9023 | 0.9699 |
| 0.1208 | 893.33 | 5360 | 0.0746 | 0.9365 | 0.9585 | 0.9766 | 0.9243 | 0.9927 | 0.9029 | 0.9701 |
| 0.0177 | 896.67 | 5380 | 0.0767 | 0.9365 | 0.9585 | 0.9766 | 0.9242 | 0.9927 | 0.9028 | 0.9701 |
| 0.002 | 900.0 | 5400 | 0.0677 | 0.9363 | 0.9589 | 0.9765 | 0.9257 | 0.9921 | 0.9026 | 0.9700 |
| 0.0147 | 903.33 | 5420 | 0.0753 | 0.9345 | 0.9586 | 0.9758 | 0.9260 | 0.9911 | 0.9000 | 0.9691 |
| 0.1214 | 906.67 | 5440 | 0.0796 | 0.9363 | 0.9585 | 0.9765 | 0.9244 | 0.9926 | 0.9026 | 0.9700 |
| 0.1197 | 910.0 | 5460 | 0.0756 | 0.9374 | 0.9588 | 0.9770 | 0.9244 | 0.9931 | 0.9042 | 0.9706 |
| 0.0249 | 913.33 | 5480 | 0.0756 | 0.9372 | 0.9588 | 0.9769 | 0.9245 | 0.9930 | 0.9039 | 0.9705 |
| 0.0145 | 916.67 | 5500 | 0.0754 | 0.9355 | 0.9588 | 0.9762 | 0.9259 | 0.9916 | 0.9014 | 0.9695 |
| 0.1201 | 920.0 | 5520 | 0.0781 | 0.9352 | 0.9585 | 0.9761 | 0.9252 | 0.9917 | 0.9010 | 0.9694 |
| 0.0144 | 923.33 | 5540 | 0.0868 | 0.9366 | 0.9586 | 0.9767 | 0.9244 | 0.9927 | 0.9030 | 0.9702 |
| 0.0099 | 926.67 | 5560 | 0.0664 | 0.9371 | 0.9593 | 0.9768 | 0.9263 | 0.9923 | 0.9038 | 0.9704 |
| 0.0095 | 930.0 | 5580 | 0.0745 | 0.9357 | 0.9589 | 0.9763 | 0.9260 | 0.9917 | 0.9017 | 0.9697 |
| 0.0095 | 933.33 | 5600 | 0.0745 | 0.9366 | 0.9589 | 0.9766 | 0.9254 | 0.9924 | 0.9030 | 0.9701 |
| 0.1289 | 936.67 | 5620 | 0.0651 | 0.9375 | 0.9591 | 0.9770 | 0.9253 | 0.9929 | 0.9044 | 0.9706 |
| 0.0099 | 940.0 | 5640 | 0.0755 | 0.9357 | 0.9585 | 0.9763 | 0.9250 | 0.9920 | 0.9017 | 0.9697 |
| 0.01 | 943.33 | 5660 | 0.0678 | 0.9370 | 0.9591 | 0.9768 | 0.9258 | 0.9924 | 0.9036 | 0.9703 |
| 0.0247 | 946.67 | 5680 | 0.0787 | 0.9374 | 0.9585 | 0.9770 | 0.9236 | 0.9934 | 0.9042 | 0.9706 |
| 0.0248 | 950.0 | 5700 | 0.0740 | 0.9372 | 0.9585 | 0.9769 | 0.9238 | 0.9932 | 0.9039 | 0.9705 |
| 0.1199 | 953.33 | 5720 | 0.0720 | 0.9357 | 0.9586 | 0.9763 | 0.9252 | 0.9920 | 0.9017 | 0.9697 |
| 0.0148 | 956.67 | 5740 | 0.0715 | 0.9372 | 0.9589 | 0.9769 | 0.9250 | 0.9928 | 0.9039 | 0.9704 |
| 0.0254 | 960.0 | 5760 | 0.0821 | 0.9335 | 0.9579 | 0.9754 | 0.9248 | 0.9909 | 0.8983 | 0.9686 |
| 0.1213 | 963.33 | 5780 | 0.0709 | 0.9345 | 0.9586 | 0.9758 | 0.9263 | 0.9910 | 0.8999 | 0.9690 |
| 0.0152 | 966.67 | 5800 | 0.0716 | 0.9348 | 0.9587 | 0.9759 | 0.9261 | 0.9912 | 0.9004 | 0.9692 |
| 0.0248 | 970.0 | 5820 | 0.0929 | 0.9351 | 0.9581 | 0.9761 | 0.9240 | 0.9921 | 0.9008 | 0.9694 |
| 0.002 | 973.33 | 5840 | 0.0684 | 0.9359 | 0.9586 | 0.9764 | 0.9252 | 0.9921 | 0.9020 | 0.9698 |
| 0.1195 | 976.67 | 5860 | 0.0762 | 0.9331 | 0.9584 | 0.9752 | 0.9266 | 0.9902 | 0.8979 | 0.9683 |
| 0.002 | 980.0 | 5880 | 0.0798 | 0.9346 | 0.9582 | 0.9758 | 0.9248 | 0.9915 | 0.9000 | 0.9691 |
| 0.0302 | 983.33 | 5900 | 0.0840 | 0.9359 | 0.9584 | 0.9764 | 0.9245 | 0.9923 | 0.9020 | 0.9698 |
| 0.1191 | 986.67 | 5920 | 0.0724 | 0.9340 | 0.9587 | 0.9756 | 0.9270 | 0.9905 | 0.8992 | 0.9688 |
| 0.0142 | 990.0 | 5940 | 0.0791 | 0.9342 | 0.9583 | 0.9757 | 0.9254 | 0.9912 | 0.8995 | 0.9689 |
| 0.03 | 993.33 | 5960 | 0.0772 | 0.9358 | 0.9584 | 0.9763 | 0.9245 | 0.9923 | 0.9018 | 0.9698 |
| 0.0019 | 996.67 | 5980 | 0.0695 | 0.9358 | 0.9584 | 0.9764 | 0.9246 | 0.9922 | 0.9019 | 0.9698 |
| 0.0301 | 1000.0 | 6000 | 0.0783 | 0.9355 | 0.9585 | 0.9762 | 0.9252 | 0.9919 | 0.9014 | 0.9696 |
| 0.0246 | 1003.33 | 6020 | 0.0844 | 0.9356 | 0.9583 | 0.9763 | 0.9243 | 0.9922 | 0.9015 | 0.9697 |
| 0.0147 | 1006.67 | 6040 | 0.0726 | 0.9367 | 0.9588 | 0.9767 | 0.9249 | 0.9926 | 0.9032 | 0.9702 |
| 0.0021 | 1010.0 | 6060 | 0.0746 | 0.9347 | 0.9585 | 0.9759 | 0.9258 | 0.9912 | 0.9002 | 0.9691 |
| 0.1189 | 1013.33 | 6080 | 0.0745 | 0.9364 | 0.9588 | 0.9766 | 0.9254 | 0.9923 | 0.9028 | 0.9701 |
| 0.0301 | 1016.67 | 6100 | 0.0884 | 0.9364 | 0.9583 | 0.9766 | 0.9239 | 0.9928 | 0.9028 | 0.9701 |
| 0.0163 | 1020.0 | 6120 | 0.0731 | 0.9356 | 0.9586 | 0.9762 | 0.9254 | 0.9918 | 0.9015 | 0.9696 |
| 0.0141 | 1023.33 | 6140 | 0.0747 | 0.9346 | 0.9586 | 0.9758 | 0.9262 | 0.9911 | 0.9001 | 0.9691 |
| 0.0019 | 1026.67 | 6160 | 0.0662 | 0.9352 | 0.9594 | 0.9761 | 0.9279 | 0.9909 | 0.9011 | 0.9694 |
| 0.1197 | 1030.0 | 6180 | 0.0789 | 0.9352 | 0.9584 | 0.9761 | 0.9250 | 0.9918 | 0.9009 | 0.9694 |
| 0.1191 | 1033.33 | 6200 | 0.0681 | 0.9369 | 0.9591 | 0.9767 | 0.9258 | 0.9924 | 0.9034 | 0.9703 |
| 0.0093 | 1036.67 | 6220 | 0.0679 | 0.9360 | 0.9591 | 0.9764 | 0.9263 | 0.9918 | 0.9022 | 0.9698 |
| 0.1189 | 1040.0 | 6240 | 0.0753 | 0.9354 | 0.9586 | 0.9762 | 0.9254 | 0.9918 | 0.9013 | 0.9696 |
| 0.0092 | 1043.33 | 6260 | 0.0718 | 0.9354 | 0.9589 | 0.9762 | 0.9263 | 0.9915 | 0.9013 | 0.9695 |
| 0.1185 | 1046.67 | 6280 | 0.0757 | 0.9364 | 0.9588 | 0.9766 | 0.9253 | 0.9923 | 0.9027 | 0.9700 |
| 0.0315 | 1050.0 | 6300 | 0.0793 | 0.9358 | 0.9585 | 0.9763 | 0.9249 | 0.9921 | 0.9019 | 0.9698 |
| 0.0307 | 1053.33 | 6320 | 0.0770 | 0.9363 | 0.9584 | 0.9766 | 0.9241 | 0.9927 | 0.9026 | 0.9700 |
| 0.0247 | 1056.67 | 6340 | 0.0755 | 0.9348 | 0.9586 | 0.9759 | 0.9259 | 0.9913 | 0.9005 | 0.9692 |
| 0.0092 | 1060.0 | 6360 | 0.0788 | 0.9360 | 0.9586 | 0.9764 | 0.9251 | 0.9922 | 0.9022 | 0.9699 |
| 0.0093 | 1063.33 | 6380 | 0.0696 | 0.9358 | 0.9591 | 0.9763 | 0.9267 | 0.9915 | 0.9018 | 0.9697 |
| 0.0298 | 1066.67 | 6400 | 0.0707 | 0.9359 | 0.9586 | 0.9764 | 0.9251 | 0.9922 | 0.9021 | 0.9698 |
| 0.0018 | 1070.0 | 6420 | 0.0725 | 0.9343 | 0.9586 | 0.9757 | 0.9265 | 0.9908 | 0.8996 | 0.9689 |
| 0.1184 | 1073.33 | 6440 | 0.0669 | 0.9355 | 0.9594 | 0.9762 | 0.9277 | 0.9911 | 0.9015 | 0.9695 |
| 0.014 | 1076.67 | 6460 | 0.0857 | 0.9358 | 0.9584 | 0.9763 | 0.9244 | 0.9923 | 0.9019 | 0.9698 |
| 0.0138 | 1080.0 | 6480 | 0.0764 | 0.9347 | 0.9583 | 0.9759 | 0.9251 | 0.9915 | 0.9003 | 0.9692 |
| 0.0256 | 1083.33 | 6500 | 0.0807 | 0.9365 | 0.9582 | 0.9766 | 0.9234 | 0.9930 | 0.9028 | 0.9701 |
| 0.0136 | 1086.67 | 6520 | 0.0800 | 0.9359 | 0.9584 | 0.9764 | 0.9244 | 0.9924 | 0.9020 | 0.9698 |
| 0.0018 | 1090.0 | 6540 | 0.0701 | 0.9358 | 0.9589 | 0.9763 | 0.9262 | 0.9917 | 0.9019 | 0.9697 |
| 0.031 | 1093.33 | 6560 | 0.0699 | 0.9373 | 0.9588 | 0.9769 | 0.9245 | 0.9931 | 0.9041 | 0.9705 |
| 0.0243 | 1096.67 | 6580 | 0.0709 | 0.9368 | 0.9586 | 0.9768 | 0.9242 | 0.9929 | 0.9034 | 0.9703 |
| 0.0138 | 1100.0 | 6600 | 0.0735 | 0.9372 | 0.9587 | 0.9769 | 0.9244 | 0.9930 | 0.9039 | 0.9705 |
| 0.0096 | 1103.33 | 6620 | 0.0872 | 0.9358 | 0.9587 | 0.9763 | 0.9255 | 0.9919 | 0.9018 | 0.9697 |
| 0.0138 | 1106.67 | 6640 | 0.0762 | 0.9352 | 0.9585 | 0.9761 | 0.9255 | 0.9916 | 0.9009 | 0.9694 |
| 0.0308 | 1110.0 | 6660 | 0.0740 | 0.9373 | 0.9587 | 0.9769 | 0.9244 | 0.9931 | 0.9041 | 0.9705 |
| 0.0243 | 1113.33 | 6680 | 0.0817 | 0.9375 | 0.9585 | 0.9770 | 0.9235 | 0.9935 | 0.9043 | 0.9706 |
| 0.0296 | 1116.67 | 6700 | 0.0703 | 0.9370 | 0.9587 | 0.9768 | 0.9244 | 0.9929 | 0.9036 | 0.9704 |
| 0.0092 | 1120.0 | 6720 | 0.0744 | 0.9364 | 0.9590 | 0.9766 | 0.9259 | 0.9921 | 0.9028 | 0.9700 |
| 0.0091 | 1123.33 | 6740 | 0.0707 | 0.9351 | 0.9589 | 0.9760 | 0.9266 | 0.9912 | 0.9009 | 0.9694 |
| 0.0108 | 1126.67 | 6760 | 0.0740 | 0.9366 | 0.9589 | 0.9766 | 0.9253 | 0.9924 | 0.9031 | 0.9702 |
| 0.0018 | 1130.0 | 6780 | 0.0685 | 0.9345 | 0.9590 | 0.9758 | 0.9274 | 0.9906 | 0.9000 | 0.9690 |
| 0.0018 | 1133.33 | 6800 | 0.0701 | 0.9349 | 0.9592 | 0.9759 | 0.9276 | 0.9908 | 0.9006 | 0.9692 |
| 0.0017 | 1136.67 | 6820 | 0.0781 | 0.9360 | 0.9589 | 0.9764 | 0.9258 | 0.9920 | 0.9022 | 0.9698 |
| 0.0242 | 1140.0 | 6840 | 0.0773 | 0.9366 | 0.9586 | 0.9767 | 0.9244 | 0.9927 | 0.9031 | 0.9702 |
| 0.0263 | 1143.33 | 6860 | 0.0915 | 0.9375 | 0.9581 | 0.9770 | 0.9223 | 0.9939 | 0.9043 | 0.9707 |
| 0.0305 | 1146.67 | 6880 | 0.0737 | 0.9368 | 0.9588 | 0.9767 | 0.9249 | 0.9927 | 0.9033 | 0.9703 |
| 0.0017 | 1150.0 | 6900 | 0.0687 | 0.9369 | 0.9591 | 0.9768 | 0.9258 | 0.9925 | 0.9036 | 0.9703 |
| 0.0243 | 1153.33 | 6920 | 0.0791 | 0.9355 | 0.9586 | 0.9762 | 0.9253 | 0.9919 | 0.9014 | 0.9696 |
| 0.0253 | 1156.67 | 6940 | 0.0811 | 0.9359 | 0.9587 | 0.9764 | 0.9254 | 0.9921 | 0.9021 | 0.9698 |
| 0.1184 | 1160.0 | 6960 | 0.0724 | 0.9369 | 0.9591 | 0.9767 | 0.9258 | 0.9924 | 0.9035 | 0.9703 |
| 0.035 | 1163.33 | 6980 | 0.0781 | 0.9383 | 0.9590 | 0.9773 | 0.9244 | 0.9936 | 0.9055 | 0.9710 |
| 0.0296 | 1166.67 | 7000 | 0.0875 | 0.9366 | 0.9584 | 0.9767 | 0.9239 | 0.9929 | 0.9030 | 0.9702 |
| 0.0135 | 1170.0 | 7020 | 0.0847 | 0.9361 | 0.9586 | 0.9765 | 0.9250 | 0.9923 | 0.9023 | 0.9699 |
| 0.1182 | 1173.33 | 7040 | 0.0681 | 0.9375 | 0.9591 | 0.9770 | 0.9254 | 0.9929 | 0.9044 | 0.9706 |
| 0.0017 | 1176.67 | 7060 | 0.0674 | 0.9366 | 0.9594 | 0.9766 | 0.9269 | 0.9919 | 0.9032 | 0.9701 |
| 0.0089 | 1180.0 | 7080 | 0.0767 | 0.9364 | 0.9587 | 0.9766 | 0.9250 | 0.9924 | 0.9027 | 0.9701 |
| 0.0017 | 1183.33 | 7100 | 0.0720 | 0.9372 | 0.9590 | 0.9769 | 0.9252 | 0.9928 | 0.9039 | 0.9704 |
| 0.0294 | 1186.67 | 7120 | 0.0827 | 0.9371 | 0.9587 | 0.9769 | 0.9243 | 0.9930 | 0.9038 | 0.9704 |
| 0.0293 | 1190.0 | 7140 | 0.0723 | 0.9368 | 0.9590 | 0.9767 | 0.9257 | 0.9924 | 0.9033 | 0.9702 |
| 0.0018 | 1193.33 | 7160 | 0.0713 | 0.9361 | 0.9589 | 0.9764 | 0.9259 | 0.9920 | 0.9023 | 0.9699 |
| 0.0089 | 1196.67 | 7180 | 0.0852 | 0.9364 | 0.9586 | 0.9766 | 0.9248 | 0.9925 | 0.9028 | 0.9701 |
| 0.0017 | 1200.0 | 7200 | 0.0730 | 0.9371 | 0.9592 | 0.9768 | 0.9258 | 0.9925 | 0.9038 | 0.9704 |
| 0.0088 | 1203.33 | 7220 | 0.0662 | 0.9370 | 0.9596 | 0.9768 | 0.9271 | 0.9920 | 0.9037 | 0.9703 |
| 0.0088 | 1206.67 | 7240 | 0.0768 | 0.9358 | 0.9589 | 0.9763 | 0.9260 | 0.9918 | 0.9019 | 0.9697 |
| 0.0294 | 1210.0 | 7260 | 0.0731 | 0.9371 | 0.9589 | 0.9768 | 0.9250 | 0.9928 | 0.9038 | 0.9704 |
| 0.0017 | 1213.33 | 7280 | 0.0647 | 0.9372 | 0.9596 | 0.9769 | 0.9271 | 0.9922 | 0.9040 | 0.9704 |
| 0.009 | 1216.67 | 7300 | 0.0737 | 0.9371 | 0.9594 | 0.9768 | 0.9265 | 0.9923 | 0.9038 | 0.9703 |
| 0.0136 | 1220.0 | 7320 | 0.0722 | 0.9361 | 0.9590 | 0.9764 | 0.9263 | 0.9918 | 0.9023 | 0.9698 |
| 0.0019 | 1223.33 | 7340 | 0.0684 | 0.9373 | 0.9591 | 0.9769 | 0.9255 | 0.9927 | 0.9041 | 0.9705 |
| 0.0133 | 1226.67 | 7360 | 0.0911 | 0.9369 | 0.9589 | 0.9768 | 0.9252 | 0.9926 | 0.9035 | 0.9703 |
| 0.0018 | 1230.0 | 7380 | 0.0656 | 0.9369 | 0.9591 | 0.9767 | 0.9257 | 0.9924 | 0.9034 | 0.9703 |
| 0.0137 | 1233.33 | 7400 | 0.0677 | 0.9371 | 0.9597 | 0.9768 | 0.9274 | 0.9920 | 0.9038 | 0.9703 |
| 0.0309 | 1236.67 | 7420 | 0.0830 | 0.9370 | 0.9587 | 0.9768 | 0.9245 | 0.9929 | 0.9036 | 0.9704 |
| 0.025 | 1240.0 | 7440 | 0.0694 | 0.9375 | 0.9593 | 0.9770 | 0.9259 | 0.9927 | 0.9045 | 0.9706 |
| 0.0238 | 1243.33 | 7460 | 0.0720 | 0.9371 | 0.9593 | 0.9768 | 0.9261 | 0.9924 | 0.9038 | 0.9704 |
| 0.0087 | 1246.67 | 7480 | 0.0650 | 0.9374 | 0.9595 | 0.9769 | 0.9266 | 0.9924 | 0.9042 | 0.9705 |
| 0.0089 | 1250.0 | 7500 | 0.0750 | 0.9379 | 0.9589 | 0.9772 | 0.9245 | 0.9934 | 0.9049 | 0.9708 |
| 0.0131 | 1253.33 | 7520 | 0.0745 | 0.9365 | 0.9592 | 0.9766 | 0.9262 | 0.9921 | 0.9030 | 0.9701 |
| 0.0296 | 1256.67 | 7540 | 0.0782 | 0.9361 | 0.9587 | 0.9765 | 0.9253 | 0.9922 | 0.9024 | 0.9699 |
| 0.0017 | 1260.0 | 7560 | 0.0727 | 0.9369 | 0.9590 | 0.9768 | 0.9255 | 0.9925 | 0.9035 | 0.9703 |
| 0.0087 | 1263.33 | 7580 | 0.0738 | 0.9373 | 0.9592 | 0.9769 | 0.9256 | 0.9927 | 0.9040 | 0.9705 |
| 0.0016 | 1266.67 | 7600 | 0.0702 | 0.9372 | 0.9591 | 0.9769 | 0.9256 | 0.9927 | 0.9040 | 0.9705 |
| 0.1242 | 1270.0 | 7620 | 0.0651 | 0.9370 | 0.9597 | 0.9768 | 0.9276 | 0.9919 | 0.9038 | 0.9703 |
| 0.024 | 1273.33 | 7640 | 0.0681 | 0.9373 | 0.9598 | 0.9769 | 0.9274 | 0.9921 | 0.9041 | 0.9704 |
| 0.1179 | 1276.67 | 7660 | 0.0611 | 0.9372 | 0.9605 | 0.9768 | 0.9297 | 0.9913 | 0.9041 | 0.9703 |
| 0.0017 | 1280.0 | 7680 | 0.0737 | 0.9365 | 0.9588 | 0.9766 | 0.9253 | 0.9924 | 0.9030 | 0.9701 |
| 0.0239 | 1283.33 | 7700 | 0.0709 | 0.9367 | 0.9592 | 0.9767 | 0.9262 | 0.9922 | 0.9033 | 0.9702 |
| 0.0089 | 1286.67 | 7720 | 0.0734 | 0.9386 | 0.9585 | 0.9775 | 0.9227 | 0.9943 | 0.9060 | 0.9712 |
| 0.1184 | 1290.0 | 7740 | 0.0740 | 0.9373 | 0.9588 | 0.9769 | 0.9246 | 0.9930 | 0.9040 | 0.9705 |
| 0.0087 | 1293.33 | 7760 | 0.0721 | 0.9356 | 0.9592 | 0.9762 | 0.9271 | 0.9913 | 0.9016 | 0.9696 |
| 0.0017 | 1296.67 | 7780 | 0.0747 | 0.9366 | 0.9589 | 0.9767 | 0.9254 | 0.9924 | 0.9031 | 0.9702 |
| 0.0086 | 1300.0 | 7800 | 0.0738 | 0.9370 | 0.9591 | 0.9768 | 0.9257 | 0.9925 | 0.9037 | 0.9703 |
| 0.0237 | 1303.33 | 7820 | 0.0824 | 0.9367 | 0.9588 | 0.9767 | 0.9251 | 0.9925 | 0.9031 | 0.9702 |
| 0.0293 | 1306.67 | 7840 | 0.0725 | 0.9380 | 0.9593 | 0.9772 | 0.9254 | 0.9931 | 0.9052 | 0.9709 |
| 0.0019 | 1310.0 | 7860 | 0.0696 | 0.9385 | 0.9591 | 0.9774 | 0.9244 | 0.9937 | 0.9059 | 0.9712 |
| 0.1205 | 1313.33 | 7880 | 0.0633 | 0.9371 | 0.9596 | 0.9768 | 0.9272 | 0.9920 | 0.9038 | 0.9703 |
| 0.0239 | 1316.67 | 7900 | 0.0798 | 0.9379 | 0.9588 | 0.9772 | 0.9242 | 0.9935 | 0.9050 | 0.9708 |
| 0.0091 | 1320.0 | 7920 | 0.0695 | 0.9370 | 0.9595 | 0.9768 | 0.9268 | 0.9921 | 0.9037 | 0.9703 |
| 0.0087 | 1323.33 | 7940 | 0.0676 | 0.9374 | 0.9593 | 0.9769 | 0.9261 | 0.9926 | 0.9042 | 0.9705 |
| 0.0086 | 1326.67 | 7960 | 0.0775 | 0.9373 | 0.9589 | 0.9769 | 0.9250 | 0.9929 | 0.9040 | 0.9705 |
| 0.1182 | 1330.0 | 7980 | 0.0665 | 0.9371 | 0.9598 | 0.9768 | 0.9276 | 0.9919 | 0.9038 | 0.9703 |
| 0.1184 | 1333.33 | 8000 | 0.0726 | 0.9381 | 0.9591 | 0.9773 | 0.9247 | 0.9934 | 0.9053 | 0.9709 |
| 0.029 | 1336.67 | 8020 | 0.0818 | 0.9385 | 0.9588 | 0.9774 | 0.9236 | 0.9940 | 0.9059 | 0.9712 |
| 0.0237 | 1340.0 | 8040 | 0.0773 | 0.9362 | 0.9589 | 0.9765 | 0.9258 | 0.9920 | 0.9024 | 0.9699 |
| 0.0016 | 1343.33 | 8060 | 0.0662 | 0.9368 | 0.9598 | 0.9767 | 0.9280 | 0.9916 | 0.9034 | 0.9702 |
| 0.0016 | 1346.67 | 8080 | 0.0690 | 0.9373 | 0.9595 | 0.9769 | 0.9265 | 0.9924 | 0.9042 | 0.9705 |
| 0.0128 | 1350.0 | 8100 | 0.0797 | 0.9362 | 0.9589 | 0.9765 | 0.9257 | 0.9921 | 0.9025 | 0.9700 |
| 0.0239 | 1353.33 | 8120 | 0.0759 | 0.9374 | 0.9592 | 0.9769 | 0.9256 | 0.9927 | 0.9042 | 0.9705 |
| 0.0294 | 1356.67 | 8140 | 0.0722 | 0.9359 | 0.9593 | 0.9763 | 0.9271 | 0.9915 | 0.9020 | 0.9697 |
| 0.1299 | 1360.0 | 8160 | 0.0715 | 0.9371 | 0.9594 | 0.9768 | 0.9264 | 0.9924 | 0.9039 | 0.9704 |
| 0.0289 | 1363.33 | 8180 | 0.0711 | 0.9377 | 0.9591 | 0.9771 | 0.9252 | 0.9930 | 0.9046 | 0.9707 |
| 0.0108 | 1366.67 | 8200 | 0.0717 | 0.9376 | 0.9590 | 0.9770 | 0.9249 | 0.9931 | 0.9045 | 0.9707 |
| 0.0128 | 1370.0 | 8220 | 0.0730 | 0.9370 | 0.9592 | 0.9768 | 0.9260 | 0.9924 | 0.9037 | 0.9704 |
| 0.0128 | 1373.33 | 8240 | 0.0735 | 0.9363 | 0.9594 | 0.9765 | 0.9271 | 0.9917 | 0.9026 | 0.9699 |
| 0.0129 | 1376.67 | 8260 | 0.0699 | 0.9368 | 0.9596 | 0.9767 | 0.9273 | 0.9919 | 0.9035 | 0.9702 |
| 0.0087 | 1380.0 | 8280 | 0.0793 | 0.9367 | 0.9588 | 0.9767 | 0.9252 | 0.9925 | 0.9031 | 0.9702 |
| 0.0349 | 1383.33 | 8300 | 0.0810 | 0.9379 | 0.9588 | 0.9772 | 0.9241 | 0.9935 | 0.9049 | 0.9708 |
| 0.0127 | 1386.67 | 8320 | 0.0777 | 0.9373 | 0.9590 | 0.9769 | 0.9253 | 0.9928 | 0.9041 | 0.9705 |
| 0.0086 | 1390.0 | 8340 | 0.0737 | 0.9369 | 0.9590 | 0.9767 | 0.9256 | 0.9925 | 0.9035 | 0.9703 |
| 0.0109 | 1393.33 | 8360 | 0.0809 | 0.9377 | 0.9588 | 0.9771 | 0.9242 | 0.9934 | 0.9047 | 0.9708 |
| 0.0296 | 1396.67 | 8380 | 0.0745 | 0.9378 | 0.9585 | 0.9772 | 0.9232 | 0.9937 | 0.9048 | 0.9708 |
| 0.0016 | 1400.0 | 8400 | 0.0763 | 0.9366 | 0.9590 | 0.9766 | 0.9256 | 0.9923 | 0.9031 | 0.9701 |
| 0.0016 | 1403.33 | 8420 | 0.0722 | 0.9377 | 0.9594 | 0.9771 | 0.9260 | 0.9928 | 0.9047 | 0.9707 |
| 0.0128 | 1406.67 | 8440 | 0.0791 | 0.9373 | 0.9586 | 0.9770 | 0.9240 | 0.9932 | 0.9041 | 0.9706 |
| 0.0016 | 1410.0 | 8460 | 0.0685 | 0.9377 | 0.9593 | 0.9771 | 0.9258 | 0.9929 | 0.9048 | 0.9707 |
| 0.0086 | 1413.33 | 8480 | 0.0746 | 0.9387 | 0.9593 | 0.9775 | 0.9249 | 0.9937 | 0.9062 | 0.9712 |
| 0.0127 | 1416.67 | 8500 | 0.0859 | 0.9374 | 0.9589 | 0.9770 | 0.9248 | 0.9930 | 0.9043 | 0.9706 |
| 0.0238 | 1420.0 | 8520 | 0.0729 | 0.9377 | 0.9593 | 0.9771 | 0.9257 | 0.9928 | 0.9046 | 0.9707 |
| 0.0289 | 1423.33 | 8540 | 0.0752 | 0.9384 | 0.9591 | 0.9774 | 0.9247 | 0.9936 | 0.9057 | 0.9711 |
| 0.0236 | 1426.67 | 8560 | 0.0722 | 0.9371 | 0.9594 | 0.9768 | 0.9264 | 0.9923 | 0.9039 | 0.9704 |
| 0.0084 | 1430.0 | 8580 | 0.0708 | 0.9368 | 0.9594 | 0.9767 | 0.9268 | 0.9920 | 0.9035 | 0.9702 |
| 0.0126 | 1433.33 | 8600 | 0.0822 | 0.9380 | 0.9589 | 0.9772 | 0.9244 | 0.9935 | 0.9051 | 0.9709 |
| 0.0016 | 1436.67 | 8620 | 0.0661 | 0.9374 | 0.9599 | 0.9769 | 0.9278 | 0.9920 | 0.9044 | 0.9705 |
| 0.0126 | 1440.0 | 8640 | 0.0723 | 0.9373 | 0.9593 | 0.9769 | 0.9259 | 0.9926 | 0.9042 | 0.9705 |
| 0.1178 | 1443.33 | 8660 | 0.0646 | 0.9393 | 0.9600 | 0.9777 | 0.9266 | 0.9934 | 0.9071 | 0.9715 |
| 0.0287 | 1446.67 | 8680 | 0.0767 | 0.9383 | 0.9590 | 0.9773 | 0.9244 | 0.9936 | 0.9056 | 0.9710 |
| 0.0136 | 1450.0 | 8700 | 0.0721 | 0.9381 | 0.9591 | 0.9773 | 0.9248 | 0.9934 | 0.9053 | 0.9709 |
| 0.0234 | 1453.33 | 8720 | 0.0805 | 0.9377 | 0.9588 | 0.9771 | 0.9242 | 0.9934 | 0.9047 | 0.9707 |
| 0.0289 | 1456.67 | 8740 | 0.0667 | 0.9381 | 0.9594 | 0.9772 | 0.9258 | 0.9931 | 0.9054 | 0.9709 |
| 0.117 | 1460.0 | 8760 | 0.0719 | 0.9376 | 0.9594 | 0.9770 | 0.9260 | 0.9927 | 0.9045 | 0.9706 |
| 0.0286 | 1463.33 | 8780 | 0.0790 | 0.9385 | 0.9588 | 0.9774 | 0.9237 | 0.9940 | 0.9059 | 0.9712 |
| 0.0247 | 1466.67 | 8800 | 0.0729 | 0.9377 | 0.9593 | 0.9771 | 0.9257 | 0.9929 | 0.9047 | 0.9707 |
| 0.1171 | 1470.0 | 8820 | 0.0678 | 0.9376 | 0.9596 | 0.9770 | 0.9267 | 0.9925 | 0.9046 | 0.9706 |
| 0.0287 | 1473.33 | 8840 | 0.0693 | 0.9367 | 0.9593 | 0.9766 | 0.9266 | 0.9920 | 0.9032 | 0.9701 |
| 0.0238 | 1476.67 | 8860 | 0.0686 | 0.9368 | 0.9597 | 0.9767 | 0.9277 | 0.9917 | 0.9034 | 0.9702 |
| 0.0084 | 1480.0 | 8880 | 0.0754 | 0.9374 | 0.9592 | 0.9769 | 0.9256 | 0.9927 | 0.9042 | 0.9705 |
| 0.0015 | 1483.33 | 8900 | 0.0641 | 0.9372 | 0.9604 | 0.9768 | 0.9293 | 0.9914 | 0.9040 | 0.9703 |
| 0.0135 | 1486.67 | 8920 | 0.0732 | 0.9370 | 0.9594 | 0.9768 | 0.9267 | 0.9922 | 0.9037 | 0.9703 |
| 0.0234 | 1490.0 | 8940 | 0.0735 | 0.9378 | 0.9594 | 0.9771 | 0.9261 | 0.9928 | 0.9049 | 0.9708 |
| 0.0234 | 1493.33 | 8960 | 0.0764 | 0.9380 | 0.9596 | 0.9772 | 0.9265 | 0.9927 | 0.9051 | 0.9708 |
| 0.0016 | 1496.67 | 8980 | 0.0684 | 0.9376 | 0.9597 | 0.9770 | 0.9271 | 0.9923 | 0.9045 | 0.9706 |
| 0.1167 | 1500.0 | 9000 | 0.0688 | 0.9379 | 0.9596 | 0.9771 | 0.9266 | 0.9927 | 0.9050 | 0.9708 |
| 0.0232 | 1503.33 | 9020 | 0.0777 | 0.9381 | 0.9593 | 0.9772 | 0.9254 | 0.9932 | 0.9053 | 0.9709 |
| 0.1176 | 1506.67 | 9040 | 0.0665 | 0.9374 | 0.9603 | 0.9769 | 0.9289 | 0.9917 | 0.9044 | 0.9704 |
| 0.0286 | 1510.0 | 9060 | 0.0777 | 0.9374 | 0.9591 | 0.9770 | 0.9254 | 0.9928 | 0.9043 | 0.9706 |
| 0.0128 | 1513.33 | 9080 | 0.0824 | 0.9376 | 0.9588 | 0.9771 | 0.9243 | 0.9933 | 0.9045 | 0.9707 |
| 0.0016 | 1516.67 | 9100 | 0.0700 | 0.9370 | 0.9598 | 0.9768 | 0.9278 | 0.9918 | 0.9037 | 0.9703 |
| 0.0232 | 1520.0 | 9120 | 0.0791 | 0.9377 | 0.9593 | 0.9771 | 0.9257 | 0.9928 | 0.9047 | 0.9707 |
| 0.1192 | 1523.33 | 9140 | 0.0650 | 0.9386 | 0.9604 | 0.9774 | 0.9282 | 0.9925 | 0.9062 | 0.9711 |
| 0.0016 | 1526.67 | 9160 | 0.0683 | 0.9394 | 0.9596 | 0.9777 | 0.9255 | 0.9938 | 0.9072 | 0.9716 |
| 0.0251 | 1530.0 | 9180 | 0.0619 | 0.9376 | 0.9599 | 0.9770 | 0.9277 | 0.9922 | 0.9047 | 0.9706 |
| 0.0017 | 1533.33 | 9200 | 0.0529 | 0.9455 | 0.9689 | 0.9798 | 0.9483 | 0.9895 | 0.9170 | 0.9740 |
| 0.0312 | 1536.67 | 9220 | 0.0746 | 0.9351 | 0.9583 | 0.9761 | 0.9248 | 0.9918 | 0.9008 | 0.9694 |
| 0.029 | 1540.0 | 9240 | 0.0729 | 0.9374 | 0.9591 | 0.9770 | 0.9255 | 0.9928 | 0.9042 | 0.9705 |
| 0.0238 | 1543.33 | 9260 | 0.0821 | 0.9372 | 0.9587 | 0.9769 | 0.9244 | 0.9930 | 0.9039 | 0.9705 |
| 0.0233 | 1546.67 | 9280 | 0.0675 | 0.9372 | 0.9595 | 0.9768 | 0.9268 | 0.9922 | 0.9040 | 0.9704 |
| 0.0016 | 1550.0 | 9300 | 0.0679 | 0.9379 | 0.9594 | 0.9772 | 0.9258 | 0.9930 | 0.9050 | 0.9708 |
| 0.0082 | 1553.33 | 9320 | 0.0742 | 0.9375 | 0.9592 | 0.9770 | 0.9255 | 0.9928 | 0.9044 | 0.9706 |
| 0.0116 | 1556.67 | 9340 | 0.0679 | 0.9371 | 0.9597 | 0.9768 | 0.9273 | 0.9921 | 0.9039 | 0.9704 |
| 0.0163 | 1560.0 | 9360 | 0.0653 | 0.9381 | 0.9597 | 0.9772 | 0.9266 | 0.9928 | 0.9053 | 0.9709 |
| 0.0015 | 1563.33 | 9380 | 0.0666 | 0.9378 | 0.9594 | 0.9771 | 0.9259 | 0.9929 | 0.9049 | 0.9708 |
| 0.1187 | 1566.67 | 9400 | 0.0668 | 0.9381 | 0.9597 | 0.9772 | 0.9266 | 0.9928 | 0.9053 | 0.9709 |
| 0.0126 | 1570.0 | 9420 | 0.0750 | 0.9376 | 0.9593 | 0.9770 | 0.9258 | 0.9928 | 0.9045 | 0.9706 |
| 0.0017 | 1573.33 | 9440 | 0.0698 | 0.9378 | 0.9598 | 0.9771 | 0.9271 | 0.9925 | 0.9049 | 0.9707 |
| 0.0082 | 1576.67 | 9460 | 0.0862 | 0.9377 | 0.9588 | 0.9771 | 0.9241 | 0.9934 | 0.9047 | 0.9708 |
| 0.0285 | 1580.0 | 9480 | 0.0760 | 0.9383 | 0.9592 | 0.9773 | 0.9250 | 0.9934 | 0.9056 | 0.9710 |
| 0.0234 | 1583.33 | 9500 | 0.0781 | 0.9379 | 0.9591 | 0.9772 | 0.9250 | 0.9932 | 0.9050 | 0.9708 |
| 0.0082 | 1586.67 | 9520 | 0.0732 | 0.9387 | 0.9593 | 0.9775 | 0.9248 | 0.9937 | 0.9062 | 0.9713 |
| 0.0234 | 1590.0 | 9540 | 0.0695 | 0.9378 | 0.9598 | 0.9771 | 0.9273 | 0.9924 | 0.9049 | 0.9707 |
| 0.0232 | 1593.33 | 9560 | 0.0712 | 0.9374 | 0.9598 | 0.9769 | 0.9275 | 0.9921 | 0.9042 | 0.9705 |
| 0.1178 | 1596.67 | 9580 | 0.0705 | 0.9378 | 0.9594 | 0.9771 | 0.9258 | 0.9929 | 0.9049 | 0.9707 |
| 0.0231 | 1600.0 | 9600 | 0.0712 | 0.9377 | 0.9597 | 0.9770 | 0.9269 | 0.9925 | 0.9047 | 0.9707 |
| 0.0125 | 1603.33 | 9620 | 0.0696 | 0.9374 | 0.9599 | 0.9769 | 0.9276 | 0.9921 | 0.9043 | 0.9705 |
| 0.0283 | 1606.67 | 9640 | 0.0731 | 0.9382 | 0.9594 | 0.9773 | 0.9257 | 0.9931 | 0.9054 | 0.9709 |
| 0.0083 | 1610.0 | 9660 | 0.0686 | 0.9379 | 0.9599 | 0.9771 | 0.9274 | 0.9924 | 0.9050 | 0.9707 |
| 0.0083 | 1613.33 | 9680 | 0.0660 | 0.9377 | 0.9601 | 0.9770 | 0.9282 | 0.9920 | 0.9048 | 0.9706 |
| 0.0081 | 1616.67 | 9700 | 0.0737 | 0.9383 | 0.9593 | 0.9773 | 0.9253 | 0.9933 | 0.9056 | 0.9710 |
| 0.0123 | 1620.0 | 9720 | 0.0627 | 0.9382 | 0.9606 | 0.9772 | 0.9293 | 0.9919 | 0.9055 | 0.9708 |
| 0.0285 | 1623.33 | 9740 | 0.0708 | 0.9382 | 0.9590 | 0.9773 | 0.9244 | 0.9936 | 0.9054 | 0.9710 |
| 0.0182 | 1626.67 | 9760 | 0.0700 | 0.9383 | 0.9598 | 0.9773 | 0.9267 | 0.9928 | 0.9057 | 0.9710 |
| 0.0104 | 1630.0 | 9780 | 0.0782 | 0.9375 | 0.9592 | 0.9770 | 0.9254 | 0.9929 | 0.9045 | 0.9706 |
| 0.1166 | 1633.33 | 9800 | 0.0622 | 0.9389 | 0.9603 | 0.9775 | 0.9278 | 0.9928 | 0.9065 | 0.9712 |
| 0.1165 | 1636.67 | 9820 | 0.0646 | 0.9372 | 0.9602 | 0.9768 | 0.9289 | 0.9915 | 0.9040 | 0.9703 |
| 0.0284 | 1640.0 | 9840 | 0.0744 | 0.9377 | 0.9594 | 0.9771 | 0.9259 | 0.9928 | 0.9047 | 0.9707 |
| 0.002 | 1643.33 | 9860 | 0.0681 | 0.9366 | 0.9599 | 0.9766 | 0.9283 | 0.9914 | 0.9032 | 0.9701 |
| 0.0229 | 1646.67 | 9880 | 0.0705 | 0.9381 | 0.9597 | 0.9772 | 0.9266 | 0.9928 | 0.9053 | 0.9709 |
| 0.0081 | 1650.0 | 9900 | 0.0729 | 0.9383 | 0.9595 | 0.9773 | 0.9259 | 0.9931 | 0.9056 | 0.9710 |
| 0.1155 | 1653.33 | 9920 | 0.0614 | 0.9386 | 0.9609 | 0.9774 | 0.9297 | 0.9920 | 0.9062 | 0.9711 |
| 0.012 | 1656.67 | 9940 | 0.0686 | 0.9376 | 0.9599 | 0.9770 | 0.9277 | 0.9922 | 0.9047 | 0.9706 |
| 0.0015 | 1660.0 | 9960 | 0.0717 | 0.9377 | 0.9596 | 0.9771 | 0.9268 | 0.9925 | 0.9047 | 0.9707 |
| 0.1171 | 1663.33 | 9980 | 0.0645 | 0.9386 | 0.9604 | 0.9774 | 0.9284 | 0.9924 | 0.9061 | 0.9711 |
| 0.0085 | 1666.67 | 10000 | 0.0788 | 0.9348 | 0.9590 | 0.9759 | 0.9272 | 0.9909 | 0.9004 | 0.9692 |
| 0.03 | 1670.0 | 10020 | 0.0763 | 0.9383 | 0.9594 | 0.9773 | 0.9256 | 0.9932 | 0.9056 | 0.9710 |
| 0.0121 | 1673.33 | 10040 | 0.0725 | 0.9387 | 0.9596 | 0.9775 | 0.9259 | 0.9934 | 0.9063 | 0.9712 |
| 0.0121 | 1676.67 | 10060 | 0.0653 | 0.9383 | 0.9601 | 0.9773 | 0.9276 | 0.9925 | 0.9057 | 0.9709 |
| 0.008 | 1680.0 | 10080 | 0.0793 | 0.9378 | 0.9592 | 0.9771 | 0.9254 | 0.9930 | 0.9049 | 0.9708 |
| 0.0015 | 1683.33 | 10100 | 0.0653 | 0.9386 | 0.9602 | 0.9774 | 0.9278 | 0.9927 | 0.9062 | 0.9711 |
| 0.0015 | 1686.67 | 10120 | 0.0634 | 0.9380 | 0.9607 | 0.9771 | 0.9296 | 0.9917 | 0.9053 | 0.9707 |
| 0.0087 | 1690.0 | 10140 | 0.0640 | 0.9401 | 0.9610 | 0.9780 | 0.9291 | 0.9930 | 0.9084 | 0.9718 |
| 0.0017 | 1693.33 | 10160 | 0.0614 | 0.9385 | 0.9595 | 0.9774 | 0.9257 | 0.9933 | 0.9058 | 0.9711 |
| 0.023 | 1696.67 | 10180 | 0.0724 | 0.9362 | 0.9596 | 0.9764 | 0.9278 | 0.9914 | 0.9025 | 0.9699 |
| 0.1198 | 1700.0 | 10200 | 0.0716 | 0.9384 | 0.9595 | 0.9774 | 0.9258 | 0.9932 | 0.9058 | 0.9711 |
| 0.0015 | 1703.33 | 10220 | 0.0664 | 0.9384 | 0.9601 | 0.9773 | 0.9277 | 0.9926 | 0.9058 | 0.9710 |
| 0.008 | 1706.67 | 10240 | 0.0643 | 0.9376 | 0.9599 | 0.9770 | 0.9276 | 0.9922 | 0.9046 | 0.9706 |
| 0.0095 | 1710.0 | 10260 | 0.0660 | 0.9386 | 0.9602 | 0.9774 | 0.9278 | 0.9926 | 0.9061 | 0.9711 |
| 0.0123 | 1713.33 | 10280 | 0.0619 | 0.9388 | 0.9604 | 0.9775 | 0.9280 | 0.9927 | 0.9065 | 0.9712 |
| 0.0248 | 1716.67 | 10300 | 0.0704 | 0.9376 | 0.9594 | 0.9770 | 0.9262 | 0.9927 | 0.9046 | 0.9707 |
| 0.023 | 1720.0 | 10320 | 0.0697 | 0.9389 | 0.9598 | 0.9775 | 0.9263 | 0.9933 | 0.9065 | 0.9713 |
| 0.0285 | 1723.33 | 10340 | 0.0745 | 0.9389 | 0.9592 | 0.9776 | 0.9246 | 0.9938 | 0.9065 | 0.9713 |
| 0.0081 | 1726.67 | 10360 | 0.0760 | 0.9379 | 0.9595 | 0.9772 | 0.9261 | 0.9928 | 0.9050 | 0.9708 |
| 0.0228 | 1730.0 | 10380 | 0.0666 | 0.9385 | 0.9599 | 0.9774 | 0.9269 | 0.9929 | 0.9060 | 0.9711 |
| 0.0015 | 1733.33 | 10400 | 0.0693 | 0.9387 | 0.9596 | 0.9775 | 0.9260 | 0.9933 | 0.9062 | 0.9712 |
| 0.028 | 1736.67 | 10420 | 0.0742 | 0.9380 | 0.9597 | 0.9772 | 0.9266 | 0.9927 | 0.9052 | 0.9708 |
| 0.012 | 1740.0 | 10440 | 0.0681 | 0.9379 | 0.9599 | 0.9771 | 0.9275 | 0.9924 | 0.9050 | 0.9707 |
| 0.1159 | 1743.33 | 10460 | 0.0688 | 0.9390 | 0.9597 | 0.9776 | 0.9260 | 0.9934 | 0.9066 | 0.9713 |
| 0.0149 | 1746.67 | 10480 | 0.0674 | 0.9383 | 0.9599 | 0.9773 | 0.9271 | 0.9927 | 0.9057 | 0.9710 |
| 0.0084 | 1750.0 | 10500 | 0.0671 | 0.9385 | 0.9597 | 0.9774 | 0.9262 | 0.9931 | 0.9059 | 0.9711 |
| 0.1153 | 1753.33 | 10520 | 0.0691 | 0.9391 | 0.9595 | 0.9776 | 0.9252 | 0.9937 | 0.9068 | 0.9714 |
| 0.023 | 1756.67 | 10540 | 0.0780 | 0.9381 | 0.9595 | 0.9772 | 0.9261 | 0.9929 | 0.9053 | 0.9709 |
| 0.0285 | 1760.0 | 10560 | 0.0738 | 0.9380 | 0.9595 | 0.9772 | 0.9262 | 0.9929 | 0.9052 | 0.9709 |
| 0.0015 | 1763.33 | 10580 | 0.0688 | 0.9383 | 0.9595 | 0.9773 | 0.9258 | 0.9932 | 0.9056 | 0.9710 |
| 0.0016 | 1766.67 | 10600 | 0.0679 | 0.9384 | 0.9599 | 0.9773 | 0.9269 | 0.9928 | 0.9058 | 0.9710 |
| 0.1155 | 1770.0 | 10620 | 0.0673 | 0.9390 | 0.9597 | 0.9776 | 0.9260 | 0.9934 | 0.9066 | 0.9713 |
| 0.1172 | 1773.33 | 10640 | 0.0621 | 0.9396 | 0.9601 | 0.9778 | 0.9267 | 0.9935 | 0.9075 | 0.9716 |
| 0.0082 | 1776.67 | 10660 | 0.0661 | 0.9394 | 0.9598 | 0.9778 | 0.9259 | 0.9937 | 0.9073 | 0.9716 |
| 0.0015 | 1780.0 | 10680 | 0.0613 | 0.9391 | 0.9605 | 0.9776 | 0.9283 | 0.9927 | 0.9068 | 0.9713 |
| 0.0122 | 1783.33 | 10700 | 0.0666 | 0.9390 | 0.9597 | 0.9776 | 0.9261 | 0.9934 | 0.9066 | 0.9713 |
| 0.0119 | 1786.67 | 10720 | 0.0672 | 0.9383 | 0.9601 | 0.9773 | 0.9277 | 0.9925 | 0.9057 | 0.9709 |
| 0.0227 | 1790.0 | 10740 | 0.0741 | 0.9390 | 0.9596 | 0.9776 | 0.9257 | 0.9936 | 0.9067 | 0.9714 |
| 0.1157 | 1793.33 | 10760 | 0.0653 | 0.9395 | 0.9595 | 0.9778 | 0.9250 | 0.9940 | 0.9073 | 0.9716 |
| 0.0015 | 1796.67 | 10780 | 0.0672 | 0.9390 | 0.9595 | 0.9776 | 0.9255 | 0.9936 | 0.9066 | 0.9713 |
| 0.0242 | 1800.0 | 10800 | 0.0734 | 0.9385 | 0.9592 | 0.9774 | 0.9249 | 0.9935 | 0.9059 | 0.9711 |
| 0.0118 | 1803.33 | 10820 | 0.0700 | 0.9381 | 0.9597 | 0.9772 | 0.9266 | 0.9928 | 0.9054 | 0.9709 |
| 0.0079 | 1806.67 | 10840 | 0.0705 | 0.9389 | 0.9595 | 0.9776 | 0.9256 | 0.9935 | 0.9065 | 0.9713 |
| 0.0227 | 1810.0 | 10860 | 0.0671 | 0.9389 | 0.9604 | 0.9775 | 0.9281 | 0.9927 | 0.9066 | 0.9712 |
| 0.1151 | 1813.33 | 10880 | 0.0654 | 0.9384 | 0.9603 | 0.9773 | 0.9280 | 0.9925 | 0.9059 | 0.9710 |
| 0.1152 | 1816.67 | 10900 | 0.0683 | 0.9386 | 0.9596 | 0.9774 | 0.9261 | 0.9932 | 0.9060 | 0.9711 |
| 0.0283 | 1820.0 | 10920 | 0.0671 | 0.9383 | 0.9593 | 0.9773 | 0.9252 | 0.9934 | 0.9056 | 0.9710 |
| 0.1172 | 1823.33 | 10940 | 0.0585 | 0.9397 | 0.9613 | 0.9778 | 0.9302 | 0.9924 | 0.9077 | 0.9716 |
| 0.0119 | 1826.67 | 10960 | 0.0693 | 0.9379 | 0.9599 | 0.9771 | 0.9274 | 0.9924 | 0.9051 | 0.9708 |
| 0.0226 | 1830.0 | 10980 | 0.0705 | 0.9386 | 0.9597 | 0.9774 | 0.9262 | 0.9932 | 0.9060 | 0.9711 |
| 0.0015 | 1833.33 | 11000 | 0.0668 | 0.9387 | 0.9602 | 0.9774 | 0.9277 | 0.9927 | 0.9062 | 0.9711 |
| 0.023 | 1836.67 | 11020 | 0.0733 | 0.9391 | 0.9595 | 0.9776 | 0.9252 | 0.9938 | 0.9068 | 0.9714 |
| 0.0227 | 1840.0 | 11040 | 0.0701 | 0.9385 | 0.9598 | 0.9774 | 0.9265 | 0.9930 | 0.9059 | 0.9711 |
| 0.0226 | 1843.33 | 11060 | 0.0761 | 0.9385 | 0.9595 | 0.9774 | 0.9257 | 0.9933 | 0.9059 | 0.9711 |
| 0.0243 | 1846.67 | 11080 | 0.0677 | 0.9391 | 0.9602 | 0.9776 | 0.9273 | 0.9931 | 0.9068 | 0.9713 |
| 0.023 | 1850.0 | 11100 | 0.0699 | 0.9395 | 0.9597 | 0.9778 | 0.9256 | 0.9938 | 0.9073 | 0.9716 |
| 0.0014 | 1853.33 | 11120 | 0.0697 | 0.9387 | 0.9596 | 0.9775 | 0.9260 | 0.9933 | 0.9062 | 0.9712 |
| 0.0118 | 1856.67 | 11140 | 0.0681 | 0.9389 | 0.9599 | 0.9775 | 0.9265 | 0.9932 | 0.9065 | 0.9713 |
| 0.0079 | 1860.0 | 11160 | 0.0677 | 0.9388 | 0.9598 | 0.9775 | 0.9264 | 0.9932 | 0.9063 | 0.9712 |
| 0.0278 | 1863.33 | 11180 | 0.0733 | 0.9395 | 0.9592 | 0.9778 | 0.9241 | 0.9943 | 0.9073 | 0.9716 |
| 0.0078 | 1866.67 | 11200 | 0.0645 | 0.9397 | 0.9606 | 0.9778 | 0.9281 | 0.9931 | 0.9077 | 0.9716 |
| 0.0285 | 1870.0 | 11220 | 0.0637 | 0.9386 | 0.9600 | 0.9774 | 0.9271 | 0.9929 | 0.9061 | 0.9711 |
| 0.1191 | 1873.33 | 11240 | 0.0606 | 0.9385 | 0.9609 | 0.9773 | 0.9299 | 0.9919 | 0.9060 | 0.9710 |
| 0.0227 | 1876.67 | 11260 | 0.0597 | 0.9399 | 0.9609 | 0.9779 | 0.9289 | 0.9930 | 0.9081 | 0.9717 |
| 0.0015 | 1880.0 | 11280 | 0.0677 | 0.9391 | 0.9600 | 0.9776 | 0.9268 | 0.9932 | 0.9068 | 0.9714 |
| 0.0119 | 1883.33 | 11300 | 0.0696 | 0.9381 | 0.9600 | 0.9772 | 0.9274 | 0.9925 | 0.9054 | 0.9709 |
| 0.0279 | 1886.67 | 11320 | 0.0710 | 0.9391 | 0.9596 | 0.9776 | 0.9257 | 0.9936 | 0.9068 | 0.9714 |
| 0.1149 | 1890.0 | 11340 | 0.0640 | 0.9396 | 0.9605 | 0.9778 | 0.9280 | 0.9931 | 0.9076 | 0.9716 |
| 0.0278 | 1893.33 | 11360 | 0.0654 | 0.9393 | 0.9597 | 0.9777 | 0.9257 | 0.9937 | 0.9071 | 0.9715 |
| 0.0014 | 1896.67 | 11380 | 0.0678 | 0.9394 | 0.9597 | 0.9777 | 0.9257 | 0.9937 | 0.9072 | 0.9715 |
| 0.0229 | 1900.0 | 11400 | 0.0724 | 0.9392 | 0.9596 | 0.9777 | 0.9254 | 0.9938 | 0.9070 | 0.9715 |
| 0.0018 | 1903.33 | 11420 | 0.0633 | 0.9384 | 0.9612 | 0.9773 | 0.9310 | 0.9915 | 0.9059 | 0.9709 |
| 0.0226 | 1906.67 | 11440 | 0.0741 | 0.9392 | 0.9596 | 0.9777 | 0.9254 | 0.9937 | 0.9069 | 0.9715 |
| 0.0226 | 1910.0 | 11460 | 0.0709 | 0.9389 | 0.9599 | 0.9775 | 0.9266 | 0.9932 | 0.9065 | 0.9713 |
| 0.0243 | 1913.33 | 11480 | 0.0671 | 0.9392 | 0.9597 | 0.9777 | 0.9259 | 0.9936 | 0.9070 | 0.9715 |
| 0.0229 | 1916.67 | 11500 | 0.0635 | 0.9401 | 0.9602 | 0.9780 | 0.9266 | 0.9938 | 0.9084 | 0.9719 |
| 0.0015 | 1920.0 | 11520 | 0.0695 | 0.9392 | 0.9598 | 0.9777 | 0.9261 | 0.9935 | 0.9070 | 0.9714 |
| 0.0118 | 1923.33 | 11540 | 0.0713 | 0.9392 | 0.9599 | 0.9777 | 0.9263 | 0.9934 | 0.9069 | 0.9714 |
| 0.0231 | 1926.67 | 11560 | 0.0879 | 0.9383 | 0.9592 | 0.9773 | 0.9251 | 0.9934 | 0.9055 | 0.9710 |
| 0.0233 | 1930.0 | 11580 | 0.0738 | 0.9390 | 0.9595 | 0.9776 | 0.9252 | 0.9937 | 0.9067 | 0.9714 |
| 0.0127 | 1933.33 | 11600 | 0.0703 | 0.9394 | 0.9597 | 0.9777 | 0.9257 | 0.9937 | 0.9072 | 0.9715 |
| 0.0078 | 1936.67 | 11620 | 0.0746 | 0.9392 | 0.9594 | 0.9777 | 0.9250 | 0.9939 | 0.9069 | 0.9715 |
| 0.0278 | 1940.0 | 11640 | 0.0693 | 0.9388 | 0.9598 | 0.9775 | 0.9264 | 0.9932 | 0.9064 | 0.9713 |
| 0.0014 | 1943.33 | 11660 | 0.0697 | 0.9384 | 0.9597 | 0.9773 | 0.9264 | 0.9930 | 0.9057 | 0.9710 |
| 0.0148 | 1946.67 | 11680 | 0.0669 | 0.9396 | 0.9600 | 0.9778 | 0.9265 | 0.9936 | 0.9075 | 0.9716 |
| 0.0117 | 1950.0 | 11700 | 0.0768 | 0.9393 | 0.9593 | 0.9777 | 0.9245 | 0.9941 | 0.9070 | 0.9715 |
| 0.1156 | 1953.33 | 11720 | 0.0589 | 0.9406 | 0.9613 | 0.9782 | 0.9295 | 0.9931 | 0.9092 | 0.9721 |
| 0.1149 | 1956.67 | 11740 | 0.0611 | 0.9400 | 0.9606 | 0.9780 | 0.9277 | 0.9934 | 0.9083 | 0.9718 |
| 0.0081 | 1960.0 | 11760 | 0.0660 | 0.9397 | 0.9599 | 0.9779 | 0.9261 | 0.9938 | 0.9077 | 0.9717 |
| 0.0115 | 1963.33 | 11780 | 0.0662 | 0.9392 | 0.9601 | 0.9777 | 0.9269 | 0.9933 | 0.9070 | 0.9714 |
| 0.0077 | 1966.67 | 11800 | 0.0673 | 0.9395 | 0.9600 | 0.9778 | 0.9265 | 0.9935 | 0.9074 | 0.9716 |
| 0.0278 | 1970.0 | 11820 | 0.0671 | 0.9398 | 0.9599 | 0.9779 | 0.9259 | 0.9939 | 0.9079 | 0.9718 |
| 0.0225 | 1973.33 | 11840 | 0.0701 | 0.9395 | 0.9598 | 0.9778 | 0.9259 | 0.9937 | 0.9074 | 0.9716 |
| 0.0014 | 1976.67 | 11860 | 0.0602 | 0.9397 | 0.9609 | 0.9778 | 0.9290 | 0.9928 | 0.9077 | 0.9716 |
| 0.0014 | 1980.0 | 11880 | 0.0663 | 0.9394 | 0.9601 | 0.9777 | 0.9268 | 0.9934 | 0.9073 | 0.9715 |
| 0.0077 | 1983.33 | 11900 | 0.0689 | 0.9396 | 0.9601 | 0.9778 | 0.9268 | 0.9935 | 0.9075 | 0.9716 |
| 0.0079 | 1986.67 | 11920 | 0.0676 | 0.9387 | 0.9604 | 0.9774 | 0.9282 | 0.9926 | 0.9063 | 0.9712 |
| 0.1148 | 1990.0 | 11940 | 0.0661 | 0.9388 | 0.9599 | 0.9775 | 0.9266 | 0.9931 | 0.9064 | 0.9712 |
| 0.008 | 1993.33 | 11960 | 0.0693 | 0.9395 | 0.9604 | 0.9778 | 0.9275 | 0.9932 | 0.9075 | 0.9716 |
| 0.0278 | 1996.67 | 11980 | 0.0679 | 0.9400 | 0.9596 | 0.9780 | 0.9248 | 0.9943 | 0.9081 | 0.9719 |
| 0.0014 | 2000.0 | 12000 | 0.0675 | 0.9396 | 0.9601 | 0.9778 | 0.9268 | 0.9935 | 0.9075 | 0.9716 |
| 0.0115 | 2003.33 | 12020 | 0.0661 | 0.9395 | 0.9601 | 0.9778 | 0.9266 | 0.9935 | 0.9074 | 0.9716 |
| 0.1161 | 2006.67 | 12040 | 0.0570 | 0.9400 | 0.9614 | 0.9779 | 0.9302 | 0.9926 | 0.9083 | 0.9717 |
| 0.0278 | 2010.0 | 12060 | 0.0693 | 0.9393 | 0.9598 | 0.9777 | 0.9258 | 0.9937 | 0.9072 | 0.9715 |
| 0.0276 | 2013.33 | 12080 | 0.0643 | 0.9397 | 0.9605 | 0.9778 | 0.9278 | 0.9932 | 0.9077 | 0.9716 |
| 0.0114 | 2016.67 | 12100 | 0.0807 | 0.9391 | 0.9593 | 0.9776 | 0.9248 | 0.9939 | 0.9067 | 0.9714 |
| 0.0225 | 2020.0 | 12120 | 0.0693 | 0.9396 | 0.9599 | 0.9778 | 0.9259 | 0.9938 | 0.9076 | 0.9717 |
| 0.0278 | 2023.33 | 12140 | 0.0653 | 0.9394 | 0.9599 | 0.9777 | 0.9262 | 0.9936 | 0.9072 | 0.9715 |
| 0.0225 | 2026.67 | 12160 | 0.0561 | 0.9416 | 0.9624 | 0.9785 | 0.9321 | 0.9928 | 0.9107 | 0.9725 |
| 0.1167 | 2030.0 | 12180 | 0.0654 | 0.9395 | 0.9604 | 0.9777 | 0.9275 | 0.9932 | 0.9074 | 0.9715 |
| 0.0279 | 2033.33 | 12200 | 0.0597 | 0.9399 | 0.9605 | 0.9779 | 0.9277 | 0.9934 | 0.9081 | 0.9718 |
| 0.0078 | 2036.67 | 12220 | 0.0611 | 0.9393 | 0.9607 | 0.9777 | 0.9285 | 0.9928 | 0.9072 | 0.9714 |
| 0.0015 | 2040.0 | 12240 | 0.0676 | 0.9393 | 0.9601 | 0.9777 | 0.9269 | 0.9933 | 0.9071 | 0.9715 |
| 0.0117 | 2043.33 | 12260 | 0.0794 | 0.9390 | 0.9597 | 0.9776 | 0.9259 | 0.9935 | 0.9067 | 0.9714 |
| 0.0117 | 2046.67 | 12280 | 0.0758 | 0.9387 | 0.9594 | 0.9775 | 0.9251 | 0.9936 | 0.9062 | 0.9712 |
| 0.0117 | 2050.0 | 12300 | 0.0714 | 0.9386 | 0.9600 | 0.9774 | 0.9271 | 0.9929 | 0.9061 | 0.9711 |
| 0.0016 | 2053.33 | 12320 | 0.0638 | 0.9394 | 0.9603 | 0.9777 | 0.9273 | 0.9932 | 0.9072 | 0.9715 |
| 0.0015 | 2056.67 | 12340 | 0.0629 | 0.9391 | 0.9611 | 0.9775 | 0.9301 | 0.9921 | 0.9069 | 0.9713 |
| 0.0077 | 2060.0 | 12360 | 0.0648 | 0.9391 | 0.9606 | 0.9776 | 0.9285 | 0.9927 | 0.9069 | 0.9713 |
| 0.0015 | 2063.33 | 12380 | 0.0622 | 0.9392 | 0.9608 | 0.9776 | 0.9291 | 0.9925 | 0.9070 | 0.9714 |
| 0.0323 | 2066.67 | 12400 | 0.0707 | 0.9393 | 0.9597 | 0.9777 | 0.9257 | 0.9937 | 0.9071 | 0.9715 |
| 0.0121 | 2070.0 | 12420 | 0.0686 | 0.9391 | 0.9600 | 0.9776 | 0.9267 | 0.9933 | 0.9068 | 0.9714 |
| 0.1145 | 2073.33 | 12440 | 0.0650 | 0.9407 | 0.9604 | 0.9783 | 0.9268 | 0.9941 | 0.9093 | 0.9722 |
| 0.0223 | 2076.67 | 12460 | 0.0629 | 0.9401 | 0.9607 | 0.9780 | 0.9280 | 0.9933 | 0.9083 | 0.9718 |
| 0.0016 | 2080.0 | 12480 | 0.0606 | 0.9402 | 0.9610 | 0.9780 | 0.9289 | 0.9931 | 0.9086 | 0.9719 |
| 0.0014 | 2083.33 | 12500 | 0.0580 | 0.9401 | 0.9616 | 0.9780 | 0.9306 | 0.9925 | 0.9085 | 0.9718 |
| 0.0014 | 2086.67 | 12520 | 0.0597 | 0.9404 | 0.9609 | 0.9781 | 0.9284 | 0.9934 | 0.9088 | 0.9720 |
| 0.1217 | 2090.0 | 12540 | 0.0772 | 0.9386 | 0.9595 | 0.9774 | 0.9258 | 0.9933 | 0.9061 | 0.9712 |
| 0.1156 | 2093.33 | 12560 | 0.0698 | 0.9395 | 0.9596 | 0.9778 | 0.9253 | 0.9939 | 0.9073 | 0.9716 |
| 0.0078 | 2096.67 | 12580 | 0.0635 | 0.9401 | 0.9602 | 0.9780 | 0.9266 | 0.9938 | 0.9083 | 0.9719 |
| 0.1144 | 2100.0 | 12600 | 0.0641 | 0.9395 | 0.9602 | 0.9778 | 0.9271 | 0.9933 | 0.9074 | 0.9716 |
| 0.0223 | 2103.33 | 12620 | 0.0709 | 0.9394 | 0.9597 | 0.9778 | 0.9256 | 0.9938 | 0.9073 | 0.9716 |
| 0.0275 | 2106.67 | 12640 | 0.0717 | 0.9391 | 0.9599 | 0.9776 | 0.9264 | 0.9934 | 0.9069 | 0.9714 |
| 0.0223 | 2110.0 | 12660 | 0.0666 | 0.9397 | 0.9600 | 0.9779 | 0.9263 | 0.9937 | 0.9077 | 0.9717 |
| 0.0076 | 2113.33 | 12680 | 0.0716 | 0.9395 | 0.9597 | 0.9778 | 0.9257 | 0.9938 | 0.9074 | 0.9716 |
| 0.0114 | 2116.67 | 12700 | 0.0655 | 0.9395 | 0.9604 | 0.9778 | 0.9275 | 0.9932 | 0.9074 | 0.9715 |
| 0.0014 | 2120.0 | 12720 | 0.0726 | 0.9392 | 0.9597 | 0.9777 | 0.9258 | 0.9936 | 0.9070 | 0.9715 |
| 0.0077 | 2123.33 | 12740 | 0.0668 | 0.9398 | 0.9598 | 0.9779 | 0.9257 | 0.9939 | 0.9078 | 0.9717 |
| 0.0275 | 2126.67 | 12760 | 0.0719 | 0.9396 | 0.9596 | 0.9779 | 0.9250 | 0.9941 | 0.9076 | 0.9717 |
| 0.0014 | 2130.0 | 12780 | 0.0640 | 0.9400 | 0.9607 | 0.9780 | 0.9280 | 0.9933 | 0.9083 | 0.9718 |
| 0.0114 | 2133.33 | 12800 | 0.0633 | 0.9387 | 0.9605 | 0.9774 | 0.9285 | 0.9925 | 0.9063 | 0.9711 |
| 0.0114 | 2136.67 | 12820 | 0.0605 | 0.9398 | 0.9611 | 0.9779 | 0.9295 | 0.9927 | 0.9080 | 0.9717 |
| 0.0076 | 2140.0 | 12840 | 0.0651 | 0.9402 | 0.9607 | 0.9780 | 0.9279 | 0.9934 | 0.9085 | 0.9719 |
| 0.0113 | 2143.33 | 12860 | 0.0747 | 0.9394 | 0.9598 | 0.9777 | 0.9261 | 0.9936 | 0.9072 | 0.9715 |
| 0.0223 | 2146.67 | 12880 | 0.0746 | 0.9396 | 0.9596 | 0.9778 | 0.9252 | 0.9940 | 0.9076 | 0.9717 |
| 0.0276 | 2150.0 | 12900 | 0.0659 | 0.9397 | 0.9602 | 0.9778 | 0.9268 | 0.9935 | 0.9077 | 0.9717 |
| 0.0283 | 2153.33 | 12920 | 0.0748 | 0.9390 | 0.9595 | 0.9776 | 0.9254 | 0.9936 | 0.9067 | 0.9714 |
| 0.0274 | 2156.67 | 12940 | 0.0631 | 0.9400 | 0.9605 | 0.9779 | 0.9275 | 0.9934 | 0.9081 | 0.9718 |
| 0.0241 | 2160.0 | 12960 | 0.0656 | 0.9398 | 0.9603 | 0.9779 | 0.9272 | 0.9934 | 0.9079 | 0.9717 |
| 0.1143 | 2163.33 | 12980 | 0.0640 | 0.9403 | 0.9600 | 0.9781 | 0.9260 | 0.9941 | 0.9086 | 0.9720 |
| 0.0015 | 2166.67 | 13000 | 0.0585 | 0.9410 | 0.9614 | 0.9783 | 0.9294 | 0.9934 | 0.9097 | 0.9722 |
| 0.1149 | 2170.0 | 13020 | 0.0633 | 0.9396 | 0.9605 | 0.9778 | 0.9279 | 0.9931 | 0.9075 | 0.9716 |
| 0.1142 | 2173.33 | 13040 | 0.0631 | 0.9400 | 0.9605 | 0.9780 | 0.9275 | 0.9935 | 0.9082 | 0.9718 |
| 0.0276 | 2176.67 | 13060 | 0.0680 | 0.9400 | 0.9599 | 0.9780 | 0.9259 | 0.9940 | 0.9081 | 0.9718 |
| 0.0222 | 2180.0 | 13080 | 0.0621 | 0.9404 | 0.9607 | 0.9781 | 0.9278 | 0.9936 | 0.9088 | 0.9720 |
| 0.0014 | 2183.33 | 13100 | 0.0575 | 0.9407 | 0.9614 | 0.9782 | 0.9297 | 0.9931 | 0.9093 | 0.9721 |
| 0.1141 | 2186.67 | 13120 | 0.0645 | 0.9403 | 0.9603 | 0.9781 | 0.9268 | 0.9939 | 0.9086 | 0.9720 |
| 0.0274 | 2190.0 | 13140 | 0.0670 | 0.9399 | 0.9601 | 0.9779 | 0.9263 | 0.9938 | 0.9079 | 0.9718 |
| 0.0277 | 2193.33 | 13160 | 0.0688 | 0.9397 | 0.9597 | 0.9779 | 0.9254 | 0.9940 | 0.9078 | 0.9717 |
| 0.0078 | 2196.67 | 13180 | 0.0734 | 0.9399 | 0.9599 | 0.9779 | 0.9258 | 0.9939 | 0.9079 | 0.9718 |
| 0.0014 | 2200.0 | 13200 | 0.0653 | 0.9403 | 0.9604 | 0.9781 | 0.9271 | 0.9937 | 0.9086 | 0.9720 |
| 0.0278 | 2203.33 | 13220 | 0.0694 | 0.9399 | 0.9598 | 0.9780 | 0.9254 | 0.9941 | 0.9080 | 0.9718 |
| 0.0226 | 2206.67 | 13240 | 0.0636 | 0.9401 | 0.9608 | 0.9780 | 0.9285 | 0.9932 | 0.9083 | 0.9718 |
| 0.0014 | 2210.0 | 13260 | 0.0639 | 0.9404 | 0.9608 | 0.9781 | 0.9281 | 0.9935 | 0.9088 | 0.9720 |
| 0.0166 | 2213.33 | 13280 | 0.0629 | 0.9405 | 0.9608 | 0.9781 | 0.9280 | 0.9935 | 0.9089 | 0.9720 |
| 0.0077 | 2216.67 | 13300 | 0.0617 | 0.9396 | 0.9608 | 0.9778 | 0.9288 | 0.9929 | 0.9077 | 0.9716 |
| 0.1147 | 2220.0 | 13320 | 0.0673 | 0.9401 | 0.9603 | 0.9780 | 0.9269 | 0.9937 | 0.9083 | 0.9719 |
| 0.1145 | 2223.33 | 13340 | 0.0640 | 0.9404 | 0.9604 | 0.9781 | 0.9268 | 0.9939 | 0.9088 | 0.9720 |
| 0.0227 | 2226.67 | 13360 | 0.0734 | 0.9396 | 0.9595 | 0.9778 | 0.9248 | 0.9942 | 0.9075 | 0.9717 |
| 0.0014 | 2230.0 | 13380 | 0.0669 | 0.9392 | 0.9604 | 0.9776 | 0.9280 | 0.9929 | 0.9070 | 0.9714 |
| 0.0113 | 2233.33 | 13400 | 0.0673 | 0.9398 | 0.9604 | 0.9779 | 0.9275 | 0.9934 | 0.9079 | 0.9717 |
| 0.0075 | 2236.67 | 13420 | 0.0766 | 0.9397 | 0.9596 | 0.9779 | 0.9249 | 0.9942 | 0.9077 | 0.9718 |
| 0.0115 | 2240.0 | 13440 | 0.0681 | 0.9397 | 0.9601 | 0.9779 | 0.9266 | 0.9936 | 0.9078 | 0.9717 |
| 0.0114 | 2243.33 | 13460 | 0.0717 | 0.9399 | 0.9597 | 0.9780 | 0.9252 | 0.9942 | 0.9080 | 0.9718 |
| 0.1143 | 2246.67 | 13480 | 0.0580 | 0.9409 | 0.9616 | 0.9783 | 0.9302 | 0.9931 | 0.9097 | 0.9722 |
| 0.0112 | 2250.0 | 13500 | 0.0697 | 0.9398 | 0.9603 | 0.9779 | 0.9270 | 0.9935 | 0.9079 | 0.9717 |
| 0.0119 | 2253.33 | 13520 | 0.0669 | 0.9398 | 0.9603 | 0.9779 | 0.9271 | 0.9935 | 0.9080 | 0.9717 |
| 0.0112 | 2256.67 | 13540 | 0.0645 | 0.9406 | 0.9604 | 0.9782 | 0.9268 | 0.9940 | 0.9091 | 0.9721 |
| 0.1143 | 2260.0 | 13560 | 0.0645 | 0.9401 | 0.9603 | 0.9780 | 0.9270 | 0.9937 | 0.9084 | 0.9719 |
| 0.0221 | 2263.33 | 13580 | 0.0676 | 0.9400 | 0.9601 | 0.9780 | 0.9263 | 0.9939 | 0.9082 | 0.9719 |
| 0.0221 | 2266.67 | 13600 | 0.0668 | 0.9400 | 0.9606 | 0.9780 | 0.9278 | 0.9934 | 0.9082 | 0.9718 |
| 0.0077 | 2270.0 | 13620 | 0.0661 | 0.9403 | 0.9601 | 0.9781 | 0.9261 | 0.9941 | 0.9086 | 0.9720 |
| 0.0076 | 2273.33 | 13640 | 0.0639 | 0.9398 | 0.9608 | 0.9778 | 0.9286 | 0.9930 | 0.9079 | 0.9717 |
| 0.1141 | 2276.67 | 13660 | 0.0722 | 0.9405 | 0.9600 | 0.9782 | 0.9257 | 0.9943 | 0.9089 | 0.9721 |
| 0.0075 | 2280.0 | 13680 | 0.0683 | 0.9397 | 0.9603 | 0.9779 | 0.9270 | 0.9935 | 0.9078 | 0.9717 |
| 0.0222 | 2283.33 | 13700 | 0.0792 | 0.9403 | 0.9596 | 0.9781 | 0.9247 | 0.9945 | 0.9086 | 0.9721 |
| 0.0014 | 2286.67 | 13720 | 0.0616 | 0.9401 | 0.9617 | 0.9779 | 0.9311 | 0.9923 | 0.9084 | 0.9718 |
| 0.0014 | 2290.0 | 13740 | 0.0671 | 0.9398 | 0.9602 | 0.9779 | 0.9268 | 0.9936 | 0.9079 | 0.9717 |
| 0.0222 | 2293.33 | 13760 | 0.0685 | 0.9397 | 0.9607 | 0.9778 | 0.9284 | 0.9930 | 0.9078 | 0.9716 |
| 0.0276 | 2296.67 | 13780 | 0.0699 | 0.9402 | 0.9599 | 0.9781 | 0.9255 | 0.9942 | 0.9084 | 0.9720 |
| 0.1138 | 2300.0 | 13800 | 0.0660 | 0.9402 | 0.9604 | 0.9781 | 0.9270 | 0.9938 | 0.9085 | 0.9719 |
| 0.0223 | 2303.33 | 13820 | 0.0596 | 0.9408 | 0.9611 | 0.9783 | 0.9288 | 0.9935 | 0.9095 | 0.9722 |
| 0.0015 | 2306.67 | 13840 | 0.0608 | 0.9400 | 0.9607 | 0.9779 | 0.9282 | 0.9932 | 0.9081 | 0.9718 |
| 0.0014 | 2310.0 | 13860 | 0.0647 | 0.9401 | 0.9601 | 0.9780 | 0.9262 | 0.9939 | 0.9083 | 0.9719 |
| 0.1143 | 2313.33 | 13880 | 0.0615 | 0.9408 | 0.9605 | 0.9783 | 0.9269 | 0.9941 | 0.9094 | 0.9722 |
| 0.0076 | 2316.67 | 13900 | 0.0699 | 0.9396 | 0.9601 | 0.9778 | 0.9266 | 0.9936 | 0.9075 | 0.9716 |
| 0.0291 | 2320.0 | 13920 | 0.0639 | 0.9405 | 0.9601 | 0.9782 | 0.9261 | 0.9942 | 0.9088 | 0.9721 |
| 0.0111 | 2323.33 | 13940 | 0.0747 | 0.9397 | 0.9596 | 0.9779 | 0.9250 | 0.9941 | 0.9077 | 0.9718 |
| 0.0273 | 2326.67 | 13960 | 0.0678 | 0.9398 | 0.9602 | 0.9779 | 0.9267 | 0.9936 | 0.9079 | 0.9717 |
| 0.114 | 2330.0 | 13980 | 0.0659 | 0.9398 | 0.9601 | 0.9779 | 0.9266 | 0.9937 | 0.9079 | 0.9717 |
| 0.0272 | 2333.33 | 14000 | 0.0748 | 0.9400 | 0.9596 | 0.9780 | 0.9249 | 0.9943 | 0.9081 | 0.9719 |
| 0.0155 | 2336.67 | 14020 | 0.0659 | 0.9400 | 0.9601 | 0.9780 | 0.9263 | 0.9939 | 0.9082 | 0.9719 |
| 0.0112 | 2340.0 | 14040 | 0.0622 | 0.9400 | 0.9606 | 0.9780 | 0.9278 | 0.9934 | 0.9083 | 0.9718 |
| 0.0014 | 2343.33 | 14060 | 0.0452 | 0.9472 | 0.9680 | 0.9806 | 0.9443 | 0.9917 | 0.9195 | 0.9750 |
| 0.1638 | 2346.67 | 14080 | 0.0961 | 0.9376 | 0.9593 | 0.9770 | 0.9258 | 0.9928 | 0.9046 | 0.9707 |
| 0.1194 | 2350.0 | 14100 | 0.0704 | 0.9390 | 0.9596 | 0.9776 | 0.9256 | 0.9936 | 0.9066 | 0.9713 |
| 0.0084 | 2353.33 | 14120 | 0.0613 | 0.9410 | 0.9612 | 0.9783 | 0.9289 | 0.9935 | 0.9097 | 0.9723 |
| 0.0343 | 2356.67 | 14140 | 0.0730 | 0.9403 | 0.9598 | 0.9781 | 0.9253 | 0.9944 | 0.9086 | 0.9720 |
| 0.0223 | 2360.0 | 14160 | 0.0728 | 0.9396 | 0.9599 | 0.9778 | 0.9261 | 0.9937 | 0.9075 | 0.9716 |
| 0.0075 | 2363.33 | 14180 | 0.0695 | 0.9401 | 0.9602 | 0.9780 | 0.9266 | 0.9938 | 0.9083 | 0.9719 |
| 0.0221 | 2366.67 | 14200 | 0.0750 | 0.9398 | 0.9598 | 0.9779 | 0.9256 | 0.9940 | 0.9078 | 0.9717 |
| 0.0015 | 2370.0 | 14220 | 0.0717 | 0.9381 | 0.9602 | 0.9772 | 0.9280 | 0.9923 | 0.9054 | 0.9708 |
| 0.0017 | 2373.33 | 14240 | 0.0590 | 0.9401 | 0.9614 | 0.9780 | 0.9301 | 0.9927 | 0.9085 | 0.9718 |
| 0.0242 | 2376.67 | 14260 | 0.0634 | 0.9382 | 0.9604 | 0.9772 | 0.9285 | 0.9922 | 0.9055 | 0.9709 |
| 0.0014 | 2380.0 | 14280 | 0.0694 | 0.9392 | 0.9604 | 0.9776 | 0.9279 | 0.9929 | 0.9069 | 0.9714 |
| 0.0111 | 2383.33 | 14300 | 0.0734 | 0.9396 | 0.9596 | 0.9778 | 0.9252 | 0.9940 | 0.9075 | 0.9717 |
| 0.1139 | 2386.67 | 14320 | 0.0603 | 0.9408 | 0.9607 | 0.9783 | 0.9276 | 0.9938 | 0.9093 | 0.9722 |
| 0.0076 | 2390.0 | 14340 | 0.0665 | 0.9400 | 0.9605 | 0.9779 | 0.9276 | 0.9934 | 0.9082 | 0.9718 |
| 0.022 | 2393.33 | 14360 | 0.0758 | 0.9400 | 0.9597 | 0.9780 | 0.9252 | 0.9942 | 0.9081 | 0.9719 |
| 0.0074 | 2396.67 | 14380 | 0.0609 | 0.9400 | 0.9615 | 0.9779 | 0.9305 | 0.9925 | 0.9082 | 0.9717 |
| 0.1145 | 2400.0 | 14400 | 0.0643 | 0.9403 | 0.9607 | 0.9781 | 0.9279 | 0.9935 | 0.9086 | 0.9719 |
| 0.0015 | 2403.33 | 14420 | 0.0619 | 0.9403 | 0.9609 | 0.9781 | 0.9285 | 0.9933 | 0.9086 | 0.9719 |
| 0.0076 | 2406.67 | 14440 | 0.0612 | 0.9406 | 0.9611 | 0.9782 | 0.9288 | 0.9933 | 0.9092 | 0.9721 |
| 0.1154 | 2410.0 | 14460 | 0.0678 | 0.9394 | 0.9603 | 0.9777 | 0.9275 | 0.9932 | 0.9073 | 0.9715 |
| 0.0272 | 2413.33 | 14480 | 0.0730 | 0.9399 | 0.9597 | 0.9780 | 0.9252 | 0.9942 | 0.9080 | 0.9718 |
| 0.0014 | 2416.67 | 14500 | 0.0697 | 0.9396 | 0.9601 | 0.9778 | 0.9267 | 0.9935 | 0.9075 | 0.9716 |
| 0.0075 | 2420.0 | 14520 | 0.0640 | 0.9403 | 0.9608 | 0.9781 | 0.9280 | 0.9935 | 0.9087 | 0.9720 |
| 0.0343 | 2423.33 | 14540 | 0.0717 | 0.9405 | 0.9601 | 0.9782 | 0.9260 | 0.9942 | 0.9088 | 0.9721 |
| 0.0115 | 2426.67 | 14560 | 0.0716 | 0.9406 | 0.9600 | 0.9782 | 0.9256 | 0.9944 | 0.9091 | 0.9722 |
| 0.0111 | 2430.0 | 14580 | 0.0639 | 0.9397 | 0.9605 | 0.9779 | 0.9278 | 0.9932 | 0.9078 | 0.9717 |
| 0.0113 | 2433.33 | 14600 | 0.0716 | 0.9396 | 0.9602 | 0.9778 | 0.9270 | 0.9934 | 0.9076 | 0.9716 |
| 0.0272 | 2436.67 | 14620 | 0.0701 | 0.9400 | 0.9599 | 0.9780 | 0.9257 | 0.9941 | 0.9082 | 0.9719 |
| 0.1135 | 2440.0 | 14640 | 0.0658 | 0.9402 | 0.9607 | 0.9780 | 0.9279 | 0.9934 | 0.9085 | 0.9719 |
| 0.0013 | 2443.33 | 14660 | 0.0586 | 0.9408 | 0.9617 | 0.9782 | 0.9304 | 0.9929 | 0.9095 | 0.9721 |
| 0.0271 | 2446.67 | 14680 | 0.0732 | 0.9401 | 0.9597 | 0.9780 | 0.9252 | 0.9943 | 0.9083 | 0.9719 |
| 0.0143 | 2450.0 | 14700 | 0.0649 | 0.9403 | 0.9606 | 0.9781 | 0.9277 | 0.9936 | 0.9087 | 0.9720 |
| 0.0152 | 2453.33 | 14720 | 0.0640 | 0.9405 | 0.9607 | 0.9782 | 0.9278 | 0.9936 | 0.9090 | 0.9721 |
| 0.0271 | 2456.67 | 14740 | 0.0656 | 0.9406 | 0.9602 | 0.9782 | 0.9263 | 0.9942 | 0.9091 | 0.9721 |
| 0.011 | 2460.0 | 14760 | 0.0649 | 0.9404 | 0.9605 | 0.9781 | 0.9272 | 0.9938 | 0.9088 | 0.9720 |
| 0.0074 | 2463.33 | 14780 | 0.0675 | 0.9399 | 0.9605 | 0.9779 | 0.9278 | 0.9933 | 0.9080 | 0.9717 |
| 0.0271 | 2466.67 | 14800 | 0.0662 | 0.9399 | 0.9603 | 0.9779 | 0.9270 | 0.9936 | 0.9081 | 0.9718 |
| 0.1322 | 2470.0 | 14820 | 0.0658 | 0.9404 | 0.9616 | 0.9781 | 0.9305 | 0.9927 | 0.9089 | 0.9719 |
| 0.011 | 2473.33 | 14840 | 0.0702 | 0.9401 | 0.9607 | 0.9780 | 0.9281 | 0.9933 | 0.9083 | 0.9718 |
| 0.0334 | 2476.67 | 14860 | 0.0728 | 0.9410 | 0.9600 | 0.9784 | 0.9252 | 0.9948 | 0.9097 | 0.9724 |
| 0.1256 | 2480.0 | 14880 | 0.0628 | 0.9408 | 0.9612 | 0.9783 | 0.9291 | 0.9934 | 0.9095 | 0.9722 |
| 0.0073 | 2483.33 | 14900 | 0.0709 | 0.9400 | 0.9600 | 0.9780 | 0.9259 | 0.9940 | 0.9081 | 0.9718 |
| 0.022 | 2486.67 | 14920 | 0.0668 | 0.9398 | 0.9606 | 0.9779 | 0.9281 | 0.9932 | 0.9079 | 0.9717 |
| 0.0219 | 2490.0 | 14940 | 0.0650 | 0.9402 | 0.9605 | 0.9780 | 0.9275 | 0.9936 | 0.9085 | 0.9719 |
| 0.011 | 2493.33 | 14960 | 0.0712 | 0.9396 | 0.9602 | 0.9778 | 0.9269 | 0.9935 | 0.9076 | 0.9716 |
| 0.0219 | 2496.67 | 14980 | 0.0673 | 0.9399 | 0.9604 | 0.9779 | 0.9275 | 0.9934 | 0.9080 | 0.9718 |
| 0.1136 | 2500.0 | 15000 | 0.0613 | 0.9407 | 0.9610 | 0.9782 | 0.9286 | 0.9935 | 0.9093 | 0.9721 |
| 0.1133 | 2503.33 | 15020 | 0.0623 | 0.9407 | 0.9608 | 0.9782 | 0.9280 | 0.9937 | 0.9093 | 0.9722 |
| 0.0331 | 2506.67 | 15040 | 0.0640 | 0.9428 | 0.9616 | 0.9790 | 0.9287 | 0.9945 | 0.9124 | 0.9732 |
| 0.0272 | 2510.0 | 15060 | 0.0645 | 0.9400 | 0.9604 | 0.9780 | 0.9272 | 0.9936 | 0.9082 | 0.9718 |
| 0.0013 | 2513.33 | 15080 | 0.0577 | 0.9414 | 0.9616 | 0.9785 | 0.9296 | 0.9935 | 0.9104 | 0.9725 |
| 0.0013 | 2516.67 | 15100 | 0.0636 | 0.9404 | 0.9610 | 0.9781 | 0.9287 | 0.9933 | 0.9088 | 0.9720 |
| 0.0014 | 2520.0 | 15120 | 0.0637 | 0.9402 | 0.9606 | 0.9780 | 0.9278 | 0.9935 | 0.9086 | 0.9719 |
| 0.0013 | 2523.33 | 15140 | 0.0589 | 0.9411 | 0.9615 | 0.9784 | 0.9298 | 0.9933 | 0.9099 | 0.9723 |
| 0.1133 | 2526.67 | 15160 | 0.0580 | 0.9414 | 0.9617 | 0.9785 | 0.9300 | 0.9934 | 0.9103 | 0.9724 |
| 0.0108 | 2530.0 | 15180 | 0.0660 | 0.9406 | 0.9603 | 0.9782 | 0.9267 | 0.9940 | 0.9090 | 0.9721 |
| 0.0219 | 2533.33 | 15200 | 0.0662 | 0.9405 | 0.9605 | 0.9782 | 0.9271 | 0.9938 | 0.9089 | 0.9721 |
| 0.1207 | 2536.67 | 15220 | 0.0589 | 0.9407 | 0.9614 | 0.9782 | 0.9297 | 0.9931 | 0.9094 | 0.9721 |
| 0.0073 | 2540.0 | 15240 | 0.0668 | 0.9396 | 0.9602 | 0.9778 | 0.9271 | 0.9934 | 0.9076 | 0.9716 |
| 0.0218 | 2543.33 | 15260 | 0.0716 | 0.9404 | 0.9598 | 0.9781 | 0.9250 | 0.9945 | 0.9087 | 0.9721 |
| 0.1133 | 2546.67 | 15280 | 0.0575 | 0.9412 | 0.9614 | 0.9784 | 0.9293 | 0.9935 | 0.9100 | 0.9724 |
| 0.0219 | 2550.0 | 15300 | 0.0656 | 0.9406 | 0.9605 | 0.9782 | 0.9271 | 0.9939 | 0.9091 | 0.9721 |
| 0.0108 | 2553.33 | 15320 | 0.0739 | 0.9398 | 0.9598 | 0.9779 | 0.9257 | 0.9940 | 0.9079 | 0.9718 |
| 0.0222 | 2556.67 | 15340 | 0.0651 | 0.9407 | 0.9604 | 0.9782 | 0.9266 | 0.9941 | 0.9092 | 0.9722 |
| 0.1138 | 2560.0 | 15360 | 0.0644 | 0.9408 | 0.9609 | 0.9782 | 0.9282 | 0.9936 | 0.9094 | 0.9722 |
| 0.0014 | 2563.33 | 15380 | 0.0644 | 0.9398 | 0.9608 | 0.9779 | 0.9287 | 0.9930 | 0.9080 | 0.9717 |
| 0.0072 | 2566.67 | 15400 | 0.0553 | 0.9418 | 0.9624 | 0.9786 | 0.9318 | 0.9930 | 0.9111 | 0.9726 |
| 0.0218 | 2570.0 | 15420 | 0.0659 | 0.9405 | 0.9604 | 0.9782 | 0.9268 | 0.9939 | 0.9089 | 0.9721 |
| 0.1145 | 2573.33 | 15440 | 0.0610 | 0.9406 | 0.9611 | 0.9782 | 0.9289 | 0.9933 | 0.9091 | 0.9721 |
| 0.0072 | 2576.67 | 15460 | 0.0684 | 0.9408 | 0.9606 | 0.9783 | 0.9272 | 0.9940 | 0.9093 | 0.9722 |
| 0.0222 | 2580.0 | 15480 | 0.0674 | 0.9406 | 0.9604 | 0.9782 | 0.9268 | 0.9940 | 0.9091 | 0.9721 |
| 0.0218 | 2583.33 | 15500 | 0.0738 | 0.9402 | 0.9599 | 0.9780 | 0.9256 | 0.9942 | 0.9084 | 0.9719 |
| 0.1132 | 2586.67 | 15520 | 0.0662 | 0.9405 | 0.9603 | 0.9782 | 0.9265 | 0.9941 | 0.9090 | 0.9721 |
| 0.0271 | 2590.0 | 15540 | 0.0697 | 0.9402 | 0.9600 | 0.9781 | 0.9258 | 0.9941 | 0.9084 | 0.9720 |
| 0.0108 | 2593.33 | 15560 | 0.0715 | 0.9404 | 0.9599 | 0.9781 | 0.9255 | 0.9943 | 0.9087 | 0.9721 |
| 0.0273 | 2596.67 | 15580 | 0.0647 | 0.9407 | 0.9604 | 0.9783 | 0.9268 | 0.9941 | 0.9093 | 0.9722 |
| 0.0014 | 2600.0 | 15600 | 0.0701 | 0.9398 | 0.9601 | 0.9779 | 0.9266 | 0.9937 | 0.9079 | 0.9718 |
| 0.0305 | 2603.33 | 15620 | 0.0648 | 0.9401 | 0.9603 | 0.9780 | 0.9269 | 0.9937 | 0.9084 | 0.9719 |
| 0.0222 | 2606.67 | 15640 | 0.0704 | 0.9402 | 0.9600 | 0.9781 | 0.9259 | 0.9941 | 0.9085 | 0.9720 |
| 0.1133 | 2610.0 | 15660 | 0.0637 | 0.9409 | 0.9606 | 0.9783 | 0.9271 | 0.9940 | 0.9095 | 0.9723 |
| 0.0218 | 2613.33 | 15680 | 0.0713 | 0.9405 | 0.9598 | 0.9782 | 0.9252 | 0.9945 | 0.9089 | 0.9721 |
| 0.0073 | 2616.67 | 15700 | 0.0665 | 0.9400 | 0.9608 | 0.9779 | 0.9284 | 0.9931 | 0.9082 | 0.9718 |
| 0.0273 | 2620.0 | 15720 | 0.0706 | 0.9402 | 0.9606 | 0.9780 | 0.9277 | 0.9935 | 0.9084 | 0.9719 |
| 0.0087 | 2623.33 | 15740 | 0.0651 | 0.9402 | 0.9608 | 0.9780 | 0.9284 | 0.9933 | 0.9086 | 0.9719 |
| 0.1133 | 2626.67 | 15760 | 0.0695 | 0.9404 | 0.9601 | 0.9781 | 0.9261 | 0.9942 | 0.9088 | 0.9721 |
| 0.0218 | 2630.0 | 15780 | 0.0658 | 0.9406 | 0.9610 | 0.9782 | 0.9285 | 0.9935 | 0.9092 | 0.9721 |
| 0.1131 | 2633.33 | 15800 | 0.0618 | 0.9408 | 0.9612 | 0.9783 | 0.9291 | 0.9934 | 0.9095 | 0.9722 |
| 0.0219 | 2636.67 | 15820 | 0.0665 | 0.9405 | 0.9608 | 0.9781 | 0.9280 | 0.9935 | 0.9089 | 0.9720 |
| 0.0073 | 2640.0 | 15840 | 0.0699 | 0.9405 | 0.9600 | 0.9782 | 0.9258 | 0.9943 | 0.9089 | 0.9721 |
| 0.0219 | 2643.33 | 15860 | 0.0628 | 0.9413 | 0.9610 | 0.9785 | 0.9282 | 0.9939 | 0.9102 | 0.9724 |
| 0.0108 | 2646.67 | 15880 | 0.0607 | 0.9410 | 0.9614 | 0.9783 | 0.9295 | 0.9933 | 0.9098 | 0.9723 |
| 0.0072 | 2650.0 | 15900 | 0.0653 | 0.9405 | 0.9607 | 0.9781 | 0.9277 | 0.9936 | 0.9089 | 0.9720 |
| 0.0218 | 2653.33 | 15920 | 0.0702 | 0.9403 | 0.9603 | 0.9781 | 0.9267 | 0.9939 | 0.9087 | 0.9720 |
| 0.0013 | 2656.67 | 15940 | 0.0680 | 0.9410 | 0.9603 | 0.9784 | 0.9262 | 0.9944 | 0.9096 | 0.9723 |
| 0.113 | 2660.0 | 15960 | 0.0646 | 0.9407 | 0.9607 | 0.9782 | 0.9275 | 0.9938 | 0.9092 | 0.9722 |
| 0.0269 | 2663.33 | 15980 | 0.0699 | 0.9406 | 0.9601 | 0.9782 | 0.9258 | 0.9943 | 0.9090 | 0.9722 |
| 0.0234 | 2666.67 | 16000 | 0.0638 | 0.9405 | 0.9609 | 0.9781 | 0.9283 | 0.9934 | 0.9089 | 0.9720 |
| 0.1179 | 2670.0 | 16020 | 0.0754 | 0.9391 | 0.9600 | 0.9776 | 0.9268 | 0.9933 | 0.9069 | 0.9714 |
| 0.011 | 2673.33 | 16040 | 0.0740 | 0.9396 | 0.9602 | 0.9778 | 0.9269 | 0.9935 | 0.9076 | 0.9716 |
| 0.0013 | 2676.67 | 16060 | 0.0598 | 0.9411 | 0.9619 | 0.9783 | 0.9308 | 0.9930 | 0.9100 | 0.9723 |
| 0.0107 | 2680.0 | 16080 | 0.0618 | 0.9406 | 0.9608 | 0.9782 | 0.9279 | 0.9936 | 0.9091 | 0.9721 |
| 0.0269 | 2683.33 | 16100 | 0.0745 | 0.9404 | 0.9599 | 0.9781 | 0.9255 | 0.9943 | 0.9087 | 0.9720 |
| 0.0013 | 2686.67 | 16120 | 0.0717 | 0.9404 | 0.9599 | 0.9782 | 0.9253 | 0.9944 | 0.9087 | 0.9721 |
| 0.1179 | 2690.0 | 16140 | 0.0636 | 0.9408 | 0.9607 | 0.9783 | 0.9275 | 0.9939 | 0.9094 | 0.9722 |
| 0.0269 | 2693.33 | 16160 | 0.0675 | 0.9405 | 0.9604 | 0.9782 | 0.9270 | 0.9939 | 0.9089 | 0.9721 |
| 0.0273 | 2696.67 | 16180 | 0.0676 | 0.9409 | 0.9603 | 0.9783 | 0.9262 | 0.9944 | 0.9095 | 0.9723 |
| 0.0109 | 2700.0 | 16200 | 0.0631 | 0.9407 | 0.9609 | 0.9782 | 0.9283 | 0.9936 | 0.9093 | 0.9722 |
| 0.0013 | 2703.33 | 16220 | 0.0655 | 0.9400 | 0.9608 | 0.9779 | 0.9285 | 0.9932 | 0.9083 | 0.9718 |
| 0.0108 | 2706.67 | 16240 | 0.0545 | 0.9430 | 0.9632 | 0.9790 | 0.9334 | 0.9931 | 0.9128 | 0.9731 |
| 0.0271 | 2710.0 | 16260 | 0.0652 | 0.9405 | 0.9606 | 0.9781 | 0.9275 | 0.9937 | 0.9089 | 0.9720 |
| 0.0107 | 2713.33 | 16280 | 0.0642 | 0.9400 | 0.9609 | 0.9779 | 0.9287 | 0.9931 | 0.9083 | 0.9718 |
| 0.0219 | 2716.67 | 16300 | 0.0675 | 0.9405 | 0.9607 | 0.9781 | 0.9277 | 0.9936 | 0.9089 | 0.9720 |
| 0.0269 | 2720.0 | 16320 | 0.0645 | 0.9407 | 0.9606 | 0.9782 | 0.9274 | 0.9938 | 0.9092 | 0.9721 |
| 0.0217 | 2723.33 | 16340 | 0.0716 | 0.9403 | 0.9600 | 0.9781 | 0.9260 | 0.9941 | 0.9086 | 0.9720 |
| 0.0233 | 2726.67 | 16360 | 0.0699 | 0.9405 | 0.9603 | 0.9782 | 0.9265 | 0.9941 | 0.9090 | 0.9721 |
| 0.1134 | 2730.0 | 16380 | 0.0602 | 0.9411 | 0.9609 | 0.9784 | 0.9279 | 0.9939 | 0.9098 | 0.9723 |
| 0.0106 | 2733.33 | 16400 | 0.0588 | 0.9410 | 0.9619 | 0.9783 | 0.9310 | 0.9928 | 0.9098 | 0.9722 |
| 0.1134 | 2736.67 | 16420 | 0.0620 | 0.9408 | 0.9610 | 0.9782 | 0.9286 | 0.9935 | 0.9094 | 0.9722 |
| 0.0013 | 2740.0 | 16440 | 0.0626 | 0.9408 | 0.9614 | 0.9782 | 0.9297 | 0.9932 | 0.9094 | 0.9721 |
| 0.0073 | 2743.33 | 16460 | 0.0667 | 0.9406 | 0.9603 | 0.9782 | 0.9265 | 0.9941 | 0.9091 | 0.9722 |
| 0.0269 | 2746.67 | 16480 | 0.0627 | 0.9411 | 0.9610 | 0.9784 | 0.9282 | 0.9938 | 0.9098 | 0.9723 |
| 0.0013 | 2750.0 | 16500 | 0.0581 | 0.9416 | 0.9623 | 0.9785 | 0.9316 | 0.9929 | 0.9107 | 0.9725 |
| 0.1129 | 2753.33 | 16520 | 0.0590 | 0.9415 | 0.9615 | 0.9785 | 0.9295 | 0.9936 | 0.9104 | 0.9725 |
| 0.0269 | 2756.67 | 16540 | 0.0651 | 0.9410 | 0.9605 | 0.9784 | 0.9269 | 0.9942 | 0.9096 | 0.9723 |
| 0.0013 | 2760.0 | 16560 | 0.0642 | 0.9406 | 0.9608 | 0.9782 | 0.9281 | 0.9936 | 0.9092 | 0.9721 |
| 0.0269 | 2763.33 | 16580 | 0.0675 | 0.9408 | 0.9603 | 0.9783 | 0.9265 | 0.9942 | 0.9093 | 0.9722 |
| 0.0106 | 2766.67 | 16600 | 0.0589 | 0.9417 | 0.9625 | 0.9785 | 0.9323 | 0.9928 | 0.9109 | 0.9725 |
| 0.0109 | 2770.0 | 16620 | 0.0656 | 0.9400 | 0.9607 | 0.9780 | 0.9281 | 0.9933 | 0.9083 | 0.9718 |
| 0.027 | 2773.33 | 16640 | 0.0730 | 0.9402 | 0.9600 | 0.9780 | 0.9260 | 0.9941 | 0.9084 | 0.9719 |
| 0.0072 | 2776.67 | 16660 | 0.0677 | 0.9410 | 0.9605 | 0.9784 | 0.9268 | 0.9942 | 0.9097 | 0.9723 |
| 0.0013 | 2780.0 | 16680 | 0.0649 | 0.9406 | 0.9609 | 0.9782 | 0.9283 | 0.9935 | 0.9091 | 0.9721 |
| 0.1129 | 2783.33 | 16700 | 0.0611 | 0.9409 | 0.9614 | 0.9783 | 0.9295 | 0.9933 | 0.9097 | 0.9722 |
| 0.0269 | 2786.67 | 16720 | 0.0611 | 0.9408 | 0.9617 | 0.9782 | 0.9306 | 0.9929 | 0.9095 | 0.9721 |
| 0.0106 | 2790.0 | 16740 | 0.0642 | 0.9402 | 0.9611 | 0.9780 | 0.9291 | 0.9930 | 0.9086 | 0.9719 |
| 0.1129 | 2793.33 | 16760 | 0.0628 | 0.9410 | 0.9613 | 0.9783 | 0.9292 | 0.9934 | 0.9097 | 0.9723 |
| 0.0014 | 2796.67 | 16780 | 0.0626 | 0.9406 | 0.9612 | 0.9782 | 0.9291 | 0.9932 | 0.9091 | 0.9721 |
| 0.0014 | 2800.0 | 16800 | 0.0627 | 0.9410 | 0.9612 | 0.9783 | 0.9288 | 0.9936 | 0.9098 | 0.9723 |
| 0.0073 | 2803.33 | 16820 | 0.0664 | 0.9405 | 0.9603 | 0.9782 | 0.9267 | 0.9940 | 0.9089 | 0.9721 |
| 0.1128 | 2806.67 | 16840 | 0.0586 | 0.9412 | 0.9619 | 0.9784 | 0.9309 | 0.9930 | 0.9101 | 0.9723 |
| 0.0072 | 2810.0 | 16860 | 0.0635 | 0.9408 | 0.9618 | 0.9782 | 0.9307 | 0.9928 | 0.9094 | 0.9721 |
| 0.0073 | 2813.33 | 16880 | 0.0602 | 0.9410 | 0.9616 | 0.9783 | 0.9302 | 0.9931 | 0.9098 | 0.9723 |
| 0.0217 | 2816.67 | 16900 | 0.0618 | 0.9406 | 0.9614 | 0.9782 | 0.9297 | 0.9930 | 0.9091 | 0.9720 |
| 0.022 | 2820.0 | 16920 | 0.0655 | 0.9401 | 0.9604 | 0.9780 | 0.9273 | 0.9936 | 0.9083 | 0.9719 |
| 0.0136 | 2823.33 | 16940 | 0.0648 | 0.9404 | 0.9609 | 0.9781 | 0.9285 | 0.9933 | 0.9088 | 0.9720 |
| 0.0013 | 2826.67 | 16960 | 0.0663 | 0.9405 | 0.9607 | 0.9781 | 0.9278 | 0.9936 | 0.9089 | 0.9720 |
| 0.0234 | 2830.0 | 16980 | 0.0668 | 0.9406 | 0.9607 | 0.9782 | 0.9277 | 0.9937 | 0.9091 | 0.9721 |
| 0.0102 | 2833.33 | 17000 | 0.0669 | 0.9403 | 0.9608 | 0.9780 | 0.9283 | 0.9933 | 0.9086 | 0.9719 |
| 0.0217 | 2836.67 | 17020 | 0.0651 | 0.9409 | 0.9609 | 0.9783 | 0.9280 | 0.9937 | 0.9095 | 0.9722 |
| 0.0071 | 2840.0 | 17040 | 0.0643 | 0.9413 | 0.9613 | 0.9784 | 0.9290 | 0.9937 | 0.9102 | 0.9724 |
| 0.0269 | 2843.33 | 17060 | 0.0589 | 0.9421 | 0.9615 | 0.9788 | 0.9288 | 0.9941 | 0.9114 | 0.9728 |
| 0.0216 | 2846.67 | 17080 | 0.0619 | 0.9413 | 0.9615 | 0.9784 | 0.9295 | 0.9935 | 0.9102 | 0.9724 |
| 0.0268 | 2850.0 | 17100 | 0.0644 | 0.9409 | 0.9610 | 0.9783 | 0.9282 | 0.9937 | 0.9096 | 0.9723 |
| 0.0216 | 2853.33 | 17120 | 0.0656 | 0.9407 | 0.9606 | 0.9782 | 0.9273 | 0.9939 | 0.9093 | 0.9722 |
| 0.0015 | 2856.67 | 17140 | 0.0656 | 0.9405 | 0.9608 | 0.9781 | 0.9282 | 0.9935 | 0.9089 | 0.9720 |
| 0.0072 | 2860.0 | 17160 | 0.0641 | 0.9408 | 0.9608 | 0.9783 | 0.9278 | 0.9938 | 0.9094 | 0.9722 |
| 0.0071 | 2863.33 | 17180 | 0.0624 | 0.9406 | 0.9614 | 0.9782 | 0.9298 | 0.9930 | 0.9091 | 0.9720 |
| 0.027 | 2866.67 | 17200 | 0.0608 | 0.9415 | 0.9617 | 0.9785 | 0.9300 | 0.9934 | 0.9104 | 0.9725 |
| 0.0272 | 2870.0 | 17220 | 0.0654 | 0.9407 | 0.9608 | 0.9782 | 0.9279 | 0.9937 | 0.9093 | 0.9722 |
| 0.1139 | 2873.33 | 17240 | 0.0590 | 0.9412 | 0.9618 | 0.9784 | 0.9306 | 0.9931 | 0.9101 | 0.9723 |
| 0.0268 | 2876.67 | 17260 | 0.0666 | 0.9409 | 0.9603 | 0.9783 | 0.9264 | 0.9943 | 0.9095 | 0.9723 |
| 0.1126 | 2880.0 | 17280 | 0.0635 | 0.9407 | 0.9609 | 0.9782 | 0.9281 | 0.9936 | 0.9093 | 0.9721 |
| 0.0218 | 2883.33 | 17300 | 0.0603 | 0.9412 | 0.9614 | 0.9784 | 0.9294 | 0.9935 | 0.9100 | 0.9724 |
| 0.0071 | 2886.67 | 17320 | 0.0623 | 0.9403 | 0.9613 | 0.9780 | 0.9297 | 0.9929 | 0.9087 | 0.9719 |
| 0.0013 | 2890.0 | 17340 | 0.0590 | 0.9417 | 0.9616 | 0.9786 | 0.9294 | 0.9937 | 0.9108 | 0.9726 |
| 0.0269 | 2893.33 | 17360 | 0.0637 | 0.9413 | 0.9607 | 0.9785 | 0.9271 | 0.9943 | 0.9101 | 0.9725 |
| 0.113 | 2896.67 | 17380 | 0.0668 | 0.9411 | 0.9602 | 0.9784 | 0.9259 | 0.9945 | 0.9097 | 0.9724 |
| 0.1133 | 2900.0 | 17400 | 0.0632 | 0.9411 | 0.9611 | 0.9784 | 0.9284 | 0.9937 | 0.9099 | 0.9723 |
| 0.0013 | 2903.33 | 17420 | 0.0540 | 0.9428 | 0.9634 | 0.9790 | 0.9339 | 0.9928 | 0.9126 | 0.9731 |
| 0.0076 | 2906.67 | 17440 | 0.0617 | 0.9412 | 0.9613 | 0.9784 | 0.9292 | 0.9935 | 0.9100 | 0.9724 |
| 0.0013 | 2910.0 | 17460 | 0.0633 | 0.9406 | 0.9613 | 0.9782 | 0.9295 | 0.9931 | 0.9092 | 0.9721 |
| 0.0267 | 2913.33 | 17480 | 0.0707 | 0.9407 | 0.9600 | 0.9783 | 0.9257 | 0.9944 | 0.9092 | 0.9722 |
| 0.0271 | 2916.67 | 17500 | 0.0590 | 0.9417 | 0.9613 | 0.9786 | 0.9286 | 0.9940 | 0.9108 | 0.9727 |
| 0.0276 | 2920.0 | 17520 | 0.0671 | 0.9394 | 0.9599 | 0.9777 | 0.9262 | 0.9936 | 0.9072 | 0.9715 |
| 0.1134 | 2923.33 | 17540 | 0.0598 | 0.9416 | 0.9618 | 0.9786 | 0.9303 | 0.9934 | 0.9107 | 0.9725 |
| 0.0105 | 2926.67 | 17560 | 0.0649 | 0.9409 | 0.9609 | 0.9783 | 0.9282 | 0.9937 | 0.9096 | 0.9722 |
| 0.1126 | 2930.0 | 17580 | 0.0616 | 0.9412 | 0.9610 | 0.9784 | 0.9282 | 0.9938 | 0.9100 | 0.9724 |
| 0.0115 | 2933.33 | 17600 | 0.0587 | 0.9416 | 0.9616 | 0.9786 | 0.9296 | 0.9936 | 0.9106 | 0.9726 |
| 0.0013 | 2936.67 | 17620 | 0.0560 | 0.9422 | 0.9625 | 0.9787 | 0.9320 | 0.9931 | 0.9116 | 0.9728 |
| 0.1125 | 2940.0 | 17640 | 0.0633 | 0.9412 | 0.9612 | 0.9784 | 0.9287 | 0.9937 | 0.9100 | 0.9724 |
| 0.1127 | 2943.33 | 17660 | 0.0624 | 0.9414 | 0.9608 | 0.9785 | 0.9273 | 0.9943 | 0.9103 | 0.9725 |
| 0.0013 | 2946.67 | 17680 | 0.0593 | 0.9419 | 0.9617 | 0.9787 | 0.9296 | 0.9937 | 0.9110 | 0.9727 |
| 0.0177 | 2950.0 | 17700 | 0.0674 | 0.9414 | 0.9610 | 0.9785 | 0.9280 | 0.9940 | 0.9104 | 0.9725 |
| 0.0216 | 2953.33 | 17720 | 0.0690 | 0.9407 | 0.9605 | 0.9782 | 0.9269 | 0.9940 | 0.9092 | 0.9722 |
| 0.0105 | 2956.67 | 17740 | 0.0651 | 0.9407 | 0.9609 | 0.9782 | 0.9281 | 0.9937 | 0.9093 | 0.9722 |
| 0.0266 | 2960.0 | 17760 | 0.0670 | 0.9409 | 0.9606 | 0.9783 | 0.9273 | 0.9940 | 0.9095 | 0.9723 |
| 0.1135 | 2963.33 | 17780 | 0.0543 | 0.9432 | 0.9632 | 0.9791 | 0.9332 | 0.9932 | 0.9131 | 0.9732 |
| 0.0013 | 2966.67 | 17800 | 0.0632 | 0.9409 | 0.9609 | 0.9783 | 0.9281 | 0.9937 | 0.9096 | 0.9722 |
| 0.0013 | 2970.0 | 17820 | 0.0645 | 0.9408 | 0.9612 | 0.9782 | 0.9292 | 0.9933 | 0.9094 | 0.9721 |
| 0.0071 | 2973.33 | 17840 | 0.0689 | 0.9406 | 0.9602 | 0.9782 | 0.9261 | 0.9942 | 0.9091 | 0.9722 |
| 0.0267 | 2976.67 | 17860 | 0.0644 | 0.9409 | 0.9607 | 0.9783 | 0.9274 | 0.9940 | 0.9096 | 0.9723 |
| 0.0013 | 2980.0 | 17880 | 0.0639 | 0.9407 | 0.9611 | 0.9782 | 0.9286 | 0.9935 | 0.9093 | 0.9722 |
| 0.0234 | 2983.33 | 17900 | 0.0677 | 0.9408 | 0.9606 | 0.9783 | 0.9272 | 0.9940 | 0.9093 | 0.9722 |
| 0.1126 | 2986.67 | 17920 | 0.0720 | 0.9414 | 0.9599 | 0.9786 | 0.9247 | 0.9951 | 0.9102 | 0.9726 |
| 0.022 | 2990.0 | 17940 | 0.0667 | 0.9402 | 0.9605 | 0.9780 | 0.9274 | 0.9936 | 0.9085 | 0.9719 |
| 0.0268 | 2993.33 | 17960 | 0.0585 | 0.9417 | 0.9612 | 0.9786 | 0.9282 | 0.9941 | 0.9108 | 0.9727 |
| 0.0013 | 2996.67 | 17980 | 0.0621 | 0.9406 | 0.9612 | 0.9782 | 0.9291 | 0.9932 | 0.9091 | 0.9721 |
| 0.1125 | 3000.0 | 18000 | 0.0633 | 0.9406 | 0.9609 | 0.9782 | 0.9284 | 0.9935 | 0.9092 | 0.9721 |
| 0.0272 | 3003.33 | 18020 | 0.0666 | 0.9402 | 0.9610 | 0.9780 | 0.9289 | 0.9931 | 0.9086 | 0.9719 |
| 0.0216 | 3006.67 | 18040 | 0.0743 | 0.9405 | 0.9601 | 0.9782 | 0.9258 | 0.9943 | 0.9089 | 0.9721 |
| 0.1124 | 3010.0 | 18060 | 0.0606 | 0.9404 | 0.9621 | 0.9781 | 0.9321 | 0.9922 | 0.9090 | 0.9719 |
| 0.0266 | 3013.33 | 18080 | 0.0712 | 0.9406 | 0.9602 | 0.9782 | 0.9261 | 0.9942 | 0.9090 | 0.9721 |
| 0.0013 | 3016.67 | 18100 | 0.0589 | 0.9413 | 0.9617 | 0.9784 | 0.9301 | 0.9933 | 0.9102 | 0.9724 |
| 0.0266 | 3020.0 | 18120 | 0.0644 | 0.9409 | 0.9608 | 0.9783 | 0.9276 | 0.9939 | 0.9095 | 0.9723 |
| 0.0013 | 3023.33 | 18140 | 0.0599 | 0.9415 | 0.9618 | 0.9785 | 0.9301 | 0.9934 | 0.9105 | 0.9725 |
| 0.0108 | 3026.67 | 18160 | 0.0627 | 0.9412 | 0.9616 | 0.9784 | 0.9300 | 0.9933 | 0.9100 | 0.9723 |
| 0.0109 | 3030.0 | 18180 | 0.0639 | 0.9403 | 0.9607 | 0.9781 | 0.9280 | 0.9934 | 0.9086 | 0.9719 |
| 0.0267 | 3033.33 | 18200 | 0.0623 | 0.9417 | 0.9615 | 0.9786 | 0.9292 | 0.9938 | 0.9107 | 0.9726 |
| 0.1126 | 3036.67 | 18220 | 0.0588 | 0.9417 | 0.9618 | 0.9786 | 0.9301 | 0.9935 | 0.9108 | 0.9726 |
| 0.0013 | 3040.0 | 18240 | 0.0630 | 0.9409 | 0.9611 | 0.9783 | 0.9288 | 0.9935 | 0.9095 | 0.9722 |
| 0.0013 | 3043.33 | 18260 | 0.0617 | 0.9409 | 0.9615 | 0.9783 | 0.9298 | 0.9932 | 0.9097 | 0.9722 |
| 0.0215 | 3046.67 | 18280 | 0.0747 | 0.9402 | 0.9601 | 0.9781 | 0.9262 | 0.9940 | 0.9085 | 0.9720 |
| 0.0106 | 3050.0 | 18300 | 0.0679 | 0.9398 | 0.9607 | 0.9779 | 0.9283 | 0.9931 | 0.9079 | 0.9717 |
| 0.1129 | 3053.33 | 18320 | 0.0622 | 0.9411 | 0.9610 | 0.9784 | 0.9281 | 0.9939 | 0.9099 | 0.9724 |
| 0.0104 | 3056.67 | 18340 | 0.0636 | 0.9409 | 0.9609 | 0.9783 | 0.9281 | 0.9937 | 0.9095 | 0.9722 |
| 0.0013 | 3060.0 | 18360 | 0.0616 | 0.9416 | 0.9612 | 0.9786 | 0.9283 | 0.9940 | 0.9106 | 0.9726 |
| 0.0217 | 3063.33 | 18380 | 0.0654 | 0.9404 | 0.9609 | 0.9781 | 0.9285 | 0.9934 | 0.9089 | 0.9720 |
| 0.007 | 3066.67 | 18400 | 0.0693 | 0.9403 | 0.9610 | 0.9781 | 0.9289 | 0.9932 | 0.9087 | 0.9719 |
| 0.1128 | 3070.0 | 18420 | 0.0619 | 0.9411 | 0.9611 | 0.9784 | 0.9285 | 0.9937 | 0.9099 | 0.9724 |
| 0.0107 | 3073.33 | 18440 | 0.0640 | 0.9400 | 0.9610 | 0.9779 | 0.9292 | 0.9929 | 0.9082 | 0.9717 |
| 0.027 | 3076.67 | 18460 | 0.0668 | 0.9406 | 0.9606 | 0.9782 | 0.9274 | 0.9938 | 0.9091 | 0.9721 |
| 0.0071 | 3080.0 | 18480 | 0.0650 | 0.9409 | 0.9608 | 0.9783 | 0.9279 | 0.9938 | 0.9096 | 0.9723 |
| 0.0215 | 3083.33 | 18500 | 0.0701 | 0.9411 | 0.9605 | 0.9784 | 0.9266 | 0.9943 | 0.9098 | 0.9724 |
| 0.0216 | 3086.67 | 18520 | 0.0629 | 0.9414 | 0.9612 | 0.9785 | 0.9286 | 0.9938 | 0.9103 | 0.9725 |
| 0.0072 | 3090.0 | 18540 | 0.0664 | 0.9409 | 0.9607 | 0.9783 | 0.9276 | 0.9939 | 0.9096 | 0.9723 |
| 0.0072 | 3093.33 | 18560 | 0.0631 | 0.9408 | 0.9616 | 0.9782 | 0.9303 | 0.9930 | 0.9095 | 0.9721 |
| 0.0105 | 3096.67 | 18580 | 0.0670 | 0.9406 | 0.9608 | 0.9782 | 0.9279 | 0.9936 | 0.9090 | 0.9721 |
| 0.0268 | 3100.0 | 18600 | 0.0655 | 0.9406 | 0.9609 | 0.9782 | 0.9282 | 0.9936 | 0.9091 | 0.9721 |
| 0.1121 | 3103.33 | 18620 | 0.0588 | 0.9422 | 0.9620 | 0.9788 | 0.9303 | 0.9937 | 0.9115 | 0.9728 |
| 0.0104 | 3106.67 | 18640 | 0.0616 | 0.9408 | 0.9618 | 0.9782 | 0.9309 | 0.9928 | 0.9095 | 0.9721 |
| 0.1121 | 3110.0 | 18660 | 0.0626 | 0.9408 | 0.9616 | 0.9782 | 0.9303 | 0.9930 | 0.9095 | 0.9721 |
| 0.1177 | 3113.33 | 18680 | 0.0612 | 0.9414 | 0.9609 | 0.9785 | 0.9278 | 0.9941 | 0.9102 | 0.9725 |
| 0.0013 | 3116.67 | 18700 | 0.0624 | 0.9407 | 0.9614 | 0.9782 | 0.9298 | 0.9931 | 0.9093 | 0.9721 |
| 0.007 | 3120.0 | 18720 | 0.0681 | 0.9404 | 0.9607 | 0.9781 | 0.9278 | 0.9936 | 0.9088 | 0.9720 |
| 0.0105 | 3123.33 | 18740 | 0.0664 | 0.9405 | 0.9610 | 0.9781 | 0.9286 | 0.9933 | 0.9089 | 0.9720 |
| 0.0106 | 3126.67 | 18760 | 0.0636 | 0.9412 | 0.9613 | 0.9784 | 0.9290 | 0.9936 | 0.9101 | 0.9724 |
| 0.0266 | 3130.0 | 18780 | 0.0692 | 0.9405 | 0.9605 | 0.9782 | 0.9273 | 0.9938 | 0.9089 | 0.9721 |
| 0.0214 | 3133.33 | 18800 | 0.0681 | 0.9405 | 0.9606 | 0.9782 | 0.9276 | 0.9937 | 0.9089 | 0.9721 |
| 0.0105 | 3136.67 | 18820 | 0.0656 | 0.9408 | 0.9607 | 0.9783 | 0.9274 | 0.9939 | 0.9094 | 0.9722 |
| 0.007 | 3140.0 | 18840 | 0.0639 | 0.9410 | 0.9609 | 0.9784 | 0.9278 | 0.9939 | 0.9097 | 0.9723 |
| 0.0013 | 3143.33 | 18860 | 0.0554 | 0.9418 | 0.9623 | 0.9786 | 0.9314 | 0.9931 | 0.9110 | 0.9726 |
| 0.0016 | 3146.67 | 18880 | 0.0705 | 0.9405 | 0.9604 | 0.9782 | 0.9269 | 0.9939 | 0.9089 | 0.9721 |
| 0.1124 | 3150.0 | 18900 | 0.0644 | 0.9407 | 0.9610 | 0.9782 | 0.9284 | 0.9935 | 0.9092 | 0.9721 |
| 0.0268 | 3153.33 | 18920 | 0.0636 | 0.9411 | 0.9610 | 0.9784 | 0.9282 | 0.9938 | 0.9098 | 0.9723 |
| 0.0269 | 3156.67 | 18940 | 0.0687 | 0.9404 | 0.9602 | 0.9781 | 0.9263 | 0.9940 | 0.9087 | 0.9720 |
| 0.1182 | 3160.0 | 18960 | 0.0589 | 0.9416 | 0.9620 | 0.9785 | 0.9309 | 0.9932 | 0.9106 | 0.9725 |
| 0.0215 | 3163.33 | 18980 | 0.0640 | 0.9410 | 0.9614 | 0.9783 | 0.9294 | 0.9934 | 0.9098 | 0.9723 |
| 0.0265 | 3166.67 | 19000 | 0.0656 | 0.9410 | 0.9609 | 0.9784 | 0.9280 | 0.9938 | 0.9098 | 0.9723 |
| 0.0014 | 3170.0 | 19020 | 0.0576 | 0.9417 | 0.9617 | 0.9786 | 0.9299 | 0.9936 | 0.9109 | 0.9726 |
| 0.1185 | 3173.33 | 19040 | 0.0542 | 0.9427 | 0.9628 | 0.9790 | 0.9322 | 0.9934 | 0.9124 | 0.9731 |
| 0.0229 | 3176.67 | 19060 | 0.0623 | 0.9407 | 0.9607 | 0.9782 | 0.9275 | 0.9938 | 0.9093 | 0.9722 |
| 0.0215 | 3180.0 | 19080 | 0.0616 | 0.9403 | 0.9613 | 0.9781 | 0.9296 | 0.9930 | 0.9087 | 0.9719 |
| 0.0202 | 3183.33 | 19100 | 0.0734 | 0.9406 | 0.9608 | 0.9782 | 0.9279 | 0.9936 | 0.9090 | 0.9721 |
| 0.0215 | 3186.67 | 19120 | 0.0677 | 0.9407 | 0.9604 | 0.9782 | 0.9268 | 0.9940 | 0.9092 | 0.9722 |
| 0.0266 | 3190.0 | 19140 | 0.0645 | 0.9410 | 0.9609 | 0.9783 | 0.9281 | 0.9938 | 0.9097 | 0.9723 |
| 0.1176 | 3193.33 | 19160 | 0.0519 | 0.9432 | 0.9651 | 0.9790 | 0.9387 | 0.9914 | 0.9132 | 0.9731 |
| 0.0215 | 3196.67 | 19180 | 0.0634 | 0.9406 | 0.9610 | 0.9782 | 0.9287 | 0.9934 | 0.9091 | 0.9721 |
| 0.0013 | 3200.0 | 19200 | 0.0607 | 0.9414 | 0.9616 | 0.9785 | 0.9298 | 0.9934 | 0.9104 | 0.9725 |
| 0.0091 | 3203.33 | 19220 | 0.0638 | 0.9410 | 0.9610 | 0.9783 | 0.9284 | 0.9937 | 0.9097 | 0.9723 |
| 0.112 | 3206.67 | 19240 | 0.0608 | 0.9413 | 0.9614 | 0.9784 | 0.9291 | 0.9936 | 0.9102 | 0.9724 |
| 0.0071 | 3210.0 | 19260 | 0.0667 | 0.9407 | 0.9609 | 0.9782 | 0.9283 | 0.9935 | 0.9092 | 0.9721 |
| 0.0013 | 3213.33 | 19280 | 0.0559 | 0.9422 | 0.9628 | 0.9788 | 0.9326 | 0.9930 | 0.9117 | 0.9728 |
| 0.0103 | 3216.67 | 19300 | 0.0621 | 0.9414 | 0.9613 | 0.9785 | 0.9287 | 0.9938 | 0.9104 | 0.9725 |
| 0.0069 | 3220.0 | 19320 | 0.0635 | 0.9409 | 0.9612 | 0.9783 | 0.9290 | 0.9934 | 0.9095 | 0.9722 |
| 0.0069 | 3223.33 | 19340 | 0.0675 | 0.9408 | 0.9607 | 0.9783 | 0.9275 | 0.9939 | 0.9095 | 0.9722 |
| 0.1124 | 3226.67 | 19360 | 0.0586 | 0.9419 | 0.9619 | 0.9787 | 0.9303 | 0.9935 | 0.9111 | 0.9727 |
| 0.1123 | 3230.0 | 19380 | 0.0652 | 0.9410 | 0.9610 | 0.9783 | 0.9284 | 0.9937 | 0.9097 | 0.9723 |
| 0.0271 | 3233.33 | 19400 | 0.0599 | 0.9415 | 0.9613 | 0.9785 | 0.9289 | 0.9938 | 0.9105 | 0.9725 |
| 0.0102 | 3236.67 | 19420 | 0.0683 | 0.9410 | 0.9604 | 0.9784 | 0.9266 | 0.9943 | 0.9097 | 0.9724 |
| 0.0069 | 3240.0 | 19440 | 0.0621 | 0.9416 | 0.9614 | 0.9786 | 0.9289 | 0.9938 | 0.9107 | 0.9726 |
| 0.0265 | 3243.33 | 19460 | 0.0704 | 0.9411 | 0.9604 | 0.9784 | 0.9264 | 0.9944 | 0.9097 | 0.9724 |
| 0.0102 | 3246.67 | 19480 | 0.0601 | 0.9411 | 0.9615 | 0.9784 | 0.9297 | 0.9933 | 0.9099 | 0.9723 |
| 0.1121 | 3250.0 | 19500 | 0.0577 | 0.9416 | 0.9615 | 0.9785 | 0.9294 | 0.9937 | 0.9106 | 0.9725 |
| 0.0013 | 3253.33 | 19520 | 0.0534 | 0.9432 | 0.9631 | 0.9791 | 0.9329 | 0.9933 | 0.9131 | 0.9733 |
| 0.0074 | 3256.67 | 19540 | 0.0584 | 0.9420 | 0.9618 | 0.9787 | 0.9300 | 0.9937 | 0.9113 | 0.9728 |
| 0.0013 | 3260.0 | 19560 | 0.0622 | 0.9413 | 0.9613 | 0.9784 | 0.9291 | 0.9936 | 0.9102 | 0.9724 |
| 0.112 | 3263.33 | 19580 | 0.0622 | 0.9414 | 0.9614 | 0.9785 | 0.9291 | 0.9937 | 0.9103 | 0.9725 |
| 0.0091 | 3266.67 | 19600 | 0.0564 | 0.9424 | 0.9628 | 0.9788 | 0.9326 | 0.9930 | 0.9119 | 0.9729 |
| 0.0215 | 3270.0 | 19620 | 0.0627 | 0.9414 | 0.9614 | 0.9785 | 0.9293 | 0.9936 | 0.9103 | 0.9725 |
| 0.0019 | 3273.33 | 19640 | 0.0583 | 0.9417 | 0.9627 | 0.9786 | 0.9329 | 0.9926 | 0.9109 | 0.9725 |
| 0.1121 | 3276.67 | 19660 | 0.0582 | 0.9416 | 0.9617 | 0.9785 | 0.9299 | 0.9935 | 0.9106 | 0.9725 |
| 0.0266 | 3280.0 | 19680 | 0.0623 | 0.9415 | 0.9612 | 0.9785 | 0.9285 | 0.9939 | 0.9105 | 0.9725 |
| 0.1121 | 3283.33 | 19700 | 0.0621 | 0.9409 | 0.9612 | 0.9783 | 0.9290 | 0.9935 | 0.9096 | 0.9722 |
| 0.0292 | 3286.67 | 19720 | 0.0679 | 0.9412 | 0.9604 | 0.9784 | 0.9264 | 0.9944 | 0.9099 | 0.9724 |
| 0.0103 | 3290.0 | 19740 | 0.0661 | 0.9407 | 0.9610 | 0.9782 | 0.9284 | 0.9935 | 0.9093 | 0.9721 |
| 0.0268 | 3293.33 | 19760 | 0.0636 | 0.9410 | 0.9612 | 0.9784 | 0.9287 | 0.9936 | 0.9098 | 0.9723 |
| 0.1121 | 3296.67 | 19780 | 0.0639 | 0.9410 | 0.9609 | 0.9784 | 0.9279 | 0.9939 | 0.9097 | 0.9723 |
| 0.0265 | 3300.0 | 19800 | 0.0651 | 0.9411 | 0.9610 | 0.9784 | 0.9281 | 0.9939 | 0.9099 | 0.9724 |
| 0.007 | 3303.33 | 19820 | 0.0628 | 0.9408 | 0.9619 | 0.9782 | 0.9311 | 0.9927 | 0.9095 | 0.9721 |
| 0.0013 | 3306.67 | 19840 | 0.0586 | 0.9418 | 0.9620 | 0.9786 | 0.9307 | 0.9933 | 0.9109 | 0.9726 |
| 0.0266 | 3310.0 | 19860 | 0.0630 | 0.9411 | 0.9610 | 0.9784 | 0.9283 | 0.9938 | 0.9098 | 0.9723 |
| 0.0266 | 3313.33 | 19880 | 0.0643 | 0.9411 | 0.9608 | 0.9784 | 0.9277 | 0.9940 | 0.9099 | 0.9724 |
| 0.0268 | 3316.67 | 19900 | 0.0602 | 0.9419 | 0.9614 | 0.9787 | 0.9287 | 0.9941 | 0.9111 | 0.9727 |
| 0.0229 | 3320.0 | 19920 | 0.0620 | 0.9409 | 0.9613 | 0.9783 | 0.9292 | 0.9934 | 0.9096 | 0.9722 |
| 0.0265 | 3323.33 | 19940 | 0.0646 | 0.9419 | 0.9611 | 0.9787 | 0.9280 | 0.9943 | 0.9110 | 0.9727 |
| 0.0013 | 3326.67 | 19960 | 0.0639 | 0.9415 | 0.9613 | 0.9785 | 0.9287 | 0.9938 | 0.9105 | 0.9725 |
| 0.0013 | 3330.0 | 19980 | 0.0615 | 0.9412 | 0.9615 | 0.9784 | 0.9297 | 0.9933 | 0.9100 | 0.9723 |
| 0.0013 | 3333.33 | 20000 | 0.0567 | 0.9417 | 0.9627 | 0.9786 | 0.9327 | 0.9926 | 0.9109 | 0.9725 |
| 0.0102 | 3336.67 | 20020 | 0.0629 | 0.9413 | 0.9613 | 0.9785 | 0.9290 | 0.9937 | 0.9102 | 0.9724 |
| 0.0069 | 3340.0 | 20040 | 0.0651 | 0.9408 | 0.9608 | 0.9783 | 0.9279 | 0.9938 | 0.9095 | 0.9722 |
| 0.1119 | 3343.33 | 20060 | 0.0632 | 0.9413 | 0.9608 | 0.9785 | 0.9275 | 0.9941 | 0.9101 | 0.9725 |
| 0.022 | 3346.67 | 20080 | 0.0699 | 0.9410 | 0.9606 | 0.9783 | 0.9270 | 0.9941 | 0.9096 | 0.9723 |
| 0.007 | 3350.0 | 20100 | 0.0645 | 0.9411 | 0.9614 | 0.9783 | 0.9294 | 0.9934 | 0.9098 | 0.9723 |
| 0.1118 | 3353.33 | 20120 | 0.0588 | 0.9418 | 0.9624 | 0.9786 | 0.9318 | 0.9930 | 0.9110 | 0.9726 |
| 0.1118 | 3356.67 | 20140 | 0.0601 | 0.9416 | 0.9619 | 0.9785 | 0.9305 | 0.9933 | 0.9106 | 0.9725 |
| 0.0069 | 3360.0 | 20160 | 0.0653 | 0.9411 | 0.9611 | 0.9784 | 0.9285 | 0.9937 | 0.9099 | 0.9724 |
| 0.0264 | 3363.33 | 20180 | 0.0591 | 0.9421 | 0.9614 | 0.9787 | 0.9286 | 0.9942 | 0.9113 | 0.9728 |
| 0.0013 | 3366.67 | 20200 | 0.0570 | 0.9426 | 0.9627 | 0.9789 | 0.9321 | 0.9933 | 0.9123 | 0.9730 |
| 0.0102 | 3370.0 | 20220 | 0.0611 | 0.9415 | 0.9616 | 0.9785 | 0.9296 | 0.9935 | 0.9105 | 0.9725 |
| 0.0013 | 3373.33 | 20240 | 0.0535 | 0.9433 | 0.9637 | 0.9791 | 0.9345 | 0.9929 | 0.9133 | 0.9733 |
| 0.112 | 3376.67 | 20260 | 0.0618 | 0.9420 | 0.9613 | 0.9787 | 0.9284 | 0.9942 | 0.9111 | 0.9728 |
| 0.0214 | 3380.0 | 20280 | 0.0658 | 0.9411 | 0.9610 | 0.9784 | 0.9283 | 0.9938 | 0.9098 | 0.9723 |
| 0.0132 | 3383.33 | 20300 | 0.0681 | 0.9410 | 0.9605 | 0.9784 | 0.9268 | 0.9942 | 0.9097 | 0.9723 |
| 0.0216 | 3386.67 | 20320 | 0.0778 | 0.9407 | 0.9600 | 0.9783 | 0.9254 | 0.9945 | 0.9092 | 0.9722 |
| 0.0013 | 3390.0 | 20340 | 0.0616 | 0.9415 | 0.9615 | 0.9785 | 0.9294 | 0.9936 | 0.9105 | 0.9725 |
| 0.1123 | 3393.33 | 20360 | 0.0609 | 0.9419 | 0.9613 | 0.9787 | 0.9286 | 0.9941 | 0.9111 | 0.9727 |
| 0.0069 | 3396.67 | 20380 | 0.0577 | 0.9424 | 0.9623 | 0.9788 | 0.9310 | 0.9935 | 0.9118 | 0.9729 |
| 0.0133 | 3400.0 | 20400 | 0.0610 | 0.9414 | 0.9615 | 0.9785 | 0.9293 | 0.9936 | 0.9104 | 0.9725 |
| 0.0104 | 3403.33 | 20420 | 0.0587 | 0.9419 | 0.9622 | 0.9787 | 0.9311 | 0.9933 | 0.9112 | 0.9727 |
| 0.1173 | 3406.67 | 20440 | 0.0629 | 0.9413 | 0.9609 | 0.9785 | 0.9276 | 0.9941 | 0.9102 | 0.9725 |
| 0.1152 | 3410.0 | 20460 | 0.0618 | 0.9413 | 0.9616 | 0.9784 | 0.9297 | 0.9934 | 0.9102 | 0.9724 |
| 0.0013 | 3413.33 | 20480 | 0.0623 | 0.9419 | 0.9616 | 0.9787 | 0.9295 | 0.9938 | 0.9111 | 0.9727 |
| 0.007 | 3416.67 | 20500 | 0.0620 | 0.9425 | 0.9615 | 0.9789 | 0.9287 | 0.9944 | 0.9120 | 0.9730 |
| 0.0013 | 3420.0 | 20520 | 0.0638 | 0.9417 | 0.9614 | 0.9786 | 0.9290 | 0.9939 | 0.9108 | 0.9726 |
| 0.0214 | 3423.33 | 20540 | 0.0683 | 0.9411 | 0.9609 | 0.9784 | 0.9277 | 0.9940 | 0.9099 | 0.9724 |
| 0.0104 | 3426.67 | 20560 | 0.0658 | 0.9410 | 0.9611 | 0.9783 | 0.9285 | 0.9937 | 0.9098 | 0.9723 |
| 0.0267 | 3430.0 | 20580 | 0.0651 | 0.9415 | 0.9614 | 0.9785 | 0.9292 | 0.9937 | 0.9105 | 0.9725 |
| 0.0107 | 3433.33 | 20600 | 0.0587 | 0.9425 | 0.9632 | 0.9789 | 0.9335 | 0.9928 | 0.9121 | 0.9729 |
| 0.0013 | 3436.67 | 20620 | 0.0568 | 0.9428 | 0.9623 | 0.9790 | 0.9307 | 0.9938 | 0.9124 | 0.9731 |
| 0.1127 | 3440.0 | 20640 | 0.0620 | 0.9425 | 0.9620 | 0.9789 | 0.9300 | 0.9939 | 0.9119 | 0.9730 |
| 0.0013 | 3443.33 | 20660 | 0.0564 | 0.9432 | 0.9636 | 0.9791 | 0.9343 | 0.9929 | 0.9132 | 0.9732 |
| 0.1123 | 3446.67 | 20680 | 0.0631 | 0.9410 | 0.9610 | 0.9783 | 0.9283 | 0.9937 | 0.9097 | 0.9723 |
| 0.0213 | 3450.0 | 20700 | 0.0615 | 0.9415 | 0.9613 | 0.9785 | 0.9288 | 0.9938 | 0.9105 | 0.9725 |
| 0.1119 | 3453.33 | 20720 | 0.0572 | 0.9428 | 0.9628 | 0.9790 | 0.9324 | 0.9933 | 0.9125 | 0.9731 |
| 0.0264 | 3456.67 | 20740 | 0.0731 | 0.9410 | 0.9603 | 0.9784 | 0.9260 | 0.9945 | 0.9097 | 0.9724 |
| 0.0069 | 3460.0 | 20760 | 0.0635 | 0.9412 | 0.9613 | 0.9784 | 0.9290 | 0.9936 | 0.9100 | 0.9724 |
| 0.0264 | 3463.33 | 20780 | 0.0727 | 0.9410 | 0.9603 | 0.9784 | 0.9262 | 0.9944 | 0.9097 | 0.9724 |
| 0.0215 | 3466.67 | 20800 | 0.0605 | 0.9419 | 0.9612 | 0.9787 | 0.9283 | 0.9942 | 0.9110 | 0.9727 |
| 0.0264 | 3470.0 | 20820 | 0.0591 | 0.9426 | 0.9623 | 0.9789 | 0.9309 | 0.9937 | 0.9122 | 0.9730 |
| 0.0101 | 3473.33 | 20840 | 0.0618 | 0.9410 | 0.9616 | 0.9783 | 0.9302 | 0.9931 | 0.9098 | 0.9722 |
| 0.0213 | 3476.67 | 20860 | 0.0578 | 0.9418 | 0.9624 | 0.9786 | 0.9318 | 0.9930 | 0.9111 | 0.9726 |
| 0.0013 | 3480.0 | 20880 | 0.0627 | 0.9412 | 0.9618 | 0.9784 | 0.9304 | 0.9931 | 0.9101 | 0.9723 |
| 0.0212 | 3483.33 | 20900 | 0.0708 | 0.9408 | 0.9607 | 0.9783 | 0.9274 | 0.9939 | 0.9095 | 0.9722 |
| 0.0213 | 3486.67 | 20920 | 0.0636 | 0.9415 | 0.9613 | 0.9785 | 0.9289 | 0.9938 | 0.9104 | 0.9725 |
| 0.1116 | 3490.0 | 20940 | 0.0595 | 0.9420 | 0.9620 | 0.9787 | 0.9304 | 0.9936 | 0.9113 | 0.9727 |
| 0.0212 | 3493.33 | 20960 | 0.0614 | 0.9417 | 0.9616 | 0.9786 | 0.9296 | 0.9937 | 0.9108 | 0.9726 |
| 0.1254 | 3496.67 | 20980 | 0.0556 | 0.9435 | 0.9643 | 0.9792 | 0.9361 | 0.9925 | 0.9137 | 0.9734 |
| 0.0212 | 3500.0 | 21000 | 0.0636 | 0.9414 | 0.9614 | 0.9785 | 0.9291 | 0.9937 | 0.9104 | 0.9725 |
| 0.0264 | 3503.33 | 21020 | 0.0640 | 0.9416 | 0.9610 | 0.9786 | 0.9280 | 0.9941 | 0.9106 | 0.9726 |
| 0.0013 | 3506.67 | 21040 | 0.0547 | 0.9432 | 0.9628 | 0.9792 | 0.9319 | 0.9937 | 0.9132 | 0.9733 |
| 0.0264 | 3510.0 | 21060 | 0.0643 | 0.9415 | 0.9610 | 0.9786 | 0.9280 | 0.9941 | 0.9105 | 0.9726 |
| 0.0101 | 3513.33 | 21080 | 0.0624 | 0.9412 | 0.9614 | 0.9784 | 0.9294 | 0.9935 | 0.9101 | 0.9724 |
| 0.0212 | 3516.67 | 21100 | 0.0639 | 0.9412 | 0.9615 | 0.9784 | 0.9296 | 0.9934 | 0.9101 | 0.9724 |
| 0.0013 | 3520.0 | 21120 | 0.0650 | 0.9410 | 0.9612 | 0.9783 | 0.9288 | 0.9936 | 0.9097 | 0.9723 |
| 0.0069 | 3523.33 | 21140 | 0.0633 | 0.9417 | 0.9611 | 0.9786 | 0.9281 | 0.9942 | 0.9108 | 0.9727 |
| 0.007 | 3526.67 | 21160 | 0.0578 | 0.9423 | 0.9620 | 0.9788 | 0.9303 | 0.9937 | 0.9117 | 0.9729 |
| 0.0013 | 3530.0 | 21180 | 0.0610 | 0.9414 | 0.9617 | 0.9785 | 0.9299 | 0.9934 | 0.9104 | 0.9725 |
| 0.0266 | 3533.33 | 21200 | 0.0613 | 0.9419 | 0.9613 | 0.9787 | 0.9285 | 0.9941 | 0.9111 | 0.9728 |
| 0.0212 | 3536.67 | 21220 | 0.0646 | 0.9411 | 0.9613 | 0.9784 | 0.9291 | 0.9935 | 0.9099 | 0.9723 |
| 0.01 | 3540.0 | 21240 | 0.0591 | 0.9421 | 0.9620 | 0.9787 | 0.9304 | 0.9936 | 0.9115 | 0.9728 |
| 0.113 | 3543.33 | 21260 | 0.0590 | 0.9421 | 0.9620 | 0.9787 | 0.9303 | 0.9936 | 0.9114 | 0.9728 |
| 0.0013 | 3546.67 | 21280 | 0.0584 | 0.9429 | 0.9624 | 0.9790 | 0.9310 | 0.9938 | 0.9126 | 0.9732 |
| 0.0105 | 3550.0 | 21300 | 0.0629 | 0.9412 | 0.9613 | 0.9784 | 0.9292 | 0.9935 | 0.9100 | 0.9724 |
| 0.0212 | 3553.33 | 21320 | 0.0663 | 0.9411 | 0.9611 | 0.9784 | 0.9285 | 0.9937 | 0.9098 | 0.9723 |
| 0.1184 | 3556.67 | 21340 | 0.0591 | 0.9420 | 0.9622 | 0.9787 | 0.9310 | 0.9934 | 0.9113 | 0.9727 |
| 0.0013 | 3560.0 | 21360 | 0.0609 | 0.9418 | 0.9617 | 0.9786 | 0.9298 | 0.9936 | 0.9110 | 0.9727 |
| 0.1122 | 3563.33 | 21380 | 0.0533 | 0.9439 | 0.9639 | 0.9794 | 0.9348 | 0.9931 | 0.9143 | 0.9736 |
| 0.0013 | 3566.67 | 21400 | 0.0533 | 0.9441 | 0.9645 | 0.9794 | 0.9362 | 0.9927 | 0.9146 | 0.9736 |
| 0.0265 | 3570.0 | 21420 | 0.0650 | 0.9408 | 0.9607 | 0.9783 | 0.9275 | 0.9939 | 0.9094 | 0.9722 |
| 0.0101 | 3573.33 | 21440 | 0.0611 | 0.9422 | 0.9622 | 0.9788 | 0.9309 | 0.9935 | 0.9116 | 0.9728 |
| 0.0264 | 3576.67 | 21460 | 0.0586 | 0.9420 | 0.9619 | 0.9787 | 0.9301 | 0.9937 | 0.9113 | 0.9728 |
| 0.0089 | 3580.0 | 21480 | 0.0564 | 0.9429 | 0.9626 | 0.9790 | 0.9316 | 0.9936 | 0.9127 | 0.9732 |
| 0.0071 | 3583.33 | 21500 | 0.0625 | 0.9418 | 0.9615 | 0.9786 | 0.9291 | 0.9939 | 0.9109 | 0.9727 |
| 0.1116 | 3586.67 | 21520 | 0.0631 | 0.9416 | 0.9613 | 0.9786 | 0.9287 | 0.9939 | 0.9106 | 0.9726 |
| 0.0101 | 3590.0 | 21540 | 0.0599 | 0.9422 | 0.9620 | 0.9788 | 0.9304 | 0.9936 | 0.9115 | 0.9728 |
| 0.0068 | 3593.33 | 21560 | 0.0660 | 0.9409 | 0.9611 | 0.9783 | 0.9286 | 0.9936 | 0.9096 | 0.9722 |
| 0.0212 | 3596.67 | 21580 | 0.0595 | 0.9421 | 0.9620 | 0.9787 | 0.9304 | 0.9936 | 0.9114 | 0.9728 |
| 0.0212 | 3600.0 | 21600 | 0.0584 | 0.9425 | 0.9623 | 0.9789 | 0.9309 | 0.9936 | 0.9120 | 0.9730 |
| 0.0264 | 3603.33 | 21620 | 0.0697 | 0.9417 | 0.9605 | 0.9786 | 0.9264 | 0.9947 | 0.9106 | 0.9727 |
| 0.0014 | 3606.67 | 21640 | 0.0575 | 0.9424 | 0.9621 | 0.9788 | 0.9306 | 0.9937 | 0.9118 | 0.9729 |
| 0.0068 | 3610.0 | 21660 | 0.0594 | 0.9421 | 0.9618 | 0.9787 | 0.9298 | 0.9938 | 0.9113 | 0.9728 |
| 0.0015 | 3613.33 | 21680 | 0.0595 | 0.9417 | 0.9631 | 0.9785 | 0.9340 | 0.9922 | 0.9110 | 0.9725 |
| 0.0263 | 3616.67 | 21700 | 0.0613 | 0.9418 | 0.9617 | 0.9786 | 0.9299 | 0.9936 | 0.9109 | 0.9726 |
| 0.0069 | 3620.0 | 21720 | 0.0670 | 0.9410 | 0.9608 | 0.9783 | 0.9278 | 0.9939 | 0.9096 | 0.9723 |
| 0.0216 | 3623.33 | 21740 | 0.0644 | 0.9415 | 0.9613 | 0.9785 | 0.9289 | 0.9938 | 0.9105 | 0.9725 |
| 0.0068 | 3626.67 | 21760 | 0.0620 | 0.9417 | 0.9615 | 0.9786 | 0.9292 | 0.9938 | 0.9109 | 0.9726 |
| 0.0212 | 3630.0 | 21780 | 0.0624 | 0.9416 | 0.9614 | 0.9786 | 0.9291 | 0.9938 | 0.9107 | 0.9726 |
| 0.0215 | 3633.33 | 21800 | 0.0612 | 0.9416 | 0.9618 | 0.9785 | 0.9302 | 0.9934 | 0.9106 | 0.9725 |
| 0.1175 | 3636.67 | 21820 | 0.0625 | 0.9417 | 0.9615 | 0.9786 | 0.9293 | 0.9938 | 0.9108 | 0.9726 |
| 0.01 | 3640.0 | 21840 | 0.0580 | 0.9424 | 0.9622 | 0.9789 | 0.9306 | 0.9937 | 0.9119 | 0.9730 |
| 0.1181 | 3643.33 | 21860 | 0.0555 | 0.9432 | 0.9630 | 0.9792 | 0.9326 | 0.9935 | 0.9132 | 0.9733 |
| 0.1124 | 3646.67 | 21880 | 0.0574 | 0.9427 | 0.9628 | 0.9789 | 0.9322 | 0.9933 | 0.9123 | 0.9730 |
| 0.0068 | 3650.0 | 21900 | 0.0571 | 0.9427 | 0.9629 | 0.9790 | 0.9324 | 0.9933 | 0.9124 | 0.9731 |
| 0.0266 | 3653.33 | 21920 | 0.0644 | 0.9416 | 0.9612 | 0.9786 | 0.9283 | 0.9940 | 0.9106 | 0.9726 |
| 0.0087 | 3656.67 | 21940 | 0.0520 | 0.9437 | 0.9637 | 0.9793 | 0.9341 | 0.9932 | 0.9140 | 0.9735 |
| 0.0069 | 3660.0 | 21960 | 0.0654 | 0.9414 | 0.9608 | 0.9785 | 0.9275 | 0.9942 | 0.9102 | 0.9725 |
| 0.0014 | 3663.33 | 21980 | 0.0545 | 0.9435 | 0.9635 | 0.9793 | 0.9336 | 0.9933 | 0.9137 | 0.9734 |
| 0.1183 | 3666.67 | 22000 | 0.0593 | 0.9421 | 0.9613 | 0.9788 | 0.9282 | 0.9943 | 0.9114 | 0.9728 |
| 0.0013 | 3670.0 | 22020 | 0.0610 | 0.9418 | 0.9616 | 0.9786 | 0.9294 | 0.9938 | 0.9110 | 0.9727 |
| 0.0013 | 3673.33 | 22040 | 0.0624 | 0.9418 | 0.9616 | 0.9786 | 0.9296 | 0.9937 | 0.9110 | 0.9727 |
| 0.01 | 3676.67 | 22060 | 0.0613 | 0.9417 | 0.9614 | 0.9786 | 0.9290 | 0.9939 | 0.9108 | 0.9726 |
| 0.0264 | 3680.0 | 22080 | 0.0626 | 0.9417 | 0.9611 | 0.9786 | 0.9281 | 0.9942 | 0.9108 | 0.9727 |
| 0.1114 | 3683.33 | 22100 | 0.0580 | 0.9423 | 0.9623 | 0.9788 | 0.9312 | 0.9935 | 0.9118 | 0.9729 |
| 0.0013 | 3686.67 | 22120 | 0.0607 | 0.9419 | 0.9617 | 0.9787 | 0.9297 | 0.9937 | 0.9111 | 0.9727 |
| 0.0211 | 3690.0 | 22140 | 0.0633 | 0.9416 | 0.9615 | 0.9786 | 0.9292 | 0.9937 | 0.9107 | 0.9726 |
| 0.0102 | 3693.33 | 22160 | 0.0608 | 0.9422 | 0.9622 | 0.9787 | 0.9310 | 0.9934 | 0.9115 | 0.9728 |
| 0.0013 | 3696.67 | 22180 | 0.0659 | 0.9412 | 0.9611 | 0.9784 | 0.9284 | 0.9938 | 0.9101 | 0.9724 |
| 0.01 | 3700.0 | 22200 | 0.0622 | 0.9417 | 0.9616 | 0.9786 | 0.9295 | 0.9937 | 0.9108 | 0.9726 |
| 0.0068 | 3703.33 | 22220 | 0.0671 | 0.9414 | 0.9606 | 0.9785 | 0.9269 | 0.9944 | 0.9103 | 0.9725 |
| 0.023 | 3706.67 | 22240 | 0.0575 | 0.9421 | 0.9618 | 0.9788 | 0.9298 | 0.9938 | 0.9114 | 0.9728 |
| 0.0265 | 3710.0 | 22260 | 0.0585 | 0.9419 | 0.9616 | 0.9787 | 0.9295 | 0.9938 | 0.9111 | 0.9727 |
| 0.01 | 3713.33 | 22280 | 0.0662 | 0.9411 | 0.9610 | 0.9784 | 0.9281 | 0.9939 | 0.9099 | 0.9724 |
| 0.1114 | 3716.67 | 22300 | 0.0593 | 0.9422 | 0.9618 | 0.9788 | 0.9299 | 0.9938 | 0.9116 | 0.9729 |
| 0.1114 | 3720.0 | 22320 | 0.0585 | 0.9423 | 0.9623 | 0.9788 | 0.9311 | 0.9935 | 0.9117 | 0.9729 |
| 0.1116 | 3723.33 | 22340 | 0.0562 | 0.9427 | 0.9626 | 0.9790 | 0.9318 | 0.9935 | 0.9124 | 0.9731 |
| 0.0266 | 3726.67 | 22360 | 0.0677 | 0.9417 | 0.9609 | 0.9786 | 0.9275 | 0.9943 | 0.9107 | 0.9727 |
| 0.0265 | 3730.0 | 22380 | 0.0628 | 0.9417 | 0.9613 | 0.9786 | 0.9285 | 0.9940 | 0.9107 | 0.9726 |
| 0.0102 | 3733.33 | 22400 | 0.0633 | 0.9415 | 0.9610 | 0.9785 | 0.9279 | 0.9941 | 0.9105 | 0.9726 |
| 0.1115 | 3736.67 | 22420 | 0.0613 | 0.9418 | 0.9613 | 0.9787 | 0.9284 | 0.9941 | 0.9110 | 0.9727 |
| 0.0263 | 3740.0 | 22440 | 0.0633 | 0.9414 | 0.9613 | 0.9785 | 0.9289 | 0.9937 | 0.9103 | 0.9725 |
| 0.0264 | 3743.33 | 22460 | 0.0655 | 0.9412 | 0.9608 | 0.9784 | 0.9274 | 0.9941 | 0.9100 | 0.9724 |
| 0.0069 | 3746.67 | 22480 | 0.0620 | 0.9417 | 0.9617 | 0.9786 | 0.9299 | 0.9936 | 0.9108 | 0.9726 |
| 0.0211 | 3750.0 | 22500 | 0.0636 | 0.9413 | 0.9612 | 0.9785 | 0.9285 | 0.9938 | 0.9102 | 0.9725 |
| 0.0212 | 3753.33 | 22520 | 0.0602 | 0.9413 | 0.9620 | 0.9784 | 0.9310 | 0.9930 | 0.9102 | 0.9723 |
| 0.01 | 3756.67 | 22540 | 0.0625 | 0.9421 | 0.9621 | 0.9787 | 0.9308 | 0.9934 | 0.9114 | 0.9728 |
| 0.0068 | 3760.0 | 22560 | 0.0639 | 0.9413 | 0.9611 | 0.9784 | 0.9285 | 0.9938 | 0.9101 | 0.9724 |
| 0.0263 | 3763.33 | 22580 | 0.0646 | 0.9416 | 0.9611 | 0.9786 | 0.9280 | 0.9942 | 0.9107 | 0.9726 |
| 0.0081 | 3766.67 | 22600 | 0.0555 | 0.9426 | 0.9621 | 0.9789 | 0.9304 | 0.9939 | 0.9122 | 0.9730 |
| 0.0101 | 3770.0 | 22620 | 0.0622 | 0.9416 | 0.9613 | 0.9786 | 0.9288 | 0.9938 | 0.9106 | 0.9726 |
| 0.0263 | 3773.33 | 22640 | 0.0686 | 0.9417 | 0.9605 | 0.9786 | 0.9263 | 0.9947 | 0.9106 | 0.9727 |
| 0.0068 | 3776.67 | 22660 | 0.0552 | 0.9430 | 0.9634 | 0.9790 | 0.9340 | 0.9929 | 0.9128 | 0.9731 |
| 0.01 | 3780.0 | 22680 | 0.0595 | 0.9420 | 0.9620 | 0.9787 | 0.9305 | 0.9935 | 0.9113 | 0.9727 |
| 0.0013 | 3783.33 | 22700 | 0.0545 | 0.9431 | 0.9636 | 0.9791 | 0.9344 | 0.9928 | 0.9130 | 0.9732 |
| 0.0212 | 3786.67 | 22720 | 0.0677 | 0.9413 | 0.9608 | 0.9785 | 0.9276 | 0.9941 | 0.9101 | 0.9724 |
| 0.1114 | 3790.0 | 22740 | 0.0598 | 0.9419 | 0.9619 | 0.9786 | 0.9304 | 0.9935 | 0.9111 | 0.9727 |
| 0.0013 | 3793.33 | 22760 | 0.0571 | 0.9425 | 0.9628 | 0.9789 | 0.9325 | 0.9931 | 0.9121 | 0.9729 |
| 0.0068 | 3796.67 | 22780 | 0.0703 | 0.9415 | 0.9606 | 0.9785 | 0.9267 | 0.9945 | 0.9103 | 0.9726 |
| 0.0068 | 3800.0 | 22800 | 0.0614 | 0.9417 | 0.9615 | 0.9786 | 0.9292 | 0.9938 | 0.9108 | 0.9726 |
| 0.1114 | 3803.33 | 22820 | 0.0592 | 0.9423 | 0.9613 | 0.9788 | 0.9282 | 0.9944 | 0.9116 | 0.9729 |
| 0.1116 | 3806.67 | 22840 | 0.0582 | 0.9425 | 0.9617 | 0.9789 | 0.9291 | 0.9943 | 0.9120 | 0.9730 |
| 0.007 | 3810.0 | 22860 | 0.0621 | 0.9417 | 0.9615 | 0.9786 | 0.9293 | 0.9937 | 0.9107 | 0.9726 |
| 0.0068 | 3813.33 | 22880 | 0.0590 | 0.9420 | 0.9625 | 0.9787 | 0.9320 | 0.9930 | 0.9113 | 0.9727 |
| 0.0265 | 3816.67 | 22900 | 0.0697 | 0.9409 | 0.9607 | 0.9783 | 0.9274 | 0.9939 | 0.9095 | 0.9723 |
| 0.0263 | 3820.0 | 22920 | 0.0605 | 0.9418 | 0.9617 | 0.9786 | 0.9299 | 0.9936 | 0.9109 | 0.9726 |
| 0.1114 | 3823.33 | 22940 | 0.0545 | 0.9434 | 0.9633 | 0.9792 | 0.9334 | 0.9933 | 0.9134 | 0.9734 |
| 0.1114 | 3826.67 | 22960 | 0.0618 | 0.9418 | 0.9612 | 0.9787 | 0.9283 | 0.9941 | 0.9109 | 0.9727 |
| 0.0211 | 3830.0 | 22980 | 0.0647 | 0.9413 | 0.9611 | 0.9784 | 0.9283 | 0.9939 | 0.9101 | 0.9724 |
| 0.0228 | 3833.33 | 23000 | 0.0603 | 0.9420 | 0.9619 | 0.9787 | 0.9303 | 0.9936 | 0.9113 | 0.9728 |
| 0.0263 | 3836.67 | 23020 | 0.0618 | 0.9419 | 0.9614 | 0.9787 | 0.9288 | 0.9940 | 0.9110 | 0.9727 |
| 0.0211 | 3840.0 | 23040 | 0.0551 | 0.9428 | 0.9630 | 0.9790 | 0.9327 | 0.9932 | 0.9126 | 0.9731 |
| 0.0211 | 3843.33 | 23060 | 0.0624 | 0.9410 | 0.9614 | 0.9783 | 0.9295 | 0.9933 | 0.9097 | 0.9723 |
| 0.0103 | 3846.67 | 23080 | 0.0592 | 0.9424 | 0.9621 | 0.9789 | 0.9304 | 0.9937 | 0.9118 | 0.9729 |
| 0.0263 | 3850.0 | 23100 | 0.0584 | 0.9423 | 0.9619 | 0.9788 | 0.9299 | 0.9939 | 0.9118 | 0.9729 |
| 0.0211 | 3853.33 | 23120 | 0.0650 | 0.9414 | 0.9613 | 0.9785 | 0.9288 | 0.9938 | 0.9103 | 0.9725 |
| 0.0067 | 3856.67 | 23140 | 0.0642 | 0.9414 | 0.9613 | 0.9785 | 0.9287 | 0.9938 | 0.9104 | 0.9725 |
| 0.0221 | 3860.0 | 23160 | 0.0635 | 0.9416 | 0.9613 | 0.9786 | 0.9286 | 0.9939 | 0.9107 | 0.9726 |
| 0.0099 | 3863.33 | 23180 | 0.0643 | 0.9416 | 0.9613 | 0.9786 | 0.9288 | 0.9939 | 0.9107 | 0.9726 |
| 0.0068 | 3866.67 | 23200 | 0.0622 | 0.9413 | 0.9619 | 0.9784 | 0.9306 | 0.9931 | 0.9102 | 0.9724 |
| 0.0228 | 3870.0 | 23220 | 0.0613 | 0.9419 | 0.9613 | 0.9787 | 0.9284 | 0.9941 | 0.9110 | 0.9727 |
| 0.0277 | 3873.33 | 23240 | 0.0575 | 0.9427 | 0.9621 | 0.9790 | 0.9302 | 0.9940 | 0.9124 | 0.9731 |
| 0.0013 | 3876.67 | 23260 | 0.0589 | 0.9420 | 0.9624 | 0.9787 | 0.9319 | 0.9930 | 0.9112 | 0.9727 |
| 0.1115 | 3880.0 | 23280 | 0.0569 | 0.9422 | 0.9624 | 0.9788 | 0.9316 | 0.9933 | 0.9116 | 0.9728 |
| 0.0014 | 3883.33 | 23300 | 0.0537 | 0.9436 | 0.9639 | 0.9793 | 0.9349 | 0.9929 | 0.9138 | 0.9734 |
| 0.1115 | 3886.67 | 23320 | 0.0629 | 0.9415 | 0.9616 | 0.9785 | 0.9296 | 0.9935 | 0.9105 | 0.9725 |
| 0.0013 | 3890.0 | 23340 | 0.0626 | 0.9418 | 0.9616 | 0.9786 | 0.9295 | 0.9937 | 0.9110 | 0.9727 |
| 0.1113 | 3893.33 | 23360 | 0.0575 | 0.9426 | 0.9618 | 0.9789 | 0.9295 | 0.9941 | 0.9121 | 0.9731 |
| 0.0013 | 3896.67 | 23380 | 0.0560 | 0.9428 | 0.9627 | 0.9790 | 0.9320 | 0.9934 | 0.9124 | 0.9731 |
| 0.0086 | 3900.0 | 23400 | 0.0595 | 0.9420 | 0.9621 | 0.9787 | 0.9308 | 0.9934 | 0.9112 | 0.9727 |
| 0.0262 | 3903.33 | 23420 | 0.0720 | 0.9412 | 0.9605 | 0.9784 | 0.9267 | 0.9943 | 0.9100 | 0.9724 |
| 0.0099 | 3906.67 | 23440 | 0.0640 | 0.9414 | 0.9614 | 0.9785 | 0.9293 | 0.9936 | 0.9104 | 0.9725 |
| 0.1117 | 3910.0 | 23460 | 0.0605 | 0.9419 | 0.9619 | 0.9786 | 0.9304 | 0.9935 | 0.9110 | 0.9727 |
| 0.1112 | 3913.33 | 23480 | 0.0568 | 0.9426 | 0.9631 | 0.9789 | 0.9332 | 0.9929 | 0.9122 | 0.9730 |
| 0.0099 | 3916.67 | 23500 | 0.0607 | 0.9419 | 0.9618 | 0.9786 | 0.9299 | 0.9936 | 0.9110 | 0.9727 |
| 0.1112 | 3920.0 | 23520 | 0.0613 | 0.9417 | 0.9618 | 0.9786 | 0.9302 | 0.9934 | 0.9107 | 0.9726 |
| 0.0265 | 3923.33 | 23540 | 0.0656 | 0.9416 | 0.9611 | 0.9786 | 0.9281 | 0.9941 | 0.9106 | 0.9726 |
| 0.0129 | 3926.67 | 23560 | 0.0611 | 0.9417 | 0.9620 | 0.9786 | 0.9306 | 0.9933 | 0.9109 | 0.9726 |
| 0.0099 | 3930.0 | 23580 | 0.0634 | 0.9412 | 0.9618 | 0.9784 | 0.9306 | 0.9931 | 0.9100 | 0.9723 |
| 0.1113 | 3933.33 | 23600 | 0.0545 | 0.9435 | 0.9637 | 0.9792 | 0.9345 | 0.9930 | 0.9136 | 0.9734 |
| 0.0013 | 3936.67 | 23620 | 0.0564 | 0.9432 | 0.9631 | 0.9791 | 0.9328 | 0.9934 | 0.9131 | 0.9733 |
| 0.0067 | 3940.0 | 23640 | 0.0629 | 0.9415 | 0.9615 | 0.9785 | 0.9293 | 0.9936 | 0.9104 | 0.9725 |
| 0.1112 | 3943.33 | 23660 | 0.0596 | 0.9422 | 0.9619 | 0.9788 | 0.9299 | 0.9938 | 0.9115 | 0.9728 |
| 0.0214 | 3946.67 | 23680 | 0.0595 | 0.9424 | 0.9615 | 0.9789 | 0.9288 | 0.9943 | 0.9118 | 0.9730 |
| 0.0013 | 3950.0 | 23700 | 0.0614 | 0.9419 | 0.9616 | 0.9787 | 0.9294 | 0.9938 | 0.9111 | 0.9727 |
| 0.0067 | 3953.33 | 23720 | 0.0656 | 0.9415 | 0.9612 | 0.9785 | 0.9286 | 0.9939 | 0.9105 | 0.9726 |
| 0.0014 | 3956.67 | 23740 | 0.0588 | 0.9419 | 0.9617 | 0.9787 | 0.9297 | 0.9937 | 0.9111 | 0.9727 |
| 0.1112 | 3960.0 | 23760 | 0.0571 | 0.9426 | 0.9626 | 0.9789 | 0.9318 | 0.9934 | 0.9121 | 0.9730 |
| 0.0262 | 3963.33 | 23780 | 0.0679 | 0.9415 | 0.9607 | 0.9786 | 0.9270 | 0.9944 | 0.9105 | 0.9726 |
| 0.0099 | 3966.67 | 23800 | 0.0715 | 0.9411 | 0.9606 | 0.9784 | 0.9271 | 0.9941 | 0.9098 | 0.9724 |
| 0.0013 | 3970.0 | 23820 | 0.0582 | 0.9425 | 0.9623 | 0.9789 | 0.9310 | 0.9936 | 0.9121 | 0.9730 |
| 0.0107 | 3973.33 | 23840 | 0.0591 | 0.9422 | 0.9626 | 0.9787 | 0.9322 | 0.9930 | 0.9116 | 0.9728 |
| 0.0098 | 3976.67 | 23860 | 0.0735 | 0.9410 | 0.9606 | 0.9784 | 0.9272 | 0.9941 | 0.9097 | 0.9723 |
| 0.0013 | 3980.0 | 23880 | 0.0589 | 0.9420 | 0.9622 | 0.9787 | 0.9310 | 0.9934 | 0.9113 | 0.9727 |
| 0.0013 | 3983.33 | 23900 | 0.0576 | 0.9424 | 0.9626 | 0.9788 | 0.9319 | 0.9932 | 0.9118 | 0.9729 |
| 0.0013 | 3986.67 | 23920 | 0.0607 | 0.9417 | 0.9621 | 0.9786 | 0.9311 | 0.9932 | 0.9109 | 0.9726 |
| 0.0264 | 3990.0 | 23940 | 0.0635 | 0.9424 | 0.9610 | 0.9789 | 0.9272 | 0.9948 | 0.9117 | 0.9730 |
| 0.0128 | 3993.33 | 23960 | 0.0678 | 0.9413 | 0.9609 | 0.9784 | 0.9278 | 0.9940 | 0.9101 | 0.9724 |
| 0.021 | 3996.67 | 23980 | 0.0658 | 0.9411 | 0.9614 | 0.9783 | 0.9295 | 0.9933 | 0.9098 | 0.9723 |
| 0.0013 | 4000.0 | 24000 | 0.0570 | 0.9424 | 0.9629 | 0.9788 | 0.9328 | 0.9930 | 0.9119 | 0.9729 |
| 0.0067 | 4003.33 | 24020 | 0.0628 | 0.9420 | 0.9614 | 0.9787 | 0.9289 | 0.9940 | 0.9112 | 0.9728 |
| 0.0018 | 4006.67 | 24040 | 0.0536 | 0.9445 | 0.9651 | 0.9796 | 0.9379 | 0.9924 | 0.9153 | 0.9738 |
| 0.0068 | 4010.0 | 24060 | 0.0611 | 0.9418 | 0.9616 | 0.9786 | 0.9294 | 0.9938 | 0.9110 | 0.9727 |
| 0.0067 | 4013.33 | 24080 | 0.0609 | 0.9419 | 0.9620 | 0.9787 | 0.9306 | 0.9935 | 0.9112 | 0.9727 |
| 0.1112 | 4016.67 | 24100 | 0.0590 | 0.9421 | 0.9621 | 0.9787 | 0.9307 | 0.9935 | 0.9114 | 0.9728 |
| 0.0211 | 4020.0 | 24120 | 0.0601 | 0.9419 | 0.9618 | 0.9787 | 0.9299 | 0.9937 | 0.9112 | 0.9727 |
| 0.0211 | 4023.33 | 24140 | 0.0675 | 0.9411 | 0.9611 | 0.9784 | 0.9285 | 0.9937 | 0.9098 | 0.9723 |
| 0.0262 | 4026.67 | 24160 | 0.0579 | 0.9427 | 0.9623 | 0.9790 | 0.9308 | 0.9938 | 0.9124 | 0.9731 |
| 0.0013 | 4030.0 | 24180 | 0.0616 | 0.9419 | 0.9619 | 0.9787 | 0.9301 | 0.9936 | 0.9111 | 0.9727 |
| 0.1111 | 4033.33 | 24200 | 0.0547 | 0.9431 | 0.9635 | 0.9791 | 0.9341 | 0.9929 | 0.9130 | 0.9732 |
| 0.0068 | 4036.67 | 24220 | 0.0640 | 0.9415 | 0.9613 | 0.9785 | 0.9287 | 0.9938 | 0.9105 | 0.9725 |
| 0.1111 | 4040.0 | 24240 | 0.0603 | 0.9419 | 0.9617 | 0.9787 | 0.9298 | 0.9937 | 0.9111 | 0.9727 |
| 0.1111 | 4043.33 | 24260 | 0.0549 | 0.9434 | 0.9636 | 0.9792 | 0.9343 | 0.9930 | 0.9134 | 0.9733 |
| 0.0262 | 4046.67 | 24280 | 0.0634 | 0.9419 | 0.9614 | 0.9787 | 0.9287 | 0.9940 | 0.9111 | 0.9727 |
| 0.0099 | 4050.0 | 24300 | 0.0597 | 0.9419 | 0.9620 | 0.9787 | 0.9305 | 0.9935 | 0.9112 | 0.9727 |
| 0.0013 | 4053.33 | 24320 | 0.0550 | 0.9431 | 0.9633 | 0.9791 | 0.9335 | 0.9931 | 0.9130 | 0.9732 |
| 0.0213 | 4056.67 | 24340 | 0.0642 | 0.9417 | 0.9613 | 0.9786 | 0.9286 | 0.9940 | 0.9107 | 0.9726 |
| 0.1112 | 4060.0 | 24360 | 0.0618 | 0.9419 | 0.9617 | 0.9787 | 0.9295 | 0.9938 | 0.9111 | 0.9727 |
| 0.0211 | 4063.33 | 24380 | 0.0657 | 0.9413 | 0.9609 | 0.9785 | 0.9276 | 0.9941 | 0.9101 | 0.9724 |
| 0.0013 | 4066.67 | 24400 | 0.0636 | 0.9412 | 0.9616 | 0.9784 | 0.9299 | 0.9933 | 0.9101 | 0.9723 |
| 0.0013 | 4070.0 | 24420 | 0.0600 | 0.9419 | 0.9620 | 0.9786 | 0.9305 | 0.9934 | 0.9111 | 0.9727 |
| 0.1111 | 4073.33 | 24440 | 0.0595 | 0.9426 | 0.9623 | 0.9789 | 0.9309 | 0.9937 | 0.9122 | 0.9730 |
| 0.1162 | 4076.67 | 24460 | 0.0522 | 0.9438 | 0.9636 | 0.9794 | 0.9338 | 0.9934 | 0.9141 | 0.9736 |
| 0.1112 | 4080.0 | 24480 | 0.0575 | 0.9422 | 0.9623 | 0.9788 | 0.9312 | 0.9934 | 0.9116 | 0.9728 |
| 0.021 | 4083.33 | 24500 | 0.0651 | 0.9411 | 0.9613 | 0.9784 | 0.9289 | 0.9936 | 0.9100 | 0.9723 |
| 0.021 | 4086.67 | 24520 | 0.0585 | 0.9421 | 0.9623 | 0.9787 | 0.9312 | 0.9934 | 0.9115 | 0.9728 |
| 0.0098 | 4090.0 | 24540 | 0.0602 | 0.9418 | 0.9622 | 0.9786 | 0.9311 | 0.9932 | 0.9109 | 0.9726 |
| 0.021 | 4093.33 | 24560 | 0.0668 | 0.9411 | 0.9611 | 0.9784 | 0.9284 | 0.9938 | 0.9099 | 0.9724 |
| 0.0081 | 4096.67 | 24580 | 0.0527 | 0.9437 | 0.9638 | 0.9793 | 0.9346 | 0.9931 | 0.9140 | 0.9735 |
| 0.0013 | 4100.0 | 24600 | 0.0642 | 0.9426 | 0.9623 | 0.9789 | 0.9310 | 0.9937 | 0.9122 | 0.9730 |
| 0.0263 | 4103.33 | 24620 | 0.0634 | 0.9416 | 0.9613 | 0.9786 | 0.9287 | 0.9939 | 0.9106 | 0.9726 |
| 0.021 | 4106.67 | 24640 | 0.0679 | 0.9412 | 0.9608 | 0.9784 | 0.9275 | 0.9941 | 0.9100 | 0.9724 |
| 0.0262 | 4110.0 | 24660 | 0.0624 | 0.9419 | 0.9617 | 0.9787 | 0.9296 | 0.9937 | 0.9111 | 0.9727 |
| 0.0072 | 4113.33 | 24680 | 0.0593 | 0.9422 | 0.9626 | 0.9787 | 0.9321 | 0.9931 | 0.9115 | 0.9728 |
| 0.0104 | 4116.67 | 24700 | 0.0613 | 0.9416 | 0.9616 | 0.9786 | 0.9297 | 0.9936 | 0.9107 | 0.9726 |
| 0.0098 | 4120.0 | 24720 | 0.0588 | 0.9420 | 0.9620 | 0.9787 | 0.9306 | 0.9935 | 0.9112 | 0.9727 |
| 0.0067 | 4123.33 | 24740 | 0.0573 | 0.9424 | 0.9634 | 0.9788 | 0.9342 | 0.9925 | 0.9120 | 0.9728 |
| 0.0261 | 4126.67 | 24760 | 0.0650 | 0.9414 | 0.9610 | 0.9785 | 0.9281 | 0.9940 | 0.9103 | 0.9725 |
| 0.0067 | 4130.0 | 24780 | 0.0592 | 0.9422 | 0.9619 | 0.9788 | 0.9299 | 0.9938 | 0.9116 | 0.9729 |
| 0.0099 | 4133.33 | 24800 | 0.0634 | 0.9416 | 0.9616 | 0.9785 | 0.9295 | 0.9936 | 0.9106 | 0.9725 |
| 0.1265 | 4136.67 | 24820 | 0.0574 | 0.9430 | 0.9630 | 0.9790 | 0.9328 | 0.9933 | 0.9128 | 0.9731 |
| 0.0067 | 4140.0 | 24840 | 0.0584 | 0.9426 | 0.9625 | 0.9789 | 0.9316 | 0.9935 | 0.9122 | 0.9730 |
| 0.021 | 4143.33 | 24860 | 0.0690 | 0.9415 | 0.9606 | 0.9786 | 0.9266 | 0.9945 | 0.9104 | 0.9726 |
| 0.021 | 4146.67 | 24880 | 0.0599 | 0.9420 | 0.9621 | 0.9787 | 0.9307 | 0.9934 | 0.9113 | 0.9727 |
| 0.0068 | 4150.0 | 24900 | 0.0593 | 0.9423 | 0.9618 | 0.9788 | 0.9298 | 0.9939 | 0.9116 | 0.9729 |
| 0.0067 | 4153.33 | 24920 | 0.0587 | 0.9422 | 0.9623 | 0.9788 | 0.9313 | 0.9934 | 0.9116 | 0.9728 |
| 0.0262 | 4156.67 | 24940 | 0.0625 | 0.9418 | 0.9612 | 0.9787 | 0.9283 | 0.9941 | 0.9109 | 0.9727 |
| 0.0067 | 4160.0 | 24960 | 0.0584 | 0.9424 | 0.9626 | 0.9788 | 0.9319 | 0.9932 | 0.9119 | 0.9729 |
| 0.111 | 4163.33 | 24980 | 0.0529 | 0.9435 | 0.9638 | 0.9792 | 0.9348 | 0.9929 | 0.9137 | 0.9734 |
| 0.0098 | 4166.67 | 25000 | 0.0608 | 0.9418 | 0.9616 | 0.9786 | 0.9294 | 0.9938 | 0.9110 | 0.9727 |
| 0.0013 | 4170.0 | 25020 | 0.0587 | 0.9423 | 0.9623 | 0.9788 | 0.9311 | 0.9935 | 0.9117 | 0.9729 |
| 0.111 | 4173.33 | 25040 | 0.0585 | 0.9423 | 0.9626 | 0.9788 | 0.9320 | 0.9932 | 0.9118 | 0.9728 |
| 0.0068 | 4176.67 | 25060 | 0.0624 | 0.9417 | 0.9617 | 0.9786 | 0.9300 | 0.9935 | 0.9108 | 0.9726 |
| 0.0067 | 4180.0 | 25080 | 0.0602 | 0.9421 | 0.9619 | 0.9787 | 0.9302 | 0.9936 | 0.9114 | 0.9728 |
| 0.0098 | 4183.33 | 25100 | 0.0660 | 0.9411 | 0.9613 | 0.9784 | 0.9290 | 0.9936 | 0.9099 | 0.9723 |
| 0.0091 | 4186.67 | 25120 | 0.0655 | 0.9412 | 0.9610 | 0.9784 | 0.9282 | 0.9939 | 0.9101 | 0.9724 |
| 0.1111 | 4190.0 | 25140 | 0.0593 | 0.9420 | 0.9622 | 0.9787 | 0.9310 | 0.9934 | 0.9113 | 0.9727 |
| 0.111 | 4193.33 | 25160 | 0.0586 | 0.9421 | 0.9621 | 0.9787 | 0.9308 | 0.9934 | 0.9114 | 0.9727 |
| 0.0262 | 4196.67 | 25180 | 0.0593 | 0.9420 | 0.9617 | 0.9787 | 0.9296 | 0.9938 | 0.9112 | 0.9728 |
| 0.021 | 4200.0 | 25200 | 0.0620 | 0.9417 | 0.9617 | 0.9786 | 0.9297 | 0.9936 | 0.9108 | 0.9726 |
| 0.0013 | 4203.33 | 25220 | 0.0541 | 0.9433 | 0.9635 | 0.9792 | 0.9341 | 0.9930 | 0.9133 | 0.9733 |
| 0.1128 | 4206.67 | 25240 | 0.0593 | 0.9418 | 0.9618 | 0.9786 | 0.9301 | 0.9936 | 0.9110 | 0.9727 |
| 0.0211 | 4210.0 | 25260 | 0.0641 | 0.9414 | 0.9612 | 0.9785 | 0.9285 | 0.9939 | 0.9103 | 0.9725 |
| 0.0098 | 4213.33 | 25280 | 0.0647 | 0.9412 | 0.9614 | 0.9784 | 0.9292 | 0.9936 | 0.9101 | 0.9724 |
| 0.021 | 4216.67 | 25300 | 0.0600 | 0.9420 | 0.9621 | 0.9787 | 0.9307 | 0.9934 | 0.9113 | 0.9727 |
| 0.021 | 4220.0 | 25320 | 0.0608 | 0.9420 | 0.9620 | 0.9787 | 0.9305 | 0.9935 | 0.9113 | 0.9727 |
| 0.017 | 4223.33 | 25340 | 0.0705 | 0.9417 | 0.9612 | 0.9786 | 0.9283 | 0.9941 | 0.9108 | 0.9727 |
| 0.0264 | 4226.67 | 25360 | 0.0618 | 0.9423 | 0.9618 | 0.9788 | 0.9295 | 0.9940 | 0.9117 | 0.9729 |
| 0.0067 | 4230.0 | 25380 | 0.0595 | 0.9426 | 0.9623 | 0.9789 | 0.9309 | 0.9937 | 0.9121 | 0.9730 |
| 0.0013 | 4233.33 | 25400 | 0.0617 | 0.9418 | 0.9619 | 0.9786 | 0.9304 | 0.9935 | 0.9110 | 0.9726 |
| 0.0013 | 4236.67 | 25420 | 0.0580 | 0.9428 | 0.9627 | 0.9790 | 0.9320 | 0.9934 | 0.9125 | 0.9731 |
| 0.111 | 4240.0 | 25440 | 0.0604 | 0.9422 | 0.9618 | 0.9788 | 0.9298 | 0.9938 | 0.9115 | 0.9729 |
| 0.1113 | 4243.33 | 25460 | 0.0582 | 0.9425 | 0.9619 | 0.9789 | 0.9299 | 0.9939 | 0.9119 | 0.9730 |
| 0.0098 | 4246.67 | 25480 | 0.0526 | 0.9442 | 0.9647 | 0.9795 | 0.9369 | 0.9926 | 0.9147 | 0.9737 |
| 0.0067 | 4250.0 | 25500 | 0.0602 | 0.9421 | 0.9621 | 0.9787 | 0.9306 | 0.9936 | 0.9115 | 0.9728 |
| 0.0212 | 4253.33 | 25520 | 0.0601 | 0.9419 | 0.9618 | 0.9787 | 0.9301 | 0.9936 | 0.9111 | 0.9727 |
| 0.0274 | 4256.67 | 25540 | 0.0612 | 0.9417 | 0.9621 | 0.9786 | 0.9309 | 0.9932 | 0.9108 | 0.9725 |
| 0.0067 | 4260.0 | 25560 | 0.0600 | 0.9420 | 0.9619 | 0.9787 | 0.9301 | 0.9936 | 0.9113 | 0.9727 |
| 0.0067 | 4263.33 | 25580 | 0.0601 | 0.9420 | 0.9619 | 0.9787 | 0.9300 | 0.9937 | 0.9112 | 0.9727 |
| 0.0067 | 4266.67 | 25600 | 0.0613 | 0.9421 | 0.9617 | 0.9787 | 0.9297 | 0.9938 | 0.9113 | 0.9728 |
| 0.0012 | 4270.0 | 25620 | 0.0603 | 0.9421 | 0.9623 | 0.9787 | 0.9313 | 0.9933 | 0.9114 | 0.9728 |
| 0.0067 | 4273.33 | 25640 | 0.0634 | 0.9416 | 0.9618 | 0.9786 | 0.9301 | 0.9934 | 0.9107 | 0.9726 |
| 0.0099 | 4276.67 | 25660 | 0.0558 | 0.9430 | 0.9626 | 0.9791 | 0.9316 | 0.9937 | 0.9128 | 0.9732 |
| 0.0013 | 4280.0 | 25680 | 0.0610 | 0.9418 | 0.9618 | 0.9786 | 0.9301 | 0.9935 | 0.9109 | 0.9726 |
| 0.0262 | 4283.33 | 25700 | 0.0608 | 0.9422 | 0.9615 | 0.9788 | 0.9288 | 0.9942 | 0.9115 | 0.9729 |
| 0.0067 | 4286.67 | 25720 | 0.0639 | 0.9412 | 0.9617 | 0.9784 | 0.9302 | 0.9932 | 0.9100 | 0.9723 |
| 0.0013 | 4290.0 | 25740 | 0.0573 | 0.9425 | 0.9623 | 0.9789 | 0.9309 | 0.9936 | 0.9120 | 0.9730 |
| 0.111 | 4293.33 | 25760 | 0.0529 | 0.9440 | 0.9639 | 0.9794 | 0.9346 | 0.9932 | 0.9143 | 0.9736 |
| 0.0209 | 4296.67 | 25780 | 0.0630 | 0.9420 | 0.9613 | 0.9787 | 0.9285 | 0.9942 | 0.9113 | 0.9728 |
| 0.111 | 4300.0 | 25800 | 0.0609 | 0.9421 | 0.9620 | 0.9787 | 0.9303 | 0.9936 | 0.9114 | 0.9728 |
| 0.0215 | 4303.33 | 25820 | 0.0586 | 0.9423 | 0.9622 | 0.9788 | 0.9308 | 0.9936 | 0.9118 | 0.9729 |
| 0.1109 | 4306.67 | 25840 | 0.0556 | 0.9428 | 0.9631 | 0.9790 | 0.9332 | 0.9930 | 0.9125 | 0.9730 |
| 0.0013 | 4310.0 | 25860 | 0.0610 | 0.9419 | 0.9619 | 0.9787 | 0.9302 | 0.9935 | 0.9111 | 0.9727 |
| 0.0013 | 4313.33 | 25880 | 0.0572 | 0.9424 | 0.9621 | 0.9788 | 0.9305 | 0.9937 | 0.9118 | 0.9729 |
| 0.0067 | 4316.67 | 25900 | 0.0586 | 0.9425 | 0.9623 | 0.9789 | 0.9311 | 0.9936 | 0.9121 | 0.9730 |
| 0.0261 | 4320.0 | 25920 | 0.0615 | 0.9418 | 0.9618 | 0.9786 | 0.9302 | 0.9935 | 0.9109 | 0.9726 |
| 0.0012 | 4323.33 | 25940 | 0.0621 | 0.9412 | 0.9618 | 0.9784 | 0.9306 | 0.9931 | 0.9101 | 0.9723 |
| 0.0067 | 4326.67 | 25960 | 0.0554 | 0.9431 | 0.9635 | 0.9791 | 0.9343 | 0.9928 | 0.9130 | 0.9732 |
| 0.0212 | 4330.0 | 25980 | 0.0617 | 0.9420 | 0.9617 | 0.9787 | 0.9296 | 0.9938 | 0.9112 | 0.9728 |
| 0.0261 | 4333.33 | 26000 | 0.0636 | 0.9415 | 0.9613 | 0.9785 | 0.9288 | 0.9938 | 0.9104 | 0.9725 |
| 0.0069 | 4336.67 | 26020 | 0.0658 | 0.9413 | 0.9610 | 0.9785 | 0.9281 | 0.9940 | 0.9102 | 0.9725 |
| 0.1114 | 4340.0 | 26040 | 0.0641 | 0.9414 | 0.9612 | 0.9785 | 0.9284 | 0.9939 | 0.9104 | 0.9725 |
| 0.021 | 4343.33 | 26060 | 0.0640 | 0.9413 | 0.9612 | 0.9785 | 0.9287 | 0.9937 | 0.9102 | 0.9724 |
| 0.0212 | 4346.67 | 26080 | 0.0658 | 0.9415 | 0.9611 | 0.9785 | 0.9283 | 0.9940 | 0.9104 | 0.9725 |
| 0.0261 | 4350.0 | 26100 | 0.0649 | 0.9415 | 0.9614 | 0.9785 | 0.9290 | 0.9938 | 0.9105 | 0.9725 |
| 0.0014 | 4353.33 | 26120 | 0.0615 | 0.9419 | 0.9615 | 0.9787 | 0.9290 | 0.9939 | 0.9110 | 0.9727 |
| 0.0067 | 4356.67 | 26140 | 0.0554 | 0.9430 | 0.9629 | 0.9791 | 0.9325 | 0.9934 | 0.9129 | 0.9732 |
| 0.0263 | 4360.0 | 26160 | 0.0612 | 0.9418 | 0.9615 | 0.9786 | 0.9293 | 0.9938 | 0.9109 | 0.9726 |
| 0.0013 | 4363.33 | 26180 | 0.0529 | 0.9437 | 0.9642 | 0.9793 | 0.9357 | 0.9927 | 0.9140 | 0.9735 |
| 0.0067 | 4366.67 | 26200 | 0.0691 | 0.9415 | 0.9608 | 0.9786 | 0.9272 | 0.9944 | 0.9105 | 0.9726 |
| 0.0014 | 4370.0 | 26220 | 0.0567 | 0.9431 | 0.9630 | 0.9791 | 0.9327 | 0.9934 | 0.9129 | 0.9732 |
| 0.0261 | 4373.33 | 26240 | 0.0652 | 0.9416 | 0.9610 | 0.9786 | 0.9278 | 0.9942 | 0.9106 | 0.9726 |
| 0.0012 | 4376.67 | 26260 | 0.0594 | 0.9419 | 0.9619 | 0.9786 | 0.9304 | 0.9935 | 0.9111 | 0.9727 |
| 0.111 | 4380.0 | 26280 | 0.0596 | 0.9419 | 0.9615 | 0.9787 | 0.9291 | 0.9939 | 0.9111 | 0.9727 |
| 0.0067 | 4383.33 | 26300 | 0.0547 | 0.9428 | 0.9640 | 0.9789 | 0.9359 | 0.9922 | 0.9127 | 0.9730 |
| 0.0261 | 4386.67 | 26320 | 0.0554 | 0.9433 | 0.9630 | 0.9792 | 0.9326 | 0.9935 | 0.9132 | 0.9733 |
| 0.0067 | 4390.0 | 26340 | 0.0626 | 0.9416 | 0.9615 | 0.9786 | 0.9293 | 0.9937 | 0.9106 | 0.9726 |
| 0.0097 | 4393.33 | 26360 | 0.0704 | 0.9408 | 0.9609 | 0.9783 | 0.9281 | 0.9937 | 0.9094 | 0.9722 |
| 0.0013 | 4396.67 | 26380 | 0.0586 | 0.9424 | 0.9620 | 0.9788 | 0.9301 | 0.9938 | 0.9118 | 0.9729 |
| 0.0209 | 4400.0 | 26400 | 0.0608 | 0.9414 | 0.9620 | 0.9785 | 0.9310 | 0.9931 | 0.9104 | 0.9724 |
| 0.0209 | 4403.33 | 26420 | 0.0692 | 0.9413 | 0.9608 | 0.9785 | 0.9273 | 0.9942 | 0.9102 | 0.9725 |
| 0.0067 | 4406.67 | 26440 | 0.0611 | 0.9418 | 0.9618 | 0.9786 | 0.9300 | 0.9936 | 0.9110 | 0.9727 |
| 0.0067 | 4410.0 | 26460 | 0.0616 | 0.9420 | 0.9620 | 0.9787 | 0.9305 | 0.9935 | 0.9112 | 0.9727 |
| 0.0067 | 4413.33 | 26480 | 0.0593 | 0.9420 | 0.9626 | 0.9787 | 0.9322 | 0.9929 | 0.9112 | 0.9727 |
| 0.0261 | 4416.67 | 26500 | 0.0584 | 0.9423 | 0.9620 | 0.9788 | 0.9302 | 0.9937 | 0.9116 | 0.9729 |
| 0.1111 | 4420.0 | 26520 | 0.0588 | 0.9426 | 0.9627 | 0.9789 | 0.9320 | 0.9933 | 0.9122 | 0.9730 |
| 0.0097 | 4423.33 | 26540 | 0.0574 | 0.9424 | 0.9624 | 0.9789 | 0.9313 | 0.9935 | 0.9119 | 0.9729 |
| 0.0067 | 4426.67 | 26560 | 0.0597 | 0.9419 | 0.9623 | 0.9786 | 0.9313 | 0.9932 | 0.9111 | 0.9726 |
| 0.0209 | 4430.0 | 26580 | 0.0660 | 0.9412 | 0.9612 | 0.9784 | 0.9287 | 0.9937 | 0.9100 | 0.9724 |
| 0.0104 | 4433.33 | 26600 | 0.0610 | 0.9419 | 0.9610 | 0.9787 | 0.9275 | 0.9945 | 0.9111 | 0.9728 |
| 0.0014 | 4436.67 | 26620 | 0.0495 | 0.9446 | 0.9646 | 0.9796 | 0.9363 | 0.9930 | 0.9154 | 0.9739 |
| 0.021 | 4440.0 | 26640 | 0.0598 | 0.9418 | 0.9618 | 0.9786 | 0.9299 | 0.9936 | 0.9110 | 0.9727 |
| 0.1109 | 4443.33 | 26660 | 0.0587 | 0.9423 | 0.9623 | 0.9788 | 0.9311 | 0.9935 | 0.9118 | 0.9729 |
| 0.1109 | 4446.67 | 26680 | 0.0638 | 0.9415 | 0.9612 | 0.9785 | 0.9285 | 0.9939 | 0.9105 | 0.9726 |
| 0.0067 | 4450.0 | 26700 | 0.0595 | 0.9422 | 0.9622 | 0.9788 | 0.9309 | 0.9935 | 0.9115 | 0.9728 |
| 0.021 | 4453.33 | 26720 | 0.0633 | 0.9415 | 0.9616 | 0.9785 | 0.9296 | 0.9935 | 0.9104 | 0.9725 |
| 0.0261 | 4456.67 | 26740 | 0.0607 | 0.9423 | 0.9617 | 0.9788 | 0.9294 | 0.9940 | 0.9117 | 0.9729 |
| 0.021 | 4460.0 | 26760 | 0.0613 | 0.9419 | 0.9616 | 0.9787 | 0.9293 | 0.9939 | 0.9112 | 0.9727 |
| 0.0013 | 4463.33 | 26780 | 0.0528 | 0.9437 | 0.9644 | 0.9793 | 0.9361 | 0.9926 | 0.9140 | 0.9735 |
| 0.0262 | 4466.67 | 26800 | 0.0572 | 0.9427 | 0.9631 | 0.9789 | 0.9332 | 0.9930 | 0.9125 | 0.9730 |
| 0.0261 | 4470.0 | 26820 | 0.0559 | 0.9426 | 0.9622 | 0.9789 | 0.9308 | 0.9937 | 0.9122 | 0.9730 |
| 0.1109 | 4473.33 | 26840 | 0.0554 | 0.9429 | 0.9630 | 0.9790 | 0.9327 | 0.9933 | 0.9126 | 0.9731 |
| 0.0012 | 4476.67 | 26860 | 0.0541 | 0.9436 | 0.9634 | 0.9793 | 0.9335 | 0.9934 | 0.9138 | 0.9735 |
| 0.0013 | 4480.0 | 26880 | 0.0576 | 0.9424 | 0.9622 | 0.9789 | 0.9308 | 0.9936 | 0.9119 | 0.9729 |
| 0.0066 | 4483.33 | 26900 | 0.0642 | 0.9414 | 0.9614 | 0.9785 | 0.9292 | 0.9936 | 0.9103 | 0.9724 |
| 0.0097 | 4486.67 | 26920 | 0.0523 | 0.9440 | 0.9643 | 0.9794 | 0.9359 | 0.9928 | 0.9144 | 0.9736 |
| 0.1109 | 4490.0 | 26940 | 0.0635 | 0.9420 | 0.9615 | 0.9787 | 0.9290 | 0.9940 | 0.9112 | 0.9728 |
| 0.1109 | 4493.33 | 26960 | 0.0522 | 0.9444 | 0.9646 | 0.9795 | 0.9365 | 0.9928 | 0.9150 | 0.9738 |
| 0.0013 | 4496.67 | 26980 | 0.0592 | 0.9420 | 0.9626 | 0.9787 | 0.9323 | 0.9929 | 0.9113 | 0.9727 |
| 0.1108 | 4500.0 | 27000 | 0.0603 | 0.9422 | 0.9619 | 0.9788 | 0.9301 | 0.9937 | 0.9116 | 0.9728 |
| 0.0261 | 4503.33 | 27020 | 0.0665 | 0.9415 | 0.9607 | 0.9785 | 0.9271 | 0.9944 | 0.9104 | 0.9726 |
| 0.0013 | 4506.67 | 27040 | 0.0581 | 0.9425 | 0.9624 | 0.9789 | 0.9313 | 0.9935 | 0.9121 | 0.9730 |
| 0.1108 | 4510.0 | 27060 | 0.0579 | 0.9421 | 0.9622 | 0.9787 | 0.9309 | 0.9934 | 0.9114 | 0.9728 |
| 0.0261 | 4513.33 | 27080 | 0.0618 | 0.9413 | 0.9616 | 0.9784 | 0.9299 | 0.9933 | 0.9102 | 0.9724 |
| 0.0266 | 4516.67 | 27100 | 0.0626 | 0.9417 | 0.9616 | 0.9786 | 0.9295 | 0.9937 | 0.9108 | 0.9726 |
| 0.0013 | 4520.0 | 27120 | 0.0602 | 0.9420 | 0.9619 | 0.9787 | 0.9302 | 0.9936 | 0.9112 | 0.9727 |
| 0.0013 | 4523.33 | 27140 | 0.0602 | 0.9419 | 0.9623 | 0.9786 | 0.9316 | 0.9931 | 0.9111 | 0.9726 |
| 0.0209 | 4526.67 | 27160 | 0.0725 | 0.9411 | 0.9607 | 0.9784 | 0.9272 | 0.9941 | 0.9098 | 0.9724 |
| 0.026 | 4530.0 | 27180 | 0.0606 | 0.9420 | 0.9617 | 0.9787 | 0.9298 | 0.9937 | 0.9112 | 0.9727 |
| 0.1108 | 4533.33 | 27200 | 0.0564 | 0.9428 | 0.9620 | 0.9790 | 0.9299 | 0.9941 | 0.9125 | 0.9732 |
| 0.0097 | 4536.67 | 27220 | 0.0586 | 0.9422 | 0.9623 | 0.9788 | 0.9313 | 0.9933 | 0.9116 | 0.9728 |
| 0.026 | 4540.0 | 27240 | 0.0633 | 0.9418 | 0.9614 | 0.9786 | 0.9290 | 0.9939 | 0.9109 | 0.9727 |
| 0.021 | 4543.33 | 27260 | 0.0656 | 0.9415 | 0.9614 | 0.9785 | 0.9290 | 0.9937 | 0.9105 | 0.9725 |
| 0.0013 | 4546.67 | 27280 | 0.0576 | 0.9423 | 0.9627 | 0.9788 | 0.9323 | 0.9931 | 0.9118 | 0.9729 |
| 0.0066 | 4550.0 | 27300 | 0.0612 | 0.9418 | 0.9616 | 0.9786 | 0.9296 | 0.9937 | 0.9109 | 0.9727 |
| 0.0212 | 4553.33 | 27320 | 0.0667 | 0.9412 | 0.9612 | 0.9784 | 0.9287 | 0.9937 | 0.9100 | 0.9724 |
| 0.026 | 4556.67 | 27340 | 0.0617 | 0.9421 | 0.9615 | 0.9788 | 0.9289 | 0.9941 | 0.9114 | 0.9728 |
| 0.0209 | 4560.0 | 27360 | 0.0580 | 0.9425 | 0.9623 | 0.9789 | 0.9309 | 0.9937 | 0.9120 | 0.9730 |
| 0.0097 | 4563.33 | 27380 | 0.0652 | 0.9412 | 0.9614 | 0.9784 | 0.9294 | 0.9935 | 0.9101 | 0.9724 |
| 0.0097 | 4566.67 | 27400 | 0.0559 | 0.9430 | 0.9634 | 0.9790 | 0.9338 | 0.9929 | 0.9128 | 0.9731 |
| 0.026 | 4570.0 | 27420 | 0.0634 | 0.9417 | 0.9615 | 0.9786 | 0.9292 | 0.9938 | 0.9108 | 0.9726 |
| 0.0097 | 4573.33 | 27440 | 0.0702 | 0.9414 | 0.9607 | 0.9785 | 0.9271 | 0.9943 | 0.9102 | 0.9725 |
| 0.0097 | 4576.67 | 27460 | 0.0613 | 0.9417 | 0.9618 | 0.9786 | 0.9302 | 0.9934 | 0.9108 | 0.9726 |
| 0.0013 | 4580.0 | 27480 | 0.0588 | 0.9424 | 0.9621 | 0.9788 | 0.9306 | 0.9937 | 0.9118 | 0.9729 |
| 0.0012 | 4583.33 | 27500 | 0.0591 | 0.9423 | 0.9620 | 0.9788 | 0.9303 | 0.9938 | 0.9118 | 0.9729 |
| 0.0209 | 4586.67 | 27520 | 0.0603 | 0.9423 | 0.9618 | 0.9788 | 0.9296 | 0.9939 | 0.9116 | 0.9729 |
| 0.0014 | 4590.0 | 27540 | 0.0575 | 0.9423 | 0.9625 | 0.9788 | 0.9319 | 0.9932 | 0.9117 | 0.9728 |
| 0.0263 | 4593.33 | 27560 | 0.0638 | 0.9415 | 0.9617 | 0.9785 | 0.9300 | 0.9934 | 0.9105 | 0.9725 |
| 0.1112 | 4596.67 | 27580 | 0.0639 | 0.9417 | 0.9611 | 0.9786 | 0.9280 | 0.9942 | 0.9107 | 0.9726 |
| 0.0107 | 4600.0 | 27600 | 0.0604 | 0.9420 | 0.9619 | 0.9787 | 0.9303 | 0.9936 | 0.9112 | 0.9727 |
| 0.0209 | 4603.33 | 27620 | 0.0593 | 0.9423 | 0.9620 | 0.9788 | 0.9304 | 0.9937 | 0.9117 | 0.9729 |
| 0.1168 | 4606.67 | 27640 | 0.0612 | 0.9423 | 0.9613 | 0.9788 | 0.9283 | 0.9944 | 0.9116 | 0.9729 |
| 0.0013 | 4610.0 | 27660 | 0.0597 | 0.9424 | 0.9621 | 0.9788 | 0.9306 | 0.9937 | 0.9118 | 0.9729 |
| 0.026 | 4613.33 | 27680 | 0.0610 | 0.9419 | 0.9618 | 0.9787 | 0.9301 | 0.9936 | 0.9111 | 0.9727 |
| 0.0013 | 4616.67 | 27700 | 0.0622 | 0.9415 | 0.9618 | 0.9785 | 0.9303 | 0.9933 | 0.9105 | 0.9725 |
| 0.0097 | 4620.0 | 27720 | 0.0650 | 0.9415 | 0.9614 | 0.9785 | 0.9291 | 0.9937 | 0.9105 | 0.9725 |
| 0.0263 | 4623.33 | 27740 | 0.0626 | 0.9418 | 0.9611 | 0.9787 | 0.9280 | 0.9943 | 0.9110 | 0.9727 |
| 0.0012 | 4626.67 | 27760 | 0.0623 | 0.9419 | 0.9616 | 0.9787 | 0.9294 | 0.9938 | 0.9111 | 0.9727 |
| 0.0013 | 4630.0 | 27780 | 0.0616 | 0.9426 | 0.9625 | 0.9789 | 0.9315 | 0.9935 | 0.9122 | 0.9730 |
| 0.0099 | 4633.33 | 27800 | 0.0560 | 0.9428 | 0.9628 | 0.9790 | 0.9322 | 0.9933 | 0.9125 | 0.9731 |
| 0.0066 | 4636.67 | 27820 | 0.0571 | 0.9423 | 0.9626 | 0.9788 | 0.9320 | 0.9932 | 0.9118 | 0.9729 |
| 0.0068 | 4640.0 | 27840 | 0.0609 | 0.9418 | 0.9617 | 0.9786 | 0.9296 | 0.9937 | 0.9110 | 0.9727 |
| 0.0226 | 4643.33 | 27860 | 0.0606 | 0.9419 | 0.9617 | 0.9787 | 0.9297 | 0.9937 | 0.9111 | 0.9727 |
| 0.0066 | 4646.67 | 27880 | 0.0561 | 0.9427 | 0.9634 | 0.9789 | 0.9341 | 0.9927 | 0.9124 | 0.9730 |
| 0.0068 | 4650.0 | 27900 | 0.0571 | 0.9425 | 0.9620 | 0.9789 | 0.9302 | 0.9939 | 0.9120 | 0.9730 |
| 0.0097 | 4653.33 | 27920 | 0.0582 | 0.9424 | 0.9626 | 0.9788 | 0.9319 | 0.9933 | 0.9119 | 0.9729 |
| 0.0012 | 4656.67 | 27940 | 0.0635 | 0.9415 | 0.9613 | 0.9785 | 0.9289 | 0.9938 | 0.9105 | 0.9725 |
| 0.026 | 4660.0 | 27960 | 0.0626 | 0.9416 | 0.9615 | 0.9785 | 0.9293 | 0.9937 | 0.9106 | 0.9726 |
| 0.0013 | 4663.33 | 27980 | 0.0590 | 0.9421 | 0.9622 | 0.9787 | 0.9311 | 0.9934 | 0.9114 | 0.9728 |
| 0.0261 | 4666.67 | 28000 | 0.0646 | 0.9414 | 0.9612 | 0.9785 | 0.9286 | 0.9938 | 0.9103 | 0.9725 |
| 0.1108 | 4670.0 | 28020 | 0.0592 | 0.9420 | 0.9619 | 0.9787 | 0.9302 | 0.9936 | 0.9113 | 0.9728 |
| 0.0068 | 4673.33 | 28040 | 0.0640 | 0.9415 | 0.9613 | 0.9785 | 0.9288 | 0.9938 | 0.9105 | 0.9726 |
| 0.0013 | 4676.67 | 28060 | 0.0655 | 0.9417 | 0.9611 | 0.9786 | 0.9279 | 0.9942 | 0.9108 | 0.9727 |
| 0.0209 | 4680.0 | 28080 | 0.0614 | 0.9418 | 0.9618 | 0.9786 | 0.9299 | 0.9936 | 0.9109 | 0.9726 |
| 0.0068 | 4683.33 | 28100 | 0.0648 | 0.9416 | 0.9611 | 0.9786 | 0.9280 | 0.9941 | 0.9106 | 0.9726 |
| 0.0209 | 4686.67 | 28120 | 0.0625 | 0.9413 | 0.9619 | 0.9784 | 0.9308 | 0.9930 | 0.9102 | 0.9724 |
| 0.1165 | 4690.0 | 28140 | 0.0589 | 0.9422 | 0.9622 | 0.9788 | 0.9310 | 0.9934 | 0.9115 | 0.9728 |
| 0.0066 | 4693.33 | 28160 | 0.0644 | 0.9414 | 0.9614 | 0.9785 | 0.9290 | 0.9937 | 0.9103 | 0.9725 |
| 0.0215 | 4696.67 | 28180 | 0.0674 | 0.9418 | 0.9609 | 0.9787 | 0.9273 | 0.9945 | 0.9108 | 0.9727 |
| 0.026 | 4700.0 | 28200 | 0.0649 | 0.9415 | 0.9613 | 0.9785 | 0.9289 | 0.9938 | 0.9105 | 0.9725 |
| 0.0066 | 4703.33 | 28220 | 0.0568 | 0.9425 | 0.9632 | 0.9789 | 0.9337 | 0.9927 | 0.9121 | 0.9729 |
| 0.1108 | 4706.67 | 28240 | 0.0541 | 0.9434 | 0.9638 | 0.9792 | 0.9347 | 0.9929 | 0.9135 | 0.9733 |
| 0.0066 | 4710.0 | 28260 | 0.0589 | 0.9423 | 0.9621 | 0.9788 | 0.9307 | 0.9936 | 0.9117 | 0.9729 |
| 0.026 | 4713.33 | 28280 | 0.0646 | 0.9416 | 0.9608 | 0.9786 | 0.9271 | 0.9944 | 0.9106 | 0.9726 |
| 0.026 | 4716.67 | 28300 | 0.0579 | 0.9424 | 0.9622 | 0.9788 | 0.9308 | 0.9936 | 0.9118 | 0.9729 |
| 0.0097 | 4720.0 | 28320 | 0.0595 | 0.9418 | 0.9617 | 0.9786 | 0.9297 | 0.9937 | 0.9110 | 0.9727 |
| 0.009 | 4723.33 | 28340 | 0.0621 | 0.9418 | 0.9615 | 0.9786 | 0.9291 | 0.9939 | 0.9110 | 0.9727 |
| 0.1109 | 4726.67 | 28360 | 0.0637 | 0.9417 | 0.9613 | 0.9786 | 0.9286 | 0.9940 | 0.9107 | 0.9726 |
| 0.0209 | 4730.0 | 28380 | 0.0637 | 0.9415 | 0.9616 | 0.9785 | 0.9296 | 0.9935 | 0.9105 | 0.9725 |
| 0.0015 | 4733.33 | 28400 | 0.0518 | 0.9446 | 0.9648 | 0.9796 | 0.9367 | 0.9928 | 0.9153 | 0.9738 |
| 0.0209 | 4736.67 | 28420 | 0.0524 | 0.9436 | 0.9636 | 0.9793 | 0.9340 | 0.9932 | 0.9138 | 0.9735 |
| 0.0209 | 4740.0 | 28440 | 0.0585 | 0.9424 | 0.9622 | 0.9789 | 0.9307 | 0.9937 | 0.9119 | 0.9729 |
| 0.0013 | 4743.33 | 28460 | 0.0625 | 0.9416 | 0.9615 | 0.9786 | 0.9294 | 0.9937 | 0.9107 | 0.9726 |
| 0.0281 | 4746.67 | 28480 | 0.0661 | 0.9419 | 0.9608 | 0.9787 | 0.9271 | 0.9945 | 0.9110 | 0.9728 |
| 0.0209 | 4750.0 | 28500 | 0.0626 | 0.9415 | 0.9616 | 0.9785 | 0.9296 | 0.9936 | 0.9106 | 0.9725 |
| 0.026 | 4753.33 | 28520 | 0.0635 | 0.9418 | 0.9617 | 0.9786 | 0.9298 | 0.9937 | 0.9110 | 0.9727 |
| 0.1108 | 4756.67 | 28540 | 0.0586 | 0.9423 | 0.9624 | 0.9788 | 0.9314 | 0.9934 | 0.9117 | 0.9729 |
| 0.0209 | 4760.0 | 28560 | 0.0655 | 0.9414 | 0.9612 | 0.9785 | 0.9285 | 0.9938 | 0.9103 | 0.9725 |
| 0.0264 | 4763.33 | 28580 | 0.0691 | 0.9413 | 0.9607 | 0.9785 | 0.9272 | 0.9943 | 0.9102 | 0.9725 |
| 0.026 | 4766.67 | 28600 | 0.0591 | 0.9422 | 0.9620 | 0.9788 | 0.9303 | 0.9937 | 0.9116 | 0.9729 |
| 0.0209 | 4770.0 | 28620 | 0.0629 | 0.9417 | 0.9614 | 0.9786 | 0.9288 | 0.9939 | 0.9108 | 0.9726 |
| 0.0013 | 4773.33 | 28640 | 0.0531 | 0.9441 | 0.9641 | 0.9795 | 0.9352 | 0.9931 | 0.9146 | 0.9737 |
| 0.01 | 4776.67 | 28660 | 0.0598 | 0.9422 | 0.9615 | 0.9788 | 0.9289 | 0.9941 | 0.9115 | 0.9729 |
| 0.026 | 4780.0 | 28680 | 0.0632 | 0.9417 | 0.9615 | 0.9786 | 0.9293 | 0.9937 | 0.9108 | 0.9726 |
| 0.0066 | 4783.33 | 28700 | 0.0654 | 0.9418 | 0.9611 | 0.9786 | 0.9279 | 0.9942 | 0.9108 | 0.9727 |
| 0.0012 | 4786.67 | 28720 | 0.0581 | 0.9424 | 0.9624 | 0.9788 | 0.9315 | 0.9934 | 0.9119 | 0.9729 |
| 0.0012 | 4790.0 | 28740 | 0.0560 | 0.9429 | 0.9629 | 0.9790 | 0.9324 | 0.9934 | 0.9127 | 0.9732 |
| 0.0262 | 4793.33 | 28760 | 0.0562 | 0.9433 | 0.9628 | 0.9792 | 0.9318 | 0.9937 | 0.9132 | 0.9733 |
| 0.0209 | 4796.67 | 28780 | 0.0567 | 0.9424 | 0.9627 | 0.9788 | 0.9322 | 0.9932 | 0.9120 | 0.9729 |
| 0.026 | 4800.0 | 28800 | 0.0615 | 0.9419 | 0.9616 | 0.9787 | 0.9292 | 0.9939 | 0.9111 | 0.9727 |
| 0.0066 | 4803.33 | 28820 | 0.0647 | 0.9418 | 0.9611 | 0.9787 | 0.9279 | 0.9942 | 0.9109 | 0.9727 |
| 0.0012 | 4806.67 | 28840 | 0.0525 | 0.9441 | 0.9641 | 0.9795 | 0.9352 | 0.9931 | 0.9145 | 0.9737 |
| 0.1107 | 4810.0 | 28860 | 0.0605 | 0.9420 | 0.9617 | 0.9787 | 0.9295 | 0.9938 | 0.9112 | 0.9728 |
| 0.0209 | 4813.33 | 28880 | 0.0584 | 0.9422 | 0.9621 | 0.9788 | 0.9307 | 0.9936 | 0.9116 | 0.9728 |
| 0.0209 | 4816.67 | 28900 | 0.0638 | 0.9415 | 0.9615 | 0.9785 | 0.9294 | 0.9936 | 0.9105 | 0.9725 |
| 0.026 | 4820.0 | 28920 | 0.0613 | 0.9418 | 0.9616 | 0.9786 | 0.9293 | 0.9938 | 0.9110 | 0.9727 |
| 0.0209 | 4823.33 | 28940 | 0.0643 | 0.9420 | 0.9617 | 0.9787 | 0.9295 | 0.9939 | 0.9113 | 0.9728 |
| 0.0068 | 4826.67 | 28960 | 0.0536 | 0.9437 | 0.9637 | 0.9793 | 0.9342 | 0.9932 | 0.9140 | 0.9735 |
| 0.0096 | 4830.0 | 28980 | 0.0577 | 0.9427 | 0.9623 | 0.9790 | 0.9308 | 0.9938 | 0.9123 | 0.9731 |
| 0.026 | 4833.33 | 29000 | 0.0634 | 0.9417 | 0.9615 | 0.9786 | 0.9292 | 0.9938 | 0.9108 | 0.9726 |
| 0.0012 | 4836.67 | 29020 | 0.0638 | 0.9413 | 0.9615 | 0.9785 | 0.9295 | 0.9935 | 0.9103 | 0.9724 |
| 0.021 | 4840.0 | 29040 | 0.0627 | 0.9416 | 0.9613 | 0.9786 | 0.9288 | 0.9939 | 0.9107 | 0.9726 |
| 0.0012 | 4843.33 | 29060 | 0.0597 | 0.9419 | 0.9623 | 0.9786 | 0.9315 | 0.9932 | 0.9112 | 0.9727 |
| 0.026 | 4846.67 | 29080 | 0.0720 | 0.9414 | 0.9606 | 0.9785 | 0.9269 | 0.9944 | 0.9102 | 0.9725 |
| 0.0263 | 4850.0 | 29100 | 0.0606 | 0.9419 | 0.9616 | 0.9787 | 0.9295 | 0.9938 | 0.9111 | 0.9727 |
| 0.0067 | 4853.33 | 29120 | 0.0626 | 0.9416 | 0.9615 | 0.9786 | 0.9294 | 0.9937 | 0.9107 | 0.9726 |
| 0.027 | 4856.67 | 29140 | 0.0680 | 0.9414 | 0.9612 | 0.9785 | 0.9285 | 0.9938 | 0.9103 | 0.9725 |
| 0.1107 | 4860.0 | 29160 | 0.0586 | 0.9427 | 0.9627 | 0.9789 | 0.9321 | 0.9933 | 0.9123 | 0.9730 |
| 0.0012 | 4863.33 | 29180 | 0.0530 | 0.9442 | 0.9643 | 0.9795 | 0.9357 | 0.9929 | 0.9147 | 0.9737 |
| 0.026 | 4866.67 | 29200 | 0.0618 | 0.9421 | 0.9615 | 0.9787 | 0.9289 | 0.9941 | 0.9113 | 0.9728 |
| 0.0013 | 4870.0 | 29220 | 0.0627 | 0.9416 | 0.9618 | 0.9786 | 0.9301 | 0.9934 | 0.9107 | 0.9726 |
| 0.011 | 4873.33 | 29240 | 0.0567 | 0.9427 | 0.9624 | 0.9790 | 0.9313 | 0.9936 | 0.9123 | 0.9731 |
| 0.0096 | 4876.67 | 29260 | 0.0596 | 0.9422 | 0.9624 | 0.9788 | 0.9314 | 0.9933 | 0.9116 | 0.9728 |
| 0.0209 | 4880.0 | 29280 | 0.0610 | 0.9422 | 0.9618 | 0.9788 | 0.9299 | 0.9938 | 0.9115 | 0.9728 |
| 0.0096 | 4883.33 | 29300 | 0.0660 | 0.9417 | 0.9612 | 0.9786 | 0.9282 | 0.9941 | 0.9108 | 0.9727 |
| 0.0096 | 4886.67 | 29320 | 0.0646 | 0.9413 | 0.9618 | 0.9784 | 0.9303 | 0.9932 | 0.9102 | 0.9724 |
| 0.0099 | 4890.0 | 29340 | 0.0634 | 0.9417 | 0.9616 | 0.9786 | 0.9295 | 0.9937 | 0.9108 | 0.9726 |
| 0.1107 | 4893.33 | 29360 | 0.0541 | 0.9437 | 0.9639 | 0.9793 | 0.9349 | 0.9929 | 0.9139 | 0.9735 |
| 0.026 | 4896.67 | 29380 | 0.0604 | 0.9423 | 0.9620 | 0.9788 | 0.9301 | 0.9938 | 0.9117 | 0.9729 |
| 0.0263 | 4900.0 | 29400 | 0.0619 | 0.9420 | 0.9615 | 0.9787 | 0.9290 | 0.9940 | 0.9113 | 0.9728 |
| 0.026 | 4903.33 | 29420 | 0.0655 | 0.9418 | 0.9610 | 0.9787 | 0.9275 | 0.9944 | 0.9109 | 0.9727 |
| 0.0208 | 4906.67 | 29440 | 0.0642 | 0.9418 | 0.9613 | 0.9787 | 0.9285 | 0.9941 | 0.9109 | 0.9727 |
| 0.0012 | 4910.0 | 29460 | 0.0594 | 0.9423 | 0.9622 | 0.9788 | 0.9310 | 0.9935 | 0.9117 | 0.9729 |
| 0.0096 | 4913.33 | 29480 | 0.0658 | 0.9416 | 0.9612 | 0.9786 | 0.9285 | 0.9939 | 0.9106 | 0.9726 |
| 0.0066 | 4916.67 | 29500 | 0.0497 | 0.9453 | 0.9649 | 0.9799 | 0.9365 | 0.9933 | 0.9164 | 0.9742 |
| 0.0215 | 4920.0 | 29520 | 0.0600 | 0.9422 | 0.9618 | 0.9788 | 0.9298 | 0.9939 | 0.9115 | 0.9729 |
| 0.0066 | 4923.33 | 29540 | 0.0643 | 0.9415 | 0.9616 | 0.9785 | 0.9296 | 0.9936 | 0.9105 | 0.9725 |
| 0.0012 | 4926.67 | 29560 | 0.0626 | 0.9417 | 0.9617 | 0.9786 | 0.9297 | 0.9936 | 0.9108 | 0.9726 |
| 0.0097 | 4930.0 | 29580 | 0.0611 | 0.9422 | 0.9620 | 0.9788 | 0.9304 | 0.9936 | 0.9115 | 0.9728 |
| 0.0084 | 4933.33 | 29600 | 0.0535 | 0.9438 | 0.9635 | 0.9794 | 0.9335 | 0.9935 | 0.9141 | 0.9736 |
| 0.0066 | 4936.67 | 29620 | 0.0548 | 0.9433 | 0.9633 | 0.9792 | 0.9334 | 0.9932 | 0.9133 | 0.9733 |
| 0.0107 | 4940.0 | 29640 | 0.0583 | 0.9422 | 0.9622 | 0.9788 | 0.9308 | 0.9935 | 0.9116 | 0.9728 |
| 0.0013 | 4943.33 | 29660 | 0.0604 | 0.9417 | 0.9620 | 0.9786 | 0.9307 | 0.9933 | 0.9109 | 0.9726 |
| 0.0014 | 4946.67 | 29680 | 0.0596 | 0.9421 | 0.9618 | 0.9787 | 0.9299 | 0.9937 | 0.9114 | 0.9728 |
| 0.0013 | 4950.0 | 29700 | 0.0596 | 0.9421 | 0.9620 | 0.9787 | 0.9305 | 0.9936 | 0.9114 | 0.9728 |
| 0.026 | 4953.33 | 29720 | 0.0625 | 0.9418 | 0.9613 | 0.9786 | 0.9286 | 0.9940 | 0.9109 | 0.9727 |
| 0.0013 | 4956.67 | 29740 | 0.0581 | 0.9426 | 0.9628 | 0.9789 | 0.9325 | 0.9932 | 0.9123 | 0.9730 |
| 0.0087 | 4960.0 | 29760 | 0.0549 | 0.9437 | 0.9629 | 0.9794 | 0.9320 | 0.9939 | 0.9139 | 0.9736 |
| 0.0013 | 4963.33 | 29780 | 0.0566 | 0.9428 | 0.9629 | 0.9790 | 0.9326 | 0.9932 | 0.9125 | 0.9731 |
| 0.1117 | 4966.67 | 29800 | 0.0568 | 0.9432 | 0.9627 | 0.9791 | 0.9317 | 0.9937 | 0.9131 | 0.9733 |
| 0.011 | 4970.0 | 29820 | 0.0564 | 0.9424 | 0.9623 | 0.9789 | 0.9310 | 0.9936 | 0.9119 | 0.9729 |
| 0.1108 | 4973.33 | 29840 | 0.0556 | 0.9430 | 0.9628 | 0.9791 | 0.9322 | 0.9935 | 0.9129 | 0.9732 |
| 0.1107 | 4976.67 | 29860 | 0.0558 | 0.9431 | 0.9630 | 0.9791 | 0.9327 | 0.9934 | 0.9130 | 0.9732 |
| 0.0208 | 4980.0 | 29880 | 0.0601 | 0.9421 | 0.9622 | 0.9787 | 0.9309 | 0.9934 | 0.9115 | 0.9728 |
| 0.0066 | 4983.33 | 29900 | 0.0554 | 0.9431 | 0.9632 | 0.9791 | 0.9331 | 0.9932 | 0.9130 | 0.9732 |
| 0.0209 | 4986.67 | 29920 | 0.0627 | 0.9420 | 0.9613 | 0.9787 | 0.9286 | 0.9941 | 0.9112 | 0.9728 |
| 0.01 | 4990.0 | 29940 | 0.0583 | 0.9422 | 0.9622 | 0.9788 | 0.9308 | 0.9935 | 0.9115 | 0.9728 |
| 0.0096 | 4993.33 | 29960 | 0.0605 | 0.9418 | 0.9623 | 0.9786 | 0.9314 | 0.9931 | 0.9110 | 0.9726 |
| 0.0013 | 4996.67 | 29980 | 0.0635 | 0.9414 | 0.9616 | 0.9785 | 0.9299 | 0.9934 | 0.9104 | 0.9725 |
| 0.0096 | 5000.0 | 30000 | 0.0652 | 0.9415 | 0.9614 | 0.9785 | 0.9290 | 0.9937 | 0.9104 | 0.9725 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cpu
- Datasets 2.15.0
- Tokenizers 0.14.1
|
wonjeongho/t5-wmt16-ro-en | wonjeongho | 2024-02-15T07:26:54Z | 34 | 0 | transformers | [
"transformers",
"pytorch",
"elastic_t5",
"text2text-generation",
"generated_from_trainer",
"en",
"ro",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-02-15T07:21:39Z | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 27.1318
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3574
- Bleu: 27.1318
- Gen Len: 42.5798
- Loss Smallest Subnet: 1.3574
- Bleu Smallest Subnet: 27.1318
- Gen Len Smallest Subnet: 42.5798
- Loss Random Subnet: 1.3574
- Loss Sum: 4.0723
- Bleu Random Subnet: 27.1318
- Bleu Sum: 81.3954
- Gen Len Random Subnet: 42.5798
- Gen Len Sum: 127.7394
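The `* Sum` metrics above are consistent with a sum over the three evaluated configurations of this elastic T5 model (full model, smallest subnet, random subnet) — an interpretation inferred from the numbers themselves, not stated in the card:

```python
# Sanity check (not from the original card): the "Sum" metrics match a sum
# over the three evaluated configurations (full, smallest subnet, random subnet).
bleu = 27.1318      # identical for all three configurations here
gen_len = 42.5798
loss = 1.3574

print(round(3 * bleu, 4))     # 81.3954  == Bleu Sum
print(round(3 * gen_len, 4))  # 127.7394 == Gen Len Sum
print(round(3 * loss, 4))     # 4.0722   ~= Loss Sum (4.0723; rounding aside)
```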
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 24
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 48
- total_eval_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
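The reported totals follow from the per-device batch sizes multiplied by the device count in the multi-GPU setup; a quick sanity check (variable names below are illustrative):

```python
# Sanity check (not from the original card): total batch sizes are the
# per-device sizes times the number of devices.
per_device_train, per_device_eval, num_devices = 12, 24, 4

total_train = per_device_train * num_devices
total_eval = per_device_eval * num_devices

print(total_train, total_eval)  # 48 96 -- matches the totals listed above
```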
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | Loss Smallest Subnet | Bleu Smallest Subnet | Gen Len Smallest Subnet | Loss Random Subnet | Loss Sum | Bleu Random Subnet | Bleu Sum | Gen Len Random Subnet | Gen Len Sum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:--------------------:|:--------------------:|:-----------------------:|:------------------:|:--------:|:------------------:|:--------:|:---------------------:|:-----------:|
| 0.5967 | 1.0 | 12715 | 1.3820 | 26.593 | 42.4422 | 1.3820 | 26.593 | 42.4422 | 1.3820 | 4.1461 | 26.593 | 79.779 | 42.4422 | 127.3266 |
| 0.5768 | 2.0 | 25430 | 1.3728 | 26.6191 | 42.6738 | 1.3728 | 26.6191 | 42.6738 | 1.3728 | 4.1184 | 26.6191 | 79.8573 | 42.6738 | 128.0214 |
| 0.5663 | 3.0 | 38145 | 1.3616 | 26.9203 | 42.5298 | 1.3616 | 26.9203 | 42.5298 | 1.3616 | 4.0849 | 26.9203 | 80.7609 | 42.5298 | 127.5894 |
| 0.5523 | 4.0 | 50860 | 1.3570 | 27.0195 | 42.5203 | 1.3570 | 27.0195 | 42.5203 | 1.3570 | 4.0709 | 27.0195 | 81.0585 | 42.5203 | 127.5609 |
| 0.5436 | 5.0 | 63575 | 1.3574 | 27.1318 | 42.5798 | 1.3574 | 27.1318 | 42.5798 | 1.3574 | 4.0723 | 27.1318 | 81.3954 | 42.5798 | 127.7394 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.8.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
nold/openbuddy-mixtral-7bx8-v18.1-32k-GGUF | nold | 2024-02-15T07:23:51Z | 11 | 2 | transformers | [
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-02-14T20:10:36Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
license: apache-2.0
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/mistralai/Mixtral-8x7B-v0.1
License: Apache 2.0
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
***
Quantization of Model [OpenBuddy/openbuddy-mixtral-7bx8-v18.1-32k](https://huggingface.co/OpenBuddy/openbuddy-mixtral-7bx8-v18.1-32k). Created using [llm-quantizer](https://github.com/Nold360/llm-quantizer) Pipeline [8668cbd2081063e33a128251312e6de9744d0a64]
|
Skier8402/distilbert-base-uncased-finetuned-imdb | Skier8402 | 2024-02-15T07:16:34Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"huggingface_course",
"movies",
"en",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2024-02-15T06:48:14Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
- huggingface_course
- movies
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
datasets:
- imdb
language:
- en
metrics:
- perplexity
library_name: transformers
pipeline_tag: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4253
- Perplexity: 11.20
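Perplexity for language models is conventionally reported as the exponential of the cross-entropy loss; a quick check against the numbers above (not part of the original card):

```python
import math

# Perplexity = exp(cross-entropy loss).
eval_loss = 2.4253
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # 11.31 -- close to the reported 11.20; the small
                             # gap suggests a slightly different loss was used.
```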
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6962 | 1.0 | 157 | 2.5423 |
| 2.5701 | 2.0 | 314 | 2.4638 |
| 2.5417 | 3.0 | 471 | 2.4253 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
changok/phi-2-ko-v0.1-gguf | changok | 2024-02-15T07:10:42Z | 2 | 1 | null | [
"gguf",
"license:cc-by-sa-3.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-15T07:01:27Z | ---
license: cc-by-sa-3.0
---
This model was converted to GGUF format from [daekeun-ml/phi-2-ko-v0.1](https://huggingface.co/daekeun-ml/phi-2-ko-v0.1). |
Akimitsujiro/FurSho | Akimitsujiro | 2024-02-15T07:01:09Z | 11 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:adapter:cagliostrolab/animagine-xl-3.0",
"region:us"
]
| text-to-image | 2024-02-15T07:00:54Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/1000079717.webp
base_model: cagliostrolab/animagine-xl-3.0
instance_prompt: null
---
# FurSho
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Akimitsujiro/FurSho/tree/main) them in the Files & versions tab.
|
oraul/table_transformer_TSR_v1 | oraul | 2024-02-15T06:58:59Z | 174 | 0 | transformers | [
"transformers",
"safetensors",
"table-transformer",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| object-detection | 2024-02-15T06:58:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/Kyllene-34B-v1.1-2.7bpw-h6-exl2 | LoneStriker | 2024-02-15T06:56:11Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T06:50:41Z | ---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
tags:
- merge
---
# Kyllene 34B v1.1

## Model Details
- A result of a new merge method provided by the [MergeMonster](https://github.com/Gryphe/MergeMonster/) tool with an extended RPG preset.
- Models used for the merge:
[jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2)
[NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B)
[NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)
[SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B)
- The method aims to maximize the probability of certain phrases and minimize the probability of other phrases.
- The RPG preset was extended with examples of the typical, nonsensical output of most models, like 'unbreakable bond', 'send shivers down her spine', etc.
- The resulting model has approximately 34 billion parameters.
- See [mergekit-config.yml](https://huggingface.co/TeeZee/Kyllene-34B-v1.1/resolve/main/merge-config.yml) for details on the merge method used and RPG presets.
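The phrase-probability objective described above can be sketched as a toy scorer. Everything here — the phrase lists, token probabilities, and scoring rule — is illustrative only, not MergeMonster's actual code:

```python
# Toy illustration: prefer merge candidates that raise the probability of
# desired phrases and lower the probability of cliché phrases.
# All names and numbers below are made up for illustration.

GOOD_PHRASES = ["she considered her options"]
BAD_PHRASES = ["unbreakable bond", "shivers down her spine"]

def phrase_probability(token_probs, phrase):
    """Stand-in for model inference: product of per-token probabilities."""
    p = 1.0
    for token in phrase.split():
        p *= token_probs.get(token, 0.01)
    return p

def merge_score(token_probs):
    good = sum(phrase_probability(token_probs, p) for p in GOOD_PHRASES)
    bad = sum(phrase_probability(token_probs, p) for p in BAD_PHRASES)
    return good - bad  # higher is better: keep a merge step only if this improves

# Two merge candidates, represented as toy token-probability tables.
candidate_a = {"unbreakable": 0.5, "bond": 0.5, "shivers": 0.4, "down": 0.4,
               "her": 0.3, "spine": 0.4, "she": 0.2, "considered": 0.1, "options": 0.1}
candidate_b = {"unbreakable": 0.05, "bond": 0.05, "shivers": 0.05, "down": 0.1,
               "her": 0.4, "spine": 0.05, "she": 0.3, "considered": 0.2, "options": 0.2}

best = max((candidate_a, candidate_b), key=merge_score)
print(best is candidate_b)  # True: candidate_b suppresses the clichés
```

MergeMonster's real pipeline scores phrases with the candidate model itself, over many more phrases; see the linked repository for the actual algorithm.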
**Warning: This model can produce NSFW content!**
## Results
- produces SFW and NSFW content without issues, switches context seamlessly.
- 200K context length
- good at following instructions
- different from [TeeZee/Kyllene-57B-v1.0](https://huggingface.co/TeeZee/Kyllene-57B-v1.0), but also surprisingly entertaining (more tests are needed)
## Side notes
- The [MergeMonster](https://github.com/Gryphe/MergeMonster/) method works; however, the project would benefit greatly from some more love from developers.
- In its current state, MergeMonster consumes insane amounts of RAM (256GB+) or VRAM and takes a really long time to process model data; this merge took 24 hours on 1xADA6000.
- MergeMonster is not a silver bullet; other experiments have shown that it can also produce incredibly stupid models.
All comments are greatly appreciated. Download, test, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> |
ribhu/mistral-7b-test-finetune | ribhu | 2024-02-15T06:55:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T06:47:44Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** ribhu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
norman-codes/transfer-learning-attempt1 | norman-codes | 2024-02-15T06:42:13Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"transfer_learning",
"en",
"dataset:izumi-lab/open-text-books",
"dataset:AlekseyKorshuk/fiction-books",
"dataset:vishnupriyavr/wiki-movie-plots-with-summaries",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T06:34:03Z | ---
datasets:
- izumi-lab/open-text-books
- AlekseyKorshuk/fiction-books
- vishnupriyavr/wiki-movie-plots-with-summaries
language:
- en
tags:
- transfer_learning
--- |
theidoldaily/kotori-minami | theidoldaily | 2024-02-15T06:41:37Z | 4 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:adapter:cagliostrolab/animagine-xl-3.0",
"license:mit",
"region:us"
]
| text-to-image | 2024-02-15T06:36:24Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
masterpiece, high quality, defined pupil, looking at viewer, rounded pupil,
defined iris, (soft iris:1.2),
parameters:
negative_prompt: >-
bad_anatomy, deformation, amputation, deformity, deformed_nipples,
duplicated_torso, deformed_torso, long_torso, large_torso,
unproportioned_torso, (deformed_pussy:1.2), (deformed_hands:1.2),
unproportioned_eyes, unproportioned_head, small_head, duplicated_nose,
big_nose, fusioned_clothes, fusioned_arms, undefined_limbs, divided_pussy,
red_pussy, duplicated_pussy, deformed_anus, deformed_pussy,
output:
url: images/00000-2136358392.png
base_model: cagliostrolab/animagine-xl-3.0
instance_prompt: id_kotori_minami
license: mit
---
# Kotori Minami
<Gallery />
## Model description
This model was trained to generate high quality images based on SIFAS cards.
To achieve better quality, use hako-mikan's regional prompter along with Latent Mode, which modifies the way Stable Diffusion isolates the LoRA, resulting in a significant improvement.
## Trigger words
You should use `id_kotori_minami` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/theidoldaily/kotori-minami/tree/main) them in the Files & versions tab.
|
hotdogs/openuka_v1_1_7B_GGUF | hotdogs | 2024-02-15T06:40:52Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"mixtral",
"en",
"th",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-14T06:58:03Z | ---
license: other
language:
- en
- th
--- |
h2m/Convex-Workshop-8x7B-Adapter | h2m | 2024-02-15T06:40:48Z | 1 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:other",
"region:us"
]
| null | 2024-02-15T06:37:01Z | ---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model-index:
- name: train_2024-02-15-06-06-50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_2024-02-15-06-06-50
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the 3_line dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
- mixed_precision_training: Native AMP
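The `total_train_batch_size: 4` above follows from the other hyperparameters: with gradient accumulation, the optimizer steps once every `gradient_accumulation_steps` micro-batches, so the effective batch size is the per-device batch size times the accumulation steps (sketch; single device assumed):

```python
def effective_batch_size(per_device_batch, grad_accum_steps, num_devices=1):
    """Effective (total) training batch size under gradient accumulation."""
    return per_device_batch * grad_accum_steps * num_devices

# Matches this run: train_batch_size=1, gradient_accumulation_steps=4.
total = effective_batch_size(per_device_batch=1, grad_accum_steps=4)
```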
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2 |
leoreigoto/Data2_V2_Blip2_Finetune_Caption | leoreigoto | 2024-02-15T06:14:53Z | 3 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:ybelkada/blip2-opt-2.7b-fp16-sharded",
"base_model:adapter:ybelkada/blip2-opt-2.7b-fp16-sharded",
"region:us"
]
| null | 2024-02-15T04:12:11Z | ---
library_name: peft
base_model: ybelkada/blip2-opt-2.7b-fp16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
jan-hq/stealth-finance-v1-e1 | jan-hq | 2024-02-15T06:10:03Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T06:06:48Z | ---
license: apache-2.0
language:
- en
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
  <img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
  <a href="https://jan.ai/">Jan</a>
  - <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Prompt template
ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
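A minimal sketch of assembling this ChatML template in code (the helper name is illustrative, not part of any Jan or Transformers API):

```python
def chatml_prompt(system_message, user_prompt):
    """Build a ChatML-formatted prompt string, leaving the assistant
    turn open for the model to complete."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt("You are a helpful finance assistant.", "What is EBITDA?")
```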
# Training detail
You can read [here](https://huggingface.co/jan-hq/stealth-finance-v1-adapter).
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open source, ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
- 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life. |
silk-road/Haruhi-dialogue-action-extract-7B | silk-road | 2024-02-15T06:07:55Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"qwen",
"text-generation",
"custom_code",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-02-15T05:23:07Z |
# Zero凉宫春日 (Zero Haruhi)

Warm-started from the Qwen-7B base model and trained at 2k sequence length on 150k high-quality NPC-extraction samples.

epoch=2, batch_size=64, lr=2e-5
|
Pplus/mistral-health-faq_log_50 | Pplus | 2024-02-15T05:52:45Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"my",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T05:13:39Z | ---
library_name: transformers
base_model: mistralai/Mistral-7B-v0.1
language:
- my
pipeline_tag: text-generation
tags:
- text-generation-inference
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
himanshue2e/whisper-small-dataset | himanshue2e | 2024-02-15T05:50:01Z | 60 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-02-14T12:10:02Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
base_model: openai/whisper-large-v3
model-index:
- name: whisper-small-dataset
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: None
args: 'config: hi, split: test'
metrics:
- type: wer
value: 48.5207100591716
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-dataset
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2599
- Wer: 48.5207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.6 | 10 | 0.3733 | 50.2959 |
| No log | 3.2 | 20 | 0.2663 | 52.0710 |
| 0.2997 | 4.8 | 30 | 0.2667 | 48.5207 |
| 0.2997 | 6.4 | 40 | 0.2599 | 48.5207 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
ONS-AI-RESEARCH/ONS-SOLAR-10.7B-AWQ | ONS-AI-RESEARCH | 2024-02-15T05:49:21Z | 60 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"SOLAR-10.7B",
"AWQ",
"conversational",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2024-02-14T07:58:34Z | ---
license: cc-by-nc-4.0
language:
- ko
tags:
- SOLAR-10.7B
- AWQ
---
# ONS-SOLAR-10.7B-AWQ
### Model Details
- Base Model: [ONS-AI-RESEARCH/ONS-SOLAR-10.7B](https://huggingface.co/ONS-AI-RESEARCH/ONS-SOLAR-10.7B)
- Quantization by [AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
raucha/peft-test | raucha | 2024-02-15T05:48:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-15T05:46:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|