| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated | huihui-ai | 2024-11-25T13:42:24Z | 155 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"abliterated",
"uncensored",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-01T18:08:35Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
tags:
- chat
- abliterated
- uncensored
---
# huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated
This is an uncensored version of [Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct), created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about it).
Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
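As a rough illustration of the underlying idea (a minimal sketch, not the code used to build this model), abliteration estimates a "refusal direction" in the model's hidden-state space from contrasting prompts and then projects that direction out of selected weight matrices. Assuming such a direction `refusal_dir` has already been computed:
```python
import torch

def ablate_direction(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Remove a refusal direction from a weight matrix's output space.

    weight: (d_out, d_in) projection matrix, e.g. an attention or MLP output projection.
    refusal_dir: vector of size d_out, assumed to be precomputed elsewhere.
    """
    r = refusal_dir / refusal_dir.norm()
    # W' = (I - r r^T) W, i.e. subtract the rank-1 component along the refusal direction.
    return weight - torch.outer(r, r @ weight)
```
Applying this to the relevant output projections in every layer keeps the model from writing along that direction, which is what suppresses refusals.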
The uncensored Qwen2.5-Coder series covers six mainstream model sizes:
[0.5](https://huggingface.co/huihui-ai/Qwen2.5-Coder-0.5B-Instruct-abliterated),
[1.5](https://huggingface.co/huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated),
[3](https://huggingface.co/huihui-ai/Qwen2.5-Coder-3B-Instruct-abliterated),
[7](https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated),
[14](https://huggingface.co/huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated),
[32](https://huggingface.co/huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated) billion parameters.
## ollama
You can use [huihui_ai/qwen2.5-coder-abliterate:1.5b](https://ollama.com/huihui_ai/qwen2.5-coder-abliterate:1.5b) directly:
```
ollama run huihui_ai/qwen2.5-coder-abliterate:1.5b
```
## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize conversation context
initial_messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy()  # Copy the initial conversation context

# Enter conversation loop
while True:
    # Get user input
    user_input = input("User: ").strip()  # Strip leading and trailing spaces

    # If the user types '/exit', end the conversation
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break

    # If the user types '/clean', reset the conversation context
    if user_input.lower() == "/clean":
        messages = initial_messages.copy()  # Reset conversation context
        print("Chat history cleared. Starting a new conversation.")
        continue

    # If input is empty, prompt the user and continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Build the chat template
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    # Tokenize input and prepare it for the model
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Generate a response from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=8192
    )

    # Extract model output, removing special tokens
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # Add the model's response to the conversation
    messages.append({"role": "assistant", "content": response})

    # Print the model's response
    print(f"Qwen: {response}")
```
## Evaluations
The following data has been re-evaluated and is reported as the average for each test.
| Benchmark | Qwen2.5-Coder-1.5B-Instruct | Qwen2.5-Coder-1.5B-Instruct-abliterated |
|-------------|-----------------------------|-----------------------------------------|
| IF_Eval | 43.43 | **45.41** |
| MMLU Pro | 21.5 | 20.57 |
| TruthfulQA | 46.07 | 41.9 |
| BBH | 36.67 | 36.09 |
| GPQA | 28.00 | 26.13 |
The script used for evaluation can be found in this repository at /eval.sh, or click [here](https://huggingface.co/huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated/blob/main/eval.sh).
|
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k3_task5_organization_fold0 | MayBashendy | 2024-11-25T13:41:04Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T13:39:07Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k3_task5_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k3_task5_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5808
- Qwk: 0.7588
- Mse: 0.5808
- Rmse: 0.7621
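Here Qwk is presumably the quadratic weighted Cohen's kappa; together with MSE/RMSE these metrics can be reproduced from model predictions with scikit-learn. A sketch with made-up ordinal labels (not the actual evaluation code):
```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Hypothetical gold labels and integer-rounded predictions on an ordinal scale.
y_true = np.array([3, 2, 4, 1, 3])
y_pred = np.array([3, 3, 4, 2, 2])

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
print(f"Qwk={qwk:.4f}  Mse={mse:.4f}  Rmse={rmse:.4f}")
```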
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
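These settings map directly onto Hugging Face `TrainingArguments`; a minimal sketch of an equivalent setup (the output directory is a placeholder, and the actual training script is not included in this card):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported hyperparameters.
args = TrainingArguments(
    output_dir="arabert-organization-fold0",  # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
# Trainer(model=model, args=args, train_dataset=..., eval_dataset=...).train()
```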
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0909 | 2 | 1.8444 | 0.1557 | 1.8444 | 1.3581 |
| No log | 0.1818 | 4 | 1.0410 | 0.0268 | 1.0410 | 1.0203 |
| No log | 0.2727 | 6 | 1.0216 | 0.2164 | 1.0216 | 1.0108 |
| No log | 0.3636 | 8 | 0.9207 | 0.2578 | 0.9207 | 0.9595 |
| No log | 0.4545 | 10 | 0.8681 | 0.2578 | 0.8681 | 0.9317 |
| No log | 0.5455 | 12 | 0.8933 | 0.3050 | 0.8933 | 0.9451 |
| No log | 0.6364 | 14 | 0.9212 | 0.4403 | 0.9212 | 0.9598 |
| No log | 0.7273 | 16 | 0.9096 | 0.4946 | 0.9096 | 0.9538 |
| No log | 0.8182 | 18 | 0.8065 | 0.5357 | 0.8065 | 0.8981 |
| No log | 0.9091 | 20 | 0.7704 | 0.5760 | 0.7704 | 0.8777 |
| No log | 1.0 | 22 | 0.6399 | 0.4403 | 0.6399 | 0.7999 |
| No log | 1.0909 | 24 | 0.7594 | 0.2578 | 0.7594 | 0.8715 |
| No log | 1.1818 | 26 | 0.8865 | 0.3407 | 0.8865 | 0.9415 |
| No log | 1.2727 | 28 | 0.8881 | 0.3407 | 0.8881 | 0.9424 |
| No log | 1.3636 | 30 | 0.8261 | 0.3448 | 0.8261 | 0.9089 |
| No log | 1.4545 | 32 | 0.6584 | 0.5195 | 0.6584 | 0.8114 |
| No log | 1.5455 | 34 | 0.5197 | 0.6375 | 0.5197 | 0.7209 |
| No log | 1.6364 | 36 | 0.8714 | 0.4379 | 0.8714 | 0.9335 |
| No log | 1.7273 | 38 | 1.1382 | 0.2857 | 1.1382 | 1.0669 |
| No log | 1.8182 | 40 | 0.9904 | 0.3314 | 0.9904 | 0.9952 |
| No log | 1.9091 | 42 | 0.6807 | 0.6960 | 0.6807 | 0.8250 |
| No log | 2.0 | 44 | 0.5777 | 0.6215 | 0.5777 | 0.7600 |
| No log | 2.0909 | 46 | 0.6061 | 0.5860 | 0.6061 | 0.7786 |
| No log | 2.1818 | 48 | 0.6522 | 0.4754 | 0.6522 | 0.8076 |
| No log | 2.2727 | 50 | 0.6795 | 0.4754 | 0.6795 | 0.8243 |
| No log | 2.3636 | 52 | 0.7260 | 0.5342 | 0.7260 | 0.8520 |
| No log | 2.4545 | 54 | 0.7495 | 0.5870 | 0.7495 | 0.8657 |
| No log | 2.5455 | 56 | 0.7779 | 0.6069 | 0.7779 | 0.8820 |
| No log | 2.6364 | 58 | 0.7597 | 0.6069 | 0.7597 | 0.8716 |
| No log | 2.7273 | 60 | 0.6645 | 0.6783 | 0.6645 | 0.8152 |
| No log | 2.8182 | 62 | 0.5246 | 0.6522 | 0.5246 | 0.7243 |
| No log | 2.9091 | 64 | 0.4944 | 0.5860 | 0.4944 | 0.7031 |
| No log | 3.0 | 66 | 0.5261 | 0.6562 | 0.5261 | 0.7253 |
| No log | 3.0909 | 68 | 0.5388 | 0.7239 | 0.5388 | 0.7341 |
| No log | 3.1818 | 70 | 0.5549 | 0.6960 | 0.5550 | 0.7449 |
| No log | 3.2727 | 72 | 0.5231 | 0.6875 | 0.5231 | 0.7232 |
| No log | 3.3636 | 74 | 0.4607 | 0.6215 | 0.4607 | 0.6788 |
| No log | 3.4545 | 76 | 0.4278 | 0.6375 | 0.4278 | 0.6540 |
| No log | 3.5455 | 78 | 0.4896 | 0.7995 | 0.4896 | 0.6997 |
| No log | 3.6364 | 80 | 0.6569 | 0.6622 | 0.6569 | 0.8105 |
| No log | 3.7273 | 82 | 0.7740 | 0.6725 | 0.7740 | 0.8798 |
| No log | 3.8182 | 84 | 0.8078 | 0.6211 | 0.8078 | 0.8988 |
| No log | 3.9091 | 86 | 0.8089 | 0.6211 | 0.8089 | 0.8994 |
| No log | 4.0 | 88 | 0.7338 | 0.6963 | 0.7338 | 0.8566 |
| No log | 4.0909 | 90 | 0.6483 | 0.7198 | 0.6483 | 0.8052 |
| No log | 4.1818 | 92 | 0.5264 | 0.7229 | 0.5264 | 0.7255 |
| No log | 4.2727 | 94 | 0.4406 | 0.6931 | 0.4406 | 0.6638 |
| No log | 4.3636 | 96 | 0.4364 | 0.6817 | 0.4364 | 0.6606 |
| No log | 4.4545 | 98 | 0.4477 | 0.6931 | 0.4477 | 0.6691 |
| No log | 4.5455 | 100 | 0.4714 | 0.6811 | 0.4714 | 0.6866 |
| No log | 4.6364 | 102 | 0.4654 | 0.6811 | 0.4654 | 0.6822 |
| No log | 4.7273 | 104 | 0.4511 | 0.6854 | 0.4511 | 0.6716 |
| No log | 4.8182 | 106 | 0.4302 | 0.6977 | 0.4302 | 0.6559 |
| No log | 4.9091 | 108 | 0.4294 | 0.6995 | 0.4294 | 0.6553 |
| No log | 5.0 | 110 | 0.4243 | 0.7290 | 0.4243 | 0.6514 |
| No log | 5.0909 | 112 | 0.3967 | 0.6694 | 0.3967 | 0.6298 |
| No log | 5.1818 | 114 | 0.4205 | 0.7267 | 0.4205 | 0.6485 |
| No log | 5.2727 | 116 | 0.4734 | 0.7018 | 0.4734 | 0.6880 |
| No log | 5.3636 | 118 | 0.5302 | 0.7556 | 0.5302 | 0.7281 |
| No log | 5.4545 | 120 | 0.5618 | 0.7351 | 0.5618 | 0.7496 |
| No log | 5.5455 | 122 | 0.5022 | 0.7588 | 0.5022 | 0.7087 |
| No log | 5.6364 | 124 | 0.4897 | 0.7588 | 0.4897 | 0.6998 |
| No log | 5.7273 | 126 | 0.5134 | 0.7588 | 0.5134 | 0.7165 |
| No log | 5.8182 | 128 | 0.5473 | 0.7588 | 0.5473 | 0.7398 |
| No log | 5.9091 | 130 | 0.6355 | 0.7351 | 0.6355 | 0.7972 |
| No log | 6.0 | 132 | 0.7419 | 0.7234 | 0.7419 | 0.8614 |
| No log | 6.0909 | 134 | 0.7913 | 0.7234 | 0.7913 | 0.8896 |
| No log | 6.1818 | 136 | 0.7310 | 0.7234 | 0.7310 | 0.8550 |
| No log | 6.2727 | 138 | 0.6173 | 0.7588 | 0.6173 | 0.7857 |
| No log | 6.3636 | 140 | 0.4788 | 0.7495 | 0.4788 | 0.6920 |
| No log | 6.4545 | 142 | 0.4385 | 0.7495 | 0.4385 | 0.6622 |
| No log | 6.5455 | 144 | 0.4446 | 0.7495 | 0.4446 | 0.6668 |
| No log | 6.6364 | 146 | 0.4475 | 0.7495 | 0.4475 | 0.6690 |
| No log | 6.7273 | 148 | 0.4780 | 0.7495 | 0.4780 | 0.6914 |
| No log | 6.8182 | 150 | 0.5332 | 0.7588 | 0.5332 | 0.7302 |
| No log | 6.9091 | 152 | 0.5475 | 0.7588 | 0.5475 | 0.7399 |
| No log | 7.0 | 154 | 0.5115 | 0.7588 | 0.5115 | 0.7152 |
| No log | 7.0909 | 156 | 0.4673 | 0.7588 | 0.4673 | 0.6836 |
| No log | 7.1818 | 158 | 0.4532 | 0.7588 | 0.4532 | 0.6732 |
| No log | 7.2727 | 160 | 0.4755 | 0.7588 | 0.4755 | 0.6896 |
| No log | 7.3636 | 162 | 0.5405 | 0.7588 | 0.5405 | 0.7352 |
| No log | 7.4545 | 164 | 0.5786 | 0.7588 | 0.5786 | 0.7607 |
| No log | 7.5455 | 166 | 0.5733 | 0.7588 | 0.5733 | 0.7571 |
| No log | 7.6364 | 168 | 0.5303 | 0.7588 | 0.5303 | 0.7282 |
| No log | 7.7273 | 170 | 0.5208 | 0.7588 | 0.5208 | 0.7217 |
| No log | 7.8182 | 172 | 0.5221 | 0.7588 | 0.5221 | 0.7225 |
| No log | 7.9091 | 174 | 0.5326 | 0.7588 | 0.5326 | 0.7298 |
| No log | 8.0 | 176 | 0.5036 | 0.7718 | 0.5036 | 0.7096 |
| No log | 8.0909 | 178 | 0.4861 | 0.7718 | 0.4861 | 0.6972 |
| No log | 8.1818 | 180 | 0.4986 | 0.7718 | 0.4986 | 0.7061 |
| No log | 8.2727 | 182 | 0.5315 | 0.7495 | 0.5315 | 0.7291 |
| No log | 8.3636 | 184 | 0.5574 | 0.7588 | 0.5574 | 0.7466 |
| No log | 8.4545 | 186 | 0.5892 | 0.7588 | 0.5892 | 0.7676 |
| No log | 8.5455 | 188 | 0.6091 | 0.7588 | 0.6091 | 0.7805 |
| No log | 8.6364 | 190 | 0.6198 | 0.7588 | 0.6198 | 0.7872 |
| No log | 8.7273 | 192 | 0.6258 | 0.7588 | 0.6258 | 0.7911 |
| No log | 8.8182 | 194 | 0.6144 | 0.7588 | 0.6144 | 0.7838 |
| No log | 8.9091 | 196 | 0.6134 | 0.7588 | 0.6134 | 0.7832 |
| No log | 9.0 | 198 | 0.5938 | 0.7588 | 0.5938 | 0.7706 |
| No log | 9.0909 | 200 | 0.5655 | 0.7588 | 0.5655 | 0.7520 |
| No log | 9.1818 | 202 | 0.5489 | 0.7588 | 0.5489 | 0.7409 |
| No log | 9.2727 | 204 | 0.5461 | 0.7588 | 0.5461 | 0.7390 |
| No log | 9.3636 | 206 | 0.5511 | 0.7588 | 0.5511 | 0.7423 |
| No log | 9.4545 | 208 | 0.5584 | 0.7588 | 0.5584 | 0.7472 |
| No log | 9.5455 | 210 | 0.5715 | 0.7588 | 0.5715 | 0.7560 |
| No log | 9.6364 | 212 | 0.5814 | 0.7588 | 0.5814 | 0.7625 |
| No log | 9.7273 | 214 | 0.5846 | 0.7588 | 0.5846 | 0.7646 |
| No log | 9.8182 | 216 | 0.5831 | 0.7588 | 0.5831 | 0.7636 |
| No log | 9.9091 | 218 | 0.5815 | 0.7588 | 0.5815 | 0.7626 |
| No log | 10.0 | 220 | 0.5808 | 0.7588 | 0.5808 | 0.7621 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
huihui-ai/Qwen2.5-Coder-0.5B-Instruct-abliterated | huihui-ai | 2024-11-25T13:40:47Z | 172 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"abliterated",
"uncensored",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-13T06:27:29Z | ---
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-Coder-0.5B-Instruct-abliterated/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-0.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- abliterated
- uncensored
---
# huihui-ai/Qwen2.5-Coder-0.5B-Instruct-abliterated
This is an uncensored version of [Qwen/Qwen2.5-Coder-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct), created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it).
The uncensored Qwen2.5-Coder series covers six mainstream model sizes:
[0.5](https://huggingface.co/huihui-ai/Qwen2.5-Coder-0.5B-Instruct-abliterated),
[1.5](https://huggingface.co/huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated),
[3](https://huggingface.co/huihui-ai/Qwen2.5-Coder-3B-Instruct-abliterated),
[7](https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated),
[14](https://huggingface.co/huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated),
[32](https://huggingface.co/huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated) billion parameters.
If the desired result is not achieved, you can clear the conversation and try again.
## ollama
You can use [huihui_ai/qwen2.5-coder-abliterate:0.5b](https://ollama.com/huihui_ai/qwen2.5-coder-abliterate:0.5b) directly:
```
ollama run huihui_ai/qwen2.5-coder-abliterate:0.5b
```
## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-Coder-0.5B-Instruct-abliterated"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize conversation context
initial_messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy()  # Copy the initial conversation context

# Enter conversation loop
while True:
    # Get user input
    user_input = input("User: ").strip()  # Strip leading and trailing spaces

    # If the user types '/exit', end the conversation
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break

    # If the user types '/clean', reset the conversation context
    if user_input.lower() == "/clean":
        messages = initial_messages.copy()  # Reset conversation context
        print("Chat history cleared. Starting a new conversation.")
        continue

    # If input is empty, prompt the user and continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Build the chat template
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    # Tokenize input and prepare it for the model
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Generate a response from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=8192
    )

    # Extract model output, removing special tokens
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # Add the model's response to the conversation
    messages.append({"role": "assistant", "content": response})

    # Print the model's response
    print(f"Qwen: {response}")
```
|
allknowingroger/Gemmaslerp4-10B | allknowingroger | 2024-11-25T13:40:25Z | 69 | 2 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:lemon07r/Gemma-2-Ataraxy-v4d-9B",
"base_model:merge:lemon07r/Gemma-2-Ataraxy-v4d-9B",
"base_model:sam-paech/Darkest-muse-v1",
"base_model:merge:sam-paech/Darkest-muse-v1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-30T07:43:54Z | ---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- lemon07r/Gemma-2-Ataraxy-v4d-9B
- sam-paech/Darkest-muse-v1
model-index:
- name: GemmaSlerp4-10B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 73.26
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/GemmaSlerp4-10B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 43.33
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/GemmaSlerp4-10B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 17.45
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/GemmaSlerp4-10B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.76
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/GemmaSlerp4-10B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 15.48
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/GemmaSlerp4-10B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 36.11
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/GemmaSlerp4-10B
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
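SLERP (spherical linear interpolation) blends the two parent checkpoints tensor by tensor, with the interpolation factor `t` scheduled over layers as shown in the configuration further down. A minimal sketch of the per-tensor interpolation (a simplification of the idea, not mergekit's actual implementation):
```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two parameter tensors at factor t in [0, 1]."""
    v0_f, v1_f = v0.flatten().float(), v1.flatten().float()
    cos = torch.dot(v0_f, v1_f) / (v0_f.norm() * v1_f.norm() + eps)
    theta = torch.arccos(cos.clamp(-1.0, 1.0))
    if theta.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    return (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)
```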
### Models Merged
The following models were included in the merge:
* [lemon07r/Gemma-2-Ataraxy-v4d-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4d-9B)
* [sam-paech/Darkest-muse-v1](https://huggingface.co/sam-paech/Darkest-muse-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: lemon07r/Gemma-2-Ataraxy-v4d-9B
- model: sam-paech/Darkest-muse-v1
merge_method: slerp
base_model: lemon07r/Gemma-2-Ataraxy-v4d-9B
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__GemmaSlerp4-10B)
| Metric |Value|
|-------------------|----:|
|Avg. |33.23|
|IFEval (0-Shot) |73.26|
|BBH (3-Shot) |43.33|
|MATH Lvl 5 (4-Shot)|17.45|
|GPQA (0-shot) |13.76|
|MuSR (0-shot) |15.48|
|MMLU-PRO (5-shot) |36.11|
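The Avg. row appears to be the plain arithmetic mean of the six benchmark scores; a quick check:
```python
scores = [73.26, 43.33, 17.45, 13.76, 15.48, 36.11]  # IFEval, BBH, MATH Lvl 5, GPQA, MuSR, MMLU-PRO
print(round(sum(scores) / len(scores), 2))  # 33.23
```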
|
allknowingroger/Gemmaslerp-9B | allknowingroger | 2024-11-25T13:40:17Z | 64 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:nbeerbower/Gemma2-Gutenberg-Doppel-9B",
"base_model:finetune:nbeerbower/Gemma2-Gutenberg-Doppel-9B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T09:52:42Z | ---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- nbeerbower/Gemma2-Gutenberg-Doppel-9B
- DreadPoor/Emu_Eggs-9B-Model_Stock
model-index:
- name: GemmaSlerp-9B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 70.43
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/GemmaSlerp-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 41.56
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/GemmaSlerp-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 7.63
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/GemmaSlerp-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 12.53
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/GemmaSlerp-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 17.88
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/GemmaSlerp-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 35.12
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/GemmaSlerp-9B
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [nbeerbower/Gemma2-Gutenberg-Doppel-9B](https://huggingface.co/nbeerbower/Gemma2-Gutenberg-Doppel-9B)
* [DreadPoor/Emu_Eggs-9B-Model_Stock](https://huggingface.co/DreadPoor/Emu_Eggs-9B-Model_Stock)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: DreadPoor/Emu_Eggs-9B-Model_Stock
- model: nbeerbower/Gemma2-Gutenberg-Doppel-9B
merge_method: slerp
base_model: DreadPoor/Emu_Eggs-9B-Model_Stock
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__GemmaSlerp-9B)
| Metric |Value|
|-------------------|----:|
|Avg. |30.86|
|IFEval (0-Shot) |70.43|
|BBH (3-Shot) |41.56|
|MATH Lvl 5 (4-Shot)| 7.63|
|GPQA (0-shot) |12.53|
|MuSR (0-shot) |17.88|
|MMLU-PRO (5-shot) |35.12|
|
allknowingroger/HomerSlerp2-7B | allknowingroger | 2024-11-25T13:39:48Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:allknowingroger/Qwenslerp2-7B",
"base_model:finetune:allknowingroger/Qwenslerp2-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-20T10:36:40Z | ---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- allknowingroger/Qwenslerp2-7B
- newsbang/Homer-v0.4-Qwen2.5-7B
model-index:
- name: HomerSlerp2-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 44.87
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 37.96
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 28.55
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.28
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 12.85
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 39.05
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp2-7B
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [allknowingroger/Qwenslerp2-7B](https://huggingface.co/allknowingroger/Qwenslerp2-7B)
* [newsbang/Homer-v0.4-Qwen2.5-7B](https://huggingface.co/newsbang/Homer-v0.4-Qwen2.5-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: newsbang/Homer-v0.4-Qwen2.5-7B
- model: allknowingroger/Qwenslerp2-7B
merge_method: slerp
base_model: newsbang/Homer-v0.4-Qwen2.5-7B
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__HomerSlerp2-7B)
| Metric |Value|
|-------------------|----:|
|Avg. |28.76|
|IFEval (0-Shot) |44.87|
|BBH (3-Shot) |37.96|
|MATH Lvl 5 (4-Shot)|28.55|
|GPQA (0-shot) | 9.28|
|MuSR (0-shot) |12.85|
|MMLU-PRO (5-shot) |39.05|
|
allknowingroger/HomerSlerp4-7B | allknowingroger | 2024-11-25T13:39:32Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:allknowingroger/HomerSlerp2-7B",
"base_model:merge:allknowingroger/HomerSlerp2-7B",
"base_model:allknowingroger/Qwen2.5-7B-task8",
"base_model:merge:allknowingroger/Qwen2.5-7B-task8",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-21T15:32:56Z | ---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- allknowingroger/Qwen2.5-7B-task8
- allknowingroger/HomerSlerp2-7B
model-index:
- name: HomerSlerp4-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 43.74
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 36.79
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 29.53
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.28
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.77
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 38.58
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp4-7B
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [allknowingroger/Qwen2.5-7B-task8](https://huggingface.co/allknowingroger/Qwen2.5-7B-task8)
* [allknowingroger/HomerSlerp2-7B](https://huggingface.co/allknowingroger/HomerSlerp2-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: allknowingroger/HomerSlerp2-7B
- model: allknowingroger/Qwen2.5-7B-task8
merge_method: slerp
base_model: allknowingroger/Qwen2.5-7B-task8
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__HomerSlerp4-7B)
| Metric |Value|
|-------------------|----:|
|Avg. |28.62|
|IFEval (0-Shot) |43.74|
|BBH (3-Shot) |36.79|
|MATH Lvl 5 (4-Shot)|29.53|
|GPQA (0-shot) | 9.28|
|MuSR (0-shot) |13.77|
|MMLU-PRO (5-shot) |38.58|
|
allknowingroger/HomerSlerp3-7B | allknowingroger | 2024-11-25T13:39:20Z | 11 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:allknowingroger/HomerSlerp2-7B",
"base_model:merge:allknowingroger/HomerSlerp2-7B",
"base_model:allknowingroger/Qwen2.5-7B-task4",
"base_model:merge:allknowingroger/Qwen2.5-7B-task4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-21T15:25:04Z | ---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- allknowingroger/HomerSlerp2-7B
- allknowingroger/Qwen2.5-7B-task4
model-index:
- name: HomerSlerp3-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 43.63
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp3-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 37.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp3-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 28.1
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp3-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.95
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp3-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 14.37
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp3-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 39.27
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/HomerSlerp3-7B
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [allknowingroger/HomerSlerp2-7B](https://huggingface.co/allknowingroger/HomerSlerp2-7B)
* [allknowingroger/Qwen2.5-7B-task4](https://huggingface.co/allknowingroger/Qwen2.5-7B-task4)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: allknowingroger/HomerSlerp2-7B
- model: allknowingroger/Qwen2.5-7B-task4
merge_method: slerp
base_model: allknowingroger/HomerSlerp2-7B
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__HomerSlerp3-7B)
| Metric |Value|
|-------------------|----:|
|Avg. |28.60|
|IFEval (0-Shot) |43.63|
|BBH (3-Shot) |37.29|
|MATH Lvl 5 (4-Shot)|28.10|
|GPQA (0-shot) | 8.95|
|MuSR (0-shot) |14.37|
|MMLU-PRO (5-shot) |39.27|
|
qgallouedec/tiny-Qwen2ForCausalLM-Coder | qgallouedec | 2024-11-25T13:38:59Z | 255 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T13:38:56Z | ---
library_name: transformers
tags:
- trl
---
# Tiny Qwen2ForCausalLM
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
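Since the checkpoint follows the standard Qwen2 causal-LM layout, it can be loaded with the usual `transformers` API; a quick sketch (outputs are meaningless — the model only needs to exercise the generation code path in tests):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qgallouedec/tiny-Qwen2ForCausalLM-Coder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("def add(a, b):", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0]))
```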
|
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k2_task5_organization_fold1 | MayBashendy | 2024-11-25T13:38:18Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T13:36:37Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k2_task5_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k2_task5_organization_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8745
- Qwk: 0.7210
- Mse: 0.8745
- Rmse: 0.9351
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.1053 | 2 | 3.9879 | -0.0948 | 3.9879 | 1.9970 |
| No log | 0.2105 | 4 | 2.7831 | -0.2331 | 2.7831 | 1.6683 |
| No log | 0.3158 | 6 | 1.8518 | -0.2722 | 1.8518 | 1.3608 |
| No log | 0.4211 | 8 | 1.4992 | 0.1475 | 1.4992 | 1.2244 |
| No log | 0.5263 | 10 | 1.4200 | 0.1456 | 1.4200 | 1.1917 |
| No log | 0.6316 | 12 | 1.2330 | 0.0764 | 1.2330 | 1.1104 |
| No log | 0.7368 | 14 | 1.2219 | 0.1475 | 1.2219 | 1.1054 |
| No log | 0.8421 | 16 | 1.4619 | 0.2012 | 1.4619 | 1.2091 |
| No log | 0.9474 | 18 | 1.3456 | 0.2012 | 1.3456 | 1.1600 |
| No log | 1.0526 | 20 | 1.1911 | 0.2049 | 1.1911 | 1.0914 |
| No log | 1.1579 | 22 | 1.1642 | 0.2604 | 1.1642 | 1.0790 |
| No log | 1.2632 | 24 | 0.9839 | 0.2787 | 0.9839 | 0.9919 |
| No log | 1.3684 | 26 | 0.8600 | 0.4059 | 0.8600 | 0.9274 |
| No log | 1.4737 | 28 | 0.8333 | 0.4393 | 0.8333 | 0.9128 |
| No log | 1.5789 | 30 | 0.9850 | 0.3884 | 0.9850 | 0.9925 |
| No log | 1.6842 | 32 | 1.1900 | 0.2550 | 1.1900 | 1.0909 |
| No log | 1.7895 | 34 | 1.4020 | 0.1885 | 1.4020 | 1.1841 |
| No log | 1.8947 | 36 | 1.1259 | 0.2604 | 1.1259 | 1.0611 |
| No log | 2.0 | 38 | 0.9475 | 0.5362 | 0.9475 | 0.9734 |
| No log | 2.1053 | 40 | 0.9526 | 0.5362 | 0.9526 | 0.9760 |
| No log | 2.2105 | 42 | 0.8278 | 0.5362 | 0.8278 | 0.9099 |
| No log | 2.3158 | 44 | 0.7830 | 0.5362 | 0.7830 | 0.8849 |
| No log | 2.4211 | 46 | 0.7004 | 0.5756 | 0.7004 | 0.8369 |
| No log | 2.5263 | 48 | 0.7483 | 0.5355 | 0.7483 | 0.8651 |
| No log | 2.6316 | 50 | 0.7372 | 0.6134 | 0.7372 | 0.8586 |
| No log | 2.7368 | 52 | 0.7281 | 0.5990 | 0.7281 | 0.8533 |
| No log | 2.8421 | 54 | 0.7413 | 0.6143 | 0.7413 | 0.8610 |
| No log | 2.9474 | 56 | 0.7312 | 0.6134 | 0.7312 | 0.8551 |
| No log | 3.0526 | 58 | 0.8131 | 0.5355 | 0.8131 | 0.9017 |
| No log | 3.1579 | 60 | 0.8581 | 0.52 | 0.8581 | 0.9263 |
| No log | 3.2632 | 62 | 0.7933 | 0.5355 | 0.7933 | 0.8907 |
| No log | 3.3684 | 64 | 0.6914 | 0.5756 | 0.6914 | 0.8315 |
| No log | 3.4737 | 66 | 0.6821 | 0.6134 | 0.6821 | 0.8259 |
| No log | 3.5789 | 68 | 0.7476 | 0.5813 | 0.7476 | 0.8646 |
| No log | 3.6842 | 70 | 0.9542 | 0.5360 | 0.9542 | 0.9768 |
| No log | 3.7895 | 72 | 0.9212 | 0.5701 | 0.9212 | 0.9598 |
| No log | 3.8947 | 74 | 0.7594 | 0.6322 | 0.7594 | 0.8714 |
| No log | 4.0 | 76 | 0.7274 | 0.6134 | 0.7274 | 0.8528 |
| No log | 4.1053 | 78 | 0.7330 | 0.6181 | 0.7330 | 0.8562 |
| No log | 4.2105 | 80 | 0.7890 | 0.6637 | 0.7890 | 0.8883 |
| No log | 4.3158 | 82 | 1.0552 | 0.5329 | 1.0552 | 1.0272 |
| No log | 4.4211 | 84 | 1.0573 | 0.5329 | 1.0573 | 1.0283 |
| No log | 4.5263 | 86 | 0.8195 | 0.6322 | 0.8195 | 0.9053 |
| No log | 4.6316 | 88 | 0.7154 | 0.6646 | 0.7154 | 0.8458 |
| No log | 4.7368 | 90 | 0.7308 | 0.6637 | 0.7308 | 0.8549 |
| No log | 4.8421 | 92 | 0.9223 | 0.6111 | 0.9223 | 0.9603 |
| No log | 4.9474 | 94 | 1.1051 | 0.5062 | 1.1051 | 1.0512 |
| No log | 5.0526 | 96 | 1.1283 | 0.4618 | 1.1283 | 1.0622 |
| No log | 5.1579 | 98 | 0.9748 | 0.6071 | 0.9748 | 0.9873 |
| No log | 5.2632 | 100 | 0.8063 | 0.6637 | 0.8063 | 0.8979 |
| No log | 5.3684 | 102 | 0.7331 | 0.6499 | 0.7331 | 0.8562 |
| No log | 5.4737 | 104 | 0.7393 | 0.6205 | 0.7393 | 0.8598 |
| No log | 5.5789 | 106 | 0.7388 | 0.6499 | 0.7388 | 0.8595 |
| No log | 5.6842 | 108 | 0.7708 | 0.6499 | 0.7708 | 0.8780 |
| No log | 5.7895 | 110 | 0.8821 | 0.5585 | 0.8821 | 0.9392 |
| No log | 5.8947 | 112 | 0.9995 | 0.62 | 0.9995 | 0.9998 |
| No log | 6.0 | 114 | 0.9838 | 0.62 | 0.9838 | 0.9918 |
| No log | 6.1053 | 116 | 0.9564 | 0.62 | 0.9564 | 0.9779 |
| No log | 6.2105 | 118 | 0.9569 | 0.6614 | 0.9569 | 0.9782 |
| No log | 6.3158 | 120 | 0.9732 | 0.62 | 0.9732 | 0.9865 |
| No log | 6.4211 | 122 | 0.9029 | 0.5905 | 0.9029 | 0.9502 |
| No log | 6.5263 | 124 | 0.8228 | 0.6637 | 0.8228 | 0.9071 |
| No log | 6.6316 | 126 | 0.8404 | 0.6637 | 0.8404 | 0.9167 |
| No log | 6.7368 | 128 | 0.8440 | 0.6637 | 0.8440 | 0.9187 |
| No log | 6.8421 | 130 | 0.8469 | 0.6637 | 0.8469 | 0.9203 |
| No log | 6.9474 | 132 | 0.8414 | 0.6637 | 0.8414 | 0.9173 |
| No log | 7.0526 | 134 | 0.8013 | 0.6368 | 0.8013 | 0.8951 |
| No log | 7.1579 | 136 | 0.7928 | 0.6368 | 0.7928 | 0.8904 |
| No log | 7.2632 | 138 | 0.8203 | 0.6368 | 0.8203 | 0.9057 |
| No log | 7.3684 | 140 | 0.8470 | 0.6368 | 0.8470 | 0.9203 |
| No log | 7.4737 | 142 | 0.9117 | 0.7314 | 0.9117 | 0.9548 |
| No log | 7.5789 | 144 | 0.9339 | 0.7314 | 0.9339 | 0.9664 |
| No log | 7.6842 | 146 | 0.9240 | 0.7314 | 0.9240 | 0.9613 |
| No log | 7.7895 | 148 | 0.9248 | 0.7314 | 0.9248 | 0.9617 |
| No log | 7.8947 | 150 | 0.9081 | 0.7363 | 0.9081 | 0.9529 |
| No log | 8.0 | 152 | 0.9131 | 0.7314 | 0.9131 | 0.9556 |
| No log | 8.1053 | 154 | 0.9318 | 0.7314 | 0.9318 | 0.9653 |
| No log | 8.2105 | 156 | 0.9336 | 0.7314 | 0.9336 | 0.9662 |
| No log | 8.3158 | 158 | 0.9326 | 0.7160 | 0.9326 | 0.9657 |
| No log | 8.4211 | 160 | 0.8907 | 0.7363 | 0.8907 | 0.9438 |
| No log | 8.5263 | 162 | 0.8549 | 0.6368 | 0.8549 | 0.9246 |
| No log | 8.6316 | 164 | 0.8327 | 0.6368 | 0.8327 | 0.9125 |
| No log | 8.7368 | 166 | 0.8300 | 0.6368 | 0.8300 | 0.9111 |
| No log | 8.8421 | 168 | 0.8247 | 0.6368 | 0.8247 | 0.9081 |
| No log | 8.9474 | 170 | 0.8219 | 0.6368 | 0.8219 | 0.9066 |
| No log | 9.0526 | 172 | 0.8272 | 0.6368 | 0.8272 | 0.9095 |
| No log | 9.1579 | 174 | 0.8488 | 0.7065 | 0.8488 | 0.9213 |
| No log | 9.2632 | 176 | 0.8695 | 0.7065 | 0.8695 | 0.9325 |
| No log | 9.3684 | 178 | 0.8830 | 0.7210 | 0.8830 | 0.9397 |
| No log | 9.4737 | 180 | 0.8945 | 0.7363 | 0.8945 | 0.9458 |
| No log | 9.5789 | 182 | 0.8889 | 0.7210 | 0.8889 | 0.9428 |
| No log | 9.6842 | 184 | 0.8831 | 0.7210 | 0.8831 | 0.9397 |
| No log | 9.7895 | 186 | 0.8795 | 0.7210 | 0.8795 | 0.9378 |
| No log | 9.8947 | 188 | 0.8763 | 0.7210 | 0.8763 | 0.9361 |
| No log | 10.0 | 190 | 0.8745 | 0.7210 | 0.8745 | 0.9351 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
tlsdm65376/krx_Llama3.1_8b_instruct_M1_all | tlsdm65376 | 2024-11-25T13:34:05Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"krx",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T11:54:29Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- krx
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tlsdm65376
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
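A minimal loading sketch, assuming Unsloth's `FastLanguageModel` API and 4-bit loading to match the bnb-4bit base model (not taken from this card):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="tlsdm65376/krx_Llama3.1_8b_instruct_M1_all",
    max_seq_length=2048,   # assumption: adjust to your context needs
    load_in_4bit=True,     # assumption: mirrors the 4-bit base checkpoint
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```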
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k1_task5_organization_fold0 | MayBashendy | 2024-11-25T13:32:18Z | 182 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T13:30:37Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k1_task5_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k1_task5_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6545
- Qwk: 0.7430
- Mse: 0.6545
- Rmse: 0.8090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.1333 | 2 | 2.0376 | 0.0796 | 2.0376 | 1.4274 |
| No log | 0.2667 | 4 | 0.9360 | 0.3357 | 0.9360 | 0.9674 |
| No log | 0.4 | 6 | 0.8823 | 0.2578 | 0.8823 | 0.9393 |
| No log | 0.5333 | 8 | 1.0137 | 0.2578 | 1.0137 | 1.0068 |
| No log | 0.6667 | 10 | 0.9472 | 0.2578 | 0.9472 | 0.9732 |
| No log | 0.8 | 12 | 0.8729 | 0.3548 | 0.8729 | 0.9343 |
| No log | 0.9333 | 14 | 0.8777 | 0.4888 | 0.8777 | 0.9369 |
| No log | 1.0667 | 16 | 0.8718 | 0.4848 | 0.8718 | 0.9337 |
| No log | 1.2 | 18 | 0.8058 | 0.5195 | 0.8058 | 0.8976 |
| No log | 1.3333 | 20 | 0.6286 | 0.5253 | 0.6286 | 0.7928 |
| No log | 1.4667 | 22 | 0.4650 | 0.5223 | 0.4650 | 0.6819 |
| No log | 1.6 | 24 | 0.4631 | 0.5745 | 0.4631 | 0.6805 |
| No log | 1.7333 | 26 | 0.4282 | 0.6243 | 0.4282 | 0.6544 |
| No log | 1.8667 | 28 | 0.4358 | 0.6581 | 0.4358 | 0.6601 |
| No log | 2.0 | 30 | 0.5688 | 0.5536 | 0.5688 | 0.7542 |
| No log | 2.1333 | 32 | 0.7062 | 0.3980 | 0.7062 | 0.8403 |
| No log | 2.2667 | 34 | 0.7535 | 0.3980 | 0.7535 | 0.8681 |
| No log | 2.4 | 36 | 0.7505 | 0.5195 | 0.7505 | 0.8663 |
| No log | 2.5333 | 38 | 0.6853 | 0.5862 | 0.6853 | 0.8278 |
| No log | 2.6667 | 40 | 0.5459 | 0.7094 | 0.5459 | 0.7389 |
| No log | 2.8 | 42 | 0.4649 | 0.7442 | 0.4649 | 0.6819 |
| No log | 2.9333 | 44 | 0.4700 | 0.6944 | 0.4700 | 0.6856 |
| No log | 3.0667 | 46 | 0.4505 | 0.7442 | 0.4505 | 0.6712 |
| No log | 3.2 | 48 | 0.4978 | 0.7074 | 0.4978 | 0.7055 |
| No log | 3.3333 | 50 | 0.5526 | 0.6667 | 0.5526 | 0.7433 |
| No log | 3.4667 | 52 | 0.5800 | 0.6287 | 0.5800 | 0.7616 |
| No log | 3.6 | 54 | 0.7154 | 0.6119 | 0.7154 | 0.8458 |
| No log | 3.7333 | 56 | 0.8173 | 0.5975 | 0.8173 | 0.9041 |
| No log | 3.8667 | 58 | 0.7902 | 0.6444 | 0.7902 | 0.8889 |
| No log | 4.0 | 60 | 0.7274 | 0.6372 | 0.7274 | 0.8529 |
| No log | 4.1333 | 62 | 0.5889 | 0.7229 | 0.5889 | 0.7674 |
| No log | 4.2667 | 64 | 0.5217 | 0.6766 | 0.5217 | 0.7223 |
| No log | 4.4 | 66 | 0.5341 | 0.6992 | 0.5341 | 0.7308 |
| No log | 4.5333 | 68 | 0.6118 | 0.6977 | 0.6118 | 0.7822 |
| No log | 4.6667 | 70 | 0.6933 | 0.6119 | 0.6933 | 0.8326 |
| No log | 4.8 | 72 | 0.7101 | 0.6119 | 0.7101 | 0.8427 |
| No log | 4.9333 | 74 | 0.6291 | 0.6934 | 0.6291 | 0.7932 |
| No log | 5.0667 | 76 | 0.5575 | 0.6977 | 0.5575 | 0.7466 |
| No log | 5.2 | 78 | 0.5152 | 0.7229 | 0.5152 | 0.7177 |
| No log | 5.3333 | 80 | 0.4642 | 0.7356 | 0.4642 | 0.6813 |
| No log | 5.4667 | 82 | 0.4859 | 0.7356 | 0.4859 | 0.6971 |
| No log | 5.6 | 84 | 0.5469 | 0.7229 | 0.5469 | 0.7396 |
| No log | 5.7333 | 86 | 0.5841 | 0.7229 | 0.5841 | 0.7643 |
| No log | 5.8667 | 88 | 0.5844 | 0.7229 | 0.5844 | 0.7644 |
| No log | 6.0 | 90 | 0.6187 | 0.7229 | 0.6187 | 0.7866 |
| No log | 6.1333 | 92 | 0.7243 | 0.6977 | 0.7243 | 0.8510 |
| No log | 6.2667 | 94 | 0.8514 | 0.6185 | 0.8514 | 0.9227 |
| No log | 6.4 | 96 | 0.9142 | 0.5732 | 0.9142 | 0.9562 |
| No log | 6.5333 | 98 | 0.9106 | 0.5732 | 0.9106 | 0.9543 |
| No log | 6.6667 | 100 | 0.8101 | 0.6008 | 0.8101 | 0.9001 |
| No log | 6.8 | 102 | 0.6956 | 0.7002 | 0.6956 | 0.8340 |
| No log | 6.9333 | 104 | 0.5594 | 0.7229 | 0.5594 | 0.7479 |
| No log | 7.0667 | 106 | 0.5182 | 0.7356 | 0.5182 | 0.7199 |
| No log | 7.2 | 108 | 0.5248 | 0.7356 | 0.5248 | 0.7244 |
| No log | 7.3333 | 110 | 0.5105 | 0.7356 | 0.5105 | 0.7145 |
| No log | 7.4667 | 112 | 0.5547 | 0.6977 | 0.5547 | 0.7448 |
| No log | 7.6 | 114 | 0.6232 | 0.6977 | 0.6232 | 0.7894 |
| No log | 7.7333 | 116 | 0.6667 | 0.6721 | 0.6667 | 0.8165 |
| No log | 7.8667 | 118 | 0.7060 | 0.6171 | 0.7060 | 0.8402 |
| No log | 8.0 | 120 | 0.7253 | 0.5918 | 0.7253 | 0.8516 |
| No log | 8.1333 | 122 | 0.7139 | 0.6462 | 0.7139 | 0.8450 |
| No log | 8.2667 | 124 | 0.7104 | 0.6462 | 0.7104 | 0.8429 |
| No log | 8.4 | 126 | 0.7219 | 0.6462 | 0.7219 | 0.8496 |
| No log | 8.5333 | 128 | 0.7473 | 0.5918 | 0.7473 | 0.8645 |
| No log | 8.6667 | 130 | 0.7336 | 0.6988 | 0.7336 | 0.8565 |
| No log | 8.8 | 132 | 0.6998 | 0.6963 | 0.6998 | 0.8365 |
| No log | 8.9333 | 134 | 0.6571 | 0.6963 | 0.6571 | 0.8106 |
| No log | 9.0667 | 136 | 0.6447 | 0.7430 | 0.6447 | 0.8029 |
| No log | 9.2 | 138 | 0.6402 | 0.7430 | 0.6402 | 0.8002 |
| No log | 9.3333 | 140 | 0.6517 | 0.7430 | 0.6517 | 0.8073 |
| No log | 9.4667 | 142 | 0.6580 | 0.7430 | 0.6580 | 0.8111 |
| No log | 9.6 | 144 | 0.6649 | 0.7430 | 0.6649 | 0.8154 |
| No log | 9.7333 | 146 | 0.6630 | 0.7430 | 0.6630 | 0.8143 |
| No log | 9.8667 | 148 | 0.6576 | 0.7430 | 0.6576 | 0.8110 |
| No log | 10.0 | 150 | 0.6545 | 0.7430 | 0.6545 | 0.8090 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k60_task3_organization_fold1 | MayBashendy | 2024-11-25T13:29:26Z | 183 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T13:17:46Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k60_task3_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k60_task3_organization_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7118
- Qwk: 0.1538
- Mse: 0.7118
- Rmse: 0.8437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0084 | 2 | 6.7161 | 0.0647 | 6.7161 | 2.5915 |
| No log | 0.0167 | 4 | 2.5951 | 0.1692 | 2.5951 | 1.6109 |
| No log | 0.0251 | 6 | 1.4517 | 0.0 | 1.4517 | 1.2049 |
| No log | 0.0335 | 8 | 0.7119 | -0.0421 | 0.7119 | 0.8438 |
| No log | 0.0418 | 10 | 0.7301 | 0.2443 | 0.7301 | 0.8545 |
| No log | 0.0502 | 12 | 0.7671 | 0.2143 | 0.7671 | 0.8758 |
| No log | 0.0586 | 14 | 0.6755 | 0.1538 | 0.6755 | 0.8219 |
| No log | 0.0669 | 16 | 0.6814 | 0.2326 | 0.6814 | 0.8255 |
| No log | 0.0753 | 18 | 0.7672 | -0.0916 | 0.7672 | 0.8759 |
| No log | 0.0837 | 20 | 0.8158 | 0.1646 | 0.8158 | 0.9032 |
| No log | 0.0921 | 22 | 1.0666 | 0.0 | 1.0666 | 1.0327 |
| No log | 0.1004 | 24 | 1.1011 | 0.0 | 1.1011 | 1.0494 |
| No log | 0.1088 | 26 | 1.1199 | 0.0 | 1.1199 | 1.0582 |
| No log | 0.1172 | 28 | 0.9066 | 0.0 | 0.9066 | 0.9521 |
| No log | 0.1255 | 30 | 0.7847 | 0.0571 | 0.7847 | 0.8858 |
| No log | 0.1339 | 32 | 0.8209 | 0.1437 | 0.8209 | 0.9060 |
| No log | 0.1423 | 34 | 0.9472 | 0.0 | 0.9472 | 0.9733 |
| No log | 0.1506 | 36 | 1.1047 | 0.0 | 1.1047 | 1.0511 |
| No log | 0.1590 | 38 | 1.3277 | 0.0 | 1.3277 | 1.1523 |
| No log | 0.1674 | 40 | 1.1742 | 0.0 | 1.1742 | 1.0836 |
| No log | 0.1757 | 42 | 0.8648 | 0.0 | 0.8648 | 0.9300 |
| No log | 0.1841 | 44 | 0.8141 | 0.1646 | 0.8141 | 0.9022 |
| No log | 0.1925 | 46 | 0.9263 | 0.0 | 0.9263 | 0.9624 |
| No log | 0.2008 | 48 | 0.9499 | 0.0 | 0.9499 | 0.9746 |
| No log | 0.2092 | 50 | 1.4708 | 0.0 | 1.4708 | 1.2128 |
| No log | 0.2176 | 52 | 1.6961 | 0.0 | 1.6961 | 1.3024 |
| No log | 0.2259 | 54 | 1.2353 | 0.0 | 1.2353 | 1.1114 |
| No log | 0.2343 | 56 | 0.8292 | 0.1646 | 0.8292 | 0.9106 |
| No log | 0.2427 | 58 | 0.6191 | 0.0 | 0.6191 | 0.7868 |
| No log | 0.2510 | 60 | 0.6493 | 0.0 | 0.6493 | 0.8058 |
| No log | 0.2594 | 62 | 0.7967 | 0.2143 | 0.7967 | 0.8926 |
| No log | 0.2678 | 64 | 0.8425 | 0.0 | 0.8425 | 0.9179 |
| No log | 0.2762 | 66 | 0.7845 | 0.1646 | 0.7845 | 0.8857 |
| No log | 0.2845 | 68 | 0.7839 | 0.1646 | 0.7839 | 0.8854 |
| No log | 0.2929 | 70 | 0.8077 | 0.1646 | 0.8077 | 0.8987 |
| No log | 0.3013 | 72 | 0.8563 | 0.0 | 0.8563 | 0.9254 |
| No log | 0.3096 | 74 | 0.8766 | 0.0 | 0.8766 | 0.9363 |
| No log | 0.3180 | 76 | 1.0965 | 0.0 | 1.0965 | 1.0472 |
| No log | 0.3264 | 78 | 1.1166 | 0.0 | 1.1166 | 1.0567 |
| No log | 0.3347 | 80 | 0.7880 | 0.1646 | 0.7880 | 0.8877 |
| No log | 0.3431 | 82 | 0.5959 | 0.0 | 0.5959 | 0.7719 |
| No log | 0.3515 | 84 | 0.6065 | 0.0 | 0.6065 | 0.7788 |
| No log | 0.3598 | 86 | 0.6758 | 0.1895 | 0.6758 | 0.8221 |
| No log | 0.3682 | 88 | 0.9088 | 0.0 | 0.9088 | 0.9533 |
| No log | 0.3766 | 90 | 1.1055 | 0.0 | 1.1055 | 1.0514 |
| No log | 0.3849 | 92 | 1.0128 | 0.0 | 1.0128 | 1.0064 |
| No log | 0.3933 | 94 | 0.9270 | 0.0 | 0.9270 | 0.9628 |
| No log | 0.4017 | 96 | 0.8522 | 0.1437 | 0.8522 | 0.9231 |
| No log | 0.4100 | 98 | 0.9106 | 0.0 | 0.9106 | 0.9543 |
| No log | 0.4184 | 100 | 0.8072 | 0.1879 | 0.8072 | 0.8985 |
| No log | 0.4268 | 102 | 0.9576 | 0.0 | 0.9576 | 0.9786 |
| No log | 0.4351 | 104 | 1.4231 | 0.0 | 1.4231 | 1.1929 |
| No log | 0.4435 | 106 | 1.5565 | 0.0 | 1.5565 | 1.2476 |
| No log | 0.4519 | 108 | 1.1261 | 0.0 | 1.1261 | 1.0612 |
| No log | 0.4603 | 110 | 0.6920 | 0.5133 | 0.6920 | 0.8319 |
| No log | 0.4686 | 112 | 0.6589 | 0.5133 | 0.6589 | 0.8117 |
| No log | 0.4770 | 114 | 0.6892 | 0.5133 | 0.6892 | 0.8302 |
| No log | 0.4854 | 116 | 0.6582 | 0.5133 | 0.6582 | 0.8113 |
| No log | 0.4937 | 118 | 0.5950 | 0.6526 | 0.5950 | 0.7714 |
| No log | 0.5021 | 120 | 0.6035 | 0.6526 | 0.6035 | 0.7768 |
| No log | 0.5105 | 122 | 0.7113 | -0.2791 | 0.7113 | 0.8434 |
| No log | 0.5188 | 124 | 0.8327 | -0.2791 | 0.8327 | 0.9125 |
| No log | 0.5272 | 126 | 0.8919 | -0.2791 | 0.8919 | 0.9444 |
| No log | 0.5356 | 128 | 0.6568 | -0.0421 | 0.6568 | 0.8104 |
| No log | 0.5439 | 130 | 0.5717 | 0.5769 | 0.5717 | 0.7561 |
| No log | 0.5523 | 132 | 0.8579 | 0.3038 | 0.8579 | 0.9262 |
| No log | 0.5607 | 134 | 0.7746 | 0.3038 | 0.7746 | 0.8801 |
| No log | 0.5690 | 136 | 0.6461 | 0.2326 | 0.6461 | 0.8038 |
| No log | 0.5774 | 138 | 1.0171 | -0.4808 | 1.0171 | 1.0085 |
| No log | 0.5858 | 140 | 1.2214 | -0.4426 | 1.2214 | 1.1052 |
| No log | 0.5941 | 142 | 1.1263 | -0.2655 | 1.1263 | 1.0613 |
| No log | 0.6025 | 144 | 0.7633 | -0.0421 | 0.7633 | 0.8736 |
| No log | 0.6109 | 146 | 0.9139 | -0.2692 | 0.9139 | 0.9560 |
| No log | 0.6192 | 148 | 2.1917 | -0.5118 | 2.1917 | 1.4804 |
| No log | 0.6276 | 150 | 2.1343 | -0.5248 | 2.1343 | 1.4609 |
| No log | 0.6360 | 152 | 0.5742 | 0.6526 | 0.5742 | 0.7578 |
| No log | 0.6444 | 154 | 0.7399 | 0.4211 | 0.7399 | 0.8602 |
| No log | 0.6527 | 156 | 0.5233 | 0.4568 | 0.5233 | 0.7234 |
| No log | 0.6611 | 158 | 0.8325 | -0.2692 | 0.8325 | 0.9124 |
| No log | 0.6695 | 160 | 1.5425 | -0.5053 | 1.5425 | 1.2420 |
| No log | 0.6778 | 162 | 1.0186 | -0.5053 | 1.0186 | 1.0093 |
| No log | 0.6862 | 164 | 0.5954 | 0.2326 | 0.5954 | 0.7716 |
| No log | 0.6946 | 166 | 0.4050 | 0.2326 | 0.4050 | 0.6364 |
| No log | 0.7029 | 168 | 0.3840 | 0.1895 | 0.3840 | 0.6197 |
| No log | 0.7113 | 170 | 0.6571 | 0.0 | 0.6571 | 0.8106 |
| No log | 0.7197 | 172 | 1.0448 | -0.2791 | 1.0448 | 1.0222 |
| No log | 0.7280 | 174 | 0.9218 | -0.2791 | 0.9218 | 0.9601 |
| No log | 0.7364 | 176 | 0.6313 | 0.2326 | 0.6313 | 0.7945 |
| No log | 0.7448 | 178 | 0.4283 | 0.1895 | 0.4283 | 0.6545 |
| No log | 0.7531 | 180 | 0.4332 | 0.5769 | 0.4332 | 0.6581 |
| No log | 0.7615 | 182 | 0.6867 | 0.2326 | 0.6867 | 0.8287 |
| No log | 0.7699 | 184 | 1.1301 | -0.5053 | 1.1301 | 1.0631 |
| No log | 0.7782 | 186 | 1.0757 | -0.2791 | 1.0757 | 1.0372 |
| No log | 0.7866 | 188 | 0.7267 | -0.0421 | 0.7267 | 0.8524 |
| No log | 0.7950 | 190 | 0.5527 | 0.1895 | 0.5527 | 0.7435 |
| No log | 0.8033 | 192 | 0.5986 | 0.2326 | 0.5986 | 0.7737 |
| No log | 0.8117 | 194 | 0.6830 | 0.2326 | 0.6830 | 0.8265 |
| No log | 0.8201 | 196 | 0.6922 | 0.2326 | 0.6922 | 0.8320 |
| No log | 0.8285 | 198 | 0.7463 | 0.2143 | 0.7463 | 0.8639 |
| No log | 0.8368 | 200 | 0.5119 | 0.3419 | 0.5119 | 0.7155 |
| No log | 0.8452 | 202 | 0.4346 | 0.5417 | 0.4346 | 0.6593 |
| No log | 0.8536 | 204 | 0.4120 | 0.5417 | 0.4120 | 0.6419 |
| No log | 0.8619 | 206 | 0.3799 | 0.4444 | 0.3799 | 0.6164 |
| No log | 0.8703 | 208 | 0.6642 | 0.3444 | 0.6642 | 0.8150 |
| No log | 0.8787 | 210 | 0.5247 | 0.56 | 0.5247 | 0.7244 |
| No log | 0.8870 | 212 | 0.4511 | 0.2326 | 0.4511 | 0.6716 |
| No log | 0.8954 | 214 | 0.4944 | 0.2326 | 0.4944 | 0.7031 |
| No log | 0.9038 | 216 | 0.5909 | 0.2222 | 0.5909 | 0.7687 |
| No log | 0.9121 | 218 | 0.4833 | 0.2326 | 0.4833 | 0.6952 |
| No log | 0.9205 | 220 | 0.5276 | 0.2326 | 0.5276 | 0.7264 |
| No log | 0.9289 | 222 | 0.7094 | 0.0222 | 0.7094 | 0.8423 |
| No log | 0.9372 | 224 | 0.8293 | 0.0222 | 0.8293 | 0.9107 |
| No log | 0.9456 | 226 | 0.6962 | 0.0 | 0.6962 | 0.8344 |
| No log | 0.9540 | 228 | 0.7088 | 0.0 | 0.7088 | 0.8419 |
| No log | 0.9623 | 230 | 0.6571 | 0.0 | 0.6571 | 0.8106 |
| No log | 0.9707 | 232 | 0.6407 | 0.0222 | 0.6407 | 0.8004 |
| No log | 0.9791 | 234 | 0.7061 | 0.2524 | 0.7061 | 0.8403 |
| No log | 0.9874 | 236 | 0.5721 | 0.4444 | 0.5721 | 0.7564 |
| No log | 0.9958 | 238 | 0.5787 | 0.4444 | 0.5787 | 0.7607 |
| No log | 1.0042 | 240 | 0.6740 | 0.2524 | 0.6740 | 0.8210 |
| No log | 1.0126 | 242 | 0.6544 | 0.4310 | 0.6544 | 0.8090 |
| No log | 1.0209 | 244 | 0.5582 | 0.4444 | 0.5582 | 0.7471 |
| No log | 1.0293 | 246 | 0.5715 | 0.1538 | 0.5715 | 0.7560 |
| No log | 1.0377 | 248 | 0.5752 | 0.1538 | 0.5752 | 0.7584 |
| No log | 1.0460 | 250 | 0.7199 | 0.0704 | 0.7199 | 0.8485 |
| No log | 1.0544 | 252 | 0.8383 | 0.0774 | 0.8383 | 0.9156 |
| No log | 1.0628 | 254 | 0.7084 | 0.2029 | 0.7084 | 0.8417 |
| No log | 1.0711 | 256 | 0.6461 | 0.56 | 0.6461 | 0.8038 |
| No log | 1.0795 | 258 | 0.6219 | 0.56 | 0.6219 | 0.7886 |
| No log | 1.0879 | 260 | 0.7533 | 0.1987 | 0.7533 | 0.8679 |
| No log | 1.0962 | 262 | 0.6827 | 0.4107 | 0.6827 | 0.8263 |
| No log | 1.1046 | 264 | 0.6486 | 0.4107 | 0.6486 | 0.8054 |
| No log | 1.1130 | 266 | 0.7970 | -0.3276 | 0.7970 | 0.8927 |
| No log | 1.1213 | 268 | 0.7084 | 0.2326 | 0.7084 | 0.8417 |
| No log | 1.1297 | 270 | 0.5452 | 0.2326 | 0.5452 | 0.7384 |
| No log | 1.1381 | 272 | 0.5754 | 0.2326 | 0.5754 | 0.7585 |
| No log | 1.1464 | 274 | 0.9031 | -0.3883 | 0.9031 | 0.9503 |
| No log | 1.1548 | 276 | 1.0084 | -0.3883 | 1.0084 | 1.0042 |
| No log | 1.1632 | 278 | 0.8335 | -0.3276 | 0.8335 | 0.9130 |
| No log | 1.1715 | 280 | 0.5407 | 0.2326 | 0.5407 | 0.7353 |
| No log | 1.1799 | 282 | 0.5365 | 0.1895 | 0.5365 | 0.7325 |
| No log | 1.1883 | 284 | 0.6865 | 0.2222 | 0.6865 | 0.8286 |
| No log | 1.1967 | 286 | 1.1191 | -0.1786 | 1.1191 | 1.0579 |
| No log | 1.2050 | 288 | 1.2107 | -0.1786 | 1.2107 | 1.1003 |
| No log | 1.2134 | 290 | 1.0624 | -0.1786 | 1.0624 | 1.0307 |
| No log | 1.2218 | 292 | 0.5933 | 0.4444 | 0.5933 | 0.7703 |
| No log | 1.2301 | 294 | 0.5142 | 0.1895 | 0.5142 | 0.7170 |
| No log | 1.2385 | 296 | 0.5124 | 0.2326 | 0.5124 | 0.7158 |
| No log | 1.2469 | 298 | 0.6883 | 0.2414 | 0.6883 | 0.8296 |
| No log | 1.2552 | 300 | 0.9401 | -0.0645 | 0.9401 | 0.9696 |
| No log | 1.2636 | 302 | 0.8402 | -0.1085 | 0.8402 | 0.9166 |
| No log | 1.2720 | 304 | 0.6553 | 0.4107 | 0.6553 | 0.8095 |
| No log | 1.2803 | 306 | 0.7061 | 0.2143 | 0.7061 | 0.8403 |
| No log | 1.2887 | 308 | 0.7534 | 0.0388 | 0.7534 | 0.8680 |
| No log | 1.2971 | 310 | 0.8611 | -0.3276 | 0.8611 | 0.9280 |
| No log | 1.3054 | 312 | 0.8133 | -0.3276 | 0.8133 | 0.9018 |
| No log | 1.3138 | 314 | 0.6023 | 0.2222 | 0.6023 | 0.7761 |
| No log | 1.3222 | 316 | 0.5693 | 0.2222 | 0.5693 | 0.7545 |
| No log | 1.3305 | 318 | 0.6567 | 0.2222 | 0.6567 | 0.8104 |
| No log | 1.3389 | 320 | 0.8591 | -0.3276 | 0.8591 | 0.9269 |
| No log | 1.3473 | 322 | 1.0006 | -0.4667 | 1.0006 | 1.0003 |
| No log | 1.3556 | 324 | 1.0115 | -0.2791 | 1.0115 | 1.0057 |
| No log | 1.3640 | 326 | 0.9064 | -0.2791 | 0.9064 | 0.9521 |
| No log | 1.3724 | 328 | 0.7814 | 0.0 | 0.7814 | 0.8840 |
| No log | 1.3808 | 330 | 0.6320 | 0.2326 | 0.6320 | 0.7950 |
| No log | 1.3891 | 332 | 0.6006 | 0.2326 | 0.6006 | 0.7750 |
| No log | 1.3975 | 334 | 0.5837 | 0.2326 | 0.5837 | 0.7640 |
| No log | 1.4059 | 336 | 0.6770 | 0.0 | 0.6770 | 0.8228 |
| No log | 1.4142 | 338 | 0.9019 | 0.0 | 0.9019 | 0.9497 |
| No log | 1.4226 | 340 | 1.0005 | -0.3883 | 1.0005 | 1.0002 |
| No log | 1.4310 | 342 | 0.9550 | -0.3276 | 0.9550 | 0.9772 |
| No log | 1.4393 | 344 | 0.7675 | 0.0 | 0.7675 | 0.8761 |
| No log | 1.4477 | 346 | 0.6199 | 0.0 | 0.6199 | 0.7873 |
| No log | 1.4561 | 348 | 0.6133 | 0.2326 | 0.6133 | 0.7831 |
| No log | 1.4644 | 350 | 0.6153 | 0.0 | 0.6153 | 0.7844 |
| No log | 1.4728 | 352 | 0.6164 | 0.0 | 0.6164 | 0.7851 |
| No log | 1.4812 | 354 | 0.6342 | 0.0 | 0.6342 | 0.7964 |
| No log | 1.4895 | 356 | 0.6315 | 0.0 | 0.6315 | 0.7947 |
| No log | 1.4979 | 358 | 0.6668 | 0.0 | 0.6668 | 0.8166 |
| No log | 1.5063 | 360 | 0.6654 | 0.0 | 0.6654 | 0.8157 |
| No log | 1.5146 | 362 | 0.6303 | 0.2326 | 0.6303 | 0.7939 |
| No log | 1.5230 | 364 | 0.6429 | 0.2222 | 0.6429 | 0.8018 |
| No log | 1.5314 | 366 | 0.6055 | 0.2326 | 0.6055 | 0.7781 |
| No log | 1.5397 | 368 | 0.5789 | 0.2326 | 0.5789 | 0.7609 |
| No log | 1.5481 | 370 | 0.5730 | 0.2326 | 0.5730 | 0.7570 |
| No log | 1.5565 | 372 | 0.6534 | 0.4107 | 0.6534 | 0.8083 |
| No log | 1.5649 | 374 | 0.7879 | 0.1987 | 0.7879 | 0.8876 |
| No log | 1.5732 | 376 | 0.7408 | 0.56 | 0.7408 | 0.8607 |
| No log | 1.5816 | 378 | 0.6167 | 0.2326 | 0.6167 | 0.7853 |
| No log | 1.5900 | 380 | 0.6598 | 0.0222 | 0.6598 | 0.8123 |
| No log | 1.5983 | 382 | 0.7173 | 0.0222 | 0.7173 | 0.8469 |
| No log | 1.6067 | 384 | 0.6783 | 0.0222 | 0.6783 | 0.8236 |
| No log | 1.6151 | 386 | 0.5692 | 0.0 | 0.5692 | 0.7544 |
| No log | 1.6234 | 388 | 0.4973 | 0.2326 | 0.4973 | 0.7052 |
| No log | 1.6318 | 390 | 0.5155 | 0.2326 | 0.5155 | 0.7180 |
| No log | 1.6402 | 392 | 0.5736 | 0.1895 | 0.5736 | 0.7573 |
| No log | 1.6485 | 394 | 0.5284 | 0.2326 | 0.5284 | 0.7269 |
| No log | 1.6569 | 396 | 0.5678 | 0.2326 | 0.5678 | 0.7535 |
| No log | 1.6653 | 398 | 0.7001 | 0.2524 | 0.7001 | 0.8367 |
| No log | 1.6736 | 400 | 0.6305 | 0.4107 | 0.6305 | 0.7941 |
| No log | 1.6820 | 402 | 0.6029 | 0.1895 | 0.6029 | 0.7765 |
| No log | 1.6904 | 404 | 0.6233 | 0.4107 | 0.6233 | 0.7895 |
| No log | 1.6987 | 406 | 0.7238 | 0.4107 | 0.7238 | 0.8508 |
| No log | 1.7071 | 408 | 0.7407 | 0.0222 | 0.7407 | 0.8606 |
| No log | 1.7155 | 410 | 0.6839 | 0.0 | 0.6839 | 0.8270 |
| No log | 1.7238 | 412 | 0.6127 | 0.2326 | 0.6127 | 0.7828 |
| No log | 1.7322 | 414 | 0.5960 | 0.2326 | 0.5960 | 0.7720 |
| No log | 1.7406 | 416 | 0.6171 | 0.1239 | 0.6171 | 0.7855 |
| No log | 1.7490 | 418 | 0.6286 | 0.0763 | 0.6286 | 0.7929 |
| No log | 1.7573 | 420 | 0.5867 | 0.1895 | 0.5867 | 0.7659 |
| No log | 1.7657 | 422 | 0.5447 | 0.2326 | 0.5447 | 0.7380 |
| No log | 1.7741 | 424 | 0.6084 | 0.0 | 0.6084 | 0.7800 |
| No log | 1.7824 | 426 | 0.6460 | 0.0222 | 0.6460 | 0.8037 |
| No log | 1.7908 | 428 | 0.5411 | 0.2326 | 0.5411 | 0.7356 |
| No log | 1.7992 | 430 | 0.5233 | 0.2326 | 0.5233 | 0.7234 |
| No log | 1.8075 | 432 | 0.5452 | 0.3889 | 0.5452 | 0.7384 |
| No log | 1.8159 | 434 | 0.5505 | 0.5075 | 0.5505 | 0.7419 |
| No log | 1.8243 | 436 | 0.6399 | 0.56 | 0.6399 | 0.7999 |
| No log | 1.8326 | 438 | 0.9054 | 0.0774 | 0.9054 | 0.9515 |
| No log | 1.8410 | 440 | 0.8728 | 0.0774 | 0.8728 | 0.9342 |
| No log | 1.8494 | 442 | 0.8317 | 0.0774 | 0.8317 | 0.9120 |
| No log | 1.8577 | 444 | 0.5693 | 0.6071 | 0.5693 | 0.7545 |
| No log | 1.8661 | 446 | 0.5996 | 0.0763 | 0.5996 | 0.7743 |
| No log | 1.8745 | 448 | 0.6066 | 0.0763 | 0.6066 | 0.7789 |
| No log | 1.8828 | 450 | 0.5468 | 0.1895 | 0.5468 | 0.7394 |
| No log | 1.8912 | 452 | 0.5565 | 0.2326 | 0.5565 | 0.7460 |
| No log | 1.8996 | 454 | 0.5465 | 0.2326 | 0.5465 | 0.7393 |
| No log | 1.9079 | 456 | 0.4832 | 0.2326 | 0.4832 | 0.6951 |
| No log | 1.9163 | 458 | 0.5214 | 0.1239 | 0.5214 | 0.7221 |
| No log | 1.9247 | 460 | 0.5820 | 0.0763 | 0.5820 | 0.7629 |
| No log | 1.9331 | 462 | 0.4839 | 0.1895 | 0.4839 | 0.6957 |
| No log | 1.9414 | 464 | 0.4780 | 0.2326 | 0.4780 | 0.6914 |
| No log | 1.9498 | 466 | 0.4848 | 0.2326 | 0.4848 | 0.6963 |
| No log | 1.9582 | 468 | 0.4894 | 0.1895 | 0.4894 | 0.6996 |
| No log | 1.9665 | 470 | 0.5198 | 0.1895 | 0.5198 | 0.7210 |
| No log | 1.9749 | 472 | 0.5713 | 0.1239 | 0.5713 | 0.7558 |
| No log | 1.9833 | 474 | 0.5234 | 0.1895 | 0.5234 | 0.7235 |
| No log | 1.9916 | 476 | 0.5215 | 0.2326 | 0.5215 | 0.7221 |
| No log | 2.0 | 478 | 0.5765 | 0.0 | 0.5765 | 0.7593 |
| No log | 2.0084 | 480 | 0.5572 | 0.2326 | 0.5572 | 0.7465 |
| No log | 2.0167 | 482 | 0.5769 | 0.1895 | 0.5769 | 0.7595 |
| No log | 2.0251 | 484 | 0.6077 | 0.1239 | 0.6077 | 0.7796 |
| No log | 2.0335 | 486 | 0.5586 | 0.1895 | 0.5586 | 0.7474 |
| No log | 2.0418 | 488 | 0.5453 | 0.2326 | 0.5453 | 0.7385 |
| No log | 2.0502 | 490 | 0.5391 | 0.2667 | 0.5391 | 0.7342 |
| No log | 2.0586 | 492 | 0.5000 | 0.4444 | 0.5000 | 0.7071 |
| No log | 2.0669 | 494 | 0.4877 | 0.4444 | 0.4877 | 0.6983 |
| No log | 2.0753 | 496 | 0.4434 | 0.4444 | 0.4434 | 0.6659 |
| No log | 2.0837 | 498 | 0.4373 | 0.4444 | 0.4373 | 0.6613 |
| 0.4771 | 2.0921 | 500 | 0.4391 | 0.4444 | 0.4391 | 0.6627 |
| 0.4771 | 2.1004 | 502 | 0.4663 | 0.2326 | 0.4663 | 0.6828 |
| 0.4771 | 2.1088 | 504 | 0.4996 | 0.1538 | 0.4996 | 0.7068 |
| 0.4771 | 2.1172 | 506 | 0.5046 | 0.1538 | 0.5046 | 0.7103 |
| 0.4771 | 2.1255 | 508 | 0.4426 | 0.2326 | 0.4426 | 0.6653 |
| 0.4771 | 2.1339 | 510 | 0.5291 | 0.6071 | 0.5291 | 0.7274 |
| 0.4771 | 2.1423 | 512 | 0.5291 | 0.6071 | 0.5291 | 0.7274 |
| 0.4771 | 2.1506 | 514 | 0.4547 | 0.6071 | 0.4547 | 0.6743 |
| 0.4771 | 2.1590 | 516 | 0.5970 | 0.0984 | 0.5970 | 0.7726 |
| 0.4771 | 2.1674 | 518 | 0.7822 | 0.1637 | 0.7822 | 0.8844 |
| 0.4771 | 2.1757 | 520 | 0.7174 | 0.0571 | 0.7174 | 0.8470 |
| 0.4771 | 2.1841 | 522 | 0.5309 | 0.1239 | 0.5309 | 0.7286 |
| 0.4771 | 2.1925 | 524 | 0.5131 | 0.2326 | 0.5131 | 0.7163 |
| 0.4771 | 2.2008 | 526 | 0.5565 | 0.2326 | 0.5565 | 0.7460 |
| 0.4771 | 2.2092 | 528 | 0.5310 | 0.2326 | 0.5310 | 0.7287 |
| 0.4771 | 2.2176 | 530 | 0.5108 | 0.2326 | 0.5108 | 0.7147 |
| 0.4771 | 2.2259 | 532 | 0.5922 | 0.0984 | 0.5922 | 0.7696 |
| 0.4771 | 2.2343 | 534 | 0.6104 | 0.0984 | 0.6104 | 0.7813 |
| 0.4771 | 2.2427 | 536 | 0.5263 | 0.1895 | 0.5263 | 0.7255 |
| 0.4771 | 2.2510 | 538 | 0.5029 | 0.2326 | 0.5029 | 0.7092 |
| 0.4771 | 2.2594 | 540 | 0.5324 | 0.2326 | 0.5324 | 0.7296 |
| 0.4771 | 2.2678 | 542 | 0.5075 | 0.2326 | 0.5075 | 0.7124 |
| 0.4771 | 2.2762 | 544 | 0.4725 | 0.2326 | 0.4725 | 0.6874 |
| 0.4771 | 2.2845 | 546 | 0.4708 | 0.2326 | 0.4708 | 0.6861 |
| 0.4771 | 2.2929 | 548 | 0.4523 | 0.2326 | 0.4523 | 0.6726 |
| 0.4771 | 2.3013 | 550 | 0.4519 | 0.2326 | 0.4519 | 0.6723 |
| 0.4771 | 2.3096 | 552 | 0.4635 | 0.4444 | 0.4635 | 0.6808 |
| 0.4771 | 2.3180 | 554 | 0.4331 | 0.2326 | 0.4331 | 0.6581 |
| 0.4771 | 2.3264 | 556 | 0.4255 | 0.2326 | 0.4255 | 0.6523 |
| 0.4771 | 2.3347 | 558 | 0.4340 | 0.2326 | 0.4340 | 0.6588 |
| 0.4771 | 2.3431 | 560 | 0.4514 | 0.2326 | 0.4514 | 0.6719 |
| 0.4771 | 2.3515 | 562 | 0.4411 | 0.2326 | 0.4411 | 0.6642 |
| 0.4771 | 2.3598 | 564 | 0.4502 | 0.2326 | 0.4502 | 0.6710 |
| 0.4771 | 2.3682 | 566 | 0.5110 | 0.6071 | 0.5110 | 0.7148 |
| 0.4771 | 2.3766 | 568 | 0.5298 | 0.6071 | 0.5298 | 0.7279 |
| 0.4771 | 2.3849 | 570 | 0.5346 | 0.6071 | 0.5346 | 0.7311 |
| 0.4771 | 2.3933 | 572 | 0.5512 | 0.6071 | 0.5512 | 0.7424 |
| 0.4771 | 2.4017 | 574 | 0.5996 | 0.6071 | 0.5996 | 0.7743 |
| 0.4771 | 2.4100 | 576 | 0.6854 | 0.2222 | 0.6854 | 0.8279 |
| 0.4771 | 2.4184 | 578 | 0.8089 | -0.1440 | 0.8089 | 0.8994 |
| 0.4771 | 2.4268 | 580 | 0.7567 | -0.2222 | 0.7567 | 0.8699 |
| 0.4771 | 2.4351 | 582 | 0.6300 | 0.2326 | 0.6300 | 0.7938 |
| 0.4771 | 2.4435 | 584 | 0.6266 | 0.1538 | 0.6266 | 0.7916 |
| 0.4771 | 2.4519 | 586 | 0.6029 | 0.1895 | 0.6029 | 0.7765 |
| 0.4771 | 2.4603 | 588 | 0.6575 | 0.2326 | 0.6575 | 0.8109 |
| 0.4771 | 2.4686 | 590 | 0.8351 | 0.0435 | 0.8351 | 0.9139 |
| 0.4771 | 2.4770 | 592 | 0.8680 | 0.0530 | 0.8680 | 0.9317 |
| 0.4771 | 2.4854 | 594 | 0.7198 | 0.0320 | 0.7198 | 0.8484 |
| 0.4771 | 2.4937 | 596 | 0.6597 | 0.2222 | 0.6597 | 0.8122 |
| 0.4771 | 2.5021 | 598 | 0.6659 | 0.2222 | 0.6659 | 0.8161 |
| 0.4771 | 2.5105 | 600 | 0.6725 | 0.2222 | 0.6725 | 0.8201 |
| 0.4771 | 2.5188 | 602 | 0.6590 | 0.2222 | 0.6590 | 0.8118 |
| 0.4771 | 2.5272 | 604 | 0.6395 | 0.1895 | 0.6395 | 0.7997 |
| 0.4771 | 2.5356 | 606 | 0.6308 | 0.1895 | 0.6308 | 0.7942 |
| 0.4771 | 2.5439 | 608 | 0.6268 | 0.1895 | 0.6268 | 0.7917 |
| 0.4771 | 2.5523 | 610 | 0.6527 | 0.2222 | 0.6527 | 0.8079 |
| 0.4771 | 2.5607 | 612 | 0.6676 | 0.4107 | 0.6676 | 0.8171 |
| 0.4771 | 2.5690 | 614 | 0.7295 | -0.1786 | 0.7295 | 0.8541 |
| 0.4771 | 2.5774 | 616 | 0.7702 | -0.1786 | 0.7702 | 0.8776 |
| 0.4771 | 2.5858 | 618 | 0.7769 | -0.1786 | 0.7769 | 0.8814 |
| 0.4771 | 2.5941 | 620 | 0.8180 | -0.2222 | 0.8180 | 0.9044 |
| 0.4771 | 2.6025 | 622 | 0.7379 | -0.2222 | 0.7379 | 0.8590 |
| 0.4771 | 2.6109 | 624 | 0.6729 | 0.2326 | 0.6729 | 0.8203 |
| 0.4771 | 2.6192 | 626 | 0.7102 | -0.2222 | 0.7102 | 0.8427 |
| 0.4771 | 2.6276 | 628 | 0.7391 | -0.2222 | 0.7391 | 0.8597 |
| 0.4771 | 2.6360 | 630 | 0.8595 | -0.1786 | 0.8595 | 0.9271 |
| 0.4771 | 2.6444 | 632 | 0.9040 | -0.1786 | 0.9040 | 0.9508 |
| 0.4771 | 2.6527 | 634 | 0.7403 | -0.2222 | 0.7403 | 0.8604 |
| 0.4771 | 2.6611 | 636 | 0.5858 | 0.2326 | 0.5858 | 0.7654 |
| 0.4771 | 2.6695 | 638 | 0.5508 | 0.2326 | 0.5508 | 0.7422 |
| 0.4771 | 2.6778 | 640 | 0.5319 | 0.2326 | 0.5319 | 0.7293 |
| 0.4771 | 2.6862 | 642 | 0.5316 | 0.2326 | 0.5316 | 0.7291 |
| 0.4771 | 2.6946 | 644 | 0.5792 | 0.2326 | 0.5792 | 0.7610 |
| 0.4771 | 2.7029 | 646 | 0.5299 | 0.2326 | 0.5299 | 0.7280 |
| 0.4771 | 2.7113 | 648 | 0.4991 | 0.2326 | 0.4991 | 0.7064 |
| 0.4771 | 2.7197 | 650 | 0.4971 | 0.2326 | 0.4971 | 0.7050 |
| 0.4771 | 2.7280 | 652 | 0.4970 | 0.2326 | 0.4970 | 0.7050 |
| 0.4771 | 2.7364 | 654 | 0.5063 | 0.2326 | 0.5063 | 0.7115 |
| 0.4771 | 2.7448 | 656 | 0.5790 | 0.1538 | 0.5790 | 0.7609 |
| 0.4771 | 2.7531 | 658 | 0.5860 | 0.1538 | 0.5860 | 0.7655 |
| 0.4771 | 2.7615 | 660 | 0.5470 | 0.1895 | 0.5470 | 0.7396 |
| 0.4771 | 2.7699 | 662 | 0.6530 | 0.4444 | 0.6530 | 0.8081 |
| 0.4771 | 2.7782 | 664 | 0.7235 | 0.0320 | 0.7235 | 0.8506 |
| 0.4771 | 2.7866 | 666 | 0.6625 | 0.2326 | 0.6625 | 0.8139 |
| 0.4771 | 2.7950 | 668 | 0.5750 | 0.2326 | 0.5750 | 0.7583 |
| 0.4771 | 2.8033 | 670 | 0.6291 | 0.1895 | 0.6291 | 0.7931 |
| 0.4771 | 2.8117 | 672 | 0.6702 | 0.2092 | 0.6702 | 0.8186 |
| 0.4771 | 2.8201 | 674 | 0.5759 | 0.1895 | 0.5759 | 0.7589 |
| 0.4771 | 2.8285 | 676 | 0.5591 | 0.2326 | 0.5591 | 0.7477 |
| 0.4771 | 2.8368 | 678 | 0.6529 | 0.4444 | 0.6529 | 0.8080 |
| 0.4771 | 2.8452 | 680 | 0.6989 | 0.0179 | 0.6989 | 0.8360 |
| 0.4771 | 2.8536 | 682 | 0.6237 | 0.2326 | 0.6237 | 0.7898 |
| 0.4771 | 2.8619 | 684 | 0.5530 | 0.2326 | 0.5530 | 0.7436 |
| 0.4771 | 2.8703 | 686 | 0.5951 | 0.1538 | 0.5951 | 0.7714 |
| 0.4771 | 2.8787 | 688 | 0.6003 | 0.1538 | 0.6003 | 0.7748 |
| 0.4771 | 2.8870 | 690 | 0.5564 | 0.2326 | 0.5564 | 0.7459 |
| 0.4771 | 2.8954 | 692 | 0.5624 | 0.2326 | 0.5624 | 0.7500 |
| 0.4771 | 2.9038 | 694 | 0.6129 | 0.1239 | 0.6129 | 0.7829 |
| 0.4771 | 2.9121 | 696 | 0.6075 | 0.1239 | 0.6075 | 0.7795 |
| 0.4771 | 2.9205 | 698 | 0.5268 | 0.2326 | 0.5268 | 0.7258 |
| 0.4771 | 2.9289 | 700 | 0.5172 | 0.2326 | 0.5172 | 0.7192 |
| 0.4771 | 2.9372 | 702 | 0.5224 | 0.2326 | 0.5224 | 0.7228 |
| 0.4771 | 2.9456 | 704 | 0.5127 | 0.2326 | 0.5127 | 0.7160 |
| 0.4771 | 2.9540 | 706 | 0.5287 | 0.2326 | 0.5287 | 0.7271 |
| 0.4771 | 2.9623 | 708 | 0.5394 | 0.2326 | 0.5394 | 0.7344 |
| 0.4771 | 2.9707 | 710 | 0.5447 | 0.2326 | 0.5447 | 0.7380 |
| 0.4771 | 2.9791 | 712 | 0.5840 | 0.2326 | 0.5840 | 0.7642 |
| 0.4771 | 2.9874 | 714 | 0.6438 | 0.1239 | 0.6438 | 0.8023 |
| 0.4771 | 2.9958 | 716 | 0.6760 | 0.0571 | 0.6760 | 0.8222 |
| 0.4771 | 3.0042 | 718 | 0.6397 | 0.1239 | 0.6397 | 0.7998 |
| 0.4771 | 3.0126 | 720 | 0.5467 | 0.1895 | 0.5467 | 0.7394 |
| 0.4771 | 3.0209 | 722 | 0.5070 | 0.2326 | 0.5070 | 0.7120 |
| 0.4771 | 3.0293 | 724 | 0.5164 | 0.2326 | 0.5164 | 0.7186 |
| 0.4771 | 3.0377 | 726 | 0.5306 | 0.2326 | 0.5306 | 0.7284 |
| 0.4771 | 3.0460 | 728 | 0.5518 | 0.2326 | 0.5518 | 0.7428 |
| 0.4771 | 3.0544 | 730 | 0.5565 | 0.2326 | 0.5565 | 0.7460 |
| 0.4771 | 3.0628 | 732 | 0.5854 | 0.1895 | 0.5854 | 0.7651 |
| 0.4771 | 3.0711 | 734 | 0.6952 | 0.1239 | 0.6952 | 0.8338 |
| 0.4771 | 3.0795 | 736 | 0.7655 | 0.0763 | 0.7655 | 0.8749 |
| 0.4771 | 3.0879 | 738 | 0.6742 | 0.1895 | 0.6742 | 0.8211 |
| 0.4771 | 3.0962 | 740 | 0.5641 | 0.2326 | 0.5641 | 0.7511 |
| 0.4771 | 3.1046 | 742 | 0.5797 | 0.2326 | 0.5797 | 0.7614 |
| 0.4771 | 3.1130 | 744 | 0.5947 | 0.2326 | 0.5947 | 0.7712 |
| 0.4771 | 3.1213 | 746 | 0.5935 | 0.2326 | 0.5935 | 0.7704 |
| 0.4771 | 3.1297 | 748 | 0.6122 | 0.2326 | 0.6122 | 0.7824 |
| 0.4771 | 3.1381 | 750 | 0.6262 | 0.2326 | 0.6262 | 0.7913 |
| 0.4771 | 3.1464 | 752 | 0.6671 | 0.2326 | 0.6671 | 0.8168 |
| 0.4771 | 3.1548 | 754 | 0.6995 | -0.2222 | 0.6995 | 0.8364 |
| 0.4771 | 3.1632 | 756 | 0.7156 | -0.2222 | 0.7156 | 0.8459 |
| 0.4771 | 3.1715 | 758 | 0.6734 | 0.2326 | 0.6734 | 0.8206 |
| 0.4771 | 3.1799 | 760 | 0.6604 | 0.1895 | 0.6604 | 0.8127 |
| 0.4771 | 3.1883 | 762 | 0.7416 | 0.1538 | 0.7416 | 0.8611 |
| 0.4771 | 3.1967 | 764 | 0.7682 | 0.1538 | 0.7682 | 0.8764 |
| 0.4771 | 3.2050 | 766 | 0.6646 | 0.1895 | 0.6646 | 0.8152 |
| 0.4771 | 3.2134 | 768 | 0.7202 | 0.0179 | 0.7202 | 0.8486 |
| 0.4771 | 3.2218 | 770 | 0.7803 | 0.1987 | 0.7803 | 0.8834 |
| 0.4771 | 3.2301 | 772 | 0.7160 | 0.0179 | 0.7160 | 0.8462 |
| 0.4771 | 3.2385 | 774 | 0.8413 | 0.0984 | 0.8413 | 0.9172 |
| 0.4771 | 3.2469 | 776 | 1.0454 | 0.0351 | 1.0454 | 1.0224 |
| 0.4771 | 3.2552 | 778 | 1.0605 | 0.0222 | 1.0605 | 1.0298 |
| 0.4771 | 3.2636 | 780 | 0.8228 | 0.1538 | 0.8228 | 0.9071 |
| 0.4771 | 3.2720 | 782 | 0.6524 | 0.2326 | 0.6524 | 0.8077 |
| 0.4771 | 3.2803 | 784 | 0.6689 | 0.2326 | 0.6689 | 0.8178 |
| 0.4771 | 3.2887 | 786 | 0.6735 | 0.2326 | 0.6735 | 0.8207 |
| 0.4771 | 3.2971 | 788 | 0.6478 | 0.2326 | 0.6478 | 0.8049 |
| 0.4771 | 3.3054 | 790 | 0.6396 | 0.2326 | 0.6396 | 0.7997 |
| 0.4771 | 3.3138 | 792 | 0.6444 | 0.2326 | 0.6444 | 0.8028 |
| 0.4771 | 3.3222 | 794 | 0.6519 | 0.2326 | 0.6519 | 0.8074 |
| 0.4771 | 3.3305 | 796 | 0.6573 | 0.2326 | 0.6573 | 0.8107 |
| 0.4771 | 3.3389 | 798 | 0.6506 | 0.2326 | 0.6506 | 0.8066 |
| 0.4771 | 3.3473 | 800 | 0.6509 | 0.2326 | 0.6509 | 0.8068 |
| 0.4771 | 3.3556 | 802 | 0.6425 | 0.2326 | 0.6425 | 0.8015 |
| 0.4771 | 3.3640 | 804 | 0.6473 | 0.2326 | 0.6473 | 0.8046 |
| 0.4771 | 3.3724 | 806 | 0.6626 | 0.2080 | 0.6626 | 0.8140 |
| 0.4771 | 3.3808 | 808 | 0.6600 | 0.2080 | 0.6600 | 0.8124 |
| 0.4771 | 3.3891 | 810 | 0.6132 | 0.2326 | 0.6132 | 0.7831 |
| 0.4771 | 3.3975 | 812 | 0.6186 | 0.1895 | 0.6186 | 0.7865 |
| 0.4771 | 3.4059 | 814 | 0.6268 | 0.1895 | 0.6268 | 0.7917 |
| 0.4771 | 3.4142 | 816 | 0.6139 | 0.2326 | 0.6139 | 0.7835 |
| 0.4771 | 3.4226 | 818 | 0.6541 | 0.2326 | 0.6541 | 0.8088 |
| 0.4771 | 3.4310 | 820 | 0.8094 | 0.0704 | 0.8094 | 0.8997 |
| 0.4771 | 3.4393 | 822 | 0.9036 | 0.0833 | 0.9036 | 0.9506 |
| 0.4771 | 3.4477 | 824 | 0.8684 | 0.0833 | 0.8684 | 0.9319 |
| 0.4771 | 3.4561 | 826 | 0.8093 | 0.0179 | 0.8093 | 0.8996 |
| 0.4771 | 3.4644 | 828 | 0.7708 | -0.2222 | 0.7708 | 0.8780 |
| 0.4771 | 3.4728 | 830 | 0.7334 | 0.1895 | 0.7334 | 0.8564 |
| 0.4771 | 3.4812 | 832 | 0.8044 | 0.1538 | 0.8044 | 0.8969 |
| 0.4771 | 3.4895 | 834 | 0.8263 | 0.1538 | 0.8263 | 0.9090 |
| 0.4771 | 3.4979 | 836 | 0.7728 | 0.1538 | 0.7728 | 0.8791 |
| 0.4771 | 3.5063 | 838 | 0.7213 | 0.1538 | 0.7213 | 0.8493 |
| 0.4771 | 3.5146 | 840 | 0.6812 | 0.1895 | 0.6812 | 0.8254 |
| 0.4771 | 3.5230 | 842 | 0.6891 | 0.2326 | 0.6891 | 0.8301 |
| 0.4771 | 3.5314 | 844 | 0.6847 | 0.2326 | 0.6847 | 0.8275 |
| 0.4771 | 3.5397 | 846 | 0.6685 | 0.1895 | 0.6685 | 0.8176 |
| 0.4771 | 3.5481 | 848 | 0.7026 | 0.1538 | 0.7026 | 0.8382 |
| 0.4771 | 3.5565 | 850 | 0.7538 | 0.1239 | 0.7538 | 0.8682 |
| 0.4771 | 3.5649 | 852 | 0.8311 | 0.0763 | 0.8311 | 0.9117 |
| 0.4771 | 3.5732 | 854 | 0.9018 | 0.1637 | 0.9018 | 0.9496 |
| 0.4771 | 3.5816 | 856 | 0.8201 | 0.0763 | 0.8201 | 0.9056 |
| 0.4771 | 3.5900 | 858 | 0.7016 | 0.1239 | 0.7016 | 0.8376 |
| 0.4771 | 3.5983 | 860 | 0.6288 | 0.1895 | 0.6288 | 0.7930 |
| 0.4771 | 3.6067 | 862 | 0.6503 | 0.2326 | 0.6503 | 0.8064 |
| 0.4771 | 3.6151 | 864 | 0.6600 | 0.2326 | 0.6600 | 0.8124 |
| 0.4771 | 3.6234 | 866 | 0.6370 | 0.2326 | 0.6370 | 0.7981 |
| 0.4771 | 3.6318 | 868 | 0.6240 | 0.1895 | 0.6240 | 0.7899 |
| 0.4771 | 3.6402 | 870 | 0.7166 | 0.1538 | 0.7166 | 0.8465 |
| 0.4771 | 3.6485 | 872 | 0.7933 | 0.0763 | 0.7933 | 0.8907 |
| 0.4771 | 3.6569 | 874 | 0.8200 | 0.0571 | 0.8200 | 0.9055 |
| 0.4771 | 3.6653 | 876 | 0.7682 | 0.0571 | 0.7682 | 0.8765 |
| 0.4771 | 3.6736 | 878 | 0.7321 | 0.0984 | 0.7321 | 0.8556 |
| 0.4771 | 3.6820 | 880 | 0.7125 | 0.1239 | 0.7125 | 0.8441 |
| 0.4771 | 3.6904 | 882 | 0.6430 | 0.1538 | 0.6430 | 0.8019 |
| 0.4771 | 3.6987 | 884 | 0.6029 | 0.1895 | 0.6029 | 0.7765 |
| 0.4771 | 3.7071 | 886 | 0.6333 | 0.2326 | 0.6333 | 0.7958 |
| 0.4771 | 3.7155 | 888 | 0.6899 | 0.0 | 0.6899 | 0.8306 |
| 0.4771 | 3.7238 | 890 | 0.7033 | 0.0 | 0.7033 | 0.8386 |
| 0.4771 | 3.7322 | 892 | 0.6609 | 0.0 | 0.6609 | 0.8130 |
| 0.4771 | 3.7406 | 894 | 0.6223 | 0.2326 | 0.6223 | 0.7889 |
| 0.4771 | 3.7490 | 896 | 0.6693 | 0.1895 | 0.6693 | 0.8181 |
| 0.4771 | 3.7573 | 898 | 0.7241 | 0.1239 | 0.7241 | 0.8510 |
| 0.4771 | 3.7657 | 900 | 0.6611 | 0.1895 | 0.6611 | 0.8131 |
| 0.4771 | 3.7741 | 902 | 0.6107 | 0.1895 | 0.6107 | 0.7815 |
| 0.4771 | 3.7824 | 904 | 0.6017 | 0.2326 | 0.6017 | 0.7757 |
| 0.4771 | 3.7908 | 906 | 0.6339 | 0.2326 | 0.6339 | 0.7962 |
| 0.4771 | 3.7992 | 908 | 0.6423 | 0.2326 | 0.6423 | 0.8015 |
| 0.4771 | 3.8075 | 910 | 0.6326 | 0.2326 | 0.6326 | 0.7954 |
| 0.4771 | 3.8159 | 912 | 0.6120 | 0.2326 | 0.6120 | 0.7823 |
| 0.4771 | 3.8243 | 914 | 0.5944 | 0.2326 | 0.5944 | 0.7710 |
| 0.4771 | 3.8326 | 916 | 0.5843 | 0.2326 | 0.5843 | 0.7644 |
| 0.4771 | 3.8410 | 918 | 0.5799 | 0.2326 | 0.5799 | 0.7615 |
| 0.4771 | 3.8494 | 920 | 0.6094 | 0.1895 | 0.6094 | 0.7806 |
| 0.4771 | 3.8577 | 922 | 0.6259 | 0.1895 | 0.6259 | 0.7911 |
| 0.4771 | 3.8661 | 924 | 0.6068 | 0.1895 | 0.6068 | 0.7790 |
| 0.4771 | 3.8745 | 926 | 0.6084 | 0.1895 | 0.6084 | 0.7800 |
| 0.4771 | 3.8828 | 928 | 0.6060 | 0.2326 | 0.6060 | 0.7785 |
| 0.4771 | 3.8912 | 930 | 0.6154 | 0.2326 | 0.6154 | 0.7845 |
| 0.4771 | 3.8996 | 932 | 0.6494 | 0.1895 | 0.6494 | 0.8058 |
| 0.4771 | 3.9079 | 934 | 0.6732 | 0.1895 | 0.6732 | 0.8205 |
| 0.4771 | 3.9163 | 936 | 0.6708 | 0.1895 | 0.6708 | 0.8190 |
| 0.4771 | 3.9247 | 938 | 0.6724 | 0.1895 | 0.6724 | 0.8200 |
| 0.4771 | 3.9331 | 940 | 0.7043 | 0.1538 | 0.7043 | 0.8392 |
| 0.4771 | 3.9414 | 942 | 0.7189 | 0.1239 | 0.7189 | 0.8479 |
| 0.4771 | 3.9498 | 944 | 0.6986 | 0.1538 | 0.6986 | 0.8358 |
| 0.4771 | 3.9582 | 946 | 0.6845 | 0.1895 | 0.6845 | 0.8273 |
| 0.4771 | 3.9665 | 948 | 0.7047 | 0.2080 | 0.7047 | 0.8395 |
| 0.4771 | 3.9749 | 950 | 0.7371 | 0.2080 | 0.7371 | 0.8585 |
| 0.4771 | 3.9833 | 952 | 0.7252 | 0.2080 | 0.7252 | 0.8516 |
| 0.4771 | 3.9916 | 954 | 0.7156 | 0.1791 | 0.7156 | 0.8460 |
| 0.4771 | 4.0 | 956 | 0.7270 | 0.2080 | 0.7270 | 0.8527 |
| 0.4771 | 4.0084 | 958 | 0.7173 | 0.2080 | 0.7173 | 0.8470 |
| 0.4771 | 4.0167 | 960 | 0.6871 | 0.2080 | 0.6871 | 0.8289 |
| 0.4771 | 4.0251 | 962 | 0.6702 | 0.5455 | 0.6702 | 0.8187 |
| 0.4771 | 4.0335 | 964 | 0.6683 | 0.4444 | 0.6683 | 0.8175 |
| 0.4771 | 4.0418 | 966 | 0.6604 | 0.4444 | 0.6604 | 0.8126 |
| 0.4771 | 4.0502 | 968 | 0.6597 | 0.1895 | 0.6597 | 0.8122 |
| 0.4771 | 4.0586 | 970 | 0.7326 | 0.1239 | 0.7326 | 0.8559 |
| 0.4771 | 4.0669 | 972 | 0.7745 | 0.1239 | 0.7745 | 0.8801 |
| 0.4771 | 4.0753 | 974 | 0.7822 | 0.1239 | 0.7822 | 0.8844 |
| 0.4771 | 4.0837 | 976 | 0.7390 | 0.1239 | 0.7390 | 0.8596 |
| 0.4771 | 4.0921 | 978 | 0.6733 | 0.1895 | 0.6733 | 0.8205 |
| 0.4771 | 4.1004 | 980 | 0.6588 | 0.2326 | 0.6588 | 0.8117 |
| 0.4771 | 4.1088 | 982 | 0.6979 | -0.2222 | 0.6979 | 0.8354 |
| 0.4771 | 4.1172 | 984 | 0.7045 | -0.2222 | 0.7045 | 0.8394 |
| 0.4771 | 4.1255 | 986 | 0.6927 | -0.2222 | 0.6927 | 0.8323 |
| 0.4771 | 4.1339 | 988 | 0.6908 | 0.1895 | 0.6908 | 0.8312 |
| 0.4771 | 4.1423 | 990 | 0.6989 | 0.1895 | 0.6989 | 0.8360 |
| 0.4771 | 4.1506 | 992 | 0.7063 | 0.1895 | 0.7063 | 0.8404 |
| 0.4771 | 4.1590 | 994 | 0.6914 | 0.1895 | 0.6914 | 0.8315 |
| 0.4771 | 4.1674 | 996 | 0.6810 | 0.1895 | 0.6810 | 0.8252 |
| 0.4771 | 4.1757 | 998 | 0.6910 | -0.2222 | 0.6910 | 0.8313 |
| 0.1126 | 4.1841 | 1000 | 0.6779 | -0.2222 | 0.6779 | 0.8233 |
| 0.1126 | 4.1925 | 1002 | 0.6710 | 0.2326 | 0.6710 | 0.8192 |
| 0.1126 | 4.2008 | 1004 | 0.6772 | 0.1895 | 0.6772 | 0.8229 |
| 0.1126 | 4.2092 | 1006 | 0.7026 | 0.1895 | 0.7026 | 0.8382 |
| 0.1126 | 4.2176 | 1008 | 0.7365 | -0.2222 | 0.7365 | 0.8582 |
| 0.1126 | 4.2259 | 1010 | 0.7733 | -0.2222 | 0.7733 | 0.8794 |
| 0.1126 | 4.2343 | 1012 | 0.7898 | -0.2222 | 0.7898 | 0.8887 |
| 0.1126 | 4.2427 | 1014 | 0.7943 | -0.2222 | 0.7943 | 0.8912 |
| 0.1126 | 4.2510 | 1016 | 0.7741 | -0.2222 | 0.7741 | 0.8799 |
| 0.1126 | 4.2594 | 1018 | 0.7525 | -0.2222 | 0.7525 | 0.8675 |
| 0.1126 | 4.2678 | 1020 | 0.7350 | -0.2222 | 0.7350 | 0.8573 |
| 0.1126 | 4.2762 | 1022 | 0.7582 | -0.2222 | 0.7582 | 0.8708 |
| 0.1126 | 4.2845 | 1024 | 0.7685 | -0.2222 | 0.7685 | 0.8767 |
| 0.1126 | 4.2929 | 1026 | 0.7435 | -0.2222 | 0.7435 | 0.8622 |
| 0.1126 | 4.3013 | 1028 | 0.7355 | -0.2222 | 0.7355 | 0.8576 |
| 0.1126 | 4.3096 | 1030 | 0.7326 | -0.2222 | 0.7326 | 0.8559 |
| 0.1126 | 4.3180 | 1032 | 0.7258 | -0.2222 | 0.7258 | 0.8520 |
| 0.1126 | 4.3264 | 1034 | 0.7045 | -0.2222 | 0.7046 | 0.8394 |
| 0.1126 | 4.3347 | 1036 | 0.6922 | -0.2222 | 0.6922 | 0.8320 |
| 0.1126 | 4.3431 | 1038 | 0.7243 | 0.0179 | 0.7243 | 0.8511 |
| 0.1126 | 4.3515 | 1040 | 0.7024 | 0.0179 | 0.7024 | 0.8381 |
| 0.1126 | 4.3598 | 1042 | 0.6911 | -0.2222 | 0.6911 | 0.8313 |
| 0.1126 | 4.3682 | 1044 | 0.6539 | 0.2326 | 0.6539 | 0.8086 |
| 0.1126 | 4.3766 | 1046 | 0.6521 | 0.2326 | 0.6521 | 0.8075 |
| 0.1126 | 4.3849 | 1048 | 0.6736 | 0.1895 | 0.6736 | 0.8207 |
| 0.1126 | 4.3933 | 1050 | 0.6595 | 0.2326 | 0.6595 | 0.8121 |
| 0.1126 | 4.4017 | 1052 | 0.6687 | 0.2326 | 0.6687 | 0.8177 |
| 0.1126 | 4.4100 | 1054 | 0.6726 | 0.2326 | 0.6726 | 0.8201 |
| 0.1126 | 4.4184 | 1056 | 0.6675 | 0.2326 | 0.6675 | 0.8170 |
| 0.1126 | 4.4268 | 1058 | 0.6924 | -0.2222 | 0.6924 | 0.8321 |
| 0.1126 | 4.4351 | 1060 | 0.7025 | -0.2222 | 0.7025 | 0.8381 |
| 0.1126 | 4.4435 | 1062 | 0.6851 | 0.2326 | 0.6851 | 0.8277 |
| 0.1126 | 4.4519 | 1064 | 0.6874 | 0.1895 | 0.6874 | 0.8291 |
| 0.1126 | 4.4603 | 1066 | 0.7407 | 0.1538 | 0.7407 | 0.8606 |
| 0.1126 | 4.4686 | 1068 | 0.7513 | 0.1538 | 0.7513 | 0.8668 |
| 0.1126 | 4.4770 | 1070 | 0.7494 | 0.1538 | 0.7494 | 0.8657 |
| 0.1126 | 4.4854 | 1072 | 0.7594 | 0.1538 | 0.7594 | 0.8714 |
| 0.1126 | 4.4937 | 1074 | 0.7372 | 0.1895 | 0.7372 | 0.8586 |
| 0.1126 | 4.5021 | 1076 | 0.7109 | 0.1895 | 0.7109 | 0.8431 |
| 0.1126 | 4.5105 | 1078 | 0.6963 | 0.1895 | 0.6963 | 0.8345 |
| 0.1126 | 4.5188 | 1080 | 0.7134 | 0.1895 | 0.7134 | 0.8446 |
| 0.1126 | 4.5272 | 1082 | 0.7174 | 0.1895 | 0.7174 | 0.8470 |
| 0.1126 | 4.5356 | 1084 | 0.6999 | 0.1895 | 0.6999 | 0.8366 |
| 0.1126 | 4.5439 | 1086 | 0.6906 | 0.1895 | 0.6906 | 0.8310 |
| 0.1126 | 4.5523 | 1088 | 0.6981 | 0.1895 | 0.6981 | 0.8355 |
| 0.1126 | 4.5607 | 1090 | 0.7199 | 0.1895 | 0.7199 | 0.8485 |
| 0.1126 | 4.5690 | 1092 | 0.7298 | 0.1538 | 0.7298 | 0.8543 |
| 0.1126 | 4.5774 | 1094 | 0.7268 | 0.1895 | 0.7268 | 0.8525 |
| 0.1126 | 4.5858 | 1096 | 0.7011 | 0.1895 | 0.7011 | 0.8373 |
| 0.1126 | 4.5941 | 1098 | 0.6787 | 0.1895 | 0.6787 | 0.8239 |
| 0.1126 | 4.6025 | 1100 | 0.6763 | 0.2326 | 0.6763 | 0.8224 |
| 0.1126 | 4.6109 | 1102 | 0.6786 | 0.2326 | 0.6786 | 0.8238 |
| 0.1126 | 4.6192 | 1104 | 0.6646 | 0.2326 | 0.6646 | 0.8152 |
| 0.1126 | 4.6276 | 1106 | 0.6705 | 0.2326 | 0.6705 | 0.8188 |
| 0.1126 | 4.6360 | 1108 | 0.7564 | 0.1538 | 0.7564 | 0.8697 |
| 0.1126 | 4.6444 | 1110 | 0.8337 | 0.0571 | 0.8337 | 0.9131 |
| 0.1126 | 4.6527 | 1112 | 0.8092 | 0.0763 | 0.8092 | 0.8996 |
| 0.1126 | 4.6611 | 1114 | 0.7215 | 0.1538 | 0.7215 | 0.8494 |
| 0.1126 | 4.6695 | 1116 | 0.6836 | -0.2222 | 0.6836 | 0.8268 |
| 0.1126 | 4.6778 | 1118 | 0.7453 | 0.2080 | 0.7453 | 0.8633 |
| 0.1126 | 4.6862 | 1120 | 0.7561 | 0.2080 | 0.7561 | 0.8696 |
| 0.1126 | 4.6946 | 1122 | 0.7198 | -0.2222 | 0.7198 | 0.8484 |
| 0.1126 | 4.7029 | 1124 | 0.7008 | -0.2222 | 0.7008 | 0.8371 |
| 0.1126 | 4.7113 | 1126 | 0.7104 | 0.1895 | 0.7104 | 0.8428 |
| 0.1126 | 4.7197 | 1128 | 0.7116 | 0.1895 | 0.7116 | 0.8435 |
| 0.1126 | 4.7280 | 1130 | 0.7109 | 0.1895 | 0.7109 | 0.8432 |
| 0.1126 | 4.7364 | 1132 | 0.6963 | 0.2326 | 0.6963 | 0.8344 |
| 0.1126 | 4.7448 | 1134 | 0.6893 | 0.2326 | 0.6893 | 0.8302 |
| 0.1126 | 4.7531 | 1136 | 0.7166 | -0.2222 | 0.7166 | 0.8465 |
| 0.1126 | 4.7615 | 1138 | 0.7328 | -0.2222 | 0.7328 | 0.8560 |
| 0.1126 | 4.7699 | 1140 | 0.7401 | 0.2080 | 0.7401 | 0.8603 |
| 0.1126 | 4.7782 | 1142 | 0.7053 | -0.2222 | 0.7053 | 0.8398 |
| 0.1126 | 4.7866 | 1144 | 0.7026 | 0.1895 | 0.7026 | 0.8382 |
| 0.1126 | 4.7950 | 1146 | 0.7240 | 0.1538 | 0.7240 | 0.8509 |
| 0.1126 | 4.8033 | 1148 | 0.7063 | 0.1895 | 0.7063 | 0.8404 |
| 0.1126 | 4.8117 | 1150 | 0.6815 | 0.1895 | 0.6815 | 0.8255 |
| 0.1126 | 4.8201 | 1152 | 0.6839 | -0.2222 | 0.6839 | 0.8270 |
| 0.1126 | 4.8285 | 1154 | 0.6965 | -0.2222 | 0.6965 | 0.8345 |
| 0.1126 | 4.8368 | 1156 | 0.6697 | -0.2222 | 0.6697 | 0.8184 |
| 0.1126 | 4.8452 | 1158 | 0.6594 | 0.1895 | 0.6594 | 0.8120 |
| 0.1126 | 4.8536 | 1160 | 0.7607 | 0.1239 | 0.7607 | 0.8722 |
| 0.1126 | 4.8619 | 1162 | 0.8845 | 0.0763 | 0.8845 | 0.9405 |
| 0.1126 | 4.8703 | 1164 | 0.9284 | 0.0763 | 0.9284 | 0.9635 |
| 0.1126 | 4.8787 | 1166 | 0.8884 | 0.0763 | 0.8884 | 0.9426 |
| 0.1126 | 4.8870 | 1168 | 0.7735 | 0.1239 | 0.7735 | 0.8795 |
| 0.1126 | 4.8954 | 1170 | 0.6586 | 0.1895 | 0.6586 | 0.8116 |
| 0.1126 | 4.9038 | 1172 | 0.6566 | 0.2326 | 0.6566 | 0.8103 |
| 0.1126 | 4.9121 | 1174 | 0.6578 | 0.2326 | 0.6578 | 0.8110 |
| 0.1126 | 4.9205 | 1176 | 0.6687 | 0.2326 | 0.6687 | 0.8177 |
| 0.1126 | 4.9289 | 1178 | 0.7413 | 0.1239 | 0.7413 | 0.8610 |
| 0.1126 | 4.9372 | 1180 | 0.7931 | 0.1239 | 0.7931 | 0.8906 |
| 0.1126 | 4.9456 | 1182 | 0.8106 | 0.1239 | 0.8106 | 0.9003 |
| 0.1126 | 4.9540 | 1184 | 0.7993 | 0.1239 | 0.7993 | 0.8940 |
| 0.1126 | 4.9623 | 1186 | 0.7795 | 0.1239 | 0.7795 | 0.8829 |
| 0.1126 | 4.9707 | 1188 | 0.7585 | 0.1239 | 0.7585 | 0.8709 |
| 0.1126 | 4.9791 | 1190 | 0.7427 | 0.1239 | 0.7427 | 0.8618 |
| 0.1126 | 4.9874 | 1192 | 0.7519 | 0.1239 | 0.7519 | 0.8671 |
| 0.1126 | 4.9958 | 1194 | 0.7453 | 0.1239 | 0.7453 | 0.8633 |
| 0.1126 | 5.0042 | 1196 | 0.7302 | 0.0984 | 0.7302 | 0.8545 |
| 0.1126 | 5.0126 | 1198 | 0.7096 | 0.0763 | 0.7096 | 0.8424 |
| 0.1126 | 5.0209 | 1200 | 0.6844 | 0.0984 | 0.6844 | 0.8273 |
| 0.1126 | 5.0293 | 1202 | 0.6649 | 0.1239 | 0.6649 | 0.8154 |
| 0.1126 | 5.0377 | 1204 | 0.6193 | 0.1538 | 0.6193 | 0.7869 |
| 0.1126 | 5.0460 | 1206 | 0.5947 | 0.2326 | 0.5947 | 0.7712 |
| 0.1126 | 5.0544 | 1208 | 0.5849 | 0.2326 | 0.5849 | 0.7648 |
| 0.1126 | 5.0628 | 1210 | 0.5823 | 0.2326 | 0.5823 | 0.7631 |
| 0.1126 | 5.0711 | 1212 | 0.6087 | 0.1239 | 0.6087 | 0.7802 |
| 0.1126 | 5.0795 | 1214 | 0.5978 | 0.1239 | 0.5978 | 0.7732 |
| 0.1126 | 5.0879 | 1216 | 0.5833 | 0.1895 | 0.5833 | 0.7637 |
| 0.1126 | 5.0962 | 1218 | 0.5730 | 0.2326 | 0.5730 | 0.7570 |
| 0.1126 | 5.1046 | 1220 | 0.5729 | 0.2326 | 0.5729 | 0.7569 |
| 0.1126 | 5.1130 | 1222 | 0.5862 | 0.2326 | 0.5862 | 0.7656 |
| 0.1126 | 5.1213 | 1224 | 0.5982 | 0.2326 | 0.5982 | 0.7734 |
| 0.1126 | 5.1297 | 1226 | 0.6071 | 0.2326 | 0.6071 | 0.7792 |
| 0.1126 | 5.1381 | 1228 | 0.6119 | 0.2326 | 0.6119 | 0.7822 |
| 0.1126 | 5.1464 | 1230 | 0.6382 | 0.1895 | 0.6382 | 0.7989 |
| 0.1126 | 5.1548 | 1232 | 0.7050 | 0.1538 | 0.7050 | 0.8397 |
| 0.1126 | 5.1632 | 1234 | 0.8131 | 0.0984 | 0.8131 | 0.9017 |
| 0.1126 | 5.1715 | 1236 | 0.8165 | 0.0984 | 0.8165 | 0.9036 |
| 0.1126 | 5.1799 | 1238 | 0.8022 | 0.1239 | 0.8022 | 0.8957 |
| 0.1126 | 5.1883 | 1240 | 0.7474 | 0.1239 | 0.7474 | 0.8645 |
| 0.1126 | 5.1967 | 1242 | 0.7039 | 0.1895 | 0.7039 | 0.8390 |
| 0.1126 | 5.2050 | 1244 | 0.6711 | 0.2326 | 0.6711 | 0.8192 |
| 0.1126 | 5.2134 | 1246 | 0.6598 | 0.2326 | 0.6598 | 0.8123 |
| 0.1126 | 5.2218 | 1248 | 0.6549 | 0.2326 | 0.6549 | 0.8093 |
| 0.1126 | 5.2301 | 1250 | 0.6644 | 0.1895 | 0.6644 | 0.8151 |
| 0.1126 | 5.2385 | 1252 | 0.7084 | 0.1239 | 0.7084 | 0.8417 |
| 0.1126 | 5.2469 | 1254 | 0.7302 | 0.1239 | 0.7302 | 0.8545 |
| 0.1126 | 5.2552 | 1256 | 0.7167 | 0.1239 | 0.7167 | 0.8466 |
| 0.1126 | 5.2636 | 1258 | 0.6786 | 0.1239 | 0.6786 | 0.8237 |
| 0.1126 | 5.2720 | 1260 | 0.6720 | 0.1239 | 0.6720 | 0.8198 |
| 0.1126 | 5.2803 | 1262 | 0.6700 | 0.1239 | 0.6700 | 0.8185 |
| 0.1126 | 5.2887 | 1264 | 0.6699 | 0.1239 | 0.6699 | 0.8185 |
| 0.1126 | 5.2971 | 1266 | 0.6750 | 0.1538 | 0.6750 | 0.8216 |
| 0.1126 | 5.3054 | 1268 | 0.6760 | 0.1538 | 0.6760 | 0.8222 |
| 0.1126 | 5.3138 | 1270 | 0.6798 | 0.1895 | 0.6798 | 0.8245 |
| 0.1126 | 5.3222 | 1272 | 0.6753 | 0.2326 | 0.6753 | 0.8217 |
| 0.1126 | 5.3305 | 1274 | 0.6831 | 0.1538 | 0.6831 | 0.8265 |
| 0.1126 | 5.3389 | 1276 | 0.6968 | 0.1239 | 0.6968 | 0.8347 |
| 0.1126 | 5.3473 | 1278 | 0.6965 | 0.1239 | 0.6965 | 0.8346 |
| 0.1126 | 5.3556 | 1280 | 0.6894 | 0.1239 | 0.6894 | 0.8303 |
| 0.1126 | 5.3640 | 1282 | 0.6850 | 0.1239 | 0.6850 | 0.8276 |
| 0.1126 | 5.3724 | 1284 | 0.6754 | 0.1538 | 0.6754 | 0.8218 |
| 0.1126 | 5.3808 | 1286 | 0.6618 | 0.2326 | 0.6618 | 0.8135 |
| 0.1126 | 5.3891 | 1288 | 0.6556 | 0.2326 | 0.6556 | 0.8097 |
| 0.1126 | 5.3975 | 1290 | 0.6573 | 0.2326 | 0.6573 | 0.8107 |
| 0.1126 | 5.4059 | 1292 | 0.6663 | 0.2326 | 0.6663 | 0.8163 |
| 0.1126 | 5.4142 | 1294 | 0.6659 | 0.2326 | 0.6659 | 0.8160 |
| 0.1126 | 5.4226 | 1296 | 0.6535 | 0.2326 | 0.6535 | 0.8084 |
| 0.1126 | 5.4310 | 1298 | 0.6400 | 0.2326 | 0.6400 | 0.8000 |
| 0.1126 | 5.4393 | 1300 | 0.6489 | 0.1895 | 0.6489 | 0.8055 |
| 0.1126 | 5.4477 | 1302 | 0.6688 | 0.1538 | 0.6688 | 0.8178 |
| 0.1126 | 5.4561 | 1304 | 0.6795 | 0.1239 | 0.6795 | 0.8243 |
| 0.1126 | 5.4644 | 1306 | 0.6958 | 0.1239 | 0.6958 | 0.8341 |
| 0.1126 | 5.4728 | 1308 | 0.7027 | 0.1239 | 0.7027 | 0.8383 |
| 0.1126 | 5.4812 | 1310 | 0.6879 | 0.1239 | 0.6879 | 0.8294 |
| 0.1126 | 5.4895 | 1312 | 0.6547 | 0.1538 | 0.6547 | 0.8091 |
| 0.1126 | 5.4979 | 1314 | 0.6337 | 0.2326 | 0.6337 | 0.7960 |
| 0.1126 | 5.5063 | 1316 | 0.6310 | 0.2326 | 0.6310 | 0.7944 |
| 0.1126 | 5.5146 | 1318 | 0.6414 | 0.2326 | 0.6414 | 0.8008 |
| 0.1126 | 5.5230 | 1320 | 0.6363 | 0.2326 | 0.6363 | 0.7977 |
| 0.1126 | 5.5314 | 1322 | 0.6317 | 0.2326 | 0.6317 | 0.7948 |
| 0.1126 | 5.5397 | 1324 | 0.6261 | 0.2326 | 0.6261 | 0.7913 |
| 0.1126 | 5.5481 | 1326 | 0.6235 | 0.2326 | 0.6235 | 0.7896 |
| 0.1126 | 5.5565 | 1328 | 0.6245 | 0.2326 | 0.6245 | 0.7902 |
| 0.1126 | 5.5649 | 1330 | 0.6309 | 0.2326 | 0.6309 | 0.7943 |
| 0.1126 | 5.5732 | 1332 | 0.6310 | 0.2326 | 0.6310 | 0.7943 |
| 0.1126 | 5.5816 | 1334 | 0.6216 | 0.2326 | 0.6216 | 0.7884 |
| 0.1126 | 5.5900 | 1336 | 0.6150 | 0.1895 | 0.6150 | 0.7842 |
| 0.1126 | 5.5983 | 1338 | 0.6316 | 0.1895 | 0.6316 | 0.7947 |
| 0.1126 | 5.6067 | 1340 | 0.6603 | 0.1239 | 0.6603 | 0.8126 |
| 0.1126 | 5.6151 | 1342 | 0.6433 | 0.1538 | 0.6433 | 0.8020 |
| 0.1126 | 5.6234 | 1344 | 0.6112 | 0.1895 | 0.6112 | 0.7818 |
| 0.1126 | 5.6318 | 1346 | 0.6175 | 0.2326 | 0.6175 | 0.7858 |
| 0.1126 | 5.6402 | 1348 | 0.6266 | 0.2326 | 0.6266 | 0.7916 |
| 0.1126 | 5.6485 | 1350 | 0.6324 | 0.2326 | 0.6324 | 0.7953 |
| 0.1126 | 5.6569 | 1352 | 0.6257 | 0.2326 | 0.6257 | 0.7910 |
| 0.1126 | 5.6653 | 1354 | 0.6023 | 0.2326 | 0.6023 | 0.7761 |
| 0.1126 | 5.6736 | 1356 | 0.6111 | 0.1895 | 0.6111 | 0.7817 |
| 0.1126 | 5.6820 | 1358 | 0.6504 | 0.1239 | 0.6504 | 0.8065 |
| 0.1126 | 5.6904 | 1360 | 0.7041 | 0.1239 | 0.7041 | 0.8391 |
| 0.1126 | 5.6987 | 1362 | 0.7056 | 0.1239 | 0.7056 | 0.8400 |
| 0.1126 | 5.7071 | 1364 | 0.6666 | 0.1239 | 0.6666 | 0.8164 |
| 0.1126 | 5.7155 | 1366 | 0.6377 | 0.1538 | 0.6377 | 0.7986 |
| 0.1126 | 5.7238 | 1368 | 0.6145 | 0.2326 | 0.6145 | 0.7839 |
| 0.1126 | 5.7322 | 1370 | 0.6314 | 0.2326 | 0.6314 | 0.7946 |
| 0.1126 | 5.7406 | 1372 | 0.6501 | 0.2326 | 0.6501 | 0.8063 |
| 0.1126 | 5.7490 | 1374 | 0.6485 | 0.2326 | 0.6485 | 0.8053 |
| 0.1126 | 5.7573 | 1376 | 0.6480 | 0.2326 | 0.6480 | 0.8050 |
| 0.1126 | 5.7657 | 1378 | 0.6452 | 0.2326 | 0.6452 | 0.8032 |
| 0.1126 | 5.7741 | 1380 | 0.6397 | 0.2326 | 0.6397 | 0.7998 |
| 0.1126 | 5.7824 | 1382 | 0.6345 | 0.2326 | 0.6345 | 0.7965 |
| 0.1126 | 5.7908 | 1384 | 0.6449 | 0.2326 | 0.6449 | 0.8031 |
| 0.1126 | 5.7992 | 1386 | 0.6766 | 0.1239 | 0.6766 | 0.8226 |
| 0.1126 | 5.8075 | 1388 | 0.7397 | 0.1239 | 0.7397 | 0.8600 |
| 0.1126 | 5.8159 | 1390 | 0.7991 | 0.1239 | 0.7991 | 0.8939 |
| 0.1126 | 5.8243 | 1392 | 0.8462 | 0.0984 | 0.8462 | 0.9199 |
| 0.1126 | 5.8326 | 1394 | 0.8095 | 0.1239 | 0.8095 | 0.8997 |
| 0.1126 | 5.8410 | 1396 | 0.7149 | 0.1538 | 0.7149 | 0.8455 |
| 0.1126 | 5.8494 | 1398 | 0.6615 | 0.2326 | 0.6615 | 0.8133 |
| 0.1126 | 5.8577 | 1400 | 0.6733 | 0.2326 | 0.6733 | 0.8206 |
| 0.1126 | 5.8661 | 1402 | 0.6851 | -0.2222 | 0.6851 | 0.8277 |
| 0.1126 | 5.8745 | 1404 | 0.6726 | 0.2326 | 0.6726 | 0.8201 |
| 0.1126 | 5.8828 | 1406 | 0.6603 | 0.2326 | 0.6603 | 0.8126 |
| 0.1126 | 5.8912 | 1408 | 0.6531 | 0.2326 | 0.6531 | 0.8081 |
| 0.1126 | 5.8996 | 1410 | 0.6556 | 0.2326 | 0.6556 | 0.8097 |
| 0.1126 | 5.9079 | 1412 | 0.6605 | 0.2326 | 0.6605 | 0.8127 |
| 0.1126 | 5.9163 | 1414 | 0.6660 | 0.1895 | 0.6660 | 0.8161 |
| 0.1126 | 5.9247 | 1416 | 0.6748 | 0.1895 | 0.6748 | 0.8215 |
| 0.1126 | 5.9331 | 1418 | 0.6630 | 0.2326 | 0.6630 | 0.8143 |
| 0.1126 | 5.9414 | 1420 | 0.6514 | 0.2326 | 0.6514 | 0.8071 |
| 0.1126 | 5.9498 | 1422 | 0.6373 | 0.2326 | 0.6373 | 0.7983 |
| 0.1126 | 5.9582 | 1424 | 0.6310 | 0.2326 | 0.6310 | 0.7943 |
| 0.1126 | 5.9665 | 1426 | 0.6330 | 0.2326 | 0.6330 | 0.7956 |
| 0.1126 | 5.9749 | 1428 | 0.6521 | 0.2326 | 0.6521 | 0.8075 |
| 0.1126 | 5.9833 | 1430 | 0.6773 | 0.1895 | 0.6773 | 0.8230 |
| 0.1126 | 5.9916 | 1432 | 0.7204 | 0.1239 | 0.7204 | 0.8487 |
| 0.1126 | 6.0 | 1434 | 0.7460 | 0.1239 | 0.7460 | 0.8637 |
| 0.1126 | 6.0084 | 1436 | 0.7432 | 0.1239 | 0.7432 | 0.8621 |
| 0.1126 | 6.0167 | 1438 | 0.7035 | 0.1239 | 0.7035 | 0.8388 |
| 0.1126 | 6.0251 | 1440 | 0.6673 | 0.1895 | 0.6673 | 0.8169 |
| 0.1126 | 6.0335 | 1442 | 0.6626 | 0.2326 | 0.6626 | 0.8140 |
| 0.1126 | 6.0418 | 1444 | 0.6713 | 0.1895 | 0.6713 | 0.8194 |
| 0.1126 | 6.0502 | 1446 | 0.6844 | 0.1895 | 0.6844 | 0.8273 |
| 0.1126 | 6.0586 | 1448 | 0.6647 | 0.2326 | 0.6647 | 0.8153 |
| 0.1126 | 6.0669 | 1450 | 0.6424 | 0.2326 | 0.6424 | 0.8015 |
| 0.1126 | 6.0753 | 1452 | 0.6460 | 0.2326 | 0.6460 | 0.8037 |
| 0.1126 | 6.0837 | 1454 | 0.6508 | 0.2326 | 0.6508 | 0.8067 |
| 0.1126 | 6.0921 | 1456 | 0.6437 | 0.2326 | 0.6437 | 0.8023 |
| 0.1126 | 6.1004 | 1458 | 0.6351 | 0.2326 | 0.6351 | 0.7969 |
| 0.1126 | 6.1088 | 1460 | 0.6538 | 0.2326 | 0.6538 | 0.8086 |
| 0.1126 | 6.1172 | 1462 | 0.7181 | 0.0984 | 0.7181 | 0.8474 |
| 0.1126 | 6.1255 | 1464 | 0.7539 | 0.0984 | 0.7539 | 0.8683 |
| 0.1126 | 6.1339 | 1466 | 0.7431 | 0.0984 | 0.7431 | 0.8620 |
| 0.1126 | 6.1423 | 1468 | 0.7168 | 0.0984 | 0.7168 | 0.8467 |
| 0.1126 | 6.1506 | 1470 | 0.6665 | 0.1895 | 0.6665 | 0.8164 |
| 0.1126 | 6.1590 | 1472 | 0.6308 | 0.2326 | 0.6308 | 0.7942 |
| 0.1126 | 6.1674 | 1474 | 0.6166 | 0.2326 | 0.6166 | 0.7853 |
| 0.1126 | 6.1757 | 1476 | 0.6142 | 0.2326 | 0.6142 | 0.7837 |
| 0.1126 | 6.1841 | 1478 | 0.6125 | 0.2326 | 0.6125 | 0.7826 |
| 0.1126 | 6.1925 | 1480 | 0.6112 | 0.2326 | 0.6112 | 0.7818 |
| 0.1126 | 6.2008 | 1482 | 0.6112 | 0.2326 | 0.6112 | 0.7818 |
| 0.1126 | 6.2092 | 1484 | 0.6110 | 0.2326 | 0.6110 | 0.7817 |
| 0.1126 | 6.2176 | 1486 | 0.6178 | 0.2326 | 0.6178 | 0.7860 |
| 0.1126 | 6.2259 | 1488 | 0.6283 | 0.2326 | 0.6283 | 0.7927 |
| 0.1126 | 6.2343 | 1490 | 0.6301 | 0.2326 | 0.6301 | 0.7938 |
| 0.1126 | 6.2427 | 1492 | 0.6256 | 0.2326 | 0.6256 | 0.7909 |
| 0.1126 | 6.2510 | 1494 | 0.6488 | 0.1895 | 0.6488 | 0.8055 |
| 0.1126 | 6.2594 | 1496 | 0.6652 | 0.1538 | 0.6652 | 0.8156 |
| 0.1126 | 6.2678 | 1498 | 0.6408 | 0.1895 | 0.6408 | 0.8005 |
| 0.0651 | 6.2762 | 1500 | 0.6121 | 0.2326 | 0.6121 | 0.7824 |
| 0.0651 | 6.2845 | 1502 | 0.6087 | 0.2326 | 0.6087 | 0.7802 |
| 0.0651 | 6.2929 | 1504 | 0.6073 | 0.2326 | 0.6073 | 0.7793 |
| 0.0651 | 6.3013 | 1506 | 0.5938 | 0.2326 | 0.5938 | 0.7706 |
| 0.0651 | 6.3096 | 1508 | 0.5921 | 0.2326 | 0.5921 | 0.7695 |
| 0.0651 | 6.3180 | 1510 | 0.5804 | 0.2326 | 0.5804 | 0.7618 |
| 0.0651 | 6.3264 | 1512 | 0.5691 | 0.2326 | 0.5691 | 0.7544 |
| 0.0651 | 6.3347 | 1514 | 0.5544 | 0.2326 | 0.5544 | 0.7446 |
| 0.0651 | 6.3431 | 1516 | 0.5472 | 0.2326 | 0.5472 | 0.7397 |
| 0.0651 | 6.3515 | 1518 | 0.5417 | 0.2326 | 0.5417 | 0.7360 |
| 0.0651 | 6.3598 | 1520 | 0.5536 | 0.2326 | 0.5536 | 0.7441 |
| 0.0651 | 6.3682 | 1522 | 0.5821 | 0.1538 | 0.5821 | 0.7630 |
| 0.0651 | 6.3766 | 1524 | 0.6235 | 0.1538 | 0.6235 | 0.7896 |
| 0.0651 | 6.3849 | 1526 | 0.6450 | 0.1239 | 0.6450 | 0.8031 |
| 0.0651 | 6.3933 | 1528 | 0.6349 | 0.1239 | 0.6349 | 0.7968 |
| 0.0651 | 6.4017 | 1530 | 0.6483 | 0.0984 | 0.6483 | 0.8051 |
| 0.0651 | 6.4100 | 1532 | 0.6485 | 0.0984 | 0.6485 | 0.8053 |
| 0.0651 | 6.4184 | 1534 | 0.6377 | 0.0984 | 0.6377 | 0.7986 |
| 0.0651 | 6.4268 | 1536 | 0.5878 | 0.1538 | 0.5878 | 0.7667 |
| 0.0651 | 6.4351 | 1538 | 0.5512 | 0.2326 | 0.5512 | 0.7424 |
| 0.0651 | 6.4435 | 1540 | 0.5525 | 0.2326 | 0.5525 | 0.7433 |
| 0.0651 | 6.4519 | 1542 | 0.5798 | 0.2326 | 0.5798 | 0.7615 |
| 0.0651 | 6.4603 | 1544 | 0.6139 | 0.2326 | 0.6139 | 0.7835 |
| 0.0651 | 6.4686 | 1546 | 0.6454 | 0.2326 | 0.6454 | 0.8034 |
| 0.0651 | 6.4770 | 1548 | 0.6743 | 0.1895 | 0.6743 | 0.8211 |
| 0.0651 | 6.4854 | 1550 | 0.7139 | 0.1239 | 0.7139 | 0.8449 |
| 0.0651 | 6.4937 | 1552 | 0.7326 | 0.1239 | 0.7326 | 0.8559 |
| 0.0651 | 6.5021 | 1554 | 0.7136 | 0.1239 | 0.7136 | 0.8448 |
| 0.0651 | 6.5105 | 1556 | 0.7285 | 0.1239 | 0.7285 | 0.8535 |
| 0.0651 | 6.5188 | 1558 | 0.7551 | 0.1239 | 0.7551 | 0.8689 |
| 0.0651 | 6.5272 | 1560 | 0.7849 | 0.1239 | 0.7849 | 0.8860 |
| 0.0651 | 6.5356 | 1562 | 0.8239 | 0.1239 | 0.8239 | 0.9077 |
| 0.0651 | 6.5439 | 1564 | 0.8051 | 0.1239 | 0.8051 | 0.8973 |
| 0.0651 | 6.5523 | 1566 | 0.7669 | 0.1239 | 0.7669 | 0.8757 |
| 0.0651 | 6.5607 | 1568 | 0.7509 | 0.1239 | 0.7509 | 0.8665 |
| 0.0651 | 6.5690 | 1570 | 0.7127 | 0.1538 | 0.7127 | 0.8442 |
| 0.0651 | 6.5774 | 1572 | 0.6958 | 0.1895 | 0.6958 | 0.8341 |
| 0.0651 | 6.5858 | 1574 | 0.7030 | 0.1895 | 0.7030 | 0.8385 |
| 0.0651 | 6.5941 | 1576 | 0.7040 | 0.1895 | 0.7040 | 0.8391 |
| 0.0651 | 6.6025 | 1578 | 0.6990 | 0.1895 | 0.6990 | 0.8361 |
| 0.0651 | 6.6109 | 1580 | 0.6992 | 0.1895 | 0.6992 | 0.8362 |
| 0.0651 | 6.6192 | 1582 | 0.6948 | 0.1895 | 0.6948 | 0.8335 |
| 0.0651 | 6.6276 | 1584 | 0.6913 | 0.1895 | 0.6913 | 0.8315 |
| 0.0651 | 6.6360 | 1586 | 0.7138 | 0.1538 | 0.7138 | 0.8448 |
| 0.0651 | 6.6444 | 1588 | 0.7256 | 0.1538 | 0.7256 | 0.8518 |
| 0.0651 | 6.6527 | 1590 | 0.7388 | 0.1239 | 0.7388 | 0.8595 |
| 0.0651 | 6.6611 | 1592 | 0.7314 | 0.1239 | 0.7314 | 0.8552 |
| 0.0651 | 6.6695 | 1594 | 0.7008 | 0.1538 | 0.7008 | 0.8371 |
| 0.0651 | 6.6778 | 1596 | 0.6733 | 0.1895 | 0.6733 | 0.8206 |
| 0.0651 | 6.6862 | 1598 | 0.6713 | 0.1895 | 0.6713 | 0.8193 |
| 0.0651 | 6.6946 | 1600 | 0.6678 | 0.1895 | 0.6678 | 0.8172 |
| 0.0651 | 6.7029 | 1602 | 0.6627 | 0.1895 | 0.6627 | 0.8141 |
| 0.0651 | 6.7113 | 1604 | 0.6559 | 0.2326 | 0.6559 | 0.8099 |
| 0.0651 | 6.7197 | 1606 | 0.6531 | 0.2326 | 0.6531 | 0.8081 |
| 0.0651 | 6.7280 | 1608 | 0.6524 | 0.2326 | 0.6524 | 0.8077 |
| 0.0651 | 6.7364 | 1610 | 0.6580 | 0.1895 | 0.6580 | 0.8112 |
| 0.0651 | 6.7448 | 1612 | 0.6531 | 0.1895 | 0.6531 | 0.8081 |
| 0.0651 | 6.7531 | 1614 | 0.6500 | 0.2326 | 0.6500 | 0.8062 |
| 0.0651 | 6.7615 | 1616 | 0.6489 | 0.2326 | 0.6489 | 0.8055 |
| 0.0651 | 6.7699 | 1618 | 0.6467 | 0.2326 | 0.6467 | 0.8042 |
| 0.0651 | 6.7782 | 1620 | 0.6564 | 0.1895 | 0.6564 | 0.8102 |
| 0.0651 | 6.7866 | 1622 | 0.6914 | 0.1239 | 0.6914 | 0.8315 |
| 0.0651 | 6.7950 | 1624 | 0.7661 | 0.1239 | 0.7661 | 0.8753 |
| 0.0651 | 6.8033 | 1626 | 0.7886 | 0.1239 | 0.7886 | 0.8880 |
| 0.0651 | 6.8117 | 1628 | 0.7733 | 0.1239 | 0.7733 | 0.8794 |
| 0.0651 | 6.8201 | 1630 | 0.7200 | 0.1239 | 0.7200 | 0.8485 |
| 0.0651 | 6.8285 | 1632 | 0.6682 | 0.1538 | 0.6682 | 0.8174 |
| 0.0651 | 6.8368 | 1634 | 0.6479 | 0.2326 | 0.6479 | 0.8050 |
| 0.0651 | 6.8452 | 1636 | 0.6589 | 0.2326 | 0.6589 | 0.8117 |
| 0.0651 | 6.8536 | 1638 | 0.6694 | 0.2326 | 0.6694 | 0.8182 |
| 0.0651 | 6.8619 | 1640 | 0.6848 | 0.2326 | 0.6848 | 0.8275 |
| 0.0651 | 6.8703 | 1642 | 0.6887 | 0.2326 | 0.6887 | 0.8299 |
| 0.0651 | 6.8787 | 1644 | 0.6655 | 0.2326 | 0.6655 | 0.8158 |
| 0.0651 | 6.8870 | 1646 | 0.6543 | 0.2326 | 0.6543 | 0.8089 |
| 0.0651 | 6.8954 | 1648 | 0.6701 | 0.1895 | 0.6701 | 0.8186 |
| 0.0651 | 6.9038 | 1650 | 0.7060 | 0.1538 | 0.7060 | 0.8402 |
| 0.0651 | 6.9121 | 1652 | 0.7613 | 0.1239 | 0.7613 | 0.8725 |
| 0.0651 | 6.9205 | 1654 | 0.7748 | 0.1239 | 0.7748 | 0.8802 |
| 0.0651 | 6.9289 | 1656 | 0.7761 | 0.1239 | 0.7761 | 0.8810 |
| 0.0651 | 6.9372 | 1658 | 0.7580 | 0.1239 | 0.7580 | 0.8706 |
| 0.0651 | 6.9456 | 1660 | 0.7167 | 0.1538 | 0.7167 | 0.8466 |
| 0.0651 | 6.9540 | 1662 | 0.6824 | 0.1538 | 0.6824 | 0.8261 |
| 0.0651 | 6.9623 | 1664 | 0.6710 | 0.1538 | 0.6710 | 0.8191 |
| 0.0651 | 6.9707 | 1666 | 0.6605 | 0.2326 | 0.6605 | 0.8127 |
| 0.0651 | 6.9791 | 1668 | 0.6547 | 0.2326 | 0.6547 | 0.8091 |
| 0.0651 | 6.9874 | 1670 | 0.6549 | 0.2326 | 0.6549 | 0.8093 |
| 0.0651 | 6.9958 | 1672 | 0.6566 | 0.2326 | 0.6566 | 0.8103 |
| 0.0651 | 7.0042 | 1674 | 0.6581 | 0.2326 | 0.6581 | 0.8113 |
| 0.0651 | 7.0126 | 1676 | 0.6556 | 0.2326 | 0.6556 | 0.8097 |
| 0.0651 | 7.0209 | 1678 | 0.6530 | 0.2326 | 0.6530 | 0.8081 |
| 0.0651 | 7.0293 | 1680 | 0.6530 | 0.2326 | 0.6530 | 0.8081 |
| 0.0651 | 7.0377 | 1682 | 0.6536 | 0.2326 | 0.6536 | 0.8084 |
| 0.0651 | 7.0460 | 1684 | 0.6541 | 0.2326 | 0.6541 | 0.8088 |
| 0.0651 | 7.0544 | 1686 | 0.6590 | 0.2326 | 0.6590 | 0.8118 |
| 0.0651 | 7.0628 | 1688 | 0.6625 | 0.2326 | 0.6625 | 0.8139 |
| 0.0651 | 7.0711 | 1690 | 0.6704 | 0.1895 | 0.6704 | 0.8188 |
| 0.0651 | 7.0795 | 1692 | 0.6758 | 0.1895 | 0.6758 | 0.8220 |
| 0.0651 | 7.0879 | 1694 | 0.6748 | 0.1895 | 0.6748 | 0.8214 |
| 0.0651 | 7.0962 | 1696 | 0.6676 | 0.1895 | 0.6676 | 0.8170 |
| 0.0651 | 7.1046 | 1698 | 0.6714 | 0.1895 | 0.6714 | 0.8194 |
| 0.0651 | 7.1130 | 1700 | 0.6799 | 0.1538 | 0.6799 | 0.8246 |
| 0.0651 | 7.1213 | 1702 | 0.6747 | 0.1895 | 0.6747 | 0.8214 |
| 0.0651 | 7.1297 | 1704 | 0.6621 | 0.1895 | 0.6621 | 0.8137 |
| 0.0651 | 7.1381 | 1706 | 0.6552 | 0.2326 | 0.6552 | 0.8094 |
| 0.0651 | 7.1464 | 1708 | 0.6545 | 0.2326 | 0.6545 | 0.8090 |
| 0.0651 | 7.1548 | 1710 | 0.6555 | 0.2326 | 0.6555 | 0.8096 |
| 0.0651 | 7.1632 | 1712 | 0.6622 | 0.1895 | 0.6622 | 0.8138 |
| 0.0651 | 7.1715 | 1714 | 0.6653 | 0.2326 | 0.6653 | 0.8157 |
| 0.0651 | 7.1799 | 1716 | 0.6622 | 0.2326 | 0.6622 | 0.8138 |
| 0.0651 | 7.1883 | 1718 | 0.6639 | 0.2326 | 0.6639 | 0.8148 |
| 0.0651 | 7.1967 | 1720 | 0.6685 | 0.2326 | 0.6685 | 0.8176 |
| 0.0651 | 7.2050 | 1722 | 0.6755 | 0.2326 | 0.6755 | 0.8219 |
| 0.0651 | 7.2134 | 1724 | 0.6716 | 0.2326 | 0.6716 | 0.8195 |
| 0.0651 | 7.2218 | 1726 | 0.6761 | 0.2326 | 0.6761 | 0.8223 |
| 0.0651 | 7.2301 | 1728 | 0.6928 | 0.1895 | 0.6928 | 0.8324 |
| 0.0651 | 7.2385 | 1730 | 0.7042 | 0.1538 | 0.7042 | 0.8392 |
| 0.0651 | 7.2469 | 1732 | 0.7020 | 0.1538 | 0.7020 | 0.8379 |
| 0.0651 | 7.2552 | 1734 | 0.7223 | 0.1239 | 0.7223 | 0.8499 |
| 0.0651 | 7.2636 | 1736 | 0.7477 | 0.1239 | 0.7477 | 0.8647 |
| 0.0651 | 7.2720 | 1738 | 0.7587 | 0.1239 | 0.7587 | 0.8710 |
| 0.0651 | 7.2803 | 1740 | 0.7558 | 0.1239 | 0.7558 | 0.8694 |
| 0.0651 | 7.2887 | 1742 | 0.7578 | 0.1239 | 0.7578 | 0.8705 |
| 0.0651 | 7.2971 | 1744 | 0.7355 | 0.1239 | 0.7355 | 0.8576 |
| 0.0651 | 7.3054 | 1746 | 0.7087 | 0.1895 | 0.7087 | 0.8418 |
| 0.0651 | 7.3138 | 1748 | 0.6992 | 0.1895 | 0.6992 | 0.8362 |
| 0.0651 | 7.3222 | 1750 | 0.7094 | 0.1895 | 0.7094 | 0.8423 |
| 0.0651 | 7.3305 | 1752 | 0.7092 | 0.1895 | 0.7092 | 0.8421 |
| 0.0651 | 7.3389 | 1754 | 0.7034 | 0.1895 | 0.7034 | 0.8387 |
| 0.0651 | 7.3473 | 1756 | 0.6948 | 0.1895 | 0.6948 | 0.8336 |
| 0.0651 | 7.3556 | 1758 | 0.6885 | 0.1895 | 0.6885 | 0.8298 |
| 0.0651 | 7.3640 | 1760 | 0.6933 | 0.1895 | 0.6933 | 0.8327 |
| 0.0651 | 7.3724 | 1762 | 0.7021 | 0.1895 | 0.7021 | 0.8379 |
| 0.0651 | 7.3808 | 1764 | 0.7046 | 0.1895 | 0.7046 | 0.8394 |
| 0.0651 | 7.3891 | 1766 | 0.7079 | 0.1895 | 0.7079 | 0.8414 |
| 0.0651 | 7.3975 | 1768 | 0.7345 | 0.1538 | 0.7345 | 0.8570 |
| 0.0651 | 7.4059 | 1770 | 0.7766 | 0.1239 | 0.7766 | 0.8813 |
| 0.0651 | 7.4142 | 1772 | 0.8028 | 0.1239 | 0.8028 | 0.8960 |
| 0.0651 | 7.4226 | 1774 | 0.7893 | 0.1239 | 0.7893 | 0.8884 |
| 0.0651 | 7.4310 | 1776 | 0.7791 | 0.1239 | 0.7791 | 0.8827 |
| 0.0651 | 7.4393 | 1778 | 0.7552 | 0.1538 | 0.7552 | 0.8690 |
| 0.0651 | 7.4477 | 1780 | 0.7556 | 0.1538 | 0.7556 | 0.8693 |
| 0.0651 | 7.4561 | 1782 | 0.7717 | 0.1538 | 0.7717 | 0.8785 |
| 0.0651 | 7.4644 | 1784 | 0.7712 | 0.1538 | 0.7712 | 0.8782 |
| 0.0651 | 7.4728 | 1786 | 0.7706 | -0.2222 | 0.7706 | 0.8778 |
| 0.0651 | 7.4812 | 1788 | 0.7696 | 0.1538 | 0.7696 | 0.8773 |
| 0.0651 | 7.4895 | 1790 | 0.7595 | -0.2222 | 0.7595 | 0.8715 |
| 0.0651 | 7.4979 | 1792 | 0.7471 | -0.2222 | 0.7471 | 0.8643 |
| 0.0651 | 7.5063 | 1794 | 0.7357 | -0.2222 | 0.7357 | 0.8578 |
| 0.0651 | 7.5146 | 1796 | 0.7285 | -0.2222 | 0.7285 | 0.8535 |
| 0.0651 | 7.5230 | 1798 | 0.7332 | 0.1895 | 0.7332 | 0.8562 |
| 0.0651 | 7.5314 | 1800 | 0.7698 | 0.1239 | 0.7698 | 0.8774 |
| 0.0651 | 7.5397 | 1802 | 0.8141 | 0.1239 | 0.8141 | 0.9023 |
| 0.0651 | 7.5481 | 1804 | 0.8542 | 0.1239 | 0.8542 | 0.9242 |
| 0.0651 | 7.5565 | 1806 | 0.8649 | 0.1239 | 0.8649 | 0.9300 |
| 0.0651 | 7.5649 | 1808 | 0.8407 | 0.1239 | 0.8407 | 0.9169 |
| 0.0651 | 7.5732 | 1810 | 0.7867 | 0.1239 | 0.7867 | 0.8870 |
| 0.0651 | 7.5816 | 1812 | 0.7465 | 0.1538 | 0.7465 | 0.8640 |
| 0.0651 | 7.5900 | 1814 | 0.7379 | 0.1538 | 0.7379 | 0.8590 |
| 0.0651 | 7.5983 | 1816 | 0.7293 | 0.1895 | 0.7293 | 0.8540 |
| 0.0651 | 7.6067 | 1818 | 0.7231 | 0.1895 | 0.7231 | 0.8503 |
| 0.0651 | 7.6151 | 1820 | 0.7139 | 0.1895 | 0.7139 | 0.8449 |
| 0.0651 | 7.6234 | 1822 | 0.7007 | 0.1895 | 0.7007 | 0.8371 |
| 0.0651 | 7.6318 | 1824 | 0.6860 | 0.1895 | 0.6860 | 0.8283 |
| 0.0651 | 7.6402 | 1826 | 0.6815 | 0.1895 | 0.6815 | 0.8255 |
| 0.0651 | 7.6485 | 1828 | 0.6831 | 0.1895 | 0.6831 | 0.8265 |
| 0.0651 | 7.6569 | 1830 | 0.6925 | 0.1895 | 0.6925 | 0.8322 |
| 0.0651 | 7.6653 | 1832 | 0.6961 | 0.1895 | 0.6961 | 0.8343 |
| 0.0651 | 7.6736 | 1834 | 0.7143 | 0.1895 | 0.7143 | 0.8452 |
| 0.0651 | 7.6820 | 1836 | 0.7422 | 0.1239 | 0.7422 | 0.8615 |
| 0.0651 | 7.6904 | 1838 | 0.7364 | 0.1239 | 0.7364 | 0.8582 |
| 0.0651 | 7.6987 | 1840 | 0.7293 | 0.1239 | 0.7293 | 0.8540 |
| 0.0651 | 7.7071 | 1842 | 0.7171 | 0.1895 | 0.7171 | 0.8468 |
| 0.0651 | 7.7155 | 1844 | 0.7053 | 0.1895 | 0.7053 | 0.8398 |
| 0.0651 | 7.7238 | 1846 | 0.7010 | 0.1895 | 0.7010 | 0.8372 |
| 0.0651 | 7.7322 | 1848 | 0.7013 | 0.1895 | 0.7013 | 0.8374 |
| 0.0651 | 7.7406 | 1850 | 0.7039 | 0.1895 | 0.7039 | 0.8390 |
| 0.0651 | 7.7490 | 1852 | 0.7049 | 0.1895 | 0.7049 | 0.8396 |
| 0.0651 | 7.7573 | 1854 | 0.7077 | 0.1895 | 0.7077 | 0.8413 |
| 0.0651 | 7.7657 | 1856 | 0.7214 | 0.1538 | 0.7214 | 0.8494 |
| 0.0651 | 7.7741 | 1858 | 0.7326 | 0.1239 | 0.7326 | 0.8559 |
| 0.0651 | 7.7824 | 1860 | 0.7422 | 0.1239 | 0.7422 | 0.8615 |
| 0.0651 | 7.7908 | 1862 | 0.7218 | 0.1538 | 0.7218 | 0.8496 |
| 0.0651 | 7.7992 | 1864 | 0.6925 | 0.1895 | 0.6925 | 0.8322 |
| 0.0651 | 7.8075 | 1866 | 0.6813 | 0.1895 | 0.6813 | 0.8254 |
| 0.0651 | 7.8159 | 1868 | 0.6763 | 0.1895 | 0.6763 | 0.8224 |
| 0.0651 | 7.8243 | 1870 | 0.6761 | 0.1895 | 0.6761 | 0.8222 |
| 0.0651 | 7.8326 | 1872 | 0.6747 | 0.1895 | 0.6747 | 0.8214 |
| 0.0651 | 7.8410 | 1874 | 0.6742 | 0.1895 | 0.6742 | 0.8211 |
| 0.0651 | 7.8494 | 1876 | 0.6767 | 0.1895 | 0.6767 | 0.8226 |
| 0.0651 | 7.8577 | 1878 | 0.6833 | 0.1895 | 0.6833 | 0.8266 |
| 0.0651 | 7.8661 | 1880 | 0.6988 | 0.1895 | 0.6988 | 0.8360 |
| 0.0651 | 7.8745 | 1882 | 0.7184 | 0.1239 | 0.7184 | 0.8476 |
| 0.0651 | 7.8828 | 1884 | 0.7315 | 0.1239 | 0.7315 | 0.8553 |
| 0.0651 | 7.8912 | 1886 | 0.7297 | 0.1239 | 0.7297 | 0.8542 |
| 0.0651 | 7.8996 | 1888 | 0.7223 | 0.1239 | 0.7223 | 0.8499 |
| 0.0651 | 7.9079 | 1890 | 0.7116 | 0.1239 | 0.7116 | 0.8436 |
| 0.0651 | 7.9163 | 1892 | 0.6932 | 0.1538 | 0.6932 | 0.8326 |
| 0.0651 | 7.9247 | 1894 | 0.6757 | 0.2326 | 0.6757 | 0.8220 |
| 0.0651 | 7.9331 | 1896 | 0.6727 | 0.2326 | 0.6727 | 0.8202 |
| 0.0651 | 7.9414 | 1898 | 0.6691 | 0.2326 | 0.6691 | 0.8180 |
| 0.0651 | 7.9498 | 1900 | 0.6695 | 0.2326 | 0.6695 | 0.8182 |
| 0.0651 | 7.9582 | 1902 | 0.6695 | 0.2326 | 0.6695 | 0.8182 |
| 0.0651 | 7.9665 | 1904 | 0.6707 | 0.2326 | 0.6707 | 0.8190 |
| 0.0651 | 7.9749 | 1906 | 0.6758 | 0.2326 | 0.6758 | 0.8221 |
| 0.0651 | 7.9833 | 1908 | 0.6846 | 0.2326 | 0.6846 | 0.8274 |
| 0.0651 | 7.9916 | 1910 | 0.6976 | 0.1538 | 0.6976 | 0.8352 |
| 0.0651 | 8.0 | 1912 | 0.6999 | 0.1538 | 0.6999 | 0.8366 |
| 0.0651 | 8.0084 | 1914 | 0.6924 | 0.1895 | 0.6924 | 0.8321 |
| 0.0651 | 8.0167 | 1916 | 0.6869 | 0.1895 | 0.6869 | 0.8288 |
| 0.0651 | 8.0251 | 1918 | 0.6836 | 0.2326 | 0.6836 | 0.8268 |
| 0.0651 | 8.0335 | 1920 | 0.6867 | 0.1895 | 0.6867 | 0.8287 |
| 0.0651 | 8.0418 | 1922 | 0.6896 | 0.1895 | 0.6896 | 0.8304 |
| 0.0651 | 8.0502 | 1924 | 0.6828 | 0.2326 | 0.6828 | 0.8263 |
| 0.0651 | 8.0586 | 1926 | 0.6742 | 0.2326 | 0.6742 | 0.8211 |
| 0.0651 | 8.0669 | 1928 | 0.6711 | 0.2326 | 0.6711 | 0.8192 |
| 0.0651 | 8.0753 | 1930 | 0.6714 | 0.2326 | 0.6714 | 0.8194 |
| 0.0651 | 8.0837 | 1932 | 0.6719 | 0.2326 | 0.6719 | 0.8197 |
| 0.0651 | 8.0921 | 1934 | 0.6716 | 0.2326 | 0.6716 | 0.8195 |
| 0.0651 | 8.1004 | 1936 | 0.6734 | 0.2326 | 0.6734 | 0.8206 |
| 0.0651 | 8.1088 | 1938 | 0.6763 | 0.2326 | 0.6763 | 0.8224 |
| 0.0651 | 8.1172 | 1940 | 0.6775 | 0.2326 | 0.6775 | 0.8231 |
| 0.0651 | 8.1255 | 1942 | 0.6742 | 0.2326 | 0.6742 | 0.8211 |
| 0.0651 | 8.1339 | 1944 | 0.6709 | 0.2326 | 0.6709 | 0.8191 |
| 0.0651 | 8.1423 | 1946 | 0.6683 | 0.2326 | 0.6683 | 0.8175 |
| 0.0651 | 8.1506 | 1948 | 0.6701 | 0.2326 | 0.6701 | 0.8186 |
| 0.0651 | 8.1590 | 1950 | 0.6705 | 0.2326 | 0.6705 | 0.8188 |
| 0.0651 | 8.1674 | 1952 | 0.6647 | 0.2326 | 0.6647 | 0.8153 |
| 0.0651 | 8.1757 | 1954 | 0.6607 | 0.2326 | 0.6607 | 0.8128 |
| 0.0651 | 8.1841 | 1956 | 0.6595 | 0.2326 | 0.6595 | 0.8121 |
| 0.0651 | 8.1925 | 1958 | 0.6578 | 0.2326 | 0.6578 | 0.8110 |
| 0.0651 | 8.2008 | 1960 | 0.6536 | 0.2326 | 0.6536 | 0.8084 |
| 0.0651 | 8.2092 | 1962 | 0.6524 | 0.2326 | 0.6524 | 0.8077 |
| 0.0651 | 8.2176 | 1964 | 0.6528 | 0.2326 | 0.6528 | 0.8079 |
| 0.0651 | 8.2259 | 1966 | 0.6503 | 0.2326 | 0.6503 | 0.8064 |
| 0.0651 | 8.2343 | 1968 | 0.6460 | 0.2326 | 0.6460 | 0.8037 |
| 0.0651 | 8.2427 | 1970 | 0.6427 | 0.2326 | 0.6427 | 0.8017 |
| 0.0651 | 8.2510 | 1972 | 0.6390 | 0.2326 | 0.6390 | 0.7994 |
| 0.0651 | 8.2594 | 1974 | 0.6347 | 0.2326 | 0.6347 | 0.7967 |
| 0.0651 | 8.2678 | 1976 | 0.6375 | 0.2326 | 0.6375 | 0.7984 |
| 0.0651 | 8.2762 | 1978 | 0.6485 | 0.1895 | 0.6485 | 0.8053 |
| 0.0651 | 8.2845 | 1980 | 0.6666 | 0.1895 | 0.6666 | 0.8165 |
| 0.0651 | 8.2929 | 1982 | 0.6788 | 0.1538 | 0.6788 | 0.8239 |
| 0.0651 | 8.3013 | 1984 | 0.6772 | 0.1538 | 0.6772 | 0.8229 |
| 0.0651 | 8.3096 | 1986 | 0.6693 | 0.1895 | 0.6693 | 0.8181 |
| 0.0651 | 8.3180 | 1988 | 0.6580 | 0.1895 | 0.6580 | 0.8111 |
| 0.0651 | 8.3264 | 1990 | 0.6487 | 0.1895 | 0.6487 | 0.8054 |
| 0.0651 | 8.3347 | 1992 | 0.6419 | 0.2326 | 0.6419 | 0.8012 |
| 0.0651 | 8.3431 | 1994 | 0.6311 | 0.2326 | 0.6311 | 0.7944 |
| 0.0651 | 8.3515 | 1996 | 0.6269 | 0.2326 | 0.6269 | 0.7917 |
| 0.0651 | 8.3598 | 1998 | 0.6269 | 0.2326 | 0.6269 | 0.7918 |
| 0.0448 | 8.3682 | 2000 | 0.6312 | 0.2326 | 0.6312 | 0.7945 |
| 0.0448 | 8.3766 | 2002 | 0.6379 | 0.2326 | 0.6379 | 0.7987 |
| 0.0448 | 8.3849 | 2004 | 0.6388 | 0.2326 | 0.6388 | 0.7992 |
| 0.0448 | 8.3933 | 2006 | 0.6385 | 0.2326 | 0.6385 | 0.7991 |
| 0.0448 | 8.4017 | 2008 | 0.6396 | 0.2326 | 0.6396 | 0.7997 |
| 0.0448 | 8.4100 | 2010 | 0.6347 | 0.2326 | 0.6347 | 0.7967 |
| 0.0448 | 8.4184 | 2012 | 0.6327 | 0.2326 | 0.6327 | 0.7954 |
| 0.0448 | 8.4268 | 2014 | 0.6329 | 0.2326 | 0.6329 | 0.7955 |
| 0.0448 | 8.4351 | 2016 | 0.6328 | 0.2326 | 0.6328 | 0.7955 |
| 0.0448 | 8.4435 | 2018 | 0.6323 | 0.2326 | 0.6323 | 0.7952 |
| 0.0448 | 8.4519 | 2020 | 0.6340 | 0.2326 | 0.6340 | 0.7962 |
| 0.0448 | 8.4603 | 2022 | 0.6356 | 0.2326 | 0.6356 | 0.7972 |
| 0.0448 | 8.4686 | 2024 | 0.6374 | 0.2326 | 0.6374 | 0.7984 |
| 0.0448 | 8.4770 | 2026 | 0.6397 | 0.2326 | 0.6397 | 0.7998 |
| 0.0448 | 8.4854 | 2028 | 0.6442 | 0.2326 | 0.6442 | 0.8026 |
| 0.0448 | 8.4937 | 2030 | 0.6484 | 0.2326 | 0.6484 | 0.8053 |
| 0.0448 | 8.5021 | 2032 | 0.6479 | 0.2326 | 0.6479 | 0.8049 |
| 0.0448 | 8.5105 | 2034 | 0.6472 | 0.2326 | 0.6472 | 0.8045 |
| 0.0448 | 8.5188 | 2036 | 0.6476 | 0.2326 | 0.6476 | 0.8047 |
| 0.0448 | 8.5272 | 2038 | 0.6495 | 0.2326 | 0.6495 | 0.8059 |
| 0.0448 | 8.5356 | 2040 | 0.6563 | 0.1895 | 0.6563 | 0.8101 |
| 0.0448 | 8.5439 | 2042 | 0.6637 | 0.1895 | 0.6637 | 0.8147 |
| 0.0448 | 8.5523 | 2044 | 0.6680 | 0.1895 | 0.6680 | 0.8173 |
| 0.0448 | 8.5607 | 2046 | 0.6756 | 0.1895 | 0.6756 | 0.8220 |
| 0.0448 | 8.5690 | 2048 | 0.6794 | 0.1895 | 0.6794 | 0.8243 |
| 0.0448 | 8.5774 | 2050 | 0.6819 | 0.1895 | 0.6819 | 0.8258 |
| 0.0448 | 8.5858 | 2052 | 0.6854 | 0.1895 | 0.6854 | 0.8279 |
| 0.0448 | 8.5941 | 2054 | 0.6894 | 0.1895 | 0.6894 | 0.8303 |
| 0.0448 | 8.6025 | 2056 | 0.6892 | 0.1895 | 0.6892 | 0.8302 |
| 0.0448 | 8.6109 | 2058 | 0.6908 | 0.1895 | 0.6908 | 0.8311 |
| 0.0448 | 8.6192 | 2060 | 0.6923 | 0.1895 | 0.6923 | 0.8320 |
| 0.0448 | 8.6276 | 2062 | 0.6957 | 0.1895 | 0.6957 | 0.8341 |
| 0.0448 | 8.6360 | 2064 | 0.6945 | 0.1895 | 0.6945 | 0.8334 |
| 0.0448 | 8.6444 | 2066 | 0.6878 | 0.1895 | 0.6878 | 0.8293 |
| 0.0448 | 8.6527 | 2068 | 0.6830 | 0.1895 | 0.6830 | 0.8265 |
| 0.0448 | 8.6611 | 2070 | 0.6770 | 0.1895 | 0.6770 | 0.8228 |
| 0.0448 | 8.6695 | 2072 | 0.6733 | 0.2326 | 0.6733 | 0.8205 |
| 0.0448 | 8.6778 | 2074 | 0.6714 | 0.2326 | 0.6714 | 0.8194 |
| 0.0448 | 8.6862 | 2076 | 0.6707 | 0.2326 | 0.6707 | 0.8190 |
| 0.0448 | 8.6946 | 2078 | 0.6679 | 0.2326 | 0.6679 | 0.8172 |
| 0.0448 | 8.7029 | 2080 | 0.6657 | 0.2326 | 0.6657 | 0.8159 |
| 0.0448 | 8.7113 | 2082 | 0.6647 | 0.2326 | 0.6647 | 0.8153 |
| 0.0448 | 8.7197 | 2084 | 0.6609 | 0.2326 | 0.6609 | 0.8130 |
| 0.0448 | 8.7280 | 2086 | 0.6567 | 0.2326 | 0.6567 | 0.8104 |
| 0.0448 | 8.7364 | 2088 | 0.6541 | 0.2326 | 0.6541 | 0.8088 |
| 0.0448 | 8.7448 | 2090 | 0.6558 | 0.2326 | 0.6558 | 0.8098 |
| 0.0448 | 8.7531 | 2092 | 0.6604 | 0.2326 | 0.6604 | 0.8127 |
| 0.0448 | 8.7615 | 2094 | 0.6651 | 0.1895 | 0.6651 | 0.8155 |
| 0.0448 | 8.7699 | 2096 | 0.6614 | 0.1895 | 0.6614 | 0.8133 |
| 0.0448 | 8.7782 | 2098 | 0.6575 | 0.1895 | 0.6575 | 0.8108 |
| 0.0448 | 8.7866 | 2100 | 0.6551 | 0.2326 | 0.6551 | 0.8094 |
| 0.0448 | 8.7950 | 2102 | 0.6483 | 0.2326 | 0.6483 | 0.8052 |
| 0.0448 | 8.8033 | 2104 | 0.6403 | 0.2326 | 0.6403 | 0.8002 |
| 0.0448 | 8.8117 | 2106 | 0.6384 | 0.2326 | 0.6384 | 0.7990 |
| 0.0448 | 8.8201 | 2108 | 0.6413 | 0.2326 | 0.6413 | 0.8008 |
| 0.0448 | 8.8285 | 2110 | 0.6485 | 0.2326 | 0.6485 | 0.8053 |
| 0.0448 | 8.8368 | 2112 | 0.6519 | 0.2326 | 0.6519 | 0.8074 |
| 0.0448 | 8.8452 | 2114 | 0.6559 | 0.2326 | 0.6559 | 0.8099 |
| 0.0448 | 8.8536 | 2116 | 0.6624 | 0.1895 | 0.6624 | 0.8139 |
| 0.0448 | 8.8619 | 2118 | 0.6715 | 0.1538 | 0.6715 | 0.8195 |
| 0.0448 | 8.8703 | 2120 | 0.6796 | 0.1538 | 0.6796 | 0.8244 |
| 0.0448 | 8.8787 | 2122 | 0.6768 | 0.1538 | 0.6768 | 0.8227 |
| 0.0448 | 8.8870 | 2124 | 0.6737 | 0.1538 | 0.6737 | 0.8208 |
| 0.0448 | 8.8954 | 2126 | 0.6680 | 0.1895 | 0.6680 | 0.8173 |
| 0.0448 | 8.9038 | 2128 | 0.6607 | 0.2326 | 0.6607 | 0.8128 |
| 0.0448 | 8.9121 | 2130 | 0.6621 | 0.2326 | 0.6621 | 0.8137 |
| 0.0448 | 8.9205 | 2132 | 0.6697 | 0.2326 | 0.6697 | 0.8184 |
| 0.0448 | 8.9289 | 2134 | 0.6775 | 0.1895 | 0.6775 | 0.8231 |
| 0.0448 | 8.9372 | 2136 | 0.6850 | 0.1895 | 0.6850 | 0.8276 |
| 0.0448 | 8.9456 | 2138 | 0.6863 | 0.1895 | 0.6863 | 0.8284 |
| 0.0448 | 8.9540 | 2140 | 0.6874 | 0.1895 | 0.6874 | 0.8291 |
| 0.0448 | 8.9623 | 2142 | 0.6852 | 0.1895 | 0.6852 | 0.8277 |
| 0.0448 | 8.9707 | 2144 | 0.6861 | 0.1895 | 0.6861 | 0.8283 |
| 0.0448 | 8.9791 | 2146 | 0.6925 | 0.1538 | 0.6925 | 0.8321 |
| 0.0448 | 8.9874 | 2148 | 0.7045 | 0.1538 | 0.7045 | 0.8393 |
| 0.0448 | 8.9958 | 2150 | 0.7125 | 0.1239 | 0.7125 | 0.8441 |
| 0.0448 | 9.0042 | 2152 | 0.7203 | 0.1239 | 0.7203 | 0.8487 |
| 0.0448 | 9.0126 | 2154 | 0.7291 | 0.1239 | 0.7291 | 0.8539 |
| 0.0448 | 9.0209 | 2156 | 0.7402 | 0.1239 | 0.7402 | 0.8603 |
| 0.0448 | 9.0293 | 2158 | 0.7467 | 0.1239 | 0.7467 | 0.8641 |
| 0.0448 | 9.0377 | 2160 | 0.7514 | 0.1239 | 0.7514 | 0.8669 |
| 0.0448 | 9.0460 | 2162 | 0.7541 | 0.1239 | 0.7541 | 0.8684 |
| 0.0448 | 9.0544 | 2164 | 0.7586 | 0.1239 | 0.7586 | 0.8710 |
| 0.0448 | 9.0628 | 2166 | 0.7513 | 0.1239 | 0.7513 | 0.8668 |
| 0.0448 | 9.0711 | 2168 | 0.7427 | 0.1239 | 0.7427 | 0.8618 |
| 0.0448 | 9.0795 | 2170 | 0.7264 | 0.1239 | 0.7264 | 0.8523 |
| 0.0448 | 9.0879 | 2172 | 0.7127 | 0.1538 | 0.7127 | 0.8442 |
| 0.0448 | 9.0962 | 2174 | 0.6967 | 0.1538 | 0.6967 | 0.8347 |
| 0.0448 | 9.1046 | 2176 | 0.6900 | 0.2326 | 0.6900 | 0.8306 |
| 0.0448 | 9.1130 | 2178 | 0.6877 | 0.2326 | 0.6877 | 0.8293 |
| 0.0448 | 9.1213 | 2180 | 0.6841 | 0.2326 | 0.6841 | 0.8271 |
| 0.0448 | 9.1297 | 2182 | 0.6783 | 0.2326 | 0.6783 | 0.8236 |
| 0.0448 | 9.1381 | 2184 | 0.6761 | 0.2326 | 0.6761 | 0.8222 |
| 0.0448 | 9.1464 | 2186 | 0.6774 | 0.2326 | 0.6774 | 0.8230 |
| 0.0448 | 9.1548 | 2188 | 0.6819 | 0.2326 | 0.6819 | 0.8257 |
| 0.0448 | 9.1632 | 2190 | 0.6880 | 0.1895 | 0.6880 | 0.8295 |
| 0.0448 | 9.1715 | 2192 | 0.6977 | 0.1895 | 0.6977 | 0.8353 |
| 0.0448 | 9.1799 | 2194 | 0.7041 | 0.1538 | 0.7041 | 0.8391 |
| 0.0448 | 9.1883 | 2196 | 0.7118 | 0.1538 | 0.7118 | 0.8437 |
| 0.0448 | 9.1967 | 2198 | 0.7165 | 0.1538 | 0.7165 | 0.8465 |
| 0.0448 | 9.2050 | 2200 | 0.7227 | 0.1538 | 0.7227 | 0.8501 |
| 0.0448 | 9.2134 | 2202 | 0.7318 | 0.1538 | 0.7318 | 0.8554 |
| 0.0448 | 9.2218 | 2204 | 0.7403 | 0.1239 | 0.7403 | 0.8604 |
| 0.0448 | 9.2301 | 2206 | 0.7431 | 0.1239 | 0.7431 | 0.8620 |
| 0.0448 | 9.2385 | 2208 | 0.7469 | 0.1239 | 0.7469 | 0.8643 |
| 0.0448 | 9.2469 | 2210 | 0.7522 | 0.1239 | 0.7522 | 0.8673 |
| 0.0448 | 9.2552 | 2212 | 0.7563 | 0.1239 | 0.7563 | 0.8697 |
| 0.0448 | 9.2636 | 2214 | 0.7602 | 0.1239 | 0.7602 | 0.8719 |
| 0.0448 | 9.2720 | 2216 | 0.7603 | 0.1239 | 0.7603 | 0.8720 |
| 0.0448 | 9.2803 | 2218 | 0.7558 | 0.1239 | 0.7558 | 0.8694 |
| 0.0448 | 9.2887 | 2220 | 0.7472 | 0.1239 | 0.7472 | 0.8644 |
| 0.0448 | 9.2971 | 2222 | 0.7387 | 0.1239 | 0.7387 | 0.8595 |
| 0.0448 | 9.3054 | 2224 | 0.7273 | 0.1538 | 0.7273 | 0.8528 |
| 0.0448 | 9.3138 | 2226 | 0.7173 | 0.1538 | 0.7173 | 0.8469 |
| 0.0448 | 9.3222 | 2228 | 0.7091 | 0.1895 | 0.7091 | 0.8421 |
| 0.0448 | 9.3305 | 2230 | 0.7015 | 0.1895 | 0.7015 | 0.8376 |
| 0.0448 | 9.3389 | 2232 | 0.6965 | 0.1895 | 0.6965 | 0.8346 |
| 0.0448 | 9.3473 | 2234 | 0.6962 | 0.2326 | 0.6962 | 0.8344 |
| 0.0448 | 9.3556 | 2236 | 0.6969 | 0.2326 | 0.6969 | 0.8348 |
| 0.0448 | 9.3640 | 2238 | 0.6990 | 0.2326 | 0.6990 | 0.8360 |
| 0.0448 | 9.3724 | 2240 | 0.7035 | 0.1895 | 0.7035 | 0.8388 |
| 0.0448 | 9.3808 | 2242 | 0.7101 | 0.1895 | 0.7101 | 0.8427 |
| 0.0448 | 9.3891 | 2244 | 0.7163 | 0.1895 | 0.7163 | 0.8464 |
| 0.0448 | 9.3975 | 2246 | 0.7213 | 0.1538 | 0.7213 | 0.8493 |
| 0.0448 | 9.4059 | 2248 | 0.7230 | 0.1538 | 0.7230 | 0.8503 |
| 0.0448 | 9.4142 | 2250 | 0.7254 | 0.1538 | 0.7254 | 0.8517 |
| 0.0448 | 9.4226 | 2252 | 0.7280 | 0.1538 | 0.7280 | 0.8532 |
| 0.0448 | 9.4310 | 2254 | 0.7289 | 0.1538 | 0.7289 | 0.8538 |
| 0.0448 | 9.4393 | 2256 | 0.7275 | 0.1538 | 0.7275 | 0.8529 |
| 0.0448 | 9.4477 | 2258 | 0.7261 | 0.1538 | 0.7261 | 0.8521 |
| 0.0448 | 9.4561 | 2260 | 0.7237 | 0.1538 | 0.7237 | 0.8507 |
| 0.0448 | 9.4644 | 2262 | 0.7189 | 0.1538 | 0.7189 | 0.8479 |
| 0.0448 | 9.4728 | 2264 | 0.7155 | 0.1538 | 0.7155 | 0.8458 |
| 0.0448 | 9.4812 | 2266 | 0.7135 | 0.1538 | 0.7135 | 0.8447 |
| 0.0448 | 9.4895 | 2268 | 0.7164 | 0.1538 | 0.7164 | 0.8464 |
| 0.0448 | 9.4979 | 2270 | 0.7167 | 0.1538 | 0.7167 | 0.8466 |
| 0.0448 | 9.5063 | 2272 | 0.7194 | 0.1538 | 0.7194 | 0.8482 |
| 0.0448 | 9.5146 | 2274 | 0.7219 | 0.1538 | 0.7219 | 0.8497 |
| 0.0448 | 9.5230 | 2276 | 0.7199 | 0.1538 | 0.7199 | 0.8485 |
| 0.0448 | 9.5314 | 2278 | 0.7179 | 0.1538 | 0.7179 | 0.8473 |
| 0.0448 | 9.5397 | 2280 | 0.7165 | 0.1538 | 0.7165 | 0.8465 |
| 0.0448 | 9.5481 | 2282 | 0.7150 | 0.1538 | 0.7150 | 0.8456 |
| 0.0448 | 9.5565 | 2284 | 0.7127 | 0.1538 | 0.7127 | 0.8442 |
| 0.0448 | 9.5649 | 2286 | 0.7103 | 0.1538 | 0.7103 | 0.8428 |
| 0.0448 | 9.5732 | 2288 | 0.7069 | 0.1538 | 0.7069 | 0.8408 |
| 0.0448 | 9.5816 | 2290 | 0.7039 | 0.1538 | 0.7039 | 0.8390 |
| 0.0448 | 9.5900 | 2292 | 0.7016 | 0.1895 | 0.7016 | 0.8376 |
| 0.0448 | 9.5983 | 2294 | 0.7012 | 0.1895 | 0.7012 | 0.8374 |
| 0.0448 | 9.6067 | 2296 | 0.7010 | 0.1895 | 0.7010 | 0.8372 |
| 0.0448 | 9.6151 | 2298 | 0.7012 | 0.1895 | 0.7012 | 0.8374 |
| 0.0448 | 9.6234 | 2300 | 0.7019 | 0.1895 | 0.7019 | 0.8378 |
| 0.0448 | 9.6318 | 2302 | 0.7039 | 0.1895 | 0.7039 | 0.8390 |
| 0.0448 | 9.6402 | 2304 | 0.7081 | 0.1538 | 0.7081 | 0.8415 |
| 0.0448 | 9.6485 | 2306 | 0.7096 | 0.1538 | 0.7096 | 0.8424 |
| 0.0448 | 9.6569 | 2308 | 0.7100 | 0.1538 | 0.7100 | 0.8426 |
| 0.0448 | 9.6653 | 2310 | 0.7105 | 0.1538 | 0.7105 | 0.8429 |
| 0.0448 | 9.6736 | 2312 | 0.7119 | 0.1538 | 0.7119 | 0.8437 |
| 0.0448 | 9.6820 | 2314 | 0.7132 | 0.1538 | 0.7132 | 0.8445 |
| 0.0448 | 9.6904 | 2316 | 0.7125 | 0.1538 | 0.7125 | 0.8441 |
| 0.0448 | 9.6987 | 2318 | 0.7132 | 0.1538 | 0.7132 | 0.8445 |
| 0.0448 | 9.7071 | 2320 | 0.7128 | 0.1538 | 0.7128 | 0.8443 |
| 0.0448 | 9.7155 | 2322 | 0.7140 | 0.1538 | 0.7140 | 0.8450 |
| 0.0448 | 9.7238 | 2324 | 0.7171 | 0.1538 | 0.7171 | 0.8468 |
| 0.0448 | 9.7322 | 2326 | 0.7206 | 0.1538 | 0.7206 | 0.8489 |
| 0.0448 | 9.7406 | 2328 | 0.7229 | 0.1538 | 0.7229 | 0.8502 |
| 0.0448 | 9.7490 | 2330 | 0.7253 | 0.1538 | 0.7253 | 0.8517 |
| 0.0448 | 9.7573 | 2332 | 0.7293 | 0.1538 | 0.7293 | 0.8540 |
| 0.0448 | 9.7657 | 2334 | 0.7311 | 0.1538 | 0.7311 | 0.8550 |
| 0.0448 | 9.7741 | 2336 | 0.7323 | 0.1538 | 0.7323 | 0.8557 |
| 0.0448 | 9.7824 | 2338 | 0.7335 | 0.1538 | 0.7335 | 0.8565 |
| 0.0448 | 9.7908 | 2340 | 0.7345 | 0.1538 | 0.7345 | 0.8570 |
| 0.0448 | 9.7992 | 2342 | 0.7351 | 0.1538 | 0.7351 | 0.8574 |
| 0.0448 | 9.8075 | 2344 | 0.7363 | 0.1538 | 0.7363 | 0.8581 |
| 0.0448 | 9.8159 | 2346 | 0.7350 | 0.1538 | 0.7350 | 0.8573 |
| 0.0448 | 9.8243 | 2348 | 0.7356 | 0.1538 | 0.7356 | 0.8577 |
| 0.0448 | 9.8326 | 2350 | 0.7350 | 0.1538 | 0.7350 | 0.8573 |
| 0.0448 | 9.8410 | 2352 | 0.7336 | 0.1538 | 0.7336 | 0.8565 |
| 0.0448 | 9.8494 | 2354 | 0.7313 | 0.1538 | 0.7313 | 0.8551 |
| 0.0448 | 9.8577 | 2356 | 0.7288 | 0.1538 | 0.7288 | 0.8537 |
| 0.0448 | 9.8661 | 2358 | 0.7271 | 0.1538 | 0.7271 | 0.8527 |
| 0.0448 | 9.8745 | 2360 | 0.7256 | 0.1538 | 0.7256 | 0.8518 |
| 0.0448 | 9.8828 | 2362 | 0.7245 | 0.1538 | 0.7245 | 0.8512 |
| 0.0448 | 9.8912 | 2364 | 0.7231 | 0.1538 | 0.7231 | 0.8504 |
| 0.0448 | 9.8996 | 2366 | 0.7215 | 0.1538 | 0.7215 | 0.8494 |
| 0.0448 | 9.9079 | 2368 | 0.7201 | 0.1538 | 0.7201 | 0.8486 |
| 0.0448 | 9.9163 | 2370 | 0.7188 | 0.1538 | 0.7188 | 0.8478 |
| 0.0448 | 9.9247 | 2372 | 0.7173 | 0.1538 | 0.7173 | 0.8469 |
| 0.0448 | 9.9331 | 2374 | 0.7156 | 0.1538 | 0.7156 | 0.8459 |
| 0.0448 | 9.9414 | 2376 | 0.7143 | 0.1538 | 0.7143 | 0.8452 |
| 0.0448 | 9.9498 | 2378 | 0.7132 | 0.1538 | 0.7132 | 0.8445 |
| 0.0448 | 9.9582 | 2380 | 0.7127 | 0.1538 | 0.7127 | 0.8442 |
| 0.0448 | 9.9665 | 2382 | 0.7124 | 0.1538 | 0.7124 | 0.8440 |
| 0.0448 | 9.9749 | 2384 | 0.7122 | 0.1538 | 0.7122 | 0.8439 |
| 0.0448 | 9.9833 | 2386 | 0.7120 | 0.1538 | 0.7120 | 0.8438 |
| 0.0448 | 9.9916 | 2388 | 0.7118 | 0.1538 | 0.7118 | 0.8437 |
| 0.0448 | 10.0 | 2390 | 0.7118 | 0.1538 | 0.7118 | 0.8437 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
smiling-pranjal/medical_summarization-finetuned-Medical-summary | smiling-pranjal | 2024-11-25T13:24:07Z | 119 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Falconsai/medical_summarization",
"base_model:finetune:Falconsai/medical_summarization",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-25T13:05:47Z | ---
library_name: transformers
license: apache-2.0
base_model: Falconsai/medical_summarization
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: medical_summarization-finetuned-Medical-summary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medical_summarization-finetuned-Medical-summary
This model is a fine-tuned version of [Falconsai/medical_summarization](https://huggingface.co/Falconsai/medical_summarization) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2124
- Rouge1: 22.5658
- Rouge2: 14.2244
- Rougel: 20.2774
- Rougelsum: 21.7581
- Gen Len: 19.0
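The fine-tuned checkpoint can be loaded as a standard `summarization` pipeline. A minimal usage sketch (the clinical note and generation lengths below are illustrative, not taken from the evaluation data):

```python
from transformers import pipeline

# Load the fine-tuned summarizer from the Hub
summarizer = pipeline(
    "summarization",
    model="smiling-pranjal/medical_summarization-finetuned-Medical-summary",
)

report = (
    "The patient presented with a three-day history of fever and productive cough. "
    "Chest X-ray showed right lower lobe consolidation consistent with pneumonia. "
    "Empiric antibiotics were started and the patient improved before discharge."
)

# The reported Gen Len of 19 suggests short summaries; max_length here is an assumption
summary = summarizer(report, max_length=40, min_length=5, do_sample=False)
print(summary[0]["summary_text"])
```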
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
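A minimal sketch of how these settings map onto `Seq2SeqTrainingArguments` (the output directory and `predict_with_generate` flag are assumptions, not recorded in the card):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="medical_summarization-finetuned-Medical-summary",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
    predict_with_generate=True,  # assumed, needed to compute ROUGE during evaluation
)
```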
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.3725 | 1.0 | 579 | 1.2124 | 22.5658 | 14.2244 | 20.2774 | 21.7581 | 19.0 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
ankit5319/chronos-t5-small-fine-tuned_Final_Ankit | ankit5319 | 2024-11-25T13:22:36Z | 174 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-25T13:22:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alakxender/dhivehi-bert-mlm | alakxender | 2024-11-25T13:21:57Z | 121 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-25T11:18:18Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
model-index:
- name: dhivehi-bert-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dhivehi-bert-mlm
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
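As a masked-language model, the checkpoint can be queried through the `fill-mask` pipeline. A minimal sketch (the Dhivehi input sentence is only a placeholder — any text containing the tokenizer's `[MASK]` token works):

```python
from transformers import pipeline

# Load the fine-tuned masked-language model
fill_mask = pipeline("fill-mask", model="alakxender/dhivehi-bert-mlm")

# Placeholder Dhivehi sentence with a single masked token
predictions = fill_mask("ދިވެހި [MASK] އެވެ.")

for p in predictions:
    print(p["token_str"], round(p["score"], 4))
```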
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
ankit5319/chronos-t5-small-fine-tuned_Final2 | ankit5319 | 2024-11-25T13:20:40Z | 174 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-25T13:20:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ElMad/omniscient-bass-483 | ElMad | 2024-11-25T13:13:42Z | 181 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T13:12:54Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-base
tags:
- generated_from_trainer
model-index:
- name: omniscient-bass-483
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# omniscient-bass-483
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3386
- Hamming Loss: 0.1123
- Zero One Loss: 1.0
- Jaccard Score: 1.0
- Hamming Loss Optimised: 0.1123
- Hamming Loss Threshold: 0.9000
- Zero One Loss Optimised: 1.0
- Zero One Loss Threshold: 0.9000
- Jaccard Score Optimised: 1.0
- Jaccard Score Threshold: 0.9000
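The "Threshold" values above are the probability cut-offs used to turn the model's per-label scores into binary predictions. A minimal sketch of how such a threshold is applied at inference time (the input text is illustrative, and the checkpoint is assumed to expose a multi-label classification head):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "ElMad/omniscient-bass-483"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("example text to classify", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label convention: an independent sigmoid per label, then a decision threshold
probs = torch.sigmoid(logits)
threshold = 0.9  # matches the threshold reported in the evaluation results above
predicted = (probs > threshold).int()
print(predicted)
```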
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0011128424281972827
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Hamming Loss | Zero One Loss | Jaccard Score | Hamming Loss Optimised | Hamming Loss Threshold | Zero One Loss Optimised | Zero One Loss Threshold | Jaccard Score Optimised | Jaccard Score Threshold |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:-------------:|:-------------:|:----------------------:|:----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|
| 0.3519 | 1.0 | 100 | 0.3445 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3415 | 2.0 | 200 | 0.3420 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3399 | 3.0 | 300 | 0.3427 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3381 | 4.0 | 400 | 0.3391 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3364 | 5.0 | 500 | 0.3414 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3364 | 6.0 | 600 | 0.3398 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3352 | 7.0 | 700 | 0.3421 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3344 | 8.0 | 800 | 0.3396 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3337 | 9.0 | 900 | 0.3386 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.5.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Triangle104/Meta-Llama-3.1-8B-Instruct-Q6_K-GGUF | Triangle104 | 2024-11-25T13:08:24Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T13:07:52Z | ---
language:
- en
library_name: transformers
license: llama3.1
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
---
# Triangle104/Meta-Llama-3.1-8B-Instruct-Q6_K-GGUF
This model was converted to GGUF format from [`unsloth/Meta-Llama-3.1-8B-Instruct`](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q6_K-GGUF --hf-file meta-llama-3.1-8b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q6_K-GGUF --hf-file meta-llama-3.1-8b-instruct-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q6_K-GGUF --hf-file meta-llama-3.1-8b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q6_K-GGUF --hf-file meta-llama-3.1-8b-instruct-q6_k.gguf -c 2048
```
|
ElMad/resilient-rook-798 | ElMad | 2024-11-25T13:08:07Z | 164 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-xsmall",
"base_model:finetune:microsoft/deberta-v3-xsmall",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T13:07:25Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-xsmall
tags:
- generated_from_trainer
model-index:
- name: resilient-rook-798
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resilient-rook-798
This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2372
- Hamming Loss: 0.0925
- Zero One Loss: 0.7925
- Jaccard Score: 0.79
- Hamming Loss Optimised: 0.0789
- Hamming Loss Threshold: 0.3524
- Zero One Loss Optimised: 0.5887
- Zero One Loss Threshold: 0.3038
- Jaccard Score Optimised: 0.5148
- Jaccard Score Threshold: 0.2378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.0943791435964314e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Hamming Loss | Zero One Loss | Jaccard Score | Hamming Loss Optimised | Hamming Loss Threshold | Zero One Loss Optimised | Zero One Loss Threshold | Jaccard Score Optimised | Jaccard Score Threshold |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:-------------:|:-------------:|:----------------------:|:----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|
| 0.4118 | 1.0 | 100 | 0.3355 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.308 | 2.0 | 200 | 0.2855 | 0.0938 | 0.8125 | 0.81 | 0.0929 | 0.3525 | 0.7488 | 0.1661 | 0.6086 | 0.1537 |
| 0.2668 | 3.0 | 300 | 0.2478 | 0.0925 | 0.7913 | 0.7888 | 0.0865 | 0.3723 | 0.64 | 0.2728 | 0.5209 | 0.1919 |
| 0.2417 | 4.0 | 400 | 0.2372 | 0.0925 | 0.7925 | 0.79 | 0.0789 | 0.3524 | 0.5887 | 0.3038 | 0.5148 | 0.2378 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.5.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.3
|
joey00072/exp-pkv-attention | joey00072 | 2024-11-25T13:04:29Z | 6 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2024-11-25T13:04:05Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
Triangle104/Meta-Llama-3.1-8B-Instruct-Q5_K_M-GGUF | Triangle104 | 2024-11-25T13:01:57Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T13:01:26Z | ---
language:
- en
library_name: transformers
license: llama3.1
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
---
# Triangle104/Meta-Llama-3.1-8B-Instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`unsloth/Meta-Llama-3.1-8B-Instruct`](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q5_K_M-GGUF --hf-file meta-llama-3.1-8b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q5_K_M-GGUF --hf-file meta-llama-3.1-8b-instruct-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q5_K_M-GGUF --hf-file meta-llama-3.1-8b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q5_K_M-GGUF --hf-file meta-llama-3.1-8b-instruct-q5_k_m.gguf -c 2048
```
|
ElMad/capricious-gnu-139 | ElMad | 2024-11-25T12:59:48Z | 164 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-xsmall",
"base_model:finetune:microsoft/deberta-v3-xsmall",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T12:58:59Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-xsmall
tags:
- generated_from_trainer
model-index:
- name: capricious-gnu-139
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# capricious-gnu-139
This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1831
- Hamming Loss: 0.0661
- Zero One Loss: 0.4537
- Jaccard Score: 0.4105
- Hamming Loss Optimised: 0.0659
- Hamming Loss Threshold: 0.6135
- Zero One Loss Optimised: 0.4087
- Zero One Loss Threshold: 0.4316
- Jaccard Score Optimised: 0.3479
- Jaccard Score Threshold: 0.3462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.0943791435964314e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Hamming Loss | Zero One Loss | Jaccard Score | Hamming Loss Optimised | Hamming Loss Threshold | Zero One Loss Optimised | Zero One Loss Threshold | Jaccard Score Optimised | Jaccard Score Threshold |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:-------------:|:-------------:|:----------------------:|:----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|
| 0.4159 | 1.0 | 100 | 0.3376 | 0.1123 | 1.0 | 1.0 | 0.1123 | 0.9000 | 1.0 | 0.9000 | 1.0 | 0.9000 |
| 0.3121 | 2.0 | 200 | 0.2841 | 0.0932 | 0.8113 | 0.8087 | 0.0931 | 0.4416 | 0.6963 | 0.1641 | 0.6101 | 0.1642 |
| 0.2602 | 3.0 | 300 | 0.2338 | 0.092 | 0.785 | 0.7819 | 0.0765 | 0.3980 | 0.6113 | 0.3139 | 0.5072 | 0.2086 |
| 0.2174 | 4.0 | 400 | 0.2063 | 0.0712 | 0.5975 | 0.5703 | 0.0698 | 0.4494 | 0.5363 | 0.3378 | 0.4363 | 0.2553 |
| 0.1896 | 5.0 | 500 | 0.1967 | 0.0694 | 0.5813 | 0.5551 | 0.0661 | 0.4552 | 0.4513 | 0.3622 | 0.3900 | 0.2346 |
| 0.1726 | 6.0 | 600 | 0.1910 | 0.07 | 0.4988 | 0.4614 | 0.0695 | 0.5944 | 0.4400 | 0.4036 | 0.3569 | 0.3149 |
| 0.1618 | 7.0 | 700 | 0.1861 | 0.0679 | 0.475 | 0.4339 | 0.0651 | 0.5430 | 0.4237 | 0.4130 | 0.3652 | 0.3483 |
| 0.1522 | 8.0 | 800 | 0.1845 | 0.0683 | 0.4712 | 0.4328 | 0.0663 | 0.5807 | 0.4337 | 0.4266 | 0.3585 | 0.3310 |
| 0.1484 | 9.0 | 900 | 0.1831 | 0.0661 | 0.4537 | 0.4105 | 0.0659 | 0.6135 | 0.4087 | 0.4316 | 0.3479 | 0.3462 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.5.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.3
|
LRPxxx/ViTFinetuned | LRPxxx | 2024-11-25T12:58:38Z | 164 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-11-25T12:58:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
stablediffusionapi/my-stablediffusion-lora-2855 | stablediffusionapi | 2024-11-25T12:57:27Z | 5 | 0 | diffusers | [
"diffusers",
"autotrain",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-11-12T10:55:34Z | ---
tags:
- autotrain
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: De Bruyne
license: openrail++
---
# ModelsLab LoRA DreamBooth Training - stablediffusionapi/my-stablediffusion-lora-2855
<Gallery />
## Model description
These are stablediffusionapi/my-stablediffusion-lora-2855 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [Modelslab](https://modelslab.com).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
!pip install -q transformers accelerate peft diffusers
from diffusers import DiffusionPipeline
import torch
pipe_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("stablediffusionapi/my-stablediffusion-lora-2855", weight_name="pytorch_lora_weights.safetensors", adapter_name="abc")
prompt = "abc of a hacker with a hoodie"
lora_scale = 0.9
image = pipe(
    prompt,
    num_inference_steps=30,
    cross_attention_kwargs={"scale": lora_scale},
    generator=torch.manual_seed(0)
).images[0]
image
```
## Trigger words
You should use De Bruyne to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](stablediffusionapi/my-stablediffusion-lora-2855/tree/main) them in the Files & versions tab. |
mlfoundations-dev/Meta-Llama-3.1-8B_alpaca_en_2.00E-05_2_32_1 | mlfoundations-dev | 2024-11-25T12:56:14Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T12:33:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
performanceoptician/Llama-3.1-Tulu-3-8B-IQ3_XXS-GGUF | performanceoptician | 2024-11-25T12:55:31Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:allenai/RLVR-GSM-MATH-IF-Mixed-Constraints",
"base_model:allenai/Llama-3.1-Tulu-3-8B",
"base_model:quantized:allenai/Llama-3.1-Tulu-3-8B",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2024-11-25T12:55:11Z | ---
license: llama3.1
language:
- en
pipeline_tag: text-generation
datasets:
- allenai/RLVR-GSM-MATH-IF-Mixed-Constraints
base_model: allenai/Llama-3.1-Tulu-3-8B
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# performanceoptician/Llama-3.1-Tulu-3-8B-IQ3_XXS-GGUF
This model was converted to GGUF format from [`allenai/Llama-3.1-Tulu-3-8B`](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo performanceoptician/Llama-3.1-Tulu-3-8B-IQ3_XXS-GGUF --hf-file llama-3.1-tulu-3-8b-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo performanceoptician/Llama-3.1-Tulu-3-8B-IQ3_XXS-GGUF --hf-file llama-3.1-tulu-3-8b-iq3_xxs-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo performanceoptician/Llama-3.1-Tulu-3-8B-IQ3_XXS-GGUF --hf-file llama-3.1-tulu-3-8b-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo performanceoptician/Llama-3.1-Tulu-3-8B-IQ3_XXS-GGUF --hf-file llama-3.1-tulu-3-8b-iq3_xxs-imat.gguf -c 2048
```
|
altndrr/cased | altndrr | 2024-11-25T12:53:55Z | 125 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"cased",
"feature-extraction",
"vision",
"image-classification",
"custom_code",
"arxiv:2306.00917",
"region:us"
] | image-classification | 2023-06-07T08:47:15Z | ---
pipeline_tag: image-classification
tags:
- vision
inference: false
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
example_title: Cat & Dog
---
# Category Search from External Databases (CaSED)
Disclaimer: The model card is taken and modified from the official repository, which can be found [here](https://github.com/altndrr/vic). The paper can be found [here](https://arxiv.org/abs/2306.00917).
## Intended uses & limitations
You can use the model for vocabulary-free image classification, i.e. classification with CLIP-like models without a pre-defined list of class names.
## How to use
Here is how to use this model:
```python
import requests
from PIL import Image
from transformers import AutoModel, CLIPProcessor
# download an image from the internet
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# load the model and the processor
model = AutoModel.from_pretrained("altndrr/cased", trust_remote_code=True)
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
# get the model outputs
images = processor(images=[image], return_tensors="pt", padding=True)
outputs = model(images, alpha=0.7)
labels, scores = outputs["vocabularies"][0], outputs["scores"][0]
# print the top 3 most likely labels for the image
values, indices = scores.topk(3)
print("\nTop predictions:\n")
for value, index in zip(values, indices):
print(f"{labels[index]:>16s}: {100 * value.item():.2f}%")
```
The model depends on some libraries you have to install manually before execution:
```bash
pip install torch faiss-cpu flair inflect nltk pyarrow transformers
```
## Citation
```latex
@article{conti2023vocabularyfree,
title={Vocabulary-free Image Classification},
author={Alessandro Conti and Enrico Fini and Massimiliano Mancini and Paolo Rota and Yiming Wang and Elisa Ricci},
year={2023},
journal={NeurIPS},
}
```
|
Dynosaur/llama3-8b-math-sft-mix-8-1 | Dynosaur | 2024-11-25T12:46:33Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:hexuan21/math-sft-mix-8-1",
"base_model:Dynosaur/llama3-8b-math-sft",
"base_model:finetune:Dynosaur/llama3-8b-math-sft",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T09:20:52Z | ---
library_name: transformers
license: llama3
base_model: Dynosaur/llama3-8b-math-sft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- hexuan21/math-sft-mix-8-1
model-index:
- name: llama3-8b-math-sft-mix-8-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-math-sft-mix-8-1
This model is a fine-tuned version of [Dynosaur/llama3-8b-math-sft](https://huggingface.co/Dynosaur/llama3-8b-math-sft) on the hexuan21/math-sft-mix-8-1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
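
Expressed as a `transformers.TrainingArguments` object, these settings would look roughly as follows (a hedged sketch; the output directory and any option not listed above are assumptions):

```python
# Hedged sketch: the hyperparameters listed above mapped onto TrainingArguments.
# output_dir is an assumption; unlisted options keep their defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama3-8b-math-sft-mix-8-1",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # 2 per device x 4 GPUs x 8 steps = 64 effective batch
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=2,
)
```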
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Triangle104/Meta-Llama-3.1-8B-Instruct-Q4_K_M-GGUF | Triangle104 | 2024-11-25T12:46:30Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T12:46:03Z | ---
language:
- en
library_name: transformers
license: llama3.1
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
---
# Triangle104/Meta-Llama-3.1-8B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`unsloth/Meta-Llama-3.1-8B-Instruct`](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q4_K_M-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q4_K_M-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q4_K_M-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q4_K_M-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_k_m.gguf -c 2048
```
|
oxorudo/whisper_ssokssokword | oxorudo | 2024-11-25T12:45:21Z | 13 | 1 | null | [
"safetensors",
"whisper",
"automatic-speech-recognition",
"ko",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2024-11-22T00:59:11Z | ---
license: apache-2.0
language:
- ko
base_model:
- openai/whisper-small
pipeline_tag: automatic-speech-recognition
--- |
mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF | mradermacher | 2024-11-25T12:43:22Z | 110 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"merge",
"mergekit",
"lazymergekit",
"NousResearch/Hermes-2-Pro-Mistral-7B",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"en",
"base_model:Isotonic/Hermes-2-Pro-Mixtral-4x7B",
"base_model:quantized:Isotonic/Hermes-2-Pro-Mixtral-4x7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-25T03:59:48Z | ---
base_model: Isotonic/Hermes-2-Pro-Mixtral-4x7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- merge
- mergekit
- lazymergekit
- NousResearch/Hermes-2-Pro-Mistral-7B
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Isotonic/Hermes-2-Pro-Mixtral-4x7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
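
As a minimal, hedged example (the file name is taken from the Provided Quants table below, and the flags follow the usual llama.cpp CLI usage), one of the imatrix quants can be run directly:

```bash
# Hedged sketch: fetch and run one of the imatrix quants below with llama.cpp.
llama-cli --hf-repo mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF \
  --hf-file Hermes-2-Pro-Mixtral-4x7B.i1-Q4_K_M.gguf \
  -p "Explain mixture-of-experts models in two sentences." -c 2048
```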
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-Q2_K.gguf) | i1-Q2_K | 8.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-IQ3_S.gguf) | i1-IQ3_S | 10.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-IQ3_M.gguf) | i1-IQ3_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-Q4_0.gguf) | i1-Q4_0 | 13.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.2 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-2-Pro-Mixtral-4x7B-i1-GGUF/resolve/main/Hermes-2-Pro-Mixtral-4x7B.i1-Q6_K.gguf) | i1-Q6_K | 19.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Triangle104/Meta-Llama-3.1-8B-Instruct-Q4_K_S-GGUF | Triangle104 | 2024-11-25T12:39:20Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T12:38:59Z | ---
language:
- en
library_name: transformers
license: llama3.1
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
---
# Triangle104/Meta-Llama-3.1-8B-Instruct-Q4_K_S-GGUF
This model was converted to GGUF format from [`unsloth/Meta-Llama-3.1-8B-Instruct`](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q4_K_S-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q4_K_S-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q4_K_S-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Meta-Llama-3.1-8B-Instruct-Q4_K_S-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_k_s.gguf -c 2048
```
|
leinad-deinor/Qwen2.5-7B-redeIT-XML-GGUF | leinad-deinor | 2024-11-25T12:34:08Z | 32 | 1 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T10:28:56Z | ---
base_model: unsloth/qwen2.5-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** leinad-deinor
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MoritzLaurer/parler-tts-large-v1 | MoritzLaurer | 2024-11-25T12:32:40Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"text-to-speech",
"annotation",
"en",
"dataset:parler-tts/mls_eng",
"dataset:parler-tts/libritts_r_filtered",
"dataset:parler-tts/libritts-r-filtered-speaker-descriptions",
"dataset:parler-tts/mls-eng-speaker-descriptions",
"arxiv:2402.01912",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2024-11-22T14:46:07Z | ---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
pipeline_tag: text-to-speech
inference: false
datasets:
- parler-tts/mls_eng
- parler-tts/libritts_r_filtered
- parler-tts/libritts-r-filtered-speaker-descriptions
- parler-tts/mls-eng-speaker-descriptions
---
<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Parler-TTS Large v1
<a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
**Parler-TTS Large v1** is a 2.2B-parameter text-to-speech (TTS) model, trained on 45K hours of audio data, that can generate high-quality, natural-sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).
Together with [Parler-TTS Mini v1](https://huggingface.co/parler-tts/parler-tts-mini-v1), it forms the second set of models published as part of the [Parler-TTS](https://github.com/huggingface/parler-tts) project, which aims to provide the community with TTS training resources and dataset pre-processing code.
## 📖 Quick Index
* [👨💻 Installation](#👨💻-installation)
* [🎲 Using a random voice](#🎲-random-voice)
* [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)
## 🛠️ Usage
### 👨💻 Installation
Using Parler-TTS is as simple as "bonjour". Simply install the library once:
```sh
pip install git+https://github.com/huggingface/parler-tts.git
```
### 🎲 Random voice
**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-large-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-large-v1")
prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
### 🎯 Using a specific speaker
To ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name (e.g. Jon, Lea, Gary, Jenna, Mike, Laura).
To take advantage of this, simply adapt your text description to specify which speaker to use: `Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-large-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-large-v1")
prompt = "Hey, how are you doing today?"
description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming!
* Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt
## Motivation
Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
Unlike other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.
Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.
## Citation
If you found this repository useful, please consider citing this work and also the original Stability AI paper:
```
@misc{lacombe-etal-2024-parler-tts,
author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
title = {Parler-TTS},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```
```
@misc{lyth2024natural,
title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
author={Dan Lyth and Simon King},
year={2024},
eprint={2402.01912},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
## License
This model is permissively licensed under the Apache 2.0 license. |
nell123/flan_t5_base-lora_wind_energy-v3 | nell123 | 2024-11-25T12:30:11Z | 113 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-25T12:29:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/MT4-Gen2-GBMAMU-gemma-2-9B-GGUF | mradermacher | 2024-11-25T12:24:09Z | 54 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:zelk12/MT4-Gen2-GBMAMU-gemma-2-9B",
"base_model:quantized:zelk12/MT4-Gen2-GBMAMU-gemma-2-9B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T09:33:56Z | ---
base_model: zelk12/MT4-Gen2-GBMAMU-gemma-2-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/zelk12/MT4-Gen2-GBMAMU-gemma-2-9B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
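
As a minimal, hedged example, you can also download a single quant from the table below and run it locally (this assumes `huggingface_hub` is installed and llama.cpp is already built):

```bash
# Hedged sketch: fetch one quant file from this repo, then run it with llama.cpp.
huggingface-cli download mradermacher/MT4-Gen2-GBMAMU-gemma-2-9B-GGUF \
  MT4-Gen2-GBMAMU-gemma-2-9B.Q4_K_M.gguf --local-dir .
llama-cli -m MT4-Gen2-GBMAMU-gemma-2-9B.Q4_K_M.gguf -p "Hello" -c 2048
```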
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MT4-Gen2-GBMAMU-gemma-2-9B-GGUF/resolve/main/MT4-Gen2-GBMAMU-gemma-2-9B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MT4-Gen2-GBMAMU-gemma-2-9B-GGUF/resolve/main/MT4-Gen2-GBMAMU-gemma-2-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MT4-Gen2-GBMAMU-gemma-2-9B-GGUF/resolve/main/MT4-Gen2-GBMAMU-gemma-2-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MT4-Gen2-GBMAMU-gemma-2-9B-GGUF/resolve/main/MT4-Gen2-GBMAMU-gemma-2-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT4-Gen2-GBMAMU-gemma-2-9B-GGUF/resolve/main/MT4-Gen2-GBMAMU-gemma-2-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT4-Gen2-GBMAMU-gemma-2-9B-GGUF/resolve/main/MT4-Gen2-GBMAMU-gemma-2-9B.Q4_0_4_4.gguf) | Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/MT4-Gen2-GBMAMU-gemma-2-9B-GGUF/resolve/main/MT4-Gen2-GBMAMU-gemma-2-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT4-Gen2-GBMAMU-gemma-2-9B-GGUF/resolve/main/MT4-Gen2-GBMAMU-gemma-2-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT4-Gen2-GBMAMU-gemma-2-9B-GGUF/resolve/main/MT4-Gen2-GBMAMU-gemma-2-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT4-Gen2-GBMAMU-gemma-2-9B-GGUF/resolve/main/MT4-Gen2-GBMAMU-gemma-2-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MT4-Gen2-GBMAMU-gemma-2-9B-GGUF/resolve/main/MT4-Gen2-GBMAMU-gemma-2-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MT4-Gen2-GBMAMU-gemma-2-9B-GGUF/resolve/main/MT4-Gen2-GBMAMU-gemma-2-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MT4-Gen2-GBMAMU-gemma-2-9B-GGUF/resolve/main/MT4-Gen2-GBMAMU-gemma-2-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf | Supa-AI | 2024-11-25T12:22:54Z | 71 | 1 | null | [
"gguf",
"llama-cpp",
"fr",
"it",
"de",
"es",
"en",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:quantized:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T09:53:40Z | ---
language:
- fr
- it
- de
- es
- en
license: apache-2.0
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
inference:
parameters:
temperature: 0.5
widget:
- messages:
- role: user
content: What is your favorite condiment?
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- llama-cpp
- gguf
---
# Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf
This model was converted to GGUF format from [`mistralai/Mixtral-8x7B-Instruct-v0.1`](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) using llama.cpp.
Refer to the [original model card](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) for more details on the model.
## Available Versions
- `Mixtral-8x7B-Instruct-v0.1.q4_0.gguf` (q4_0)
- `Mixtral-8x7B-Instruct-v0.1.q4_1.gguf` (q4_1)
- `Mixtral-8x7B-Instruct-v0.1.q5_0.gguf` (q5_0)
- `Mixtral-8x7B-Instruct-v0.1.q5_1.gguf` (q5_1)
- `Mixtral-8x7B-Instruct-v0.1.q8_0.gguf` (q8_0)
- `Mixtral-8x7B-Instruct-v0.1.q3_k_s.gguf` (q3_K_S)
- `Mixtral-8x7B-Instruct-v0.1.q3_k_m.gguf` (q3_K_M)
- `Mixtral-8x7B-Instruct-v0.1.q3_k_l.gguf` (q3_K_L)
- `Mixtral-8x7B-Instruct-v0.1.q4_k_s.gguf` (q4_K_S)
- `Mixtral-8x7B-Instruct-v0.1.q4_k_m.gguf` (q4_K_M)
- `Mixtral-8x7B-Instruct-v0.1.q5_k_s.gguf` (q5_K_S)
- `Mixtral-8x7B-Instruct-v0.1.q5_k_m.gguf` (q5_K_M)
- `Mixtral-8x7B-Instruct-v0.1.q6_k.gguf` (q6_K)
## Use with llama.cpp
Replace `FILENAME` with one of the above filenames.
### CLI:
```bash
llama-cli --hf-repo Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf --hf-file FILENAME -p "Your prompt here"
```
### Server:
```bash
llama-server --hf-repo Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf --hf-file FILENAME -c 2048
```
## Model Details
- **Original Model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- **Format:** GGUF
|
mradermacher/Deita-20b-GGUF | mradermacher | 2024-11-25T12:21:05Z | 21 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:KnutJaegersberg/Deita-6k",
"base_model:KnutJaegersberg/Deita-20b",
"base_model:quantized:KnutJaegersberg/Deita-20b",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-11-25T01:37:38Z | ---
base_model: KnutJaegersberg/Deita-20b
datasets:
- KnutJaegersberg/Deita-6k
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: internlm
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/KnutJaegersberg/Deita-20b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Deita-20b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
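
As a minimal, hedged example (the file name comes from the Provided Quants table below), one of the quants can be served with llama.cpp's built-in HTTP server:

```bash
# Hedged sketch: serve a quant from this repo; -c sets the context length.
llama-server --hf-repo mradermacher/Deita-20b-GGUF \
  --hf-file Deita-20b.Q4_K_M.gguf -c 2048
```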
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Deita-20b-GGUF/resolve/main/Deita-20b.Q2_K.gguf) | Q2_K | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Deita-20b-GGUF/resolve/main/Deita-20b.Q3_K_S.gguf) | Q3_K_S | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/Deita-20b-GGUF/resolve/main/Deita-20b.Q3_K_M.gguf) | Q3_K_M | 9.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Deita-20b-GGUF/resolve/main/Deita-20b.Q3_K_L.gguf) | Q3_K_L | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Deita-20b-GGUF/resolve/main/Deita-20b.IQ4_XS.gguf) | IQ4_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/Deita-20b-GGUF/resolve/main/Deita-20b.Q4_K_S.gguf) | Q4_K_S | 11.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deita-20b-GGUF/resolve/main/Deita-20b.Q4_K_M.gguf) | Q4_K_M | 12.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deita-20b-GGUF/resolve/main/Deita-20b.Q5_K_S.gguf) | Q5_K_S | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Deita-20b-GGUF/resolve/main/Deita-20b.Q5_K_M.gguf) | Q5_K_M | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/Deita-20b-GGUF/resolve/main/Deita-20b.Q6_K.gguf) | Q6_K | 16.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Deita-20b-GGUF/resolve/main/Deita-20b.Q8_0.gguf) | Q8_0 | 21.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
QuantFactory/Llama-SmolTalk-3.2-1B-Instruct-GGUF | QuantFactory | 2024-11-25T12:20:26Z | 152 | 2 | transformers | [
"transformers",
"gguf",
"Llama",
"Llama-CPP",
"SmolTalk",
"ollama",
"bin",
"text-generation",
"en",
"dataset:HuggingFaceTB/smoltalk",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-1B-Instruct",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-25T12:12:05Z |
---
license: creativeml-openrail-m
datasets:
- HuggingFaceTB/smoltalk
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- Llama
- Llama-CPP
- SmolTalk
- ollama
- bin
---
[](https://hf.co/QuantFactory)
# QuantFactory/Llama-SmolTalk-3.2-1B-Instruct-GGUF
This is a quantized version of [prithivMLmods/Llama-SmolTalk-3.2-1B-Instruct](https://huggingface.co/prithivMLmods/Llama-SmolTalk-3.2-1B-Instruct), created using llama.cpp.
# Original Model Card
## Updated Files for Model Uploads 🤗
| File Name [ Updated Files ] | Size | Description | Upload Status |
|----------------------------|-----------|--------------------------------------------|----------------|
| `.gitattributes` | 1.57 kB | Git attributes configuration file | Uploaded |
| `README.md` | 42 Bytes | Initial README | Uploaded |
| `config.json` | 1.03 kB | Configuration file | Uploaded |
| `generation_config.json` | 248 Bytes | Configuration for text generation | Uploaded |
| `pytorch_model.bin` | 2.47 GB | PyTorch model weights | Uploaded (LFS) |
| `special_tokens_map.json` | 477 Bytes | Special token mappings | Uploaded |
| `tokenizer.json` | 17.2 MB | Tokenizer configuration | Uploaded (LFS) |
| `tokenizer_config.json` | 57.4 kB | Additional tokenizer settings | Uploaded |
| Model Type | Size | Context Length | Link |
|------------|------|----------------|------|
| GGUF | 1B | - | [🤗 Llama-SmolTalk-3.2-1B-Instruct-GGUF](https://huggingface.co/prithivMLmods/Llama-SmolTalk-3.2-1B-Instruct-GGUF) |
The **Llama-SmolTalk-3.2-1B-Instruct** model is a lightweight, instruction-tuned model designed for efficient text generation and conversational AI tasks. With a 1B parameter architecture, this model strikes a balance between performance and resource efficiency, making it ideal for applications requiring concise, contextually relevant outputs. The model has been fine-tuned to deliver robust instruction-following capabilities, catering to both structured and open-ended queries.
### Key Features:
1. **Instruction-Tuned Performance**: Optimized to understand and execute user-provided instructions across diverse domains.
2. **Lightweight Architecture**: With just 1 billion parameters, the model provides efficient computation and storage without compromising output quality.
3. **Versatile Use Cases**: Suitable for tasks like content generation, conversational interfaces, and basic problem-solving.
### Intended Applications:
- **Conversational AI**: Engage users with dynamic and contextually aware dialogue.
- **Content Generation**: Produce summaries, explanations, or other creative text outputs efficiently.
- **Instruction Execution**: Follow user commands to generate precise and relevant responses.
### Technical Details:
The model leverages PyTorch for training and inference, with a tokenizer optimized for seamless text input processing. It comes with essential configuration files, including `config.json`, `generation_config.json`, and tokenization files (`tokenizer.json` and `special_tokens_map.json`). The primary weights are stored in a PyTorch binary format (`pytorch_model.bin`), ensuring easy integration with existing workflows.
**Model Type**: GGUF
**Size**: 1B Parameters
The **Llama-SmolTalk-3.2-1B-Instruct** model is an excellent choice for lightweight text generation tasks, offering a blend of efficiency and effectiveness for a wide range of applications.
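
As a hedged usage sketch, the original (non-GGUF) checkpoint named above can be loaded with `transformers`; the repo id comes from this card, while the prompt and generation settings are illustrative assumptions:

```python
# Hedged sketch: load the original checkpoint with transformers and run one chat turn.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Llama-SmolTalk-3.2-1B-Instruct"  # from the card above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "In two sentences, what is instruction tuning?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```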
|
ysn-rfd/Marco-o1-Q8_0-GGUF | ysn-rfd | 2024-11-25T12:18:24Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:AIDC-AI/Marco-o1",
"base_model:quantized:AIDC-AI/Marco-o1",
"license:apache-2.0",
"region:us",
"conversational"
] | null | 2024-11-25T12:17:49Z | ---
license: apache-2.0
library_name: transformers
inference: false
base_model: AIDC-AI/Marco-o1
tags:
- llama-cpp
- gguf-my-repo
---
# ysn-rfd/Marco-o1-Q8_0-GGUF
This model was converted to GGUF format from [`AIDC-AI/Marco-o1`](https://huggingface.co/AIDC-AI/Marco-o1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/AIDC-AI/Marco-o1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ysn-rfd/Marco-o1-Q8_0-GGUF --hf-file marco-o1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ysn-rfd/Marco-o1-Q8_0-GGUF --hf-file marco-o1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ysn-rfd/Marco-o1-Q8_0-GGUF --hf-file marco-o1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ysn-rfd/Marco-o1-Q8_0-GGUF --hf-file marco-o1-q8_0.gguf -c 2048
```
|
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k35_task3_organization_fold0 | MayBashendy | 2024-11-25T12:11:23Z | 164 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T11:55:56Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k35_task3_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k35_task3_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0226
- Qwk: 0.0530
- Mse: 1.0226
- Rmse: 1.0112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
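
For reference, a hedged sketch of the same settings as a `transformers.TrainingArguments` object (the output directory is an assumption; unlisted options keep their defaults):

```python
# Hedged sketch: the hyperparameters listed above mapped onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Arabic_FineTuningAraBERT_AugV5_k35_task3_organization_fold0",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```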
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0119 | 2 | 4.5620 | -0.0072 | 4.5620 | 2.1359 |
| No log | 0.0238 | 4 | 2.5147 | -0.0722 | 2.5147 | 1.5858 |
| No log | 0.0357 | 6 | 1.4713 | 0.0833 | 1.4713 | 1.2130 |
| No log | 0.0476 | 8 | 2.6917 | 0.1538 | 2.6917 | 1.6406 |
| No log | 0.0595 | 10 | 2.1062 | 0.0873 | 2.1062 | 1.4513 |
| No log | 0.0714 | 12 | 1.0402 | 0.1951 | 1.0402 | 1.0199 |
| No log | 0.0833 | 14 | 0.9506 | 0.0320 | 0.9506 | 0.9750 |
| No log | 0.0952 | 16 | 1.4038 | 0.0873 | 1.4038 | 1.1848 |
| No log | 0.1071 | 18 | 1.8418 | 0.0 | 1.8418 | 1.3571 |
| No log | 0.1190 | 20 | 1.8371 | 0.0 | 1.8371 | 1.3554 |
| No log | 0.1310 | 22 | 1.5795 | 0.0 | 1.5795 | 1.2568 |
| No log | 0.1429 | 24 | 1.3107 | -0.2737 | 1.3107 | 1.1448 |
| No log | 0.1548 | 26 | 1.5187 | 0.0 | 1.5187 | 1.2324 |
| No log | 0.1667 | 28 | 1.7908 | 0.0 | 1.7908 | 1.3382 |
| No log | 0.1786 | 30 | 1.7169 | 0.0 | 1.7169 | 1.3103 |
| No log | 0.1905 | 32 | 1.4009 | 0.0 | 1.4009 | 1.1836 |
| No log | 0.2024 | 34 | 1.2950 | 0.0 | 1.2950 | 1.1380 |
| No log | 0.2143 | 36 | 1.3267 | 0.0 | 1.3267 | 1.1518 |
| No log | 0.2262 | 38 | 1.3928 | 0.0 | 1.3928 | 1.1801 |
| No log | 0.2381 | 40 | 1.2427 | 0.0 | 1.2427 | 1.1148 |
| No log | 0.25 | 42 | 1.3105 | 0.0 | 1.3105 | 1.1448 |
| No log | 0.2619 | 44 | 1.2995 | 0.0 | 1.2995 | 1.1400 |
| No log | 0.2738 | 46 | 1.3027 | 0.0 | 1.3027 | 1.1414 |
| No log | 0.2857 | 48 | 1.3158 | 0.0 | 1.3158 | 1.1471 |
| No log | 0.2976 | 50 | 1.4336 | 0.0 | 1.4336 | 1.1973 |
| No log | 0.3095 | 52 | 1.3243 | 0.0 | 1.3243 | 1.1508 |
| No log | 0.3214 | 54 | 1.2776 | 0.0 | 1.2776 | 1.1303 |
| No log | 0.3333 | 56 | 1.2636 | 0.0 | 1.2636 | 1.1241 |
| No log | 0.3452 | 58 | 1.2821 | 0.0 | 1.2821 | 1.1323 |
| No log | 0.3571 | 60 | 1.2370 | -0.0087 | 1.2370 | 1.1122 |
| No log | 0.3690 | 62 | 1.1611 | -0.0565 | 1.1611 | 1.0775 |
| No log | 0.3810 | 64 | 1.3263 | -0.0087 | 1.3263 | 1.1517 |
| No log | 0.3929 | 66 | 1.7754 | 0.0 | 1.7754 | 1.3324 |
| No log | 0.4048 | 68 | 1.9093 | 0.0 | 1.9093 | 1.3818 |
| No log | 0.4167 | 70 | 1.7473 | 0.0 | 1.7473 | 1.3218 |
| No log | 0.4286 | 72 | 1.3899 | 0.0 | 1.3899 | 1.1789 |
| No log | 0.4405 | 74 | 1.1058 | 0.0435 | 1.1058 | 1.0516 |
| No log | 0.4524 | 76 | 1.1100 | 0.0435 | 1.1100 | 1.0536 |
| No log | 0.4643 | 78 | 1.3005 | 0.0 | 1.3005 | 1.1404 |
| No log | 0.4762 | 80 | 1.6752 | 0.0 | 1.6752 | 1.2943 |
| No log | 0.4881 | 82 | 2.4091 | 0.0 | 2.4091 | 1.5521 |
| No log | 0.5 | 84 | 2.8731 | -0.0577 | 2.8731 | 1.6950 |
| No log | 0.5119 | 86 | 2.6890 | 0.0 | 2.6890 | 1.6398 |
| No log | 0.5238 | 88 | 2.1705 | 0.0 | 2.1705 | 1.4733 |
| No log | 0.5357 | 90 | 1.6135 | 0.0 | 1.6135 | 1.2702 |
| No log | 0.5476 | 92 | 1.1435 | 0.2870 | 1.1435 | 1.0693 |
| No log | 0.5595 | 94 | 0.8465 | 0.0 | 0.8465 | 0.9201 |
| No log | 0.5714 | 96 | 0.7466 | 0.0 | 0.7466 | 0.8640 |
| No log | 0.5833 | 98 | 0.7441 | 0.0 | 0.7441 | 0.8626 |
| No log | 0.5952 | 100 | 0.8203 | 0.0 | 0.8203 | 0.9057 |
| No log | 0.6071 | 102 | 0.9276 | 0.2080 | 0.9276 | 0.9631 |
| No log | 0.6190 | 104 | 1.2300 | 0.2870 | 1.2300 | 1.1090 |
| No log | 0.6310 | 106 | 1.6949 | 0.0 | 1.6949 | 1.3019 |
| No log | 0.6429 | 108 | 1.9700 | 0.0 | 1.9700 | 1.4036 |
| No log | 0.6548 | 110 | 1.9892 | 0.0 | 1.9892 | 1.4104 |
| No log | 0.6667 | 112 | 1.8091 | 0.0 | 1.8091 | 1.3450 |
| No log | 0.6786 | 114 | 1.5534 | 0.0 | 1.5534 | 1.2463 |
| No log | 0.6905 | 116 | 1.3429 | 0.0 | 1.3429 | 1.1588 |
| No log | 0.7024 | 118 | 1.1841 | 0.0737 | 1.1841 | 1.0882 |
| No log | 0.7143 | 120 | 1.0953 | 0.1951 | 1.0953 | 1.0466 |
| No log | 0.7262 | 122 | 1.0435 | 0.2029 | 1.0435 | 1.0215 |
| No log | 0.7381 | 124 | 1.0912 | 0.2029 | 1.0912 | 1.0446 |
| No log | 0.75 | 126 | 1.2965 | -0.0296 | 1.2965 | 1.1386 |
| No log | 0.7619 | 128 | 1.8599 | 0.0 | 1.8599 | 1.3638 |
| No log | 0.7738 | 130 | 1.6298 | 0.0 | 1.6298 | 1.2766 |
| No log | 0.7857 | 132 | 1.2573 | -0.0296 | 1.2573 | 1.1213 |
| No log | 0.7976 | 134 | 1.1067 | 0.1987 | 1.1067 | 1.0520 |
| No log | 0.8095 | 136 | 0.9643 | -0.1786 | 0.9643 | 0.9820 |
| No log | 0.8214 | 138 | 0.9082 | -0.1786 | 0.9082 | 0.9530 |
| No log | 0.8333 | 140 | 0.9729 | -0.3200 | 0.9729 | 0.9864 |
| No log | 0.8452 | 142 | 1.1375 | 0.0320 | 1.1375 | 1.0665 |
| No log | 0.8571 | 144 | 1.4960 | 0.0756 | 1.4960 | 1.2231 |
| No log | 0.8690 | 146 | 1.7672 | 0.0803 | 1.7672 | 1.3294 |
| No log | 0.8810 | 148 | 1.4594 | 0.1895 | 1.4594 | 1.2081 |
| No log | 0.8929 | 150 | 1.0002 | 0.0530 | 1.0002 | 1.0001 |
| No log | 0.9048 | 152 | 0.9162 | -0.1786 | 0.9162 | 0.9572 |
| No log | 0.9167 | 154 | 0.8038 | 0.0 | 0.8038 | 0.8966 |
| No log | 0.9286 | 156 | 0.8785 | 0.0320 | 0.8785 | 0.9373 |
| No log | 0.9405 | 158 | 1.2107 | 0.0788 | 1.2107 | 1.1003 |
| No log | 0.9524 | 160 | 1.7492 | 0.0 | 1.7492 | 1.3226 |
| No log | 0.9643 | 162 | 1.9478 | 0.0 | 1.9478 | 1.3956 |
| No log | 0.9762 | 164 | 1.7945 | 0.0 | 1.7945 | 1.3396 |
| No log | 0.9881 | 166 | 1.6541 | 0.0 | 1.6541 | 1.2861 |
| No log | 1.0 | 168 | 1.4256 | 0.0 | 1.4256 | 1.1940 |
| No log | 1.0119 | 170 | 1.1091 | 0.0435 | 1.1091 | 1.0531 |
| No log | 1.0238 | 172 | 0.9828 | 0.0 | 0.9828 | 0.9914 |
| No log | 1.0357 | 174 | 0.9799 | 0.2143 | 0.9799 | 0.9899 |
| No log | 1.0476 | 176 | 1.2849 | -0.0565 | 1.2849 | 1.1335 |
| No log | 1.0595 | 178 | 1.5187 | 0.0 | 1.5187 | 1.2324 |
| No log | 1.0714 | 180 | 1.3555 | 0.0873 | 1.3555 | 1.1642 |
| No log | 1.0833 | 182 | 0.8011 | 0.2143 | 0.8011 | 0.8950 |
| No log | 1.0952 | 184 | 0.7216 | 0.0 | 0.7216 | 0.8495 |
| No log | 1.1071 | 186 | 0.7079 | 0.0 | 0.7079 | 0.8414 |
| No log | 1.1190 | 188 | 0.8132 | 0.384 | 0.8132 | 0.9018 |
| No log | 1.1310 | 190 | 1.2196 | 0.0678 | 1.2196 | 1.1044 |
| No log | 1.1429 | 192 | 1.0997 | 0.0678 | 1.0997 | 1.0487 |
| No log | 1.1548 | 194 | 0.8468 | 0.384 | 0.8468 | 0.9202 |
| No log | 1.1667 | 196 | 0.9008 | 0.0530 | 0.9008 | 0.9491 |
| No log | 1.1786 | 198 | 1.0917 | 0.0678 | 1.0917 | 1.0448 |
| No log | 1.1905 | 200 | 0.9803 | 0.0678 | 0.9803 | 0.9901 |
| No log | 1.2024 | 202 | 0.7149 | 0.1818 | 0.7149 | 0.8455 |
| No log | 1.2143 | 204 | 0.7027 | 0.1538 | 0.7027 | 0.8382 |
| No log | 1.2262 | 206 | 0.7107 | 0.3433 | 0.7107 | 0.8430 |
| No log | 1.2381 | 208 | 0.6775 | 0.4615 | 0.6775 | 0.8231 |
| No log | 1.25 | 210 | 0.6688 | 0.4615 | 0.6688 | 0.8178 |
| No log | 1.2619 | 212 | 0.6530 | 0.2878 | 0.6530 | 0.8081 |
| No log | 1.2738 | 214 | 0.6701 | 0.4211 | 0.6701 | 0.8186 |
| No log | 1.2857 | 216 | 0.6545 | 0.3694 | 0.6545 | 0.8090 |
| No log | 1.2976 | 218 | 0.6875 | 0.3694 | 0.6875 | 0.8291 |
| No log | 1.3095 | 220 | 0.6553 | 0.3694 | 0.6553 | 0.8095 |
| No log | 1.3214 | 222 | 0.6667 | 0.1316 | 0.6667 | 0.8165 |
| No log | 1.3333 | 224 | 0.8204 | 0.3444 | 0.8204 | 0.9057 |
| No log | 1.3452 | 226 | 0.6896 | 0.3623 | 0.6896 | 0.8304 |
| No log | 1.3571 | 228 | 0.6541 | 0.1538 | 0.6541 | 0.8088 |
| No log | 1.3690 | 230 | 0.6550 | 0.4211 | 0.6550 | 0.8093 |
| No log | 1.3810 | 232 | 0.7035 | 0.3433 | 0.7035 | 0.8388 |
| No log | 1.3929 | 234 | 0.7044 | 0.3433 | 0.7044 | 0.8393 |
| No log | 1.4048 | 236 | 0.7292 | -0.0154 | 0.7292 | 0.8539 |
| No log | 1.4167 | 238 | 0.7185 | 0.1818 | 0.7185 | 0.8476 |
| No log | 1.4286 | 240 | 0.7214 | 0.1818 | 0.7214 | 0.8494 |
| No log | 1.4405 | 242 | 0.7136 | 0.1818 | 0.7136 | 0.8447 |
| No log | 1.4524 | 244 | 0.7677 | -0.2222 | 0.7677 | 0.8762 |
| No log | 1.4643 | 246 | 0.7540 | -0.0154 | 0.7540 | 0.8683 |
| No log | 1.4762 | 248 | 0.6921 | 0.1818 | 0.6921 | 0.8319 |
| No log | 1.4881 | 250 | 0.6860 | 0.1818 | 0.6860 | 0.8283 |
| No log | 1.5 | 252 | 0.6625 | 0.1818 | 0.6625 | 0.8139 |
| No log | 1.5119 | 254 | 0.7013 | 0.3623 | 0.7013 | 0.8374 |
| No log | 1.5238 | 256 | 0.7012 | 0.3623 | 0.7012 | 0.8374 |
| No log | 1.5357 | 258 | 0.6391 | 0.2080 | 0.6391 | 0.7994 |
| No log | 1.5476 | 260 | 0.6474 | 0.2080 | 0.6474 | 0.8046 |
| No log | 1.5595 | 262 | 0.6682 | 0.2080 | 0.6682 | 0.8174 |
| No log | 1.5714 | 264 | 0.7036 | 0.3623 | 0.7036 | 0.8388 |
| No log | 1.5833 | 266 | 0.7137 | 0.3623 | 0.7137 | 0.8448 |
| No log | 1.5952 | 268 | 0.7923 | 0.3623 | 0.7923 | 0.8901 |
| No log | 1.6071 | 270 | 0.7698 | 0.3623 | 0.7698 | 0.8774 |
| No log | 1.6190 | 272 | 0.7061 | 0.1791 | 0.7061 | 0.8403 |
| No log | 1.6310 | 274 | 0.6933 | 0.1791 | 0.6933 | 0.8327 |
| No log | 1.6429 | 276 | 0.6987 | 0.1791 | 0.6987 | 0.8359 |
| No log | 1.6548 | 278 | 0.7119 | 0.3265 | 0.7119 | 0.8437 |
| No log | 1.6667 | 280 | 0.6650 | 0.2568 | 0.6650 | 0.8155 |
| No log | 1.6786 | 282 | 0.9191 | 0.2092 | 0.9191 | 0.9587 |
| No log | 1.6905 | 284 | 0.8496 | 0.2092 | 0.8496 | 0.9217 |
| No log | 1.7024 | 286 | 0.6625 | -0.0185 | 0.6625 | 0.8140 |
| No log | 1.7143 | 288 | 0.8477 | 0.3444 | 0.8477 | 0.9207 |
| No log | 1.7262 | 290 | 1.1852 | 0.3053 | 1.1852 | 1.0887 |
| No log | 1.7381 | 292 | 0.9387 | 0.3444 | 0.9387 | 0.9689 |
| No log | 1.75 | 294 | 0.7063 | 0.0 | 0.7063 | 0.8404 |
| No log | 1.7619 | 296 | 0.7051 | 0.0 | 0.7051 | 0.8397 |
| No log | 1.7738 | 298 | 0.7253 | 0.0 | 0.7253 | 0.8516 |
| No log | 1.7857 | 300 | 0.7538 | 0.0320 | 0.7538 | 0.8682 |
| No log | 1.7976 | 302 | 0.7045 | 0.0 | 0.7045 | 0.8393 |
| No log | 1.8095 | 304 | 0.6918 | 0.0 | 0.6918 | 0.8317 |
| No log | 1.8214 | 306 | 0.7027 | 0.2029 | 0.7027 | 0.8383 |
| No log | 1.8333 | 308 | 0.7043 | 0.3444 | 0.7043 | 0.8393 |
| No log | 1.8452 | 310 | 0.6591 | -0.0342 | 0.6591 | 0.8118 |
| No log | 1.8571 | 312 | 0.8336 | 0.3016 | 0.8336 | 0.9130 |
| No log | 1.8690 | 314 | 0.8774 | 0.1037 | 0.8774 | 0.9367 |
| No log | 1.8810 | 316 | 0.6886 | -0.0342 | 0.6886 | 0.8298 |
| No log | 1.8929 | 318 | 0.6868 | 0.1538 | 0.6868 | 0.8288 |
| No log | 1.9048 | 320 | 0.6922 | 0.1270 | 0.6922 | 0.8320 |
| No log | 1.9167 | 322 | 0.7625 | 0.4296 | 0.7625 | 0.8732 |
| No log | 1.9286 | 324 | 0.6977 | 0.1270 | 0.6977 | 0.8353 |
| No log | 1.9405 | 326 | 0.7285 | 0.3125 | 0.7285 | 0.8535 |
| No log | 1.9524 | 328 | 0.7104 | 0.0 | 0.7104 | 0.8428 |
| No log | 1.9643 | 330 | 0.7213 | -0.0185 | 0.7213 | 0.8493 |
| No log | 1.9762 | 332 | 0.7451 | 0.0 | 0.7451 | 0.8632 |
| No log | 1.9881 | 334 | 0.7548 | 0.0 | 0.7548 | 0.8688 |
| No log | 2.0 | 336 | 0.8167 | 0.0179 | 0.8167 | 0.9037 |
| No log | 2.0119 | 338 | 1.0294 | 0.1921 | 1.0294 | 1.0146 |
| No log | 2.0238 | 340 | 0.9880 | 0.2029 | 0.9880 | 0.9940 |
| No log | 2.0357 | 342 | 0.8054 | 0.0 | 0.8054 | 0.8975 |
| No log | 2.0476 | 344 | 0.7623 | 0.0 | 0.7623 | 0.8731 |
| No log | 2.0595 | 346 | 0.7484 | 0.0 | 0.7484 | 0.8651 |
| No log | 2.0714 | 348 | 0.7357 | -0.0185 | 0.7357 | 0.8577 |
| No log | 2.0833 | 350 | 0.7935 | 0.2029 | 0.7935 | 0.8908 |
| No log | 2.0952 | 352 | 0.7580 | 0.1769 | 0.7580 | 0.8706 |
| No log | 2.1071 | 354 | 0.7389 | -0.0342 | 0.7389 | 0.8596 |
| No log | 2.1190 | 356 | 0.8979 | 0.1037 | 0.8979 | 0.9476 |
| No log | 2.1310 | 358 | 0.9489 | 0.1037 | 0.9489 | 0.9741 |
| No log | 2.1429 | 360 | 0.8339 | -0.0342 | 0.8339 | 0.9132 |
| No log | 2.1548 | 362 | 0.7420 | -0.0185 | 0.7420 | 0.8614 |
| No log | 2.1667 | 364 | 0.7464 | 0.0 | 0.7464 | 0.8639 |
| No log | 2.1786 | 366 | 0.7540 | -0.0185 | 0.7540 | 0.8683 |
| No log | 2.1905 | 368 | 0.7924 | -0.0342 | 0.7924 | 0.8901 |
| No log | 2.2024 | 370 | 0.7724 | -0.0342 | 0.7724 | 0.8789 |
| No log | 2.2143 | 372 | 0.7516 | -0.0185 | 0.7516 | 0.8669 |
| No log | 2.2262 | 374 | 0.7496 | -0.0185 | 0.7496 | 0.8658 |
| No log | 2.2381 | 376 | 0.7640 | -0.0342 | 0.7640 | 0.8741 |
| No log | 2.25 | 378 | 0.7492 | -0.0342 | 0.7492 | 0.8656 |
| No log | 2.2619 | 380 | 0.7354 | -0.0342 | 0.7354 | 0.8575 |
| No log | 2.2738 | 382 | 0.7571 | -0.0342 | 0.7571 | 0.8701 |
| No log | 2.2857 | 384 | 0.7520 | -0.0342 | 0.7520 | 0.8672 |
| No log | 2.2976 | 386 | 0.7126 | -0.0185 | 0.7126 | 0.8441 |
| No log | 2.3095 | 388 | 0.7926 | 0.2029 | 0.7926 | 0.8903 |
| No log | 2.3214 | 390 | 0.8225 | 0.2029 | 0.8225 | 0.9069 |
| No log | 2.3333 | 392 | 0.7195 | -0.0185 | 0.7195 | 0.8482 |
| No log | 2.3452 | 394 | 0.7333 | -0.0342 | 0.7333 | 0.8563 |
| No log | 2.3571 | 396 | 0.7442 | 0.3016 | 0.7442 | 0.8627 |
| No log | 2.3690 | 398 | 0.7519 | 0.4296 | 0.7519 | 0.8671 |
| No log | 2.3810 | 400 | 0.7302 | -0.0342 | 0.7302 | 0.8545 |
| No log | 2.3929 | 402 | 0.7414 | -0.0342 | 0.7414 | 0.8610 |
| No log | 2.4048 | 404 | 0.7779 | 0.1037 | 0.7779 | 0.8820 |
| No log | 2.4167 | 406 | 0.7737 | 0.3016 | 0.7737 | 0.8796 |
| No log | 2.4286 | 408 | 0.7426 | -0.0185 | 0.7426 | 0.8617 |
| No log | 2.4405 | 410 | 0.7468 | 0.0 | 0.7468 | 0.8642 |
| No log | 2.4524 | 412 | 0.7641 | 0.0 | 0.7641 | 0.8741 |
| No log | 2.4643 | 414 | 0.7636 | -0.0185 | 0.7636 | 0.8739 |
| No log | 2.4762 | 416 | 0.8306 | 0.1769 | 0.8306 | 0.9114 |
| No log | 2.4881 | 418 | 1.0400 | 0.1720 | 1.0400 | 1.0198 |
| No log | 2.5 | 420 | 0.8646 | 0.1750 | 0.8646 | 0.9298 |
| No log | 2.5119 | 422 | 0.7463 | 0.0 | 0.7463 | 0.8639 |
| No log | 2.5238 | 424 | 0.7671 | -0.0342 | 0.7671 | 0.8758 |
| No log | 2.5357 | 426 | 0.7387 | -0.0342 | 0.7387 | 0.8594 |
| No log | 2.5476 | 428 | 0.7064 | -0.0185 | 0.7064 | 0.8405 |
| No log | 2.5595 | 430 | 0.7969 | 0.1750 | 0.7969 | 0.8927 |
| No log | 2.5714 | 432 | 0.8965 | 0.3293 | 0.8965 | 0.9468 |
| No log | 2.5833 | 434 | 0.7363 | 0.1769 | 0.7363 | 0.8581 |
| No log | 2.5952 | 436 | 0.7106 | 0.1769 | 0.7106 | 0.8430 |
| No log | 2.6071 | 438 | 0.7026 | 0.0 | 0.7026 | 0.8382 |
| No log | 2.6190 | 440 | 0.7188 | 0.0149 | 0.7188 | 0.8478 |
| No log | 2.6310 | 442 | 0.7218 | -0.0185 | 0.7218 | 0.8496 |
| No log | 2.6429 | 444 | 0.7226 | -0.0185 | 0.7226 | 0.8501 |
| No log | 2.6548 | 446 | 0.7587 | 0.0320 | 0.7587 | 0.8710 |
| No log | 2.6667 | 448 | 0.7712 | 0.2029 | 0.7712 | 0.8782 |
| No log | 2.6786 | 450 | 0.7278 | -0.0185 | 0.7278 | 0.8531 |
| No log | 2.6905 | 452 | 0.7411 | -0.0185 | 0.7411 | 0.8609 |
| No log | 2.7024 | 454 | 0.7282 | -0.0185 | 0.7282 | 0.8534 |
| No log | 2.7143 | 456 | 0.7288 | 0.0 | 0.7288 | 0.8537 |
| No log | 2.7262 | 458 | 0.8106 | 0.2029 | 0.8106 | 0.9003 |
| No log | 2.7381 | 460 | 0.8784 | 0.1987 | 0.8784 | 0.9372 |
| No log | 2.75 | 462 | 0.7971 | 0.2029 | 0.7971 | 0.8928 |
| No log | 2.7619 | 464 | 0.7332 | 0.0 | 0.7332 | 0.8562 |
| No log | 2.7738 | 466 | 0.7517 | 0.0 | 0.7517 | 0.8670 |
| No log | 2.7857 | 468 | 0.8475 | 0.2029 | 0.8475 | 0.9206 |
| No log | 2.7976 | 470 | 0.9305 | 0.3293 | 0.9305 | 0.9646 |
| No log | 2.8095 | 472 | 0.9309 | 0.3293 | 0.9309 | 0.9648 |
| No log | 2.8214 | 474 | 0.8188 | 0.0149 | 0.8188 | 0.9049 |
| No log | 2.8333 | 476 | 0.7399 | -0.0185 | 0.7399 | 0.8602 |
| No log | 2.8452 | 478 | 0.7364 | -0.0342 | 0.7364 | 0.8581 |
| No log | 2.8571 | 480 | 0.7442 | -0.0185 | 0.7442 | 0.8627 |
| No log | 2.8690 | 482 | 0.8239 | 0.0320 | 0.8239 | 0.9077 |
| No log | 2.8810 | 484 | 0.9087 | 0.2029 | 0.9087 | 0.9532 |
| No log | 2.8929 | 486 | 0.9196 | 0.2029 | 0.9196 | 0.9590 |
| No log | 2.9048 | 488 | 0.8772 | 0.0320 | 0.8772 | 0.9366 |
| No log | 2.9167 | 490 | 0.9597 | 0.1987 | 0.9597 | 0.9797 |
| No log | 2.9286 | 492 | 0.9250 | 0.1987 | 0.9250 | 0.9618 |
| No log | 2.9405 | 494 | 0.7938 | 0.0320 | 0.7938 | 0.8910 |
| No log | 2.9524 | 496 | 0.7661 | -0.0185 | 0.7661 | 0.8753 |
| No log | 2.9643 | 498 | 0.7659 | -0.0185 | 0.7659 | 0.8752 |
| 0.5259 | 2.9762 | 500 | 0.8034 | 0.0179 | 0.8034 | 0.8963 |
| 0.5259 | 2.9881 | 502 | 1.0217 | 0.0610 | 1.0217 | 1.0108 |
| 0.5259 | 3.0 | 504 | 1.2025 | 0.0610 | 1.2025 | 1.0966 |
| 0.5259 | 3.0119 | 506 | 1.0149 | 0.0610 | 1.0149 | 1.0074 |
| 0.5259 | 3.0238 | 508 | 0.8337 | 0.0179 | 0.8337 | 0.9131 |
| 0.5259 | 3.0357 | 510 | 0.8261 | 0.0179 | 0.8261 | 0.9089 |
| 0.5259 | 3.0476 | 512 | 0.8339 | 0.0320 | 0.8339 | 0.9132 |
| 0.5259 | 3.0595 | 514 | 0.8359 | 0.0320 | 0.8359 | 0.9143 |
| 0.5259 | 3.0714 | 516 | 0.8376 | 0.0320 | 0.8376 | 0.9152 |
| 0.5259 | 3.0833 | 518 | 0.8496 | 0.2029 | 0.8496 | 0.9218 |
| 0.5259 | 3.0952 | 520 | 0.8236 | 0.1791 | 0.8236 | 0.9075 |
| 0.5259 | 3.1071 | 522 | 0.7885 | -0.0154 | 0.7885 | 0.8880 |
| 0.5259 | 3.1190 | 524 | 0.7811 | -0.0154 | 0.7811 | 0.8838 |
| 0.5259 | 3.1310 | 526 | 0.7829 | -0.0154 | 0.7829 | 0.8848 |
| 0.5259 | 3.1429 | 528 | 0.7846 | -0.0154 | 0.7846 | 0.8858 |
| 0.5259 | 3.1548 | 530 | 0.7937 | 0.0 | 0.7937 | 0.8909 |
| 0.5259 | 3.1667 | 532 | 0.8099 | 0.1791 | 0.8099 | 0.8999 |
| 0.5259 | 3.1786 | 534 | 0.7962 | 0.1538 | 0.7962 | 0.8923 |
| 0.5259 | 3.1905 | 536 | 0.8020 | 0.1295 | 0.8020 | 0.8955 |
| 0.5259 | 3.2024 | 538 | 0.8073 | 0.1295 | 0.8073 | 0.8985 |
| 0.5259 | 3.2143 | 540 | 0.8133 | -0.0342 | 0.8133 | 0.9018 |
| 0.5259 | 3.2262 | 542 | 0.8178 | -0.0342 | 0.8178 | 0.9043 |
| 0.5259 | 3.2381 | 544 | 0.7871 | -0.0342 | 0.7871 | 0.8872 |
| 0.5259 | 3.25 | 546 | 0.7799 | 0.0179 | 0.7799 | 0.8831 |
| 0.5259 | 3.2619 | 548 | 0.7773 | 0.0 | 0.7773 | 0.8816 |
| 0.5259 | 3.2738 | 550 | 0.7942 | 0.0179 | 0.7942 | 0.8912 |
| 0.5259 | 3.2857 | 552 | 0.8120 | 0.2080 | 0.8120 | 0.9011 |
| 0.5259 | 3.2976 | 554 | 0.7730 | 0.0 | 0.7730 | 0.8792 |
| 0.5259 | 3.3095 | 556 | 0.7554 | 0.0 | 0.7554 | 0.8692 |
| 0.5259 | 3.3214 | 558 | 0.7584 | 0.0 | 0.7584 | 0.8709 |
| 0.5259 | 3.3333 | 560 | 0.7674 | 0.0179 | 0.7674 | 0.8760 |
| 0.5259 | 3.3452 | 562 | 0.7986 | 0.2080 | 0.7986 | 0.8937 |
| 0.5259 | 3.3571 | 564 | 0.7604 | 0.0179 | 0.7604 | 0.8720 |
| 0.5259 | 3.3690 | 566 | 0.7533 | 0.0 | 0.7533 | 0.8679 |
| 0.5259 | 3.3810 | 568 | 0.7717 | -0.0342 | 0.7717 | 0.8784 |
| 0.5259 | 3.3929 | 570 | 0.7649 | 0.0 | 0.7649 | 0.8746 |
| 0.5259 | 3.4048 | 572 | 0.7581 | 0.0179 | 0.7581 | 0.8707 |
| 0.5259 | 3.4167 | 574 | 0.7940 | 0.2080 | 0.7940 | 0.8911 |
| 0.5259 | 3.4286 | 576 | 0.8771 | 0.1987 | 0.8771 | 0.9366 |
| 0.5259 | 3.4405 | 578 | 0.7997 | 0.2029 | 0.7997 | 0.8943 |
| 0.5259 | 3.4524 | 580 | 0.7565 | 0.0179 | 0.7565 | 0.8698 |
| 0.5259 | 3.4643 | 582 | 0.7533 | 0.0179 | 0.7533 | 0.8679 |
| 0.5259 | 3.4762 | 584 | 0.7978 | 0.2029 | 0.7978 | 0.8932 |
| 0.5259 | 3.4881 | 586 | 0.8845 | 0.1987 | 0.8845 | 0.9405 |
| 0.5259 | 3.5 | 588 | 0.9490 | 0.3293 | 0.9490 | 0.9742 |
| 0.5259 | 3.5119 | 590 | 0.8247 | 0.1987 | 0.8247 | 0.9081 |
| 0.5259 | 3.5238 | 592 | 0.7363 | 0.2029 | 0.7363 | 0.8581 |
| 0.5259 | 3.5357 | 594 | 0.7400 | 0.2029 | 0.7400 | 0.8602 |
| 0.5259 | 3.5476 | 596 | 0.7666 | 0.2029 | 0.7666 | 0.8756 |
| 0.5259 | 3.5595 | 598 | 0.8825 | 0.3293 | 0.8825 | 0.9394 |
| 0.5259 | 3.5714 | 600 | 0.9280 | 0.3293 | 0.9280 | 0.9633 |
| 0.5259 | 3.5833 | 602 | 0.8312 | 0.2029 | 0.8312 | 0.9117 |
| 0.5259 | 3.5952 | 604 | 0.7416 | 0.2029 | 0.7416 | 0.8612 |
| 0.5259 | 3.6071 | 606 | 0.7245 | -0.0342 | 0.7245 | 0.8512 |
| 0.5259 | 3.6190 | 608 | 0.7373 | -0.0342 | 0.7373 | 0.8586 |
| 0.5259 | 3.6310 | 610 | 0.7294 | -0.0185 | 0.7294 | 0.8540 |
| 0.5259 | 3.6429 | 612 | 0.8363 | 0.2029 | 0.8363 | 0.9145 |
| 0.5259 | 3.6548 | 614 | 0.9326 | 0.0530 | 0.9326 | 0.9657 |
| 0.5259 | 3.6667 | 616 | 0.8848 | 0.2029 | 0.8848 | 0.9407 |
| 0.5259 | 3.6786 | 618 | 0.7591 | 0.0 | 0.7591 | 0.8712 |
| 0.5259 | 3.6905 | 620 | 0.7260 | 0.0 | 0.7260 | 0.8521 |
| 0.5259 | 3.7024 | 622 | 0.7247 | -0.0342 | 0.7247 | 0.8513 |
| 0.5259 | 3.7143 | 624 | 0.7142 | -0.0342 | 0.7142 | 0.8451 |
| 0.5259 | 3.7262 | 626 | 0.7197 | 0.0 | 0.7197 | 0.8483 |
| 0.5259 | 3.7381 | 628 | 0.8526 | 0.2029 | 0.8526 | 0.9234 |
| 0.5259 | 3.75 | 630 | 0.8841 | 0.2029 | 0.8841 | 0.9403 |
| 0.5259 | 3.7619 | 632 | 0.7722 | 0.2080 | 0.7722 | 0.8787 |
| 0.5259 | 3.7738 | 634 | 0.7268 | 0.0 | 0.7268 | 0.8525 |
| 0.5259 | 3.7857 | 636 | 0.7255 | -0.0185 | 0.7255 | 0.8518 |
| 0.5259 | 3.7976 | 638 | 0.7280 | 0.0 | 0.7280 | 0.8533 |
| 0.5259 | 3.8095 | 640 | 0.7285 | 0.0 | 0.7285 | 0.8535 |
| 0.5259 | 3.8214 | 642 | 0.7540 | 0.0 | 0.7540 | 0.8683 |
| 0.5259 | 3.8333 | 644 | 0.8925 | 0.2029 | 0.8925 | 0.9447 |
| 0.5259 | 3.8452 | 646 | 1.2858 | 0.0737 | 1.2858 | 1.1339 |
| 0.5259 | 3.8571 | 648 | 1.4838 | 0.0737 | 1.4838 | 1.2181 |
| 0.5259 | 3.8690 | 650 | 1.2778 | 0.0737 | 1.2778 | 1.1304 |
| 0.5259 | 3.8810 | 652 | 0.9365 | 0.2029 | 0.9365 | 0.9677 |
| 0.5259 | 3.8929 | 654 | 0.7553 | 0.0 | 0.7553 | 0.8691 |
| 0.5259 | 3.9048 | 656 | 0.7440 | -0.0185 | 0.7440 | 0.8626 |
| 0.5259 | 3.9167 | 658 | 0.7384 | -0.0185 | 0.7384 | 0.8593 |
| 0.5259 | 3.9286 | 660 | 0.7460 | 0.0 | 0.7460 | 0.8637 |
| 0.5259 | 3.9405 | 662 | 0.7938 | 0.2029 | 0.7938 | 0.8910 |
| 0.5259 | 3.9524 | 664 | 0.8235 | 0.2029 | 0.8235 | 0.9075 |
| 0.5259 | 3.9643 | 666 | 0.8646 | 0.2029 | 0.8646 | 0.9298 |
| 0.5259 | 3.9762 | 668 | 0.8600 | 0.2029 | 0.8600 | 0.9274 |
| 0.5259 | 3.9881 | 670 | 0.7874 | 0.2029 | 0.7874 | 0.8874 |
| 0.5259 | 4.0 | 672 | 0.7267 | -0.0185 | 0.7267 | 0.8525 |
| 0.5259 | 4.0119 | 674 | 0.7404 | 0.0 | 0.7404 | 0.8605 |
| 0.5259 | 4.0238 | 676 | 0.7823 | 0.2080 | 0.7823 | 0.8845 |
| 0.5259 | 4.0357 | 678 | 0.7502 | 0.1791 | 0.7502 | 0.8661 |
| 0.5259 | 4.0476 | 680 | 0.7222 | -0.0185 | 0.7222 | 0.8498 |
| 0.5259 | 4.0595 | 682 | 0.7212 | -0.0185 | 0.7212 | 0.8492 |
| 0.5259 | 4.0714 | 684 | 0.7363 | 0.1791 | 0.7363 | 0.8581 |
| 0.5259 | 4.0833 | 686 | 0.7552 | 0.2080 | 0.7552 | 0.8690 |
| 0.5259 | 4.0952 | 688 | 0.7596 | 0.2080 | 0.7596 | 0.8716 |
| 0.5259 | 4.1071 | 690 | 0.7275 | 0.1791 | 0.7275 | 0.8529 |
| 0.5259 | 4.1190 | 692 | 0.7248 | 0.1818 | 0.7248 | 0.8514 |
| 0.5259 | 4.1310 | 694 | 0.7648 | 0.2029 | 0.7648 | 0.8745 |
| 0.5259 | 4.1429 | 696 | 0.8960 | 0.1987 | 0.8960 | 0.9466 |
| 0.5259 | 4.1548 | 698 | 1.0098 | 0.1987 | 1.0098 | 1.0049 |
| 0.5259 | 4.1667 | 700 | 1.0147 | 0.1987 | 1.0147 | 1.0073 |
| 0.5259 | 4.1786 | 702 | 0.8410 | 0.2029 | 0.8410 | 0.9171 |
| 0.5259 | 4.1905 | 704 | 0.7469 | 0.1538 | 0.7469 | 0.8642 |
| 0.5259 | 4.2024 | 706 | 0.7404 | -0.0342 | 0.7404 | 0.8605 |
| 0.5259 | 4.2143 | 708 | 0.7516 | -0.0185 | 0.7516 | 0.8669 |
| 0.5259 | 4.2262 | 710 | 0.8381 | 0.2029 | 0.8381 | 0.9155 |
| 0.5259 | 4.2381 | 712 | 1.1092 | 0.0610 | 1.1092 | 1.0532 |
| 0.5259 | 4.25 | 714 | 1.2668 | 0.0737 | 1.2668 | 1.1255 |
| 0.5259 | 4.2619 | 716 | 1.1770 | 0.0737 | 1.1770 | 1.0849 |
| 0.5259 | 4.2738 | 718 | 0.9534 | 0.1987 | 0.9534 | 0.9764 |
| 0.5259 | 4.2857 | 720 | 0.8011 | -0.0154 | 0.8011 | 0.8951 |
| 0.5259 | 4.2976 | 722 | 0.8074 | -0.0154 | 0.8074 | 0.8986 |
| 0.5259 | 4.3095 | 724 | 0.8301 | -0.0154 | 0.8301 | 0.9111 |
| 0.5259 | 4.3214 | 726 | 0.8510 | 0.1538 | 0.8510 | 0.9225 |
| 0.5259 | 4.3333 | 728 | 0.8263 | 0.0 | 0.8263 | 0.9090 |
| 0.5259 | 4.3452 | 730 | 0.8024 | 0.0149 | 0.8024 | 0.8958 |
| 0.5259 | 4.3571 | 732 | 0.8108 | 0.2029 | 0.8108 | 0.9004 |
| 0.5259 | 4.3690 | 734 | 0.8048 | 0.2080 | 0.8048 | 0.8971 |
| 0.5259 | 4.3810 | 736 | 0.8598 | 0.1987 | 0.8598 | 0.9273 |
| 0.5259 | 4.3929 | 738 | 0.9739 | 0.3293 | 0.9739 | 0.9869 |
| 0.5259 | 4.4048 | 740 | 1.0005 | 0.3293 | 1.0005 | 1.0003 |
| 0.5259 | 4.4167 | 742 | 0.9486 | 0.3293 | 0.9486 | 0.9740 |
| 0.5259 | 4.4286 | 744 | 0.8682 | 0.1538 | 0.8682 | 0.9318 |
| 0.5259 | 4.4405 | 746 | 0.8337 | 0.1538 | 0.8337 | 0.9131 |
| 0.5259 | 4.4524 | 748 | 0.9053 | 0.1538 | 0.9053 | 0.9515 |
| 0.5259 | 4.4643 | 750 | 1.0359 | 0.1921 | 1.0359 | 1.0178 |
| 0.5259 | 4.4762 | 752 | 1.0986 | 0.0737 | 1.0986 | 1.0481 |
| 0.5259 | 4.4881 | 754 | 1.0732 | 0.1921 | 1.0732 | 1.0360 |
| 0.5259 | 4.5 | 756 | 0.9648 | 0.0610 | 0.9648 | 0.9822 |
| 0.5259 | 4.5119 | 758 | 0.8739 | 0.2080 | 0.8739 | 0.9348 |
| 0.5259 | 4.5238 | 760 | 0.8856 | 0.2029 | 0.8856 | 0.9410 |
| 0.5259 | 4.5357 | 762 | 0.9307 | 0.0610 | 0.9307 | 0.9647 |
| 0.5259 | 4.5476 | 764 | 0.9131 | 0.2029 | 0.9131 | 0.9556 |
| 0.5259 | 4.5595 | 766 | 0.8234 | 0.2080 | 0.8234 | 0.9074 |
| 0.5259 | 4.5714 | 768 | 0.7668 | 0.0 | 0.7668 | 0.8757 |
| 0.5259 | 4.5833 | 770 | 0.7719 | 0.0179 | 0.7719 | 0.8786 |
| 0.5259 | 4.5952 | 772 | 0.8930 | 0.1987 | 0.8930 | 0.9450 |
| 0.5259 | 4.6071 | 774 | 0.9713 | 0.1987 | 0.9713 | 0.9855 |
| 0.5259 | 4.6190 | 776 | 0.8943 | 0.1987 | 0.8943 | 0.9457 |
| 0.5259 | 4.6310 | 778 | 0.7562 | 0.2080 | 0.7562 | 0.8696 |
| 0.5259 | 4.6429 | 780 | 0.7451 | 0.2143 | 0.7451 | 0.8632 |
| 0.5259 | 4.6548 | 782 | 0.7726 | 0.2029 | 0.7726 | 0.8790 |
| 0.5259 | 4.6667 | 784 | 0.7889 | 0.1987 | 0.7889 | 0.8882 |
| 0.5259 | 4.6786 | 786 | 0.8397 | 0.1987 | 0.8397 | 0.9163 |
| 0.5259 | 4.6905 | 788 | 0.8890 | 0.1987 | 0.8890 | 0.9428 |
| 0.5259 | 4.7024 | 790 | 0.8875 | 0.1987 | 0.8875 | 0.9421 |
| 0.5259 | 4.7143 | 792 | 0.8656 | 0.1987 | 0.8656 | 0.9304 |
| 0.5259 | 4.7262 | 794 | 0.8193 | 0.1987 | 0.8193 | 0.9051 |
| 0.5259 | 4.7381 | 796 | 0.8387 | 0.1987 | 0.8387 | 0.9158 |
| 0.5259 | 4.75 | 798 | 0.8779 | 0.1987 | 0.8779 | 0.9370 |
| 0.5259 | 4.7619 | 800 | 0.9698 | 0.1987 | 0.9698 | 0.9848 |
| 0.5259 | 4.7738 | 802 | 0.9419 | 0.1987 | 0.9419 | 0.9705 |
| 0.5259 | 4.7857 | 804 | 0.8865 | 0.1987 | 0.8865 | 0.9415 |
| 0.5259 | 4.7976 | 806 | 0.9090 | 0.1987 | 0.9090 | 0.9534 |
| 0.5259 | 4.8095 | 808 | 0.9970 | 0.3293 | 0.9970 | 0.9985 |
| 0.5259 | 4.8214 | 810 | 0.9692 | 0.1987 | 0.9692 | 0.9845 |
| 0.5259 | 4.8333 | 812 | 0.8818 | 0.1987 | 0.8818 | 0.9390 |
| 0.5259 | 4.8452 | 814 | 0.7989 | 0.2080 | 0.7989 | 0.8938 |
| 0.5259 | 4.8571 | 816 | 0.7425 | 0.0 | 0.7425 | 0.8617 |
| 0.5259 | 4.8690 | 818 | 0.7483 | 0.2143 | 0.7483 | 0.8650 |
| 0.5259 | 4.8810 | 820 | 0.8207 | 0.1987 | 0.8207 | 0.9059 |
| 0.5259 | 4.8929 | 822 | 0.9208 | 0.1987 | 0.9208 | 0.9596 |
| 0.5259 | 4.9048 | 824 | 0.9913 | 0.3293 | 0.9913 | 0.9956 |
| 0.5259 | 4.9167 | 826 | 0.9893 | 0.3293 | 0.9893 | 0.9946 |
| 0.5259 | 4.9286 | 828 | 0.9297 | 0.1987 | 0.9297 | 0.9642 |
| 0.5259 | 4.9405 | 830 | 0.9038 | 0.1987 | 0.9038 | 0.9507 |
| 0.5259 | 4.9524 | 832 | 0.8935 | 0.1987 | 0.8935 | 0.9453 |
| 0.5259 | 4.9643 | 834 | 0.7878 | 0.2029 | 0.7878 | 0.8876 |
| 0.5259 | 4.9762 | 836 | 0.7317 | 0.0 | 0.7317 | 0.8554 |
| 0.5259 | 4.9881 | 838 | 0.7320 | 0.2143 | 0.7320 | 0.8556 |
| 0.5259 | 5.0 | 840 | 0.7635 | 0.2029 | 0.7635 | 0.8738 |
| 0.5259 | 5.0119 | 842 | 0.8684 | 0.2029 | 0.8684 | 0.9319 |
| 0.5259 | 5.0238 | 844 | 0.9529 | 0.1987 | 0.9529 | 0.9762 |
| 0.5259 | 5.0357 | 846 | 0.8706 | 0.2029 | 0.8706 | 0.9331 |
| 0.5259 | 5.0476 | 848 | 0.7971 | 0.2029 | 0.7971 | 0.8928 |
| 0.5259 | 5.0595 | 850 | 0.8034 | 0.2029 | 0.8034 | 0.8963 |
| 0.5259 | 5.0714 | 852 | 0.8279 | 0.2029 | 0.8279 | 0.9099 |
| 0.5259 | 5.0833 | 854 | 0.8918 | 0.2029 | 0.8918 | 0.9444 |
| 0.5259 | 5.0952 | 856 | 1.0317 | 0.1921 | 1.0317 | 1.0157 |
| 0.5259 | 5.1071 | 858 | 1.0454 | 0.1921 | 1.0454 | 1.0225 |
| 0.5259 | 5.1190 | 860 | 0.9129 | 0.2029 | 0.9129 | 0.9554 |
| 0.5259 | 5.1310 | 862 | 0.8217 | 0.1769 | 0.8217 | 0.9065 |
| 0.5259 | 5.1429 | 864 | 0.8010 | 0.2029 | 0.8010 | 0.8950 |
| 0.5259 | 5.1548 | 866 | 0.7993 | 0.2029 | 0.7993 | 0.8940 |
| 0.5259 | 5.1667 | 868 | 0.9021 | 0.2029 | 0.9021 | 0.9498 |
| 0.5259 | 5.1786 | 870 | 1.0437 | 0.1921 | 1.0437 | 1.0216 |
| 0.5259 | 5.1905 | 872 | 1.1343 | 0.0737 | 1.1343 | 1.0650 |
| 0.5259 | 5.2024 | 874 | 1.0665 | 0.0737 | 1.0665 | 1.0327 |
| 0.5259 | 5.2143 | 876 | 0.9229 | 0.2029 | 0.9229 | 0.9607 |
| 0.5259 | 5.2262 | 878 | 0.8077 | 0.2029 | 0.8077 | 0.8987 |
| 0.5259 | 5.2381 | 880 | 0.7738 | 0.2029 | 0.7738 | 0.8797 |
| 0.5259 | 5.25 | 882 | 0.8028 | 0.2029 | 0.8028 | 0.8960 |
| 0.5259 | 5.2619 | 884 | 0.8752 | 0.2029 | 0.8752 | 0.9355 |
| 0.5259 | 5.2738 | 886 | 0.9420 | 0.1987 | 0.9420 | 0.9705 |
| 0.5259 | 5.2857 | 888 | 0.9554 | 0.1987 | 0.9554 | 0.9775 |
| 0.5259 | 5.2976 | 890 | 0.9088 | 0.2029 | 0.9088 | 0.9533 |
| 0.5259 | 5.3095 | 892 | 0.8130 | 0.0179 | 0.8130 | 0.9016 |
| 0.5259 | 5.3214 | 894 | 0.7475 | 0.0 | 0.7475 | 0.8646 |
| 0.5259 | 5.3333 | 896 | 0.7325 | 0.0 | 0.7325 | 0.8559 |
| 0.5259 | 5.3452 | 898 | 0.7552 | 0.0179 | 0.7552 | 0.8690 |
| 0.5259 | 5.3571 | 900 | 0.8660 | 0.2029 | 0.8660 | 0.9306 |
| 0.5259 | 5.3690 | 902 | 1.0336 | 0.1921 | 1.0336 | 1.0166 |
| 0.5259 | 5.3810 | 904 | 1.0736 | 0.0737 | 1.0736 | 1.0361 |
| 0.5259 | 5.3929 | 906 | 0.9563 | 0.3293 | 0.9563 | 0.9779 |
| 0.5259 | 5.4048 | 908 | 0.7624 | 0.2029 | 0.7624 | 0.8731 |
| 0.5259 | 5.4167 | 910 | 0.6759 | 0.2143 | 0.6759 | 0.8221 |
| 0.5259 | 5.4286 | 912 | 0.6780 | 0.0 | 0.6780 | 0.8234 |
| 0.5259 | 5.4405 | 914 | 0.6799 | 0.2143 | 0.6799 | 0.8245 |
| 0.5259 | 5.4524 | 916 | 0.7084 | 0.2080 | 0.7084 | 0.8417 |
| 0.5259 | 5.4643 | 918 | 0.7697 | 0.2029 | 0.7697 | 0.8773 |
| 0.5259 | 5.4762 | 920 | 0.9220 | 0.3293 | 0.9220 | 0.9602 |
| 0.5259 | 5.4881 | 922 | 0.9740 | 0.3293 | 0.9740 | 0.9869 |
| 0.5259 | 5.5 | 924 | 0.9361 | 0.3293 | 0.9361 | 0.9675 |
| 0.5259 | 5.5119 | 926 | 0.9081 | 0.3293 | 0.9081 | 0.9529 |
| 0.5259 | 5.5238 | 928 | 0.8200 | 0.2029 | 0.8200 | 0.9055 |
| 0.5259 | 5.5357 | 930 | 0.7258 | 0.2080 | 0.7258 | 0.8519 |
| 0.5259 | 5.5476 | 932 | 0.6913 | 0.0 | 0.6913 | 0.8314 |
| 0.5259 | 5.5595 | 934 | 0.6902 | 0.0 | 0.6902 | 0.8308 |
| 0.5259 | 5.5714 | 936 | 0.6955 | 0.2143 | 0.6955 | 0.8340 |
| 0.5259 | 5.5833 | 938 | 0.8207 | 0.2029 | 0.8207 | 0.9059 |
| 0.5259 | 5.5952 | 940 | 1.0169 | 0.1921 | 1.0169 | 1.0084 |
| 0.5259 | 5.6071 | 942 | 1.1173 | 0.0737 | 1.1173 | 1.0570 |
| 0.5259 | 5.6190 | 944 | 1.1486 | 0.0737 | 1.1486 | 1.0717 |
| 0.5259 | 5.6310 | 946 | 1.1068 | 0.1921 | 1.1068 | 1.0521 |
| 0.5259 | 5.6429 | 948 | 1.0105 | 0.0610 | 1.0105 | 1.0052 |
| 0.5259 | 5.6548 | 950 | 0.8879 | 0.0435 | 0.8879 | 0.9423 |
| 0.5259 | 5.6667 | 952 | 0.8611 | 0.0435 | 0.8611 | 0.9279 |
| 0.5259 | 5.6786 | 954 | 0.8867 | 0.0435 | 0.8867 | 0.9417 |
| 0.5259 | 5.6905 | 956 | 0.8936 | 0.0435 | 0.8936 | 0.9453 |
| 0.5259 | 5.7024 | 958 | 0.9393 | 0.0435 | 0.9393 | 0.9692 |
| 0.5259 | 5.7143 | 960 | 1.0262 | 0.0610 | 1.0262 | 1.0130 |
| 0.5259 | 5.7262 | 962 | 1.0711 | 0.1921 | 1.0711 | 1.0349 |
| 0.5259 | 5.7381 | 964 | 1.0851 | 0.1921 | 1.0851 | 1.0417 |
| 0.5259 | 5.75 | 966 | 1.0042 | 0.0530 | 1.0042 | 1.0021 |
| 0.5259 | 5.7619 | 968 | 0.9383 | 0.0272 | 0.9383 | 0.9687 |
| 0.5259 | 5.7738 | 970 | 0.9119 | -0.1846 | 0.9119 | 0.9549 |
| 0.5259 | 5.7857 | 972 | 0.8768 | -0.0342 | 0.8768 | 0.9364 |
| 0.5259 | 5.7976 | 974 | 0.8603 | -0.0342 | 0.8603 | 0.9275 |
| 0.5259 | 5.8095 | 976 | 0.9071 | 0.0435 | 0.9071 | 0.9524 |
| 0.5259 | 5.8214 | 978 | 1.0331 | 0.1951 | 1.0331 | 1.0164 |
| 0.5259 | 5.8333 | 980 | 1.1170 | 0.1921 | 1.1170 | 1.0569 |
| 0.5259 | 5.8452 | 982 | 1.1007 | 0.1921 | 1.1007 | 1.0491 |
| 0.5259 | 5.8571 | 984 | 1.0127 | 0.0530 | 1.0127 | 1.0063 |
| 0.5259 | 5.8690 | 986 | 0.9300 | 0.0435 | 0.9300 | 0.9644 |
| 0.5259 | 5.8810 | 988 | 0.8907 | -0.1786 | 0.8907 | 0.9438 |
| 0.5259 | 5.8929 | 990 | 0.8955 | -0.1786 | 0.8955 | 0.9463 |
| 0.5259 | 5.9048 | 992 | 0.9010 | -0.1440 | 0.9010 | 0.9492 |
| 0.5259 | 5.9167 | 994 | 0.9059 | 0.0435 | 0.9059 | 0.9518 |
| 0.5259 | 5.9286 | 996 | 0.9484 | 0.0435 | 0.9484 | 0.9739 |
| 0.5259 | 5.9405 | 998 | 0.9966 | 0.1951 | 0.9966 | 0.9983 |
| 0.1099 | 5.9524 | 1000 | 0.9791 | 0.0530 | 0.9791 | 0.9895 |
| 0.1099 | 5.9643 | 1002 | 0.9142 | 0.0435 | 0.9142 | 0.9561 |
| 0.1099 | 5.9762 | 1004 | 0.8819 | 0.0435 | 0.8819 | 0.9391 |
| 0.1099 | 5.9881 | 1006 | 0.8820 | -0.1440 | 0.8820 | 0.9392 |
| 0.1099 | 6.0 | 1008 | 0.8861 | -0.1440 | 0.8861 | 0.9414 |
| 0.1099 | 6.0119 | 1010 | 0.9044 | 0.0435 | 0.9044 | 0.9510 |
| 0.1099 | 6.0238 | 1012 | 0.9729 | 0.0435 | 0.9729 | 0.9864 |
| 0.1099 | 6.0357 | 1014 | 1.0294 | 0.0530 | 1.0294 | 1.0146 |
| 0.1099 | 6.0476 | 1016 | 1.0111 | 0.0530 | 1.0111 | 1.0056 |
| 0.1099 | 6.0595 | 1018 | 0.9626 | 0.0435 | 0.9626 | 0.9811 |
| 0.1099 | 6.0714 | 1020 | 0.9307 | 0.0435 | 0.9307 | 0.9647 |
| 0.1099 | 6.0833 | 1022 | 0.9212 | -0.1440 | 0.9212 | 0.9598 |
| 0.1099 | 6.0952 | 1024 | 0.9308 | 0.0435 | 0.9308 | 0.9648 |
| 0.1099 | 6.1071 | 1026 | 0.9793 | 0.0435 | 0.9793 | 0.9896 |
| 0.1099 | 6.1190 | 1028 | 1.1339 | 0.0530 | 1.1339 | 1.0648 |
| 0.1099 | 6.1310 | 1030 | 1.2051 | 0.0530 | 1.2051 | 1.0978 |
| 0.1099 | 6.1429 | 1032 | 1.0891 | 0.0530 | 1.0891 | 1.0436 |
| 0.1099 | 6.1548 | 1034 | 0.9546 | 0.0435 | 0.9546 | 0.9770 |
| 0.1099 | 6.1667 | 1036 | 0.9228 | -0.1786 | 0.9228 | 0.9606 |
| 0.1099 | 6.1786 | 1038 | 0.9167 | -0.1786 | 0.9167 | 0.9574 |
| 0.1099 | 6.1905 | 1040 | 0.8873 | -0.1786 | 0.8873 | 0.9420 |
| 0.1099 | 6.2024 | 1042 | 0.8779 | 0.0435 | 0.8779 | 0.9370 |
| 0.1099 | 6.2143 | 1044 | 0.8932 | 0.0435 | 0.8932 | 0.9451 |
| 0.1099 | 6.2262 | 1046 | 0.9547 | 0.0435 | 0.9547 | 0.9771 |
| 0.1099 | 6.2381 | 1048 | 0.9325 | 0.0435 | 0.9325 | 0.9657 |
| 0.1099 | 6.25 | 1050 | 0.8685 | 0.2080 | 0.8685 | 0.9319 |
| 0.1099 | 6.2619 | 1052 | 0.8403 | 0.2080 | 0.8403 | 0.9167 |
| 0.1099 | 6.2738 | 1054 | 0.8458 | 0.2080 | 0.8458 | 0.9197 |
| 0.1099 | 6.2857 | 1056 | 0.8139 | 0.2080 | 0.8139 | 0.9022 |
| 0.1099 | 6.2976 | 1058 | 0.8133 | 0.2080 | 0.8133 | 0.9019 |
| 0.1099 | 6.3095 | 1060 | 0.7744 | 0.2080 | 0.7744 | 0.8800 |
| 0.1099 | 6.3214 | 1062 | 0.7624 | 0.2080 | 0.7624 | 0.8731 |
| 0.1099 | 6.3333 | 1064 | 0.7652 | 0.2080 | 0.7652 | 0.8748 |
| 0.1099 | 6.3452 | 1066 | 0.7520 | 0.2143 | 0.7520 | 0.8672 |
| 0.1099 | 6.3571 | 1068 | 0.7412 | 0.0 | 0.7412 | 0.8609 |
| 0.1099 | 6.3690 | 1070 | 0.7393 | 0.0 | 0.7393 | 0.8599 |
| 0.1099 | 6.3810 | 1072 | 0.7444 | 0.0 | 0.7444 | 0.8628 |
| 0.1099 | 6.3929 | 1074 | 0.7616 | 0.2080 | 0.7616 | 0.8727 |
| 0.1099 | 6.4048 | 1076 | 0.8591 | 0.2080 | 0.8591 | 0.9269 |
| 0.1099 | 6.4167 | 1078 | 0.9253 | 0.1987 | 0.9253 | 0.9619 |
| 0.1099 | 6.4286 | 1080 | 0.9045 | 0.1987 | 0.9045 | 0.9511 |
| 0.1099 | 6.4405 | 1082 | 0.8798 | 0.2029 | 0.8798 | 0.9380 |
| 0.1099 | 6.4524 | 1084 | 0.7839 | 0.2080 | 0.7839 | 0.8854 |
| 0.1099 | 6.4643 | 1086 | 0.7212 | 0.2143 | 0.7212 | 0.8492 |
| 0.1099 | 6.4762 | 1088 | 0.7150 | 0.0 | 0.7150 | 0.8456 |
| 0.1099 | 6.4881 | 1090 | 0.7157 | 0.0 | 0.7157 | 0.8460 |
| 0.1099 | 6.5 | 1092 | 0.7196 | 0.2143 | 0.7196 | 0.8483 |
| 0.1099 | 6.5119 | 1094 | 0.7575 | 0.2080 | 0.7575 | 0.8703 |
| 0.1099 | 6.5238 | 1096 | 0.8570 | 0.2029 | 0.8570 | 0.9257 |
| 0.1099 | 6.5357 | 1098 | 0.9133 | 0.3293 | 0.9133 | 0.9557 |
| 0.1099 | 6.5476 | 1100 | 0.9089 | 0.3293 | 0.9089 | 0.9534 |
| 0.1099 | 6.5595 | 1102 | 0.8480 | 0.2029 | 0.8480 | 0.9209 |
| 0.1099 | 6.5714 | 1104 | 0.7441 | 0.2080 | 0.7441 | 0.8626 |
| 0.1099 | 6.5833 | 1106 | 0.7041 | 0.0 | 0.7041 | 0.8391 |
| 0.1099 | 6.5952 | 1108 | 0.7418 | -0.0185 | 0.7418 | 0.8613 |
| 0.1099 | 6.6071 | 1110 | 0.7542 | -0.0185 | 0.7542 | 0.8685 |
| 0.1099 | 6.6190 | 1112 | 0.7243 | -0.0185 | 0.7243 | 0.8511 |
| 0.1099 | 6.6310 | 1114 | 0.6996 | 0.2080 | 0.6996 | 0.8364 |
| 0.1099 | 6.6429 | 1116 | 0.7405 | 0.2080 | 0.7405 | 0.8605 |
| 0.1099 | 6.6548 | 1118 | 0.7684 | 0.2080 | 0.7684 | 0.8766 |
| 0.1099 | 6.6667 | 1120 | 0.7977 | 0.2080 | 0.7977 | 0.8932 |
| 0.1099 | 6.6786 | 1122 | 0.8250 | 0.2080 | 0.8250 | 0.9083 |
| 0.1099 | 6.6905 | 1124 | 0.7845 | 0.2080 | 0.7845 | 0.8857 |
| 0.1099 | 6.7024 | 1126 | 0.7479 | 0.2080 | 0.7479 | 0.8648 |
| 0.1099 | 6.7143 | 1128 | 0.7445 | 0.2080 | 0.7445 | 0.8628 |
| 0.1099 | 6.7262 | 1130 | 0.7498 | 0.0179 | 0.7498 | 0.8659 |
| 0.1099 | 6.7381 | 1132 | 0.7501 | 0.0179 | 0.7501 | 0.8661 |
| 0.1099 | 6.75 | 1134 | 0.7513 | 0.2080 | 0.7513 | 0.8668 |
| 0.1099 | 6.7619 | 1136 | 0.7500 | 0.0179 | 0.7500 | 0.8661 |
| 0.1099 | 6.7738 | 1138 | 0.7491 | 0.2080 | 0.7491 | 0.8655 |
| 0.1099 | 6.7857 | 1140 | 0.7542 | 0.2080 | 0.7542 | 0.8685 |
| 0.1099 | 6.7976 | 1142 | 0.7555 | 0.2080 | 0.7555 | 0.8692 |
| 0.1099 | 6.8095 | 1144 | 0.7529 | 0.2080 | 0.7529 | 0.8677 |
| 0.1099 | 6.8214 | 1146 | 0.7625 | 0.2080 | 0.7625 | 0.8732 |
| 0.1099 | 6.8333 | 1148 | 0.7962 | 0.2080 | 0.7962 | 0.8923 |
| 0.1099 | 6.8452 | 1150 | 0.7995 | 0.2080 | 0.7995 | 0.8941 |
| 0.1099 | 6.8571 | 1152 | 0.7651 | 0.2080 | 0.7651 | 0.8747 |
| 0.1099 | 6.8690 | 1154 | 0.7416 | 0.1791 | 0.7416 | 0.8612 |
| 0.1099 | 6.8810 | 1156 | 0.7467 | -0.0185 | 0.7467 | 0.8641 |
| 0.1099 | 6.8929 | 1158 | 0.7516 | -0.0185 | 0.7516 | 0.8669 |
| 0.1099 | 6.9048 | 1160 | 0.7639 | -0.0185 | 0.7639 | 0.8740 |
| 0.1099 | 6.9167 | 1162 | 0.7580 | -0.0185 | 0.7580 | 0.8706 |
| 0.1099 | 6.9286 | 1164 | 0.7476 | -0.0185 | 0.7476 | 0.8646 |
| 0.1099 | 6.9405 | 1166 | 0.7441 | 0.2143 | 0.7441 | 0.8626 |
| 0.1099 | 6.9524 | 1168 | 0.7466 | 0.2080 | 0.7466 | 0.8640 |
| 0.1099 | 6.9643 | 1170 | 0.7523 | 0.2080 | 0.7523 | 0.8673 |
| 0.1099 | 6.9762 | 1172 | 0.7567 | 0.2080 | 0.7567 | 0.8699 |
| 0.1099 | 6.9881 | 1174 | 0.7724 | 0.2080 | 0.7724 | 0.8788 |
| 0.1099 | 7.0 | 1176 | 0.7780 | 0.2080 | 0.7780 | 0.8820 |
| 0.1099 | 7.0119 | 1178 | 0.7533 | 0.2080 | 0.7533 | 0.8679 |
| 0.1099 | 7.0238 | 1180 | 0.7393 | 0.0 | 0.7393 | 0.8599 |
| 0.1099 | 7.0357 | 1182 | 0.7470 | -0.0185 | 0.7470 | 0.8643 |
| 0.1099 | 7.0476 | 1184 | 0.7501 | -0.0185 | 0.7501 | 0.8661 |
| 0.1099 | 7.0595 | 1186 | 0.7548 | -0.0185 | 0.7548 | 0.8688 |
| 0.1099 | 7.0714 | 1188 | 0.7472 | 0.0 | 0.7472 | 0.8644 |
| 0.1099 | 7.0833 | 1190 | 0.7493 | 0.2143 | 0.7493 | 0.8656 |
| 0.1099 | 7.0952 | 1192 | 0.7755 | 0.2080 | 0.7755 | 0.8806 |
| 0.1099 | 7.1071 | 1194 | 0.8024 | 0.2080 | 0.8024 | 0.8958 |
| 0.1099 | 7.1190 | 1196 | 0.7920 | 0.2080 | 0.7920 | 0.8899 |
| 0.1099 | 7.1310 | 1198 | 0.7966 | 0.2080 | 0.7966 | 0.8926 |
| 0.1099 | 7.1429 | 1200 | 0.8267 | 0.2080 | 0.8267 | 0.9092 |
| 0.1099 | 7.1548 | 1202 | 0.8390 | 0.2080 | 0.8390 | 0.9160 |
| 0.1099 | 7.1667 | 1204 | 0.8220 | 0.2080 | 0.8220 | 0.9066 |
| 0.1099 | 7.1786 | 1206 | 0.7828 | 0.2080 | 0.7828 | 0.8847 |
| 0.1099 | 7.1905 | 1208 | 0.7608 | 0.0 | 0.7608 | 0.8722 |
| 0.1099 | 7.2024 | 1210 | 0.7571 | 0.0 | 0.7571 | 0.8701 |
| 0.1099 | 7.2143 | 1212 | 0.7555 | 0.0 | 0.7555 | 0.8692 |
| 0.1099 | 7.2262 | 1214 | 0.7665 | 0.2080 | 0.7665 | 0.8755 |
| 0.1099 | 7.2381 | 1216 | 0.7752 | 0.2080 | 0.7752 | 0.8804 |
| 0.1099 | 7.25 | 1218 | 0.7997 | 0.2080 | 0.7997 | 0.8942 |
| 0.1099 | 7.2619 | 1220 | 0.8919 | 0.2080 | 0.8919 | 0.9444 |
| 0.1099 | 7.2738 | 1222 | 0.9599 | 0.0530 | 0.9599 | 0.9797 |
| 0.1099 | 7.2857 | 1224 | 0.9873 | 0.0530 | 0.9873 | 0.9936 |
| 0.1099 | 7.2976 | 1226 | 1.0134 | 0.0610 | 1.0134 | 1.0067 |
| 0.1099 | 7.3095 | 1228 | 0.9467 | 0.2029 | 0.9467 | 0.9730 |
| 0.1099 | 7.3214 | 1230 | 0.9016 | 0.2080 | 0.9016 | 0.9495 |
| 0.1099 | 7.3333 | 1232 | 0.8354 | 0.2080 | 0.8354 | 0.9140 |
| 0.1099 | 7.3452 | 1234 | 0.7772 | 0.1791 | 0.7772 | 0.8816 |
| 0.1099 | 7.3571 | 1236 | 0.7568 | 0.1791 | 0.7568 | 0.8699 |
| 0.1099 | 7.3690 | 1238 | 0.7571 | 0.1791 | 0.7571 | 0.8701 |
| 0.1099 | 7.3810 | 1240 | 0.7888 | 0.2080 | 0.7888 | 0.8881 |
| 0.1099 | 7.3929 | 1242 | 0.8249 | 0.2080 | 0.8249 | 0.9082 |
| 0.1099 | 7.4048 | 1244 | 0.8272 | 0.2080 | 0.8272 | 0.9095 |
| 0.1099 | 7.4167 | 1246 | 0.8273 | 0.2080 | 0.8273 | 0.9095 |
| 0.1099 | 7.4286 | 1248 | 0.8050 | 0.2080 | 0.8050 | 0.8972 |
| 0.1099 | 7.4405 | 1250 | 0.8100 | 0.2080 | 0.8100 | 0.9000 |
| 0.1099 | 7.4524 | 1252 | 0.7895 | 0.2080 | 0.7895 | 0.8886 |
| 0.1099 | 7.4643 | 1254 | 0.7775 | 0.2080 | 0.7775 | 0.8818 |
| 0.1099 | 7.4762 | 1256 | 0.7551 | 0.2080 | 0.7551 | 0.8690 |
| 0.1099 | 7.4881 | 1258 | 0.7453 | 0.2080 | 0.7453 | 0.8633 |
| 0.1099 | 7.5 | 1260 | 0.7613 | 0.2080 | 0.7613 | 0.8725 |
| 0.1099 | 7.5119 | 1262 | 0.8047 | 0.2080 | 0.8047 | 0.8970 |
| 0.1099 | 7.5238 | 1264 | 0.8850 | 0.2080 | 0.8850 | 0.9407 |
| 0.1099 | 7.5357 | 1266 | 0.9135 | 0.2029 | 0.9135 | 0.9557 |
| 0.1099 | 7.5476 | 1268 | 0.8743 | 0.2080 | 0.8743 | 0.9350 |
| 0.1099 | 7.5595 | 1270 | 0.8359 | 0.2080 | 0.8359 | 0.9143 |
| 0.1099 | 7.5714 | 1272 | 0.8067 | 0.2080 | 0.8067 | 0.8982 |
| 0.1099 | 7.5833 | 1274 | 0.7845 | 0.2080 | 0.7845 | 0.8857 |
| 0.1099 | 7.5952 | 1276 | 0.7971 | 0.2080 | 0.7971 | 0.8928 |
| 0.1099 | 7.6071 | 1278 | 0.8167 | 0.2080 | 0.8167 | 0.9037 |
| 0.1099 | 7.6190 | 1280 | 0.8518 | 0.2080 | 0.8518 | 0.9229 |
| 0.1099 | 7.6310 | 1282 | 0.8701 | 0.2080 | 0.8701 | 0.9328 |
| 0.1099 | 7.6429 | 1284 | 0.9061 | 0.2080 | 0.9061 | 0.9519 |
| 0.1099 | 7.6548 | 1286 | 0.9453 | 0.2029 | 0.9453 | 0.9723 |
| 0.1099 | 7.6667 | 1288 | 0.9667 | 0.2029 | 0.9667 | 0.9832 |
| 0.1099 | 7.6786 | 1290 | 0.9690 | 0.2029 | 0.9690 | 0.9844 |
| 0.1099 | 7.6905 | 1292 | 0.9565 | 0.2029 | 0.9565 | 0.9780 |
| 0.1099 | 7.7024 | 1294 | 0.9258 | 0.2080 | 0.9258 | 0.9622 |
| 0.1099 | 7.7143 | 1296 | 0.8896 | 0.2080 | 0.8896 | 0.9432 |
| 0.1099 | 7.7262 | 1298 | 0.8938 | 0.2080 | 0.8938 | 0.9454 |
| 0.1099 | 7.7381 | 1300 | 0.8941 | 0.2080 | 0.8941 | 0.9456 |
| 0.1099 | 7.75 | 1302 | 0.8899 | 0.2080 | 0.8899 | 0.9434 |
| 0.1099 | 7.7619 | 1304 | 0.8399 | 0.2080 | 0.8399 | 0.9164 |
| 0.1099 | 7.7738 | 1306 | 0.8009 | 0.0179 | 0.8009 | 0.8949 |
| 0.1099 | 7.7857 | 1308 | 0.7781 | 0.0179 | 0.7781 | 0.8821 |
| 0.1099 | 7.7976 | 1310 | 0.7794 | 0.0179 | 0.7794 | 0.8828 |
| 0.1099 | 7.8095 | 1312 | 0.7971 | 0.0179 | 0.7971 | 0.8928 |
| 0.1099 | 7.8214 | 1314 | 0.8381 | 0.2080 | 0.8381 | 0.9155 |
| 0.1099 | 7.8333 | 1316 | 0.8792 | 0.2080 | 0.8792 | 0.9376 |
| 0.1099 | 7.8452 | 1318 | 0.9448 | 0.2029 | 0.9448 | 0.9720 |
| 0.1099 | 7.8571 | 1320 | 0.9540 | 0.2029 | 0.9540 | 0.9767 |
| 0.1099 | 7.8690 | 1322 | 0.9157 | 0.2080 | 0.9157 | 0.9569 |
| 0.1099 | 7.8810 | 1324 | 0.8622 | 0.2080 | 0.8622 | 0.9285 |
| 0.1099 | 7.8929 | 1326 | 0.8013 | 0.0179 | 0.8013 | 0.8951 |
| 0.1099 | 7.9048 | 1328 | 0.7713 | 0.0179 | 0.7713 | 0.8782 |
| 0.1099 | 7.9167 | 1330 | 0.7634 | -0.0185 | 0.7634 | 0.8737 |
| 0.1099 | 7.9286 | 1332 | 0.7624 | 0.0179 | 0.7624 | 0.8731 |
| 0.1099 | 7.9405 | 1334 | 0.7677 | 0.0179 | 0.7677 | 0.8762 |
| 0.1099 | 7.9524 | 1336 | 0.7829 | 0.0179 | 0.7829 | 0.8848 |
| 0.1099 | 7.9643 | 1338 | 0.8144 | 0.2080 | 0.8144 | 0.9024 |
| 0.1099 | 7.9762 | 1340 | 0.8392 | 0.2080 | 0.8392 | 0.9161 |
| 0.1099 | 7.9881 | 1342 | 0.8344 | 0.2080 | 0.8344 | 0.9135 |
| 0.1099 | 8.0 | 1344 | 0.8160 | 0.2080 | 0.8160 | 0.9033 |
| 0.1099 | 8.0119 | 1346 | 0.7981 | 0.2080 | 0.7981 | 0.8934 |
| 0.1099 | 8.0238 | 1348 | 0.7935 | 0.2080 | 0.7935 | 0.8908 |
| 0.1099 | 8.0357 | 1350 | 0.7986 | 0.2080 | 0.7986 | 0.8936 |
| 0.1099 | 8.0476 | 1352 | 0.8080 | 0.2080 | 0.8080 | 0.8989 |
| 0.1099 | 8.0595 | 1354 | 0.8041 | 0.2080 | 0.8041 | 0.8967 |
| 0.1099 | 8.0714 | 1356 | 0.8043 | 0.2080 | 0.8043 | 0.8968 |
| 0.1099 | 8.0833 | 1358 | 0.8247 | 0.2080 | 0.8247 | 0.9081 |
| 0.1099 | 8.0952 | 1360 | 0.8604 | 0.2080 | 0.8604 | 0.9276 |
| 0.1099 | 8.1071 | 1362 | 0.8946 | 0.2029 | 0.8946 | 0.9459 |
| 0.1099 | 8.1190 | 1364 | 0.8898 | 0.2029 | 0.8898 | 0.9433 |
| 0.1099 | 8.1310 | 1366 | 0.8716 | 0.2080 | 0.8716 | 0.9336 |
| 0.1099 | 8.1429 | 1368 | 0.8278 | 0.2080 | 0.8278 | 0.9098 |
| 0.1099 | 8.1548 | 1370 | 0.8007 | 0.2080 | 0.8007 | 0.8948 |
| 0.1099 | 8.1667 | 1372 | 0.7819 | 0.0179 | 0.7819 | 0.8843 |
| 0.1099 | 8.1786 | 1374 | 0.7678 | 0.0179 | 0.7678 | 0.8762 |
| 0.1099 | 8.1905 | 1376 | 0.7668 | 0.0179 | 0.7668 | 0.8757 |
| 0.1099 | 8.2024 | 1378 | 0.7695 | 0.0179 | 0.7695 | 0.8772 |
| 0.1099 | 8.2143 | 1380 | 0.7806 | 0.0179 | 0.7806 | 0.8835 |
| 0.1099 | 8.2262 | 1382 | 0.7857 | 0.0179 | 0.7857 | 0.8864 |
| 0.1099 | 8.2381 | 1384 | 0.7974 | 0.2080 | 0.7974 | 0.8930 |
| 0.1099 | 8.25 | 1386 | 0.8133 | 0.2080 | 0.8133 | 0.9018 |
| 0.1099 | 8.2619 | 1388 | 0.8301 | 0.2080 | 0.8301 | 0.9111 |
| 0.1099 | 8.2738 | 1390 | 0.8790 | 0.2029 | 0.8790 | 0.9376 |
| 0.1099 | 8.2857 | 1392 | 0.9398 | 0.2029 | 0.9398 | 0.9694 |
| 0.1099 | 8.2976 | 1394 | 0.9720 | 0.2029 | 0.9720 | 0.9859 |
| 0.1099 | 8.3095 | 1396 | 1.0037 | 0.1987 | 1.0037 | 1.0019 |
| 0.1099 | 8.3214 | 1398 | 1.0486 | 0.0610 | 1.0486 | 1.0240 |
| 0.1099 | 8.3333 | 1400 | 1.0634 | -0.0565 | 1.0634 | 1.0312 |
| 0.1099 | 8.3452 | 1402 | 1.0311 | 0.0610 | 1.0311 | 1.0154 |
| 0.1099 | 8.3571 | 1404 | 0.9700 | 0.2029 | 0.9700 | 0.9849 |
| 0.1099 | 8.3690 | 1406 | 0.9112 | 0.2029 | 0.9112 | 0.9546 |
| 0.1099 | 8.3810 | 1408 | 0.8817 | 0.2029 | 0.8817 | 0.9390 |
| 0.1099 | 8.3929 | 1410 | 0.8882 | 0.2029 | 0.8882 | 0.9424 |
| 0.1099 | 8.4048 | 1412 | 0.8734 | 0.2029 | 0.8734 | 0.9346 |
| 0.1099 | 8.4167 | 1414 | 0.8541 | 0.2029 | 0.8541 | 0.9242 |
| 0.1099 | 8.4286 | 1416 | 0.8432 | 0.2080 | 0.8432 | 0.9183 |
| 0.1099 | 8.4405 | 1418 | 0.8524 | 0.2029 | 0.8524 | 0.9233 |
| 0.1099 | 8.4524 | 1420 | 0.9103 | 0.2029 | 0.9103 | 0.9541 |
| 0.1099 | 8.4643 | 1422 | 0.9391 | 0.2029 | 0.9391 | 0.9691 |
| 0.1099 | 8.4762 | 1424 | 0.9604 | 0.2029 | 0.9604 | 0.9800 |
| 0.1099 | 8.4881 | 1426 | 0.9882 | 0.0530 | 0.9882 | 0.9941 |
| 0.1099 | 8.5 | 1428 | 1.0279 | 0.0530 | 1.0279 | 1.0139 |
| 0.1099 | 8.5119 | 1430 | 1.0621 | 0.0530 | 1.0621 | 1.0306 |
| 0.1099 | 8.5238 | 1432 | 1.0733 | 0.0530 | 1.0733 | 1.0360 |
| 0.1099 | 8.5357 | 1434 | 1.0723 | 0.0530 | 1.0723 | 1.0355 |
| 0.1099 | 8.5476 | 1436 | 1.0589 | 0.0530 | 1.0589 | 1.0290 |
| 0.1099 | 8.5595 | 1438 | 1.0366 | 0.0530 | 1.0366 | 1.0181 |
| 0.1099 | 8.5714 | 1440 | 1.0074 | 0.0530 | 1.0074 | 1.0037 |
| 0.1099 | 8.5833 | 1442 | 0.9844 | 0.0530 | 0.9844 | 0.9921 |
| 0.1099 | 8.5952 | 1444 | 0.9590 | 0.2029 | 0.9590 | 0.9793 |
| 0.1099 | 8.6071 | 1446 | 0.9419 | 0.0320 | 0.9419 | 0.9705 |
| 0.1099 | 8.6190 | 1448 | 0.9353 | 0.0320 | 0.9353 | 0.9671 |
| 0.1099 | 8.6310 | 1450 | 0.9563 | 0.2029 | 0.9563 | 0.9779 |
| 0.1099 | 8.6429 | 1452 | 0.9633 | 0.2029 | 0.9633 | 0.9815 |
| 0.1099 | 8.6548 | 1454 | 0.9727 | 0.0530 | 0.9727 | 0.9863 |
| 0.1099 | 8.6667 | 1456 | 0.9519 | 0.0320 | 0.9519 | 0.9757 |
| 0.1099 | 8.6786 | 1458 | 0.9494 | 0.0320 | 0.9494 | 0.9744 |
| 0.1099 | 8.6905 | 1460 | 0.9600 | 0.2029 | 0.9600 | 0.9798 |
| 0.1099 | 8.7024 | 1462 | 0.9871 | 0.0530 | 0.9871 | 0.9936 |
| 0.1099 | 8.7143 | 1464 | 1.0068 | 0.0530 | 1.0068 | 1.0034 |
| 0.1099 | 8.7262 | 1466 | 1.0179 | 0.0530 | 1.0179 | 1.0089 |
| 0.1099 | 8.7381 | 1468 | 1.0256 | 0.0530 | 1.0256 | 1.0127 |
| 0.1099 | 8.75 | 1470 | 1.0288 | 0.0530 | 1.0288 | 1.0143 |
| 0.1099 | 8.7619 | 1472 | 1.0162 | 0.0530 | 1.0162 | 1.0081 |
| 0.1099 | 8.7738 | 1474 | 1.0000 | 0.0530 | 1.0000 | 1.0000 |
| 0.1099 | 8.7857 | 1476 | 0.9854 | 0.0530 | 0.9854 | 0.9927 |
| 0.1099 | 8.7976 | 1478 | 0.9596 | 0.0530 | 0.9596 | 0.9796 |
| 0.1099 | 8.8095 | 1480 | 0.9382 | 0.2080 | 0.9382 | 0.9686 |
| 0.1099 | 8.8214 | 1482 | 0.9281 | 0.2080 | 0.9281 | 0.9634 |
| 0.1099 | 8.8333 | 1484 | 0.9417 | 0.2029 | 0.9417 | 0.9704 |
| 0.1099 | 8.8452 | 1486 | 0.9627 | 0.0530 | 0.9627 | 0.9812 |
| 0.1099 | 8.8571 | 1488 | 0.9933 | 0.0530 | 0.9933 | 0.9966 |
| 0.1099 | 8.8690 | 1490 | 1.0368 | 0.0530 | 1.0368 | 1.0183 |
| 0.1099 | 8.8810 | 1492 | 1.0519 | 0.0530 | 1.0519 | 1.0256 |
| 0.1099 | 8.8929 | 1494 | 1.0626 | 0.0530 | 1.0626 | 1.0308 |
| 0.1099 | 8.9048 | 1496 | 1.0735 | 0.0530 | 1.0735 | 1.0361 |
| 0.1099 | 8.9167 | 1498 | 1.0702 | 0.0530 | 1.0702 | 1.0345 |
| 0.0542 | 8.9286 | 1500 | 1.0394 | 0.0530 | 1.0394 | 1.0195 |
| 0.0542 | 8.9405 | 1502 | 0.9888 | 0.0530 | 0.9888 | 0.9944 |
| 0.0542 | 8.9524 | 1504 | 0.9258 | 0.2080 | 0.9258 | 0.9622 |
| 0.0542 | 8.9643 | 1506 | 0.8779 | 0.2080 | 0.8779 | 0.9370 |
| 0.0542 | 8.9762 | 1508 | 0.8564 | 0.0179 | 0.8564 | 0.9254 |
| 0.0542 | 8.9881 | 1510 | 0.8435 | 0.0179 | 0.8435 | 0.9184 |
| 0.0542 | 9.0 | 1512 | 0.8229 | 0.0179 | 0.8229 | 0.9072 |
| 0.0542 | 9.0119 | 1514 | 0.8197 | 0.0179 | 0.8197 | 0.9054 |
| 0.0542 | 9.0238 | 1516 | 0.8307 | 0.0179 | 0.8307 | 0.9114 |
| 0.0542 | 9.0357 | 1518 | 0.8496 | 0.0179 | 0.8496 | 0.9217 |
| 0.0542 | 9.0476 | 1520 | 0.8609 | 0.2080 | 0.8609 | 0.9278 |
| 0.0542 | 9.0595 | 1522 | 0.8788 | 0.2080 | 0.8788 | 0.9375 |
| 0.0542 | 9.0714 | 1524 | 0.8879 | 0.2080 | 0.8879 | 0.9423 |
| 0.0542 | 9.0833 | 1526 | 0.8916 | 0.2080 | 0.8916 | 0.9442 |
| 0.0542 | 9.0952 | 1528 | 0.8852 | 0.2080 | 0.8852 | 0.9408 |
| 0.0542 | 9.1071 | 1530 | 0.8716 | 0.2080 | 0.8716 | 0.9336 |
| 0.0542 | 9.1190 | 1532 | 0.8678 | 0.2080 | 0.8678 | 0.9316 |
| 0.0542 | 9.1310 | 1534 | 0.8727 | 0.2080 | 0.8727 | 0.9342 |
| 0.0542 | 9.1429 | 1536 | 0.8823 | 0.2080 | 0.8823 | 0.9393 |
| 0.0542 | 9.1548 | 1538 | 0.8799 | 0.2080 | 0.8799 | 0.9380 |
| 0.0542 | 9.1667 | 1540 | 0.8890 | 0.2080 | 0.8890 | 0.9429 |
| 0.0542 | 9.1786 | 1542 | 0.9041 | 0.2080 | 0.9041 | 0.9509 |
| 0.0542 | 9.1905 | 1544 | 0.9277 | 0.2029 | 0.9277 | 0.9632 |
| 0.0542 | 9.2024 | 1546 | 0.9512 | 0.0530 | 0.9512 | 0.9753 |
| 0.0542 | 9.2143 | 1548 | 0.9778 | 0.0530 | 0.9778 | 0.9888 |
| 0.0542 | 9.2262 | 1550 | 1.0059 | 0.0530 | 1.0059 | 1.0030 |
| 0.0542 | 9.2381 | 1552 | 1.0217 | 0.0530 | 1.0217 | 1.0108 |
| 0.0542 | 9.25 | 1554 | 1.0407 | 0.0530 | 1.0407 | 1.0201 |
| 0.0542 | 9.2619 | 1556 | 1.0462 | 0.0530 | 1.0462 | 1.0228 |
| 0.0542 | 9.2738 | 1558 | 1.0418 | 0.0530 | 1.0418 | 1.0207 |
| 0.0542 | 9.2857 | 1560 | 1.0421 | 0.0530 | 1.0421 | 1.0208 |
| 0.0542 | 9.2976 | 1562 | 1.0422 | 0.0530 | 1.0422 | 1.0209 |
| 0.0542 | 9.3095 | 1564 | 1.0390 | 0.0530 | 1.0390 | 1.0193 |
| 0.0542 | 9.3214 | 1566 | 1.0384 | 0.0530 | 1.0384 | 1.0190 |
| 0.0542 | 9.3333 | 1568 | 1.0407 | 0.0530 | 1.0407 | 1.0201 |
| 0.0542 | 9.3452 | 1570 | 1.0371 | 0.0530 | 1.0371 | 1.0184 |
| 0.0542 | 9.3571 | 1572 | 1.0240 | 0.0530 | 1.0240 | 1.0119 |
| 0.0542 | 9.3690 | 1574 | 1.0208 | 0.0530 | 1.0208 | 1.0103 |
| 0.0542 | 9.3810 | 1576 | 1.0239 | 0.0530 | 1.0239 | 1.0119 |
| 0.0542 | 9.3929 | 1578 | 1.0320 | 0.0530 | 1.0320 | 1.0159 |
| 0.0542 | 9.4048 | 1580 | 1.0320 | 0.0530 | 1.0320 | 1.0159 |
| 0.0542 | 9.4167 | 1582 | 1.0316 | 0.0530 | 1.0316 | 1.0157 |
| 0.0542 | 9.4286 | 1584 | 1.0409 | 0.0530 | 1.0409 | 1.0203 |
| 0.0542 | 9.4405 | 1586 | 1.0617 | 0.0530 | 1.0617 | 1.0304 |
| 0.0542 | 9.4524 | 1588 | 1.0795 | 0.0530 | 1.0795 | 1.0390 |
| 0.0542 | 9.4643 | 1590 | 1.0884 | 0.0530 | 1.0884 | 1.0432 |
| 0.0542 | 9.4762 | 1592 | 1.0970 | 0.0530 | 1.0970 | 1.0474 |
| 0.0542 | 9.4881 | 1594 | 1.0980 | 0.0610 | 1.0980 | 1.0479 |
| 0.0542 | 9.5 | 1596 | 1.0853 | 0.0530 | 1.0853 | 1.0418 |
| 0.0542 | 9.5119 | 1598 | 1.0741 | 0.0530 | 1.0741 | 1.0364 |
| 0.0542 | 9.5238 | 1600 | 1.0591 | 0.0530 | 1.0591 | 1.0291 |
| 0.0542 | 9.5357 | 1602 | 1.0429 | 0.0530 | 1.0429 | 1.0212 |
| 0.0542 | 9.5476 | 1604 | 1.0301 | 0.0530 | 1.0301 | 1.0149 |
| 0.0542 | 9.5595 | 1606 | 1.0238 | 0.0530 | 1.0238 | 1.0118 |
| 0.0542 | 9.5714 | 1608 | 1.0203 | 0.0530 | 1.0203 | 1.0101 |
| 0.0542 | 9.5833 | 1610 | 1.0203 | 0.0530 | 1.0203 | 1.0101 |
| 0.0542 | 9.5952 | 1612 | 1.0180 | 0.0530 | 1.0180 | 1.0090 |
| 0.0542 | 9.6071 | 1614 | 1.0202 | 0.0530 | 1.0202 | 1.0101 |
| 0.0542 | 9.6190 | 1616 | 1.0223 | 0.0530 | 1.0223 | 1.0111 |
| 0.0542 | 9.6310 | 1618 | 1.0223 | 0.0530 | 1.0223 | 1.0111 |
| 0.0542 | 9.6429 | 1620 | 1.0165 | 0.0530 | 1.0165 | 1.0082 |
| 0.0542 | 9.6548 | 1622 | 1.0154 | 0.0530 | 1.0154 | 1.0077 |
| 0.0542 | 9.6667 | 1624 | 1.0161 | 0.0530 | 1.0161 | 1.0080 |
| 0.0542 | 9.6786 | 1626 | 1.0200 | 0.0530 | 1.0200 | 1.0100 |
| 0.0542 | 9.6905 | 1628 | 1.0251 | 0.0530 | 1.0251 | 1.0125 |
| 0.0542 | 9.7024 | 1630 | 1.0362 | 0.0530 | 1.0362 | 1.0179 |
| 0.0542 | 9.7143 | 1632 | 1.0447 | 0.0530 | 1.0447 | 1.0221 |
| 0.0542 | 9.7262 | 1634 | 1.0496 | 0.0530 | 1.0496 | 1.0245 |
| 0.0542 | 9.7381 | 1636 | 1.0562 | 0.0530 | 1.0562 | 1.0277 |
| 0.0542 | 9.75 | 1638 | 1.0555 | 0.0530 | 1.0555 | 1.0274 |
| 0.0542 | 9.7619 | 1640 | 1.0513 | 0.0530 | 1.0513 | 1.0253 |
| 0.0542 | 9.7738 | 1642 | 1.0491 | 0.0530 | 1.0491 | 1.0243 |
| 0.0542 | 9.7857 | 1644 | 1.0433 | 0.0530 | 1.0433 | 1.0214 |
| 0.0542 | 9.7976 | 1646 | 1.0399 | 0.0530 | 1.0399 | 1.0198 |
| 0.0542 | 9.8095 | 1648 | 1.0373 | 0.0530 | 1.0373 | 1.0185 |
| 0.0542 | 9.8214 | 1650 | 1.0299 | 0.0530 | 1.0299 | 1.0148 |
| 0.0542 | 9.8333 | 1652 | 1.0238 | 0.0530 | 1.0238 | 1.0119 |
| 0.0542 | 9.8452 | 1654 | 1.0172 | 0.0530 | 1.0172 | 1.0085 |
| 0.0542 | 9.8571 | 1656 | 1.0116 | 0.0530 | 1.0116 | 1.0058 |
| 0.0542 | 9.8690 | 1658 | 1.0097 | 0.0530 | 1.0097 | 1.0048 |
| 0.0542 | 9.8810 | 1660 | 1.0107 | 0.0530 | 1.0107 | 1.0054 |
| 0.0542 | 9.8929 | 1662 | 1.0135 | 0.0530 | 1.0135 | 1.0067 |
| 0.0542 | 9.9048 | 1664 | 1.0160 | 0.0530 | 1.0160 | 1.0080 |
| 0.0542 | 9.9167 | 1666 | 1.0177 | 0.0530 | 1.0177 | 1.0088 |
| 0.0542 | 9.9286 | 1668 | 1.0199 | 0.0530 | 1.0199 | 1.0099 |
| 0.0542 | 9.9405 | 1670 | 1.0211 | 0.0530 | 1.0211 | 1.0105 |
| 0.0542 | 9.9524 | 1672 | 1.0218 | 0.0530 | 1.0218 | 1.0109 |
| 0.0542 | 9.9643 | 1674 | 1.0227 | 0.0530 | 1.0227 | 1.0113 |
| 0.0542 | 9.9762 | 1676 | 1.0228 | 0.0530 | 1.0228 | 1.0113 |
| 0.0542 | 9.9881 | 1678 | 1.0227 | 0.0530 | 1.0227 | 1.0113 |
| 0.0542 | 10.0 | 1680 | 1.0226 | 0.0530 | 1.0226 | 1.0112 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
MattMcG/qwen_vl_hooker_finetune | MattMcG | 2024-11-25T12:10:40Z | 9 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-11-25T11:57:41Z | ---
base_model: unsloth/qwen2-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MattMcG
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-vl-7b-instruct-bnb-4bit
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
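A minimal loading sketch, not from the original upload notes: it assumes this repository stores merged Qwen2-VL weights (rather than only a LoRA adapter) and uses the standard `transformers` classes for the architecture.
```python
# Minimal sketch, assuming merged weights; if only a LoRA adapter was uploaded,
# load the base model first and attach the adapter with peft instead.
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor

model_id = "MattMcG/qwen_vl_hooker_finetune"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)
```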
|
mradermacher/mistral-v0.1-7b-sft-ultrachat-GGUF | mradermacher | 2024-11-25T11:57:13Z | 6 | 1 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"en",
"base_model:AmberYifan/mistral-v0.1-7b-sft-ultrachat",
"base_model:quantized:AmberYifan/mistral-v0.1-7b-sft-ultrachat",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T11:22:46Z | ---
base_model: AmberYifan/mistral-v0.1-7b-sft-ultrachat
language:
- en
library_name: transformers
model_name: mistral-v0.1-7b-sft-ultrachat
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/AmberYifan/mistral-v0.1-7b-sft-ultrachat
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
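As a minimal sketch (not part of the original card), one common way to run a single-file quant is with `llama-cpp-python`; the filename below matches the Q4_K_M entry in the table that follows, and the context size is an illustrative choice.
```python
# Minimal sketch: download one quant from this repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/mistral-v0.1-7b-sft-ultrachat-GGUF",
    filename="mistral-v0.1-7b-sft-ultrachat.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096)  # context size is an illustrative choice
out = llm("Summarize what GGUF quantization is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```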
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mistral-v0.1-7b-sft-ultrachat-GGUF/resolve/main/mistral-v0.1-7b-sft-ultrachat.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-v0.1-7b-sft-ultrachat-GGUF/resolve/main/mistral-v0.1-7b-sft-ultrachat.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-v0.1-7b-sft-ultrachat-GGUF/resolve/main/mistral-v0.1-7b-sft-ultrachat.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-v0.1-7b-sft-ultrachat-GGUF/resolve/main/mistral-v0.1-7b-sft-ultrachat.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-v0.1-7b-sft-ultrachat-GGUF/resolve/main/mistral-v0.1-7b-sft-ultrachat.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-v0.1-7b-sft-ultrachat-GGUF/resolve/main/mistral-v0.1-7b-sft-ultrachat.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-v0.1-7b-sft-ultrachat-GGUF/resolve/main/mistral-v0.1-7b-sft-ultrachat.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-v0.1-7b-sft-ultrachat-GGUF/resolve/main/mistral-v0.1-7b-sft-ultrachat.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-v0.1-7b-sft-ultrachat-GGUF/resolve/main/mistral-v0.1-7b-sft-ultrachat.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-v0.1-7b-sft-ultrachat-GGUF/resolve/main/mistral-v0.1-7b-sft-ultrachat.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-v0.1-7b-sft-ultrachat-GGUF/resolve/main/mistral-v0.1-7b-sft-ultrachat.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-v0.1-7b-sft-ultrachat-GGUF/resolve/main/mistral-v0.1-7b-sft-ultrachat.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-v0.1-7b-sft-ultrachat-GGUF/resolve/main/mistral-v0.1-7b-sft-ultrachat.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k30_task3_organization_fold1 | MayBashendy | 2024-11-25T11:54:47Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T11:43:59Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k30_task3_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k30_task3_organization_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8936
- Qwk: -0.1786
- Mse: 0.8936
- Rmse: 0.9453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
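As a rough illustration (not taken from the original training script), the settings above map onto `transformers.TrainingArguments` roughly as follows; `output_dir` is a hypothetical name and all other options keep library defaults.
```python
# Approximate mapping of the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_task3_organization_fold1",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```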
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0163 | 2 | 4.1809 | 0.0 | 4.1809 | 2.0447 |
| No log | 0.0325 | 4 | 2.4090 | -0.0168 | 2.4090 | 1.5521 |
| No log | 0.0488 | 6 | 1.8504 | 0.0 | 1.8504 | 1.3603 |
| No log | 0.0650 | 8 | 1.8777 | 0.0 | 1.8777 | 1.3703 |
| No log | 0.0813 | 10 | 1.2869 | 0.0 | 1.2869 | 1.1344 |
| No log | 0.0976 | 12 | 0.9604 | 0.0120 | 0.9604 | 0.9800 |
| No log | 0.1138 | 14 | 1.0617 | 0.0 | 1.0617 | 1.0304 |
| No log | 0.1301 | 16 | 0.9892 | 0.0 | 0.9892 | 0.9946 |
| No log | 0.1463 | 18 | 1.0098 | 0.0 | 1.0098 | 1.0049 |
| No log | 0.1626 | 20 | 0.9135 | 0.0 | 0.9135 | 0.9558 |
| No log | 0.1789 | 22 | 0.7592 | -0.0708 | 0.7592 | 0.8713 |
| No log | 0.1951 | 24 | 0.6694 | 0.0 | 0.6694 | 0.8181 |
| No log | 0.2114 | 26 | 0.7382 | 0.0571 | 0.7382 | 0.8592 |
| No log | 0.2276 | 28 | 1.3584 | 0.0 | 1.3584 | 1.1655 |
| No log | 0.2439 | 30 | 2.7341 | 0.0571 | 2.7341 | 1.6535 |
| No log | 0.2602 | 32 | 2.6204 | 0.1323 | 2.6204 | 1.6188 |
| No log | 0.2764 | 34 | 1.6001 | 0.0 | 1.6001 | 1.2650 |
| No log | 0.2927 | 36 | 0.8229 | 0.0 | 0.8229 | 0.9071 |
| No log | 0.3089 | 38 | 0.8646 | -0.0820 | 0.8646 | 0.9299 |
| No log | 0.3252 | 40 | 1.2291 | 0.0 | 1.2291 | 1.1087 |
| No log | 0.3415 | 42 | 0.9014 | -0.0916 | 0.9014 | 0.9494 |
| No log | 0.3577 | 44 | 0.7002 | 0.0 | 0.7002 | 0.8368 |
| No log | 0.3740 | 46 | 0.7647 | -0.0233 | 0.7647 | 0.8745 |
| No log | 0.3902 | 48 | 1.3480 | 0.0 | 1.3480 | 1.1610 |
| No log | 0.4065 | 50 | 1.3280 | 0.0 | 1.3280 | 1.1524 |
| No log | 0.4228 | 52 | 1.2867 | 0.0 | 1.2867 | 1.1343 |
| No log | 0.4390 | 54 | 0.9156 | 0.0 | 0.9156 | 0.9569 |
| No log | 0.4553 | 56 | 0.8747 | 0.0 | 0.8747 | 0.9352 |
| No log | 0.4715 | 58 | 0.7148 | 0.0 | 0.7148 | 0.8455 |
| No log | 0.4878 | 60 | 0.6856 | 0.0 | 0.6856 | 0.8280 |
| No log | 0.5041 | 62 | 1.0330 | 0.0 | 1.0330 | 1.0164 |
| No log | 0.5203 | 64 | 1.7612 | 0.0 | 1.7612 | 1.3271 |
| No log | 0.5366 | 66 | 1.6599 | 0.0 | 1.6599 | 1.2884 |
| No log | 0.5528 | 68 | 1.2845 | 0.0 | 1.2845 | 1.1334 |
| No log | 0.5691 | 70 | 1.0179 | 0.0 | 1.0179 | 1.0089 |
| No log | 0.5854 | 72 | 0.9382 | 0.0 | 0.9382 | 0.9686 |
| No log | 0.6016 | 74 | 0.9695 | 0.0 | 0.9695 | 0.9847 |
| No log | 0.6179 | 76 | 0.9170 | 0.0 | 0.9170 | 0.9576 |
| No log | 0.6341 | 78 | 0.7330 | 0.0 | 0.7330 | 0.8562 |
| No log | 0.6504 | 80 | 0.7180 | 0.0 | 0.7180 | 0.8473 |
| No log | 0.6667 | 82 | 0.7087 | 0.0 | 0.7087 | 0.8419 |
| No log | 0.6829 | 84 | 0.7006 | 0.0 | 0.7006 | 0.8370 |
| No log | 0.6992 | 86 | 0.7127 | 0.0 | 0.7127 | 0.8442 |
| No log | 0.7154 | 88 | 0.6691 | 0.0 | 0.6691 | 0.8180 |
| No log | 0.7317 | 90 | 0.7463 | 0.0 | 0.7463 | 0.8639 |
| No log | 0.7480 | 92 | 0.9760 | 0.0 | 0.9760 | 0.9879 |
| No log | 0.7642 | 94 | 1.1356 | 0.0 | 1.1356 | 1.0656 |
| No log | 0.7805 | 96 | 1.1051 | 0.0 | 1.1051 | 1.0512 |
| No log | 0.7967 | 98 | 0.9382 | 0.0 | 0.9382 | 0.9686 |
| No log | 0.8130 | 100 | 0.9447 | 0.0 | 0.9447 | 0.9720 |
| No log | 0.8293 | 102 | 1.0100 | 0.0 | 1.0100 | 1.0050 |
| No log | 0.8455 | 104 | 1.2204 | 0.0 | 1.2204 | 1.1047 |
| No log | 0.8618 | 106 | 1.1871 | 0.0 | 1.1871 | 1.0895 |
| No log | 0.8780 | 108 | 0.9094 | 0.0120 | 0.9094 | 0.9536 |
| No log | 0.8943 | 110 | 0.7087 | 0.0 | 0.7087 | 0.8419 |
| No log | 0.9106 | 112 | 0.7085 | 0.0 | 0.7085 | 0.8417 |
| No log | 0.9268 | 114 | 0.8529 | 0.0120 | 0.8529 | 0.9235 |
| No log | 0.9431 | 116 | 1.1586 | 0.0 | 1.1586 | 1.0764 |
| No log | 0.9593 | 118 | 1.3150 | 0.0 | 1.3150 | 1.1467 |
| No log | 0.9756 | 120 | 1.2441 | 0.0 | 1.2441 | 1.1154 |
| No log | 0.9919 | 122 | 1.0450 | 0.0 | 1.0450 | 1.0223 |
| No log | 1.0081 | 124 | 1.0904 | 0.0 | 1.0904 | 1.0442 |
| No log | 1.0244 | 126 | 1.0781 | 0.0 | 1.0781 | 1.0383 |
| No log | 1.0407 | 128 | 1.0436 | 0.0 | 1.0436 | 1.0216 |
| No log | 1.0569 | 130 | 1.1887 | 0.0 | 1.1887 | 1.0903 |
| No log | 1.0732 | 132 | 1.0492 | 0.0 | 1.0492 | 1.0243 |
| No log | 1.0894 | 134 | 0.8679 | 0.0120 | 0.8679 | 0.9316 |
| No log | 1.1057 | 136 | 0.7027 | 0.0 | 0.7027 | 0.8383 |
| No log | 1.1220 | 138 | 0.6443 | 0.0 | 0.6443 | 0.8027 |
| No log | 1.1382 | 140 | 0.6289 | 0.0 | 0.6289 | 0.7930 |
| No log | 1.1545 | 142 | 0.6286 | 0.0 | 0.6286 | 0.7928 |
| No log | 1.1707 | 144 | 0.7438 | -0.0421 | 0.7438 | 0.8624 |
| No log | 1.1870 | 146 | 0.8764 | 0.0403 | 0.8764 | 0.9361 |
| No log | 1.2033 | 148 | 1.0122 | 0.0 | 1.0122 | 1.0061 |
| No log | 1.2195 | 150 | 1.1796 | 0.0 | 1.1796 | 1.0861 |
| No log | 1.2358 | 152 | 1.2025 | 0.0 | 1.2025 | 1.0966 |
| No log | 1.2520 | 154 | 1.0416 | 0.0 | 1.0416 | 1.0206 |
| No log | 1.2683 | 156 | 0.9120 | 0.0 | 0.9120 | 0.9550 |
| No log | 1.2846 | 158 | 0.8707 | 0.0 | 0.8707 | 0.9331 |
| No log | 1.3008 | 160 | 0.8302 | 0.0253 | 0.8302 | 0.9112 |
| No log | 1.3171 | 162 | 0.7704 | 0.2787 | 0.7704 | 0.8778 |
| No log | 1.3333 | 164 | 0.6792 | 0.0 | 0.6792 | 0.8241 |
| No log | 1.3496 | 166 | 0.6638 | 0.0 | 0.6638 | 0.8147 |
| No log | 1.3659 | 168 | 0.6700 | 0.0 | 0.6700 | 0.8186 |
| No log | 1.3821 | 170 | 0.7012 | 0.0 | 0.7012 | 0.8374 |
| No log | 1.3984 | 172 | 0.8270 | -0.2623 | 0.8270 | 0.9094 |
| No log | 1.4146 | 174 | 1.1674 | 0.0 | 1.1674 | 1.0805 |
| No log | 1.4309 | 176 | 1.4633 | 0.0 | 1.4633 | 1.2097 |
| No log | 1.4472 | 178 | 1.7855 | 0.0 | 1.7855 | 1.3362 |
| No log | 1.4634 | 180 | 1.6411 | 0.0 | 1.6411 | 1.2811 |
| No log | 1.4797 | 182 | 1.1470 | 0.0 | 1.1470 | 1.0710 |
| No log | 1.4959 | 184 | 0.9113 | -0.1074 | 0.9113 | 0.9546 |
| No log | 1.5122 | 186 | 0.8601 | -0.0916 | 0.8601 | 0.9274 |
| No log | 1.5285 | 188 | 0.7877 | -0.0708 | 0.7877 | 0.8875 |
| No log | 1.5447 | 190 | 0.6458 | 0.0 | 0.6458 | 0.8036 |
| No log | 1.5610 | 192 | 0.6332 | 0.0 | 0.6332 | 0.7957 |
| No log | 1.5772 | 194 | 0.6331 | 0.0 | 0.6331 | 0.7957 |
| No log | 1.5935 | 196 | 0.7594 | -0.0421 | 0.7594 | 0.8714 |
| No log | 1.6098 | 198 | 1.3988 | 0.0 | 1.3988 | 1.1827 |
| No log | 1.6260 | 200 | 1.8174 | 0.0 | 1.8174 | 1.3481 |
| No log | 1.6423 | 202 | 1.4908 | 0.0 | 1.4908 | 1.2210 |
| No log | 1.6585 | 204 | 0.9313 | -0.2532 | 0.9313 | 0.9650 |
| No log | 1.6748 | 206 | 0.6991 | 0.0 | 0.6991 | 0.8361 |
| No log | 1.6911 | 208 | 0.6920 | 0.0 | 0.6920 | 0.8319 |
| No log | 1.7073 | 210 | 0.7426 | 0.1895 | 0.7426 | 0.8618 |
| No log | 1.7236 | 212 | 0.7728 | -0.0708 | 0.7728 | 0.8791 |
| No log | 1.7398 | 214 | 0.8565 | 0.0 | 0.8565 | 0.9254 |
| No log | 1.7561 | 216 | 1.0412 | 0.0 | 1.0412 | 1.0204 |
| No log | 1.7724 | 218 | 1.1639 | 0.0 | 1.1639 | 1.0788 |
| No log | 1.7886 | 220 | 1.0869 | 0.0 | 1.0869 | 1.0425 |
| No log | 1.8049 | 222 | 0.8795 | 0.0 | 0.8795 | 0.9378 |
| No log | 1.8211 | 224 | 0.8020 | 0.1646 | 0.8020 | 0.8956 |
| No log | 1.8374 | 226 | 0.7958 | 0.1646 | 0.7958 | 0.8921 |
| No log | 1.8537 | 228 | 0.7535 | 0.1895 | 0.7535 | 0.8680 |
| No log | 1.8699 | 230 | 0.7083 | 0.1895 | 0.7083 | 0.8416 |
| No log | 1.8862 | 232 | 0.7125 | -0.0577 | 0.7125 | 0.8441 |
| No log | 1.9024 | 234 | 0.7802 | -0.0577 | 0.7802 | 0.8833 |
| No log | 1.9187 | 236 | 0.8408 | -0.2623 | 0.8408 | 0.9170 |
| No log | 1.9350 | 238 | 0.8144 | -0.0708 | 0.8144 | 0.9024 |
| No log | 1.9512 | 240 | 0.7353 | -0.0708 | 0.7353 | 0.8575 |
| No log | 1.9675 | 242 | 0.6496 | 0.1895 | 0.6496 | 0.8060 |
| No log | 1.9837 | 244 | 0.6158 | 0.1895 | 0.6158 | 0.7847 |
| No log | 2.0 | 246 | 0.5764 | 0.0 | 0.5764 | 0.7592 |
| No log | 2.0163 | 248 | 0.5845 | 0.0 | 0.5845 | 0.7645 |
| No log | 2.0325 | 250 | 0.6314 | 0.0 | 0.6314 | 0.7946 |
| No log | 2.0488 | 252 | 0.6850 | 0.0 | 0.6850 | 0.8276 |
| No log | 2.0650 | 254 | 0.7090 | 0.0 | 0.7090 | 0.8420 |
| No log | 2.0813 | 256 | 0.6855 | 0.1895 | 0.6855 | 0.8279 |
| No log | 2.0976 | 258 | 0.6759 | 0.1895 | 0.6759 | 0.8221 |
| No log | 2.1138 | 260 | 0.7000 | 0.0 | 0.7000 | 0.8366 |
| No log | 2.1301 | 262 | 0.7554 | 0.0222 | 0.7554 | 0.8692 |
| No log | 2.1463 | 264 | 0.8181 | 0.1538 | 0.8181 | 0.9045 |
| No log | 2.1626 | 266 | 1.0370 | -0.0820 | 1.0370 | 1.0183 |
| No log | 2.1789 | 268 | 0.8913 | 0.1538 | 0.8913 | 0.9441 |
| No log | 2.1951 | 270 | 0.7867 | 0.0388 | 0.7867 | 0.8870 |
| No log | 2.2114 | 272 | 0.7621 | 0.1852 | 0.7621 | 0.8730 |
| No log | 2.2276 | 274 | 0.8231 | 0.1895 | 0.8231 | 0.9072 |
| No log | 2.2439 | 276 | 0.9130 | -0.0820 | 0.9130 | 0.9555 |
| No log | 2.2602 | 278 | 0.7698 | 0.2326 | 0.7698 | 0.8774 |
| No log | 2.2764 | 280 | 0.7915 | 0.2143 | 0.7915 | 0.8897 |
| No log | 2.2927 | 282 | 0.8423 | -0.1440 | 0.8423 | 0.9178 |
| No log | 2.3089 | 284 | 0.8522 | 0.1818 | 0.8522 | 0.9231 |
| No log | 2.3252 | 286 | 0.8931 | 0.1791 | 0.8931 | 0.9451 |
| No log | 2.3415 | 288 | 0.9268 | 0.0610 | 0.9268 | 0.9627 |
| No log | 2.3577 | 290 | 1.0169 | -0.0331 | 1.0169 | 1.0084 |
| No log | 2.3740 | 292 | 0.9643 | 0.0678 | 0.9643 | 0.9820 |
| No log | 2.3902 | 294 | 0.9172 | -0.1224 | 0.9172 | 0.9577 |
| No log | 2.4065 | 296 | 0.9315 | 0.1852 | 0.9315 | 0.9651 |
| No log | 2.4228 | 298 | 0.8248 | 0.1818 | 0.8248 | 0.9082 |
| No log | 2.4390 | 300 | 0.7961 | 0.2029 | 0.7961 | 0.8922 |
| No log | 2.4553 | 302 | 0.7549 | 0.2222 | 0.7549 | 0.8689 |
| No log | 2.4715 | 304 | 0.8073 | 0.1895 | 0.8073 | 0.8985 |
| No log | 2.4878 | 306 | 1.2779 | -0.0916 | 1.2779 | 1.1304 |
| No log | 2.5041 | 308 | 1.3774 | -0.0916 | 1.3774 | 1.1736 |
| No log | 2.5203 | 310 | 1.0959 | -0.0342 | 1.0959 | 1.0469 |
| No log | 2.5366 | 312 | 0.9462 | -0.1224 | 0.9462 | 0.9727 |
| No log | 2.5528 | 314 | 0.9814 | 0.0678 | 0.9814 | 0.9907 |
| No log | 2.5691 | 316 | 0.9405 | -0.0927 | 0.9405 | 0.9698 |
| No log | 2.5854 | 318 | 0.8845 | -0.1493 | 0.8845 | 0.9405 |
| No log | 2.6016 | 320 | 0.8428 | -0.2222 | 0.8428 | 0.9181 |
| No log | 2.6179 | 322 | 0.7843 | 0.1895 | 0.7843 | 0.8856 |
| No log | 2.6341 | 324 | 0.7688 | 0.0 | 0.7688 | 0.8768 |
| No log | 2.6504 | 326 | 0.7179 | 0.2326 | 0.7179 | 0.8473 |
| No log | 2.6667 | 328 | 0.6904 | 0.2326 | 0.6904 | 0.8309 |
| No log | 2.6829 | 330 | 0.6717 | 0.2326 | 0.6717 | 0.8196 |
| No log | 2.6992 | 332 | 0.6541 | 0.2326 | 0.6541 | 0.8088 |
| No log | 2.7154 | 334 | 0.6561 | 0.2326 | 0.6561 | 0.8100 |
| No log | 2.7317 | 336 | 0.6659 | 0.2326 | 0.6659 | 0.8160 |
| No log | 2.7480 | 338 | 0.6964 | 0.2326 | 0.6964 | 0.8345 |
| No log | 2.7642 | 340 | 0.8032 | -0.1440 | 0.8032 | 0.8962 |
| No log | 2.7805 | 342 | 0.8149 | -0.1159 | 0.8149 | 0.9027 |
| No log | 2.7967 | 344 | 0.7765 | 0.2143 | 0.7765 | 0.8812 |
| No log | 2.8130 | 346 | 0.8686 | 0.1895 | 0.8686 | 0.9320 |
| No log | 2.8293 | 348 | 1.2090 | 0.0654 | 1.2090 | 1.0995 |
| No log | 2.8455 | 350 | 1.1875 | 0.0654 | 1.1875 | 1.0897 |
| No log | 2.8618 | 352 | 0.8426 | 0.1895 | 0.8426 | 0.9179 |
| No log | 2.8780 | 354 | 0.7896 | -0.1440 | 0.7896 | 0.8886 |
| No log | 2.8943 | 356 | 0.7760 | -0.2222 | 0.7760 | 0.8809 |
| No log | 2.9106 | 358 | 0.7214 | 0.2326 | 0.7214 | 0.8494 |
| No log | 2.9268 | 360 | 0.7765 | 0.1895 | 0.7765 | 0.8812 |
| No log | 2.9431 | 362 | 0.9112 | -0.0708 | 0.9112 | 0.9546 |
| No log | 2.9593 | 364 | 0.8500 | -0.0708 | 0.8500 | 0.9220 |
| No log | 2.9756 | 366 | 0.7072 | 0.2326 | 0.7072 | 0.8409 |
| No log | 2.9919 | 368 | 0.6845 | 0.2326 | 0.6845 | 0.8274 |
| No log | 3.0081 | 370 | 0.6831 | 0.2326 | 0.6831 | 0.8265 |
| No log | 3.0244 | 372 | 0.7702 | 0.3419 | 0.7702 | 0.8776 |
| No log | 3.0407 | 374 | 0.8811 | 0.0833 | 0.8811 | 0.9387 |
| No log | 3.0569 | 376 | 1.0331 | 0.0654 | 1.0331 | 1.0164 |
| No log | 3.0732 | 378 | 0.8108 | 0.3419 | 0.8108 | 0.9005 |
| No log | 3.0894 | 380 | 0.6825 | 0.2326 | 0.6825 | 0.8262 |
| No log | 3.1057 | 382 | 0.7096 | 0.2326 | 0.7096 | 0.8424 |
| No log | 3.1220 | 384 | 0.7130 | 0.0 | 0.7130 | 0.8444 |
| No log | 3.1382 | 386 | 0.6736 | 0.2326 | 0.6736 | 0.8207 |
| No log | 3.1545 | 388 | 0.6367 | 0.2326 | 0.6367 | 0.7979 |
| No log | 3.1707 | 390 | 0.7298 | 0.2326 | 0.7298 | 0.8543 |
| No log | 3.1870 | 392 | 0.9839 | 0.0654 | 0.9839 | 0.9919 |
| No log | 3.2033 | 394 | 0.9588 | -0.0820 | 0.9588 | 0.9792 |
| No log | 3.2195 | 396 | 0.7526 | 0.2326 | 0.7526 | 0.8675 |
| No log | 3.2358 | 398 | 0.7932 | 0.0435 | 0.7932 | 0.8906 |
| No log | 3.2520 | 400 | 0.8493 | 0.0610 | 0.8493 | 0.9216 |
| No log | 3.2683 | 402 | 0.8237 | 0.0435 | 0.8237 | 0.9076 |
| No log | 3.2846 | 404 | 0.7196 | 0.2222 | 0.7196 | 0.8483 |
| No log | 3.3008 | 406 | 0.6435 | 0.2326 | 0.6435 | 0.8022 |
| No log | 3.3171 | 408 | 0.6222 | 0.2326 | 0.6222 | 0.7888 |
| No log | 3.3333 | 410 | 0.5807 | 0.2326 | 0.5807 | 0.7620 |
| No log | 3.3496 | 412 | 0.5808 | 0.2326 | 0.5808 | 0.7621 |
| No log | 3.3659 | 414 | 0.6082 | 0.2326 | 0.6082 | 0.7798 |
| No log | 3.3821 | 416 | 0.6502 | 0.2326 | 0.6502 | 0.8063 |
| No log | 3.3984 | 418 | 0.6864 | 0.2326 | 0.6864 | 0.8285 |
| No log | 3.4146 | 420 | 0.6774 | 0.2326 | 0.6774 | 0.8231 |
| No log | 3.4309 | 422 | 0.7467 | 0.2222 | 0.7467 | 0.8641 |
| No log | 3.4472 | 424 | 0.7635 | 0.0320 | 0.7635 | 0.8738 |
| No log | 3.4634 | 426 | 0.7422 | 0.2222 | 0.7422 | 0.8615 |
| No log | 3.4797 | 428 | 0.7573 | 0.2222 | 0.7573 | 0.8702 |
| No log | 3.4959 | 430 | 0.8136 | 0.2222 | 0.8136 | 0.9020 |
| No log | 3.5122 | 432 | 0.7946 | 0.2222 | 0.7946 | 0.8914 |
| No log | 3.5285 | 434 | 0.8019 | 0.0320 | 0.8019 | 0.8955 |
| No log | 3.5447 | 436 | 0.8242 | 0.0435 | 0.8242 | 0.9078 |
| No log | 3.5610 | 438 | 0.7982 | -0.1786 | 0.7982 | 0.8934 |
| No log | 3.5772 | 440 | 0.7674 | 0.2222 | 0.7674 | 0.8760 |
| No log | 3.5935 | 442 | 0.7639 | 0.2222 | 0.7639 | 0.8740 |
| No log | 3.6098 | 444 | 0.7318 | 0.2326 | 0.7318 | 0.8554 |
| No log | 3.6260 | 446 | 0.7165 | 0.2222 | 0.7165 | 0.8465 |
| No log | 3.6423 | 448 | 0.7176 | 0.2222 | 0.7176 | 0.8471 |
| No log | 3.6585 | 450 | 0.7335 | 0.2326 | 0.7335 | 0.8564 |
| No log | 3.6748 | 452 | 0.7536 | 0.2326 | 0.7536 | 0.8681 |
| No log | 3.6911 | 454 | 0.7253 | 0.2326 | 0.7253 | 0.8516 |
| No log | 3.7073 | 456 | 0.7199 | 0.2222 | 0.7199 | 0.8485 |
| No log | 3.7236 | 458 | 0.7069 | 0.2326 | 0.7069 | 0.8408 |
| No log | 3.7398 | 460 | 0.7456 | 0.1895 | 0.7456 | 0.8635 |
| No log | 3.7561 | 462 | 0.7391 | 0.2326 | 0.7391 | 0.8597 |
| No log | 3.7724 | 464 | 0.7658 | 0.1895 | 0.7658 | 0.8751 |
| No log | 3.7886 | 466 | 0.7694 | 0.1895 | 0.7694 | 0.8772 |
| No log | 3.8049 | 468 | 0.7222 | 0.2326 | 0.7222 | 0.8498 |
| No log | 3.8211 | 470 | 0.7030 | 0.2326 | 0.7030 | 0.8384 |
| No log | 3.8374 | 472 | 0.6966 | 0.2326 | 0.6966 | 0.8346 |
| No log | 3.8537 | 474 | 0.6520 | 0.2326 | 0.6520 | 0.8075 |
| No log | 3.8699 | 476 | 0.6628 | 0.2326 | 0.6628 | 0.8141 |
| No log | 3.8862 | 478 | 1.0160 | 0.0494 | 1.0160 | 1.0080 |
| No log | 3.9024 | 480 | 1.3571 | 0.0494 | 1.3571 | 1.1649 |
| No log | 3.9187 | 482 | 1.3803 | 0.0494 | 1.3803 | 1.1749 |
| No log | 3.9350 | 484 | 1.0124 | -0.0820 | 1.0124 | 1.0062 |
| No log | 3.9512 | 486 | 0.7742 | 0.0320 | 0.7742 | 0.8799 |
| No log | 3.9675 | 488 | 0.8435 | 0.0435 | 0.8435 | 0.9184 |
| No log | 3.9837 | 490 | 0.8551 | 0.0435 | 0.8551 | 0.9247 |
| No log | 4.0 | 492 | 0.8192 | 0.0435 | 0.8192 | 0.9051 |
| No log | 4.0163 | 494 | 0.7750 | -0.1786 | 0.7750 | 0.8804 |
| No log | 4.0325 | 496 | 0.7938 | 0.2326 | 0.7938 | 0.8910 |
| No log | 4.0488 | 498 | 0.8622 | 0.1895 | 0.8622 | 0.9285 |
| 0.5721 | 4.0650 | 500 | 0.8128 | 0.2326 | 0.8128 | 0.9015 |
| 0.5721 | 4.0813 | 502 | 0.8047 | 0.2326 | 0.8047 | 0.8970 |
| 0.5721 | 4.0976 | 504 | 0.8156 | 0.2326 | 0.8156 | 0.9031 |
| 0.5721 | 4.1138 | 506 | 0.7910 | 0.2326 | 0.7910 | 0.8894 |
| 0.5721 | 4.1301 | 508 | 0.8172 | 0.2326 | 0.8172 | 0.9040 |
| 0.5721 | 4.1463 | 510 | 0.9184 | -0.0916 | 0.9184 | 0.9584 |
| 0.5721 | 4.1626 | 512 | 1.0155 | -0.0916 | 1.0155 | 1.0077 |
| 0.5721 | 4.1789 | 514 | 1.1367 | -0.0916 | 1.1367 | 1.0662 |
| 0.5721 | 4.1951 | 516 | 1.1487 | -0.0916 | 1.1487 | 1.0718 |
| 0.5721 | 4.2114 | 518 | 1.0766 | -0.3378 | 1.0766 | 1.0376 |
| 0.5721 | 4.2276 | 520 | 1.0685 | -0.3134 | 1.0685 | 1.0337 |
| 0.5721 | 4.2439 | 522 | 1.0245 | -0.1440 | 1.0245 | 1.0122 |
| 0.5721 | 4.2602 | 524 | 1.0455 | -0.3134 | 1.0455 | 1.0225 |
| 0.5721 | 4.2764 | 526 | 1.1123 | -0.0916 | 1.1123 | 1.0547 |
| 0.5721 | 4.2927 | 528 | 1.0803 | -0.0916 | 1.0803 | 1.0394 |
| 0.5721 | 4.3089 | 530 | 0.9704 | -0.4259 | 0.9704 | 0.9851 |
| 0.5721 | 4.3252 | 532 | 0.9044 | -0.1440 | 0.9044 | 0.9510 |
| 0.5721 | 4.3415 | 534 | 0.9059 | -0.0421 | 0.9059 | 0.9518 |
| 0.5721 | 4.3577 | 536 | 0.9030 | -0.0421 | 0.9030 | 0.9503 |
| 0.5721 | 4.3740 | 538 | 0.8661 | -0.0421 | 0.8661 | 0.9306 |
| 0.5721 | 4.3902 | 540 | 0.8022 | 0.2326 | 0.8022 | 0.8957 |
| 0.5721 | 4.4065 | 542 | 0.7813 | 0.2326 | 0.7813 | 0.8839 |
| 0.5721 | 4.4228 | 544 | 0.7873 | -0.1440 | 0.7873 | 0.8873 |
| 0.5721 | 4.4390 | 546 | 0.7965 | -0.1440 | 0.7965 | 0.8925 |
| 0.5721 | 4.4553 | 548 | 0.8221 | 0.2326 | 0.8221 | 0.9067 |
| 0.5721 | 4.4715 | 550 | 0.8483 | 0.2326 | 0.8483 | 0.9210 |
| 0.5721 | 4.4878 | 552 | 0.8676 | 0.2326 | 0.8676 | 0.9315 |
| 0.5721 | 4.5041 | 554 | 0.8736 | 0.2326 | 0.8736 | 0.9347 |
| 0.5721 | 4.5203 | 556 | 0.8642 | 0.2326 | 0.8642 | 0.9296 |
| 0.5721 | 4.5366 | 558 | 0.8244 | 0.2326 | 0.8244 | 0.9079 |
| 0.5721 | 4.5528 | 560 | 0.7894 | -0.1440 | 0.7894 | 0.8885 |
| 0.5721 | 4.5691 | 562 | 0.7957 | -0.1440 | 0.7957 | 0.8920 |
| 0.5721 | 4.5854 | 564 | 0.7763 | 0.2326 | 0.7763 | 0.8811 |
| 0.5721 | 4.6016 | 566 | 0.7753 | 0.2326 | 0.7753 | 0.8805 |
| 0.5721 | 4.6179 | 568 | 0.7940 | 0.2326 | 0.7940 | 0.8911 |
| 0.5721 | 4.6341 | 570 | 0.8206 | 0.2326 | 0.8206 | 0.9059 |
| 0.5721 | 4.6504 | 572 | 0.8271 | 0.2326 | 0.8271 | 0.9094 |
| 0.5721 | 4.6667 | 574 | 0.7977 | 0.2326 | 0.7977 | 0.8932 |
| 0.5721 | 4.6829 | 576 | 0.7721 | 0.2326 | 0.7721 | 0.8787 |
| 0.5721 | 4.6992 | 578 | 0.7632 | 0.2326 | 0.7632 | 0.8736 |
| 0.5721 | 4.7154 | 580 | 0.7866 | 0.0320 | 0.7866 | 0.8869 |
| 0.5721 | 4.7317 | 582 | 0.8102 | 0.0435 | 0.8102 | 0.9001 |
| 0.5721 | 4.7480 | 584 | 0.8344 | -0.1786 | 0.8344 | 0.9134 |
| 0.5721 | 4.7642 | 586 | 0.8811 | 0.1895 | 0.8811 | 0.9387 |
| 0.5721 | 4.7805 | 588 | 0.9494 | -0.0916 | 0.9494 | 0.9744 |
| 0.5721 | 4.7967 | 590 | 0.9345 | -0.0820 | 0.9345 | 0.9667 |
| 0.5721 | 4.8130 | 592 | 0.8709 | -0.0577 | 0.8709 | 0.9332 |
| 0.5721 | 4.8293 | 594 | 0.8210 | -0.1786 | 0.8210 | 0.9061 |
| 0.5721 | 4.8455 | 596 | 0.8124 | 0.0320 | 0.8124 | 0.9013 |
| 0.5721 | 4.8618 | 598 | 0.7616 | 0.0320 | 0.7616 | 0.8727 |
| 0.5721 | 4.8780 | 600 | 0.7253 | 0.2326 | 0.7253 | 0.8516 |
| 0.5721 | 4.8943 | 602 | 0.8012 | 0.2326 | 0.8012 | 0.8951 |
| 0.5721 | 4.9106 | 604 | 0.8188 | -0.0421 | 0.8188 | 0.9049 |
| 0.5721 | 4.9268 | 606 | 0.7688 | 0.2326 | 0.7688 | 0.8768 |
| 0.5721 | 4.9431 | 608 | 0.7070 | 0.2326 | 0.7070 | 0.8408 |
| 0.5721 | 4.9593 | 610 | 0.6936 | 0.4444 | 0.6936 | 0.8328 |
| 0.5721 | 4.9756 | 612 | 0.7170 | 0.4107 | 0.7170 | 0.8467 |
| 0.5721 | 4.9919 | 614 | 0.7215 | 0.4107 | 0.7215 | 0.8494 |
| 0.5721 | 5.0081 | 616 | 0.7000 | 0.2326 | 0.7000 | 0.8366 |
| 0.5721 | 5.0244 | 618 | 0.6901 | 0.2326 | 0.6901 | 0.8307 |
| 0.5721 | 5.0407 | 620 | 0.6416 | 0.2326 | 0.6416 | 0.8010 |
| 0.5721 | 5.0569 | 622 | 0.6239 | 0.2326 | 0.6239 | 0.7899 |
| 0.5721 | 5.0732 | 624 | 0.6416 | 0.2326 | 0.6416 | 0.8010 |
| 0.5721 | 5.0894 | 626 | 0.6931 | 0.2326 | 0.6931 | 0.8325 |
| 0.5721 | 5.1057 | 628 | 0.7415 | 0.2326 | 0.7415 | 0.8611 |
| 0.5721 | 5.1220 | 630 | 0.8281 | 0.2326 | 0.8281 | 0.9100 |
| 0.5721 | 5.1382 | 632 | 0.8821 | 0.2222 | 0.8821 | 0.9392 |
| 0.5721 | 5.1545 | 634 | 0.8846 | -0.1440 | 0.8846 | 0.9405 |
| 0.5721 | 5.1707 | 636 | 0.8868 | -0.1440 | 0.8868 | 0.9417 |
| 0.5721 | 5.1870 | 638 | 0.8909 | 0.2143 | 0.8909 | 0.9439 |
| 0.5721 | 5.2033 | 640 | 0.8707 | 0.2143 | 0.8707 | 0.9331 |
| 0.5721 | 5.2195 | 642 | 0.8623 | 0.2143 | 0.8623 | 0.9286 |
| 0.5721 | 5.2358 | 644 | 0.8647 | 0.2222 | 0.8647 | 0.9299 |
| 0.5721 | 5.2520 | 646 | 0.8829 | 0.2326 | 0.8829 | 0.9396 |
| 0.5721 | 5.2683 | 648 | 0.8662 | 0.2222 | 0.8662 | 0.9307 |
| 0.5721 | 5.2846 | 650 | 0.8424 | 0.2143 | 0.8424 | 0.9178 |
| 0.5721 | 5.3008 | 652 | 0.8940 | -0.1159 | 0.8940 | 0.9455 |
| 0.5721 | 5.3171 | 654 | 0.9090 | -0.1159 | 0.9090 | 0.9534 |
| 0.5721 | 5.3333 | 656 | 0.8852 | -0.1159 | 0.8852 | 0.9409 |
| 0.5721 | 5.3496 | 658 | 0.8866 | 0.2222 | 0.8866 | 0.9416 |
| 0.5721 | 5.3659 | 660 | 0.9247 | -0.0421 | 0.9247 | 0.9616 |
| 0.5721 | 5.3821 | 662 | 0.9163 | -0.0421 | 0.9163 | 0.9572 |
| 0.5721 | 5.3984 | 664 | 0.9188 | 0.2222 | 0.9188 | 0.9586 |
| 0.5721 | 5.4146 | 666 | 0.9223 | -0.1159 | 0.9223 | 0.9604 |
| 0.5721 | 5.4309 | 668 | 0.9560 | -0.1159 | 0.9560 | 0.9777 |
| 0.5721 | 5.4472 | 670 | 0.9944 | -0.1159 | 0.9944 | 0.9972 |
| 0.5721 | 5.4634 | 672 | 0.9823 | -0.1159 | 0.9823 | 0.9911 |
| 0.5721 | 5.4797 | 674 | 0.9871 | -0.1159 | 0.9871 | 0.9935 |
| 0.5721 | 5.4959 | 676 | 1.0169 | -0.3538 | 1.0169 | 1.0084 |
| 0.5721 | 5.5122 | 678 | 1.0428 | -0.0708 | 1.0428 | 1.0212 |
| 0.5721 | 5.5285 | 680 | 1.0046 | -0.3538 | 1.0046 | 1.0023 |
| 0.5721 | 5.5447 | 682 | 0.9838 | -0.1159 | 0.9838 | 0.9919 |
| 0.5721 | 5.5610 | 684 | 0.9924 | -0.1159 | 0.9924 | 0.9962 |
| 0.5721 | 5.5772 | 686 | 1.0137 | -0.1159 | 1.0137 | 1.0068 |
| 0.5721 | 5.5935 | 688 | 1.0295 | -0.3134 | 1.0295 | 1.0146 |
| 0.5721 | 5.6098 | 690 | 0.9633 | -0.1440 | 0.9633 | 0.9815 |
| 0.5721 | 5.6260 | 692 | 0.9464 | -0.0185 | 0.9464 | 0.9728 |
| 0.5721 | 5.6423 | 694 | 0.9395 | -0.0421 | 0.9395 | 0.9693 |
| 0.5721 | 5.6585 | 696 | 0.9095 | -0.0421 | 0.9095 | 0.9537 |
| 0.5721 | 5.6748 | 698 | 0.8766 | -0.0421 | 0.8766 | 0.9362 |
| 0.5721 | 5.6911 | 700 | 0.8651 | 0.2222 | 0.8651 | 0.9301 |
| 0.5721 | 5.7073 | 702 | 0.8739 | 0.2222 | 0.8739 | 0.9349 |
| 0.5721 | 5.7236 | 704 | 0.8934 | 0.2222 | 0.8934 | 0.9452 |
| 0.5721 | 5.7398 | 706 | 0.9240 | -0.0185 | 0.9240 | 0.9612 |
| 0.5721 | 5.7561 | 708 | 0.9677 | -0.0185 | 0.9677 | 0.9837 |
| 0.5721 | 5.7724 | 710 | 0.9682 | -0.1786 | 0.9682 | 0.9840 |
| 0.5721 | 5.7886 | 712 | 0.9490 | -0.1159 | 0.9490 | 0.9742 |
| 0.5721 | 5.8049 | 714 | 0.9374 | -0.1159 | 0.9374 | 0.9682 |
| 0.5721 | 5.8211 | 716 | 0.9165 | -0.1159 | 0.9165 | 0.9573 |
| 0.5721 | 5.8374 | 718 | 0.8860 | -0.1786 | 0.8860 | 0.9413 |
| 0.5721 | 5.8537 | 720 | 0.8942 | 0.2222 | 0.8942 | 0.9456 |
| 0.5721 | 5.8699 | 722 | 0.9043 | -0.0421 | 0.9043 | 0.9510 |
| 0.5721 | 5.8862 | 724 | 0.9038 | -0.0421 | 0.9038 | 0.9507 |
| 0.5721 | 5.9024 | 726 | 0.8997 | -0.0421 | 0.8997 | 0.9485 |
| 0.5721 | 5.9187 | 728 | 0.8832 | 0.2326 | 0.8832 | 0.9398 |
| 0.5721 | 5.9350 | 730 | 0.8808 | 0.2222 | 0.8808 | 0.9385 |
| 0.5721 | 5.9512 | 732 | 0.8883 | 0.2222 | 0.8883 | 0.9425 |
| 0.5721 | 5.9675 | 734 | 0.8745 | 0.2222 | 0.8745 | 0.9351 |
| 0.5721 | 5.9837 | 736 | 0.8772 | 0.2222 | 0.8772 | 0.9366 |
| 0.5721 | 6.0 | 738 | 0.8734 | 0.2222 | 0.8734 | 0.9346 |
| 0.5721 | 6.0163 | 740 | 0.8806 | 0.2143 | 0.8806 | 0.9384 |
| 0.5721 | 6.0325 | 742 | 0.9102 | -0.1440 | 0.9102 | 0.9540 |
| 0.5721 | 6.0488 | 744 | 0.9371 | -0.1440 | 0.9371 | 0.9680 |
| 0.5721 | 6.0650 | 746 | 0.9762 | -0.1440 | 0.9762 | 0.9880 |
| 0.5721 | 6.0813 | 748 | 0.9744 | -0.1440 | 0.9744 | 0.9871 |
| 0.5721 | 6.0976 | 750 | 0.9846 | -0.3134 | 0.9846 | 0.9923 |
| 0.5721 | 6.1138 | 752 | 1.0050 | 0.0000 | 1.0050 | 1.0025 |
| 0.5721 | 6.1301 | 754 | 1.0268 | -0.3134 | 1.0268 | 1.0133 |
| 0.5721 | 6.1463 | 756 | 1.0264 | -0.3134 | 1.0264 | 1.0131 |
| 0.5721 | 6.1626 | 758 | 0.9937 | -0.1440 | 0.9937 | 0.9969 |
| 0.5721 | 6.1789 | 760 | 0.9672 | -0.1440 | 0.9672 | 0.9835 |
| 0.5721 | 6.1951 | 762 | 0.9356 | -0.1440 | 0.9356 | 0.9673 |
| 0.5721 | 6.2114 | 764 | 0.9086 | -0.1440 | 0.9086 | 0.9532 |
| 0.5721 | 6.2276 | 766 | 0.8827 | 0.2143 | 0.8827 | 0.9395 |
| 0.5721 | 6.2439 | 768 | 0.8668 | 0.2143 | 0.8668 | 0.9310 |
| 0.5721 | 6.2602 | 770 | 0.8759 | 0.2143 | 0.8759 | 0.9359 |
| 0.5721 | 6.2764 | 772 | 0.9173 | 0.2222 | 0.9173 | 0.9578 |
| 0.5721 | 6.2927 | 774 | 0.9397 | -0.0185 | 0.9397 | 0.9694 |
| 0.5721 | 6.3089 | 776 | 0.9247 | 0.2222 | 0.9247 | 0.9616 |
| 0.5721 | 6.3252 | 778 | 0.9260 | 0.2222 | 0.9260 | 0.9623 |
| 0.5721 | 6.3415 | 780 | 0.9363 | -0.1440 | 0.9363 | 0.9676 |
| 0.5721 | 6.3577 | 782 | 0.9807 | -0.3134 | 0.9807 | 0.9903 |
| 0.5721 | 6.3740 | 784 | 1.0023 | -0.1440 | 1.0023 | 1.0011 |
| 0.5721 | 6.3902 | 786 | 1.0101 | -0.1159 | 1.0101 | 1.0051 |
| 0.5721 | 6.4065 | 788 | 1.0165 | -0.1159 | 1.0165 | 1.0082 |
| 0.5721 | 6.4228 | 790 | 1.0295 | -0.1159 | 1.0295 | 1.0146 |
| 0.5721 | 6.4390 | 792 | 1.0307 | -0.3134 | 1.0307 | 1.0153 |
| 0.5721 | 6.4553 | 794 | 1.0299 | -0.3077 | 1.0299 | 1.0148 |
| 0.5721 | 6.4715 | 796 | 1.0208 | -0.1159 | 1.0208 | 1.0103 |
| 0.5721 | 6.4878 | 798 | 1.0212 | -0.1159 | 1.0212 | 1.0106 |
| 0.5721 | 6.5041 | 800 | 1.0242 | -0.1159 | 1.0242 | 1.0120 |
| 0.5721 | 6.5203 | 802 | 1.0257 | -0.1159 | 1.0257 | 1.0128 |
| 0.5721 | 6.5366 | 804 | 1.0287 | -0.1159 | 1.0287 | 1.0143 |
| 0.5721 | 6.5528 | 806 | 1.0453 | -0.2721 | 1.0453 | 1.0224 |
| 0.5721 | 6.5691 | 808 | 1.0545 | -0.3538 | 1.0545 | 1.0269 |
| 0.5721 | 6.5854 | 810 | 1.0191 | -0.3636 | 1.0191 | 1.0095 |
| 0.5721 | 6.6016 | 812 | 0.9733 | -0.1440 | 0.9733 | 0.9866 |
| 0.5721 | 6.6179 | 814 | 0.9541 | -0.1159 | 0.9541 | 0.9768 |
| 0.5721 | 6.6341 | 816 | 0.9553 | -0.1159 | 0.9553 | 0.9774 |
| 0.5721 | 6.6504 | 818 | 0.9653 | -0.1159 | 0.9653 | 0.9825 |
| 0.5721 | 6.6667 | 820 | 0.9748 | -0.1159 | 0.9748 | 0.9873 |
| 0.5721 | 6.6829 | 822 | 0.9572 | -0.1159 | 0.9572 | 0.9783 |
| 0.5721 | 6.6992 | 824 | 0.9512 | -0.1786 | 0.9512 | 0.9753 |
| 0.5721 | 6.7154 | 826 | 0.9631 | -0.0577 | 0.9631 | 0.9814 |
| 0.5721 | 6.7317 | 828 | 0.9400 | -0.0577 | 0.9400 | 0.9695 |
| 0.5721 | 6.7480 | 830 | 0.8963 | -0.0421 | 0.8963 | 0.9467 |
| 0.5721 | 6.7642 | 832 | 0.8834 | 0.2326 | 0.8834 | 0.9399 |
| 0.5721 | 6.7805 | 834 | 0.8798 | 0.2222 | 0.8798 | 0.9380 |
| 0.5721 | 6.7967 | 836 | 0.9061 | 0.2222 | 0.9061 | 0.9519 |
| 0.5721 | 6.8130 | 838 | 0.9303 | -0.3636 | 0.9303 | 0.9645 |
| 0.5721 | 6.8293 | 840 | 0.9190 | -0.1786 | 0.9190 | 0.9587 |
| 0.5721 | 6.8455 | 842 | 0.9063 | -0.1440 | 0.9063 | 0.9520 |
| 0.5721 | 6.8618 | 844 | 0.8876 | -0.1159 | 0.8876 | 0.9421 |
| 0.5721 | 6.8780 | 846 | 0.8625 | -0.1440 | 0.8625 | 0.9287 |
| 0.5721 | 6.8943 | 848 | 0.8468 | -0.1786 | 0.8468 | 0.9202 |
| 0.5721 | 6.9106 | 850 | 0.8660 | 0.2326 | 0.8660 | 0.9306 |
| 0.5721 | 6.9268 | 852 | 0.8657 | 0.2326 | 0.8657 | 0.9304 |
| 0.5721 | 6.9431 | 854 | 0.8512 | -0.1786 | 0.8512 | 0.9226 |
| 0.5721 | 6.9593 | 856 | 0.8515 | -0.1786 | 0.8515 | 0.9228 |
| 0.5721 | 6.9756 | 858 | 0.8441 | -0.1786 | 0.8441 | 0.9187 |
| 0.5721 | 6.9919 | 860 | 0.8465 | -0.1786 | 0.8465 | 0.9201 |
| 0.5721 | 7.0081 | 862 | 0.8475 | -0.1786 | 0.8475 | 0.9206 |
| 0.5721 | 7.0244 | 864 | 0.8451 | 0.2222 | 0.8451 | 0.9193 |
| 0.5721 | 7.0407 | 866 | 0.8598 | 0.2222 | 0.8598 | 0.9273 |
| 0.5721 | 7.0569 | 868 | 0.8794 | -0.1786 | 0.8794 | 0.9378 |
| 0.5721 | 7.0732 | 870 | 0.8848 | -0.1786 | 0.8848 | 0.9406 |
| 0.5721 | 7.0894 | 872 | 0.9053 | -0.1440 | 0.9053 | 0.9515 |
| 0.5721 | 7.1057 | 874 | 0.9328 | -0.1440 | 0.9328 | 0.9658 |
| 0.5721 | 7.1220 | 876 | 0.9442 | -0.1440 | 0.9442 | 0.9717 |
| 0.5721 | 7.1382 | 878 | 0.9625 | -0.1786 | 0.9625 | 0.9811 |
| 0.5721 | 7.1545 | 880 | 0.9972 | -0.3538 | 0.9972 | 0.9986 |
| 0.5721 | 7.1707 | 882 | 0.9947 | -0.3636 | 0.9947 | 0.9974 |
| 0.5721 | 7.1870 | 884 | 0.9686 | -0.1440 | 0.9686 | 0.9842 |
| 0.5721 | 7.2033 | 886 | 0.9343 | -0.1440 | 0.9343 | 0.9666 |
| 0.5721 | 7.2195 | 888 | 0.9107 | -0.1440 | 0.9107 | 0.9543 |
| 0.5721 | 7.2358 | 890 | 0.8954 | -0.1440 | 0.8954 | 0.9463 |
| 0.5721 | 7.2520 | 892 | 0.9045 | -0.1786 | 0.9045 | 0.9511 |
| 0.5721 | 7.2683 | 894 | 0.9278 | 0.2222 | 0.9278 | 0.9632 |
| 0.5721 | 7.2846 | 896 | 0.9480 | -0.1786 | 0.9480 | 0.9736 |
| 0.5721 | 7.3008 | 898 | 0.9448 | -0.1440 | 0.9448 | 0.9720 |
| 0.5721 | 7.3171 | 900 | 0.9827 | -0.1786 | 0.9827 | 0.9913 |
| 0.5721 | 7.3333 | 902 | 1.0266 | -0.3453 | 1.0266 | 1.0132 |
| 0.5721 | 7.3496 | 904 | 1.0471 | -0.3453 | 1.0471 | 1.0233 |
| 0.5721 | 7.3659 | 906 | 1.0608 | -0.3453 | 1.0608 | 1.0299 |
| 0.5721 | 7.3821 | 908 | 1.0453 | -0.3453 | 1.0453 | 1.0224 |
| 0.5721 | 7.3984 | 910 | 1.0187 | -0.1159 | 1.0187 | 1.0093 |
| 0.5721 | 7.4146 | 912 | 0.9886 | -0.1440 | 0.9886 | 0.9943 |
| 0.5721 | 7.4309 | 914 | 0.9652 | -0.1440 | 0.9652 | 0.9824 |
| 0.5721 | 7.4472 | 916 | 0.9316 | -0.1440 | 0.9316 | 0.9652 |
| 0.5721 | 7.4634 | 918 | 0.9207 | -0.1159 | 0.9207 | 0.9595 |
| 0.5721 | 7.4797 | 920 | 0.9239 | -0.1159 | 0.9239 | 0.9612 |
| 0.5721 | 7.4959 | 922 | 0.9140 | -0.1440 | 0.9140 | 0.9560 |
| 0.5721 | 7.5122 | 924 | 0.9093 | -0.1786 | 0.9093 | 0.9536 |
| 0.5721 | 7.5285 | 926 | 0.9244 | -0.2222 | 0.9244 | 0.9615 |
| 0.5721 | 7.5447 | 928 | 0.9345 | -0.2222 | 0.9345 | 0.9667 |
| 0.5721 | 7.5610 | 930 | 0.9361 | -0.2222 | 0.9361 | 0.9675 |
| 0.5721 | 7.5772 | 932 | 0.9434 | -0.1440 | 0.9434 | 0.9713 |
| 0.5721 | 7.5935 | 934 | 0.9655 | -0.1786 | 0.9655 | 0.9826 |
| 0.5721 | 7.6098 | 936 | 1.0181 | -0.4103 | 1.0181 | 1.0090 |
| 0.5721 | 7.6260 | 938 | 1.0641 | -0.0820 | 1.0641 | 1.0315 |
| 0.5721 | 7.6423 | 940 | 1.0721 | -0.0820 | 1.0721 | 1.0354 |
| 0.5721 | 7.6585 | 942 | 1.0423 | -0.0820 | 1.0423 | 1.0209 |
| 0.5721 | 7.6748 | 944 | 0.9898 | -0.3538 | 0.9898 | 0.9949 |
| 0.5721 | 7.6911 | 946 | 0.9552 | -0.1440 | 0.9552 | 0.9773 |
| 0.5721 | 7.7073 | 948 | 0.9532 | -0.1159 | 0.9532 | 0.9763 |
| 0.5721 | 7.7236 | 950 | 0.9676 | -0.1159 | 0.9676 | 0.9836 |
| 0.5721 | 7.7398 | 952 | 0.9889 | -0.1159 | 0.9889 | 0.9945 |
| 0.5721 | 7.7561 | 954 | 1.0033 | -0.1159 | 1.0033 | 1.0016 |
| 0.5721 | 7.7724 | 956 | 0.9999 | -0.1440 | 0.9999 | 0.9999 |
| 0.5721 | 7.7886 | 958 | 0.9718 | -0.1440 | 0.9718 | 0.9858 |
| 0.5721 | 7.8049 | 960 | 0.9586 | -0.1786 | 0.9586 | 0.9791 |
| 0.5721 | 7.8211 | 962 | 0.9513 | -0.1818 | 0.9513 | 0.9753 |
| 0.5721 | 7.8374 | 964 | 0.9530 | -0.2222 | 0.9530 | 0.9762 |
| 0.5721 | 7.8537 | 966 | 0.9476 | -0.2222 | 0.9476 | 0.9734 |
| 0.5721 | 7.8699 | 968 | 0.9444 | -0.2222 | 0.9444 | 0.9718 |
| 0.5721 | 7.8862 | 970 | 0.9236 | -0.2222 | 0.9236 | 0.9610 |
| 0.5721 | 7.9024 | 972 | 0.9123 | -0.1786 | 0.9123 | 0.9551 |
| 0.5721 | 7.9187 | 974 | 0.8966 | -0.1786 | 0.8966 | 0.9469 |
| 0.5721 | 7.9350 | 976 | 0.8815 | -0.1440 | 0.8815 | 0.9389 |
| 0.5721 | 7.9512 | 978 | 0.8792 | -0.1440 | 0.8792 | 0.9376 |
| 0.5721 | 7.9675 | 980 | 0.8783 | -0.1440 | 0.8783 | 0.9372 |
| 0.5721 | 7.9837 | 982 | 0.8771 | -0.1440 | 0.8771 | 0.9365 |
| 0.5721 | 8.0 | 984 | 0.8753 | -0.1786 | 0.8753 | 0.9356 |
| 0.5721 | 8.0163 | 986 | 0.8888 | -0.2222 | 0.8888 | 0.9428 |
| 0.5721 | 8.0325 | 988 | 0.9285 | 0.1895 | 0.9285 | 0.9636 |
| 0.5721 | 8.0488 | 990 | 0.9385 | 0.1895 | 0.9385 | 0.9688 |
| 0.5721 | 8.0650 | 992 | 0.9101 | 0.1895 | 0.9101 | 0.9540 |
| 0.5721 | 8.0813 | 994 | 0.8664 | 0.2326 | 0.8664 | 0.9308 |
| 0.5721 | 8.0976 | 996 | 0.8229 | -0.2222 | 0.8229 | 0.9071 |
| 0.5721 | 8.1138 | 998 | 0.8103 | -0.2222 | 0.8103 | 0.9002 |
| 0.0908 | 8.1301 | 1000 | 0.8011 | -0.2222 | 0.8011 | 0.8950 |
| 0.0908 | 8.1463 | 1002 | 0.7930 | 0.2326 | 0.7930 | 0.8905 |
| 0.0908 | 8.1626 | 1004 | 0.7979 | 0.2326 | 0.7979 | 0.8932 |
| 0.0908 | 8.1789 | 1006 | 0.7996 | 0.2326 | 0.7996 | 0.8942 |
| 0.0908 | 8.1951 | 1008 | 0.8103 | 0.2326 | 0.8103 | 0.9002 |
| 0.0908 | 8.2114 | 1010 | 0.8405 | 0.2326 | 0.8405 | 0.9168 |
| 0.0908 | 8.2276 | 1012 | 0.8733 | 0.2326 | 0.8733 | 0.9345 |
| 0.0908 | 8.2439 | 1014 | 0.9048 | 0.1895 | 0.9048 | 0.9512 |
| 0.0908 | 8.2602 | 1016 | 0.9179 | -0.1786 | 0.9179 | 0.9581 |
| 0.0908 | 8.2764 | 1018 | 0.9189 | -0.1440 | 0.9189 | 0.9586 |
| 0.0908 | 8.2927 | 1020 | 0.9251 | -0.1440 | 0.9251 | 0.9618 |
| 0.0908 | 8.3089 | 1022 | 0.9297 | -0.1440 | 0.9297 | 0.9642 |
| 0.0908 | 8.3252 | 1024 | 0.9180 | -0.1440 | 0.9180 | 0.9581 |
| 0.0908 | 8.3415 | 1026 | 0.9148 | -0.1493 | 0.9148 | 0.9564 |
| 0.0908 | 8.3577 | 1028 | 0.8985 | -0.1493 | 0.8985 | 0.9479 |
| 0.0908 | 8.3740 | 1030 | 0.9067 | -0.1818 | 0.9067 | 0.9522 |
| 0.0908 | 8.3902 | 1032 | 0.9151 | 0.1895 | 0.9151 | 0.9566 |
| 0.0908 | 8.4065 | 1034 | 0.9152 | -0.1818 | 0.9152 | 0.9567 |
| 0.0908 | 8.4228 | 1036 | 0.9060 | -0.1818 | 0.9060 | 0.9518 |
| 0.0908 | 8.4390 | 1038 | 0.9168 | -0.1818 | 0.9168 | 0.9575 |
| 0.0908 | 8.4553 | 1040 | 0.9344 | 0.1895 | 0.9344 | 0.9666 |
| 0.0908 | 8.4715 | 1042 | 0.9382 | -0.1818 | 0.9382 | 0.9686 |
| 0.0908 | 8.4878 | 1044 | 0.9546 | -0.1818 | 0.9546 | 0.9770 |
| 0.0908 | 8.5041 | 1046 | 0.9654 | -0.1818 | 0.9654 | 0.9825 |
| 0.0908 | 8.5203 | 1048 | 0.9807 | -0.1818 | 0.9807 | 0.9903 |
| 0.0908 | 8.5366 | 1050 | 0.9797 | -0.1818 | 0.9797 | 0.9898 |
| 0.0908 | 8.5528 | 1052 | 0.9678 | -0.1818 | 0.9678 | 0.9838 |
| 0.0908 | 8.5691 | 1054 | 0.9439 | -0.1786 | 0.9439 | 0.9716 |
| 0.0908 | 8.5854 | 1056 | 0.9175 | -0.1786 | 0.9175 | 0.9579 |
| 0.0908 | 8.6016 | 1058 | 0.8997 | -0.1440 | 0.8997 | 0.9485 |
| 0.0908 | 8.6179 | 1060 | 0.8869 | -0.1440 | 0.8869 | 0.9417 |
| 0.0908 | 8.6341 | 1062 | 0.8831 | -0.1440 | 0.8831 | 0.9398 |
| 0.0908 | 8.6504 | 1064 | 0.8861 | -0.1440 | 0.8861 | 0.9413 |
| 0.0908 | 8.6667 | 1066 | 0.8886 | -0.1440 | 0.8886 | 0.9427 |
| 0.0908 | 8.6829 | 1068 | 0.9002 | -0.1440 | 0.9002 | 0.9488 |
| 0.0908 | 8.6992 | 1070 | 0.9294 | -0.1786 | 0.9294 | 0.9640 |
| 0.0908 | 8.7154 | 1072 | 0.9610 | -0.1818 | 0.9610 | 0.9803 |
| 0.0908 | 8.7317 | 1074 | 0.9865 | -0.0476 | 0.9865 | 0.9932 |
| 0.0908 | 8.7480 | 1076 | 0.9914 | -0.0476 | 0.9914 | 0.9957 |
| 0.0908 | 8.7642 | 1078 | 0.9885 | -0.0708 | 0.9885 | 0.9942 |
| 0.0908 | 8.7805 | 1080 | 0.9672 | -0.0342 | 0.9672 | 0.9834 |
| 0.0908 | 8.7967 | 1082 | 0.9351 | 0.1852 | 0.9351 | 0.9670 |
| 0.0908 | 8.8130 | 1084 | 0.9112 | 0.1852 | 0.9112 | 0.9546 |
| 0.0908 | 8.8293 | 1086 | 0.8858 | -0.1786 | 0.8858 | 0.9412 |
| 0.0908 | 8.8455 | 1088 | 0.8729 | -0.1440 | 0.8729 | 0.9343 |
| 0.0908 | 8.8618 | 1090 | 0.8652 | -0.1440 | 0.8652 | 0.9301 |
| 0.0908 | 8.8780 | 1092 | 0.8653 | -0.1440 | 0.8653 | 0.9302 |
| 0.0908 | 8.8943 | 1094 | 0.8713 | -0.1440 | 0.8713 | 0.9334 |
| 0.0908 | 8.9106 | 1096 | 0.8843 | -0.1440 | 0.8843 | 0.9404 |
| 0.0908 | 8.9268 | 1098 | 0.9118 | -0.1786 | 0.9118 | 0.9549 |
| 0.0908 | 8.9431 | 1100 | 0.9475 | -0.1818 | 0.9475 | 0.9734 |
| 0.0908 | 8.9593 | 1102 | 0.9816 | -0.3538 | 0.9816 | 0.9908 |
| 0.0908 | 8.9756 | 1104 | 1.0171 | -0.3968 | 1.0171 | 1.0085 |
| 0.0908 | 8.9919 | 1106 | 1.0364 | -0.0708 | 1.0364 | 1.0181 |
| 0.0908 | 9.0081 | 1108 | 1.0393 | -0.0708 | 1.0393 | 1.0195 |
| 0.0908 | 9.0244 | 1110 | 1.0184 | -0.3968 | 1.0184 | 1.0092 |
| 0.0908 | 9.0407 | 1112 | 0.9818 | -0.3538 | 0.9818 | 0.9909 |
| 0.0908 | 9.0569 | 1114 | 0.9430 | -0.1786 | 0.9430 | 0.9711 |
| 0.0908 | 9.0732 | 1116 | 0.9240 | -0.1440 | 0.9240 | 0.9613 |
| 0.0908 | 9.0894 | 1118 | 0.9105 | -0.1440 | 0.9105 | 0.9542 |
| 0.0908 | 9.1057 | 1120 | 0.9008 | -0.1440 | 0.9008 | 0.9491 |
| 0.0908 | 9.1220 | 1122 | 0.8914 | -0.1440 | 0.8914 | 0.9441 |
| 0.0908 | 9.1382 | 1124 | 0.8868 | -0.1440 | 0.8868 | 0.9417 |
| 0.0908 | 9.1545 | 1126 | 0.8864 | -0.1440 | 0.8864 | 0.9415 |
| 0.0908 | 9.1707 | 1128 | 0.8935 | -0.1440 | 0.8935 | 0.9453 |
| 0.0908 | 9.1870 | 1130 | 0.9079 | -0.1786 | 0.9079 | 0.9528 |
| 0.0908 | 9.2033 | 1132 | 0.9204 | -0.1786 | 0.9204 | 0.9594 |
| 0.0908 | 9.2195 | 1134 | 0.9277 | -0.1786 | 0.9277 | 0.9632 |
| 0.0908 | 9.2358 | 1136 | 0.9300 | -0.1786 | 0.9300 | 0.9644 |
| 0.0908 | 9.2520 | 1138 | 0.9261 | -0.1786 | 0.9261 | 0.9624 |
| 0.0908 | 9.2683 | 1140 | 0.9274 | -0.1786 | 0.9274 | 0.9630 |
| 0.0908 | 9.2846 | 1142 | 0.9286 | -0.1786 | 0.9286 | 0.9636 |
| 0.0908 | 9.3008 | 1144 | 0.9227 | -0.1786 | 0.9227 | 0.9606 |
| 0.0908 | 9.3171 | 1146 | 0.9230 | -0.1786 | 0.9230 | 0.9607 |
| 0.0908 | 9.3333 | 1148 | 0.9228 | -0.1786 | 0.9228 | 0.9606 |
| 0.0908 | 9.3496 | 1150 | 0.9189 | -0.1440 | 0.9189 | 0.9586 |
| 0.0908 | 9.3659 | 1152 | 0.9161 | -0.1440 | 0.9161 | 0.9571 |
| 0.0908 | 9.3821 | 1154 | 0.9179 | -0.1440 | 0.9179 | 0.9581 |
| 0.0908 | 9.3984 | 1156 | 0.9209 | -0.1440 | 0.9209 | 0.9597 |
| 0.0908 | 9.4146 | 1158 | 0.9249 | -0.1440 | 0.9249 | 0.9617 |
| 0.0908 | 9.4309 | 1160 | 0.9262 | -0.1786 | 0.9262 | 0.9624 |
| 0.0908 | 9.4472 | 1162 | 0.9223 | -0.1786 | 0.9223 | 0.9604 |
| 0.0908 | 9.4634 | 1164 | 0.9208 | -0.1786 | 0.9208 | 0.9596 |
| 0.0908 | 9.4797 | 1166 | 0.9143 | -0.1786 | 0.9143 | 0.9562 |
| 0.0908 | 9.4959 | 1168 | 0.9043 | -0.1786 | 0.9043 | 0.9509 |
| 0.0908 | 9.5122 | 1170 | 0.8998 | -0.1786 | 0.8998 | 0.9486 |
| 0.0908 | 9.5285 | 1172 | 0.8940 | -0.1440 | 0.8940 | 0.9455 |
| 0.0908 | 9.5447 | 1174 | 0.8910 | -0.1440 | 0.8910 | 0.9439 |
| 0.0908 | 9.5610 | 1176 | 0.8886 | -0.1440 | 0.8886 | 0.9427 |
| 0.0908 | 9.5772 | 1178 | 0.8832 | -0.1440 | 0.8832 | 0.9398 |
| 0.0908 | 9.5935 | 1180 | 0.8800 | -0.1440 | 0.8800 | 0.9381 |
| 0.0908 | 9.6098 | 1182 | 0.8787 | -0.1440 | 0.8787 | 0.9374 |
| 0.0908 | 9.6260 | 1184 | 0.8798 | -0.1440 | 0.8798 | 0.9380 |
| 0.0908 | 9.6423 | 1186 | 0.8823 | -0.1440 | 0.8823 | 0.9393 |
| 0.0908 | 9.6585 | 1188 | 0.8833 | -0.1440 | 0.8833 | 0.9398 |
| 0.0908 | 9.6748 | 1190 | 0.8839 | -0.1440 | 0.8839 | 0.9401 |
| 0.0908 | 9.6911 | 1192 | 0.8845 | -0.1440 | 0.8845 | 0.9405 |
| 0.0908 | 9.7073 | 1194 | 0.8862 | -0.1440 | 0.8862 | 0.9414 |
| 0.0908 | 9.7236 | 1196 | 0.8882 | -0.1440 | 0.8882 | 0.9424 |
| 0.0908 | 9.7398 | 1198 | 0.8895 | -0.1440 | 0.8895 | 0.9431 |
| 0.0908 | 9.7561 | 1200 | 0.8884 | -0.1440 | 0.8884 | 0.9426 |
| 0.0908 | 9.7724 | 1202 | 0.8888 | -0.1440 | 0.8888 | 0.9428 |
| 0.0908 | 9.7886 | 1204 | 0.8896 | -0.1440 | 0.8896 | 0.9432 |
| 0.0908 | 9.8049 | 1206 | 0.8898 | -0.1440 | 0.8898 | 0.9433 |
| 0.0908 | 9.8211 | 1208 | 0.8897 | -0.1440 | 0.8897 | 0.9432 |
| 0.0908 | 9.8374 | 1210 | 0.8894 | -0.1440 | 0.8894 | 0.9431 |
| 0.0908 | 9.8537 | 1212 | 0.8895 | -0.1440 | 0.8895 | 0.9431 |
| 0.0908 | 9.8699 | 1214 | 0.8893 | -0.1786 | 0.8893 | 0.9430 |
| 0.0908 | 9.8862 | 1216 | 0.8901 | -0.1786 | 0.8901 | 0.9435 |
| 0.0908 | 9.9024 | 1218 | 0.8910 | -0.1786 | 0.8910 | 0.9439 |
| 0.0908 | 9.9187 | 1220 | 0.8917 | -0.1786 | 0.8917 | 0.9443 |
| 0.0908 | 9.9350 | 1222 | 0.8928 | -0.1786 | 0.8928 | 0.9449 |
| 0.0908 | 9.9512 | 1224 | 0.8935 | -0.1786 | 0.8935 | 0.9453 |
| 0.0908 | 9.9675 | 1226 | 0.8937 | -0.1786 | 0.8937 | 0.9454 |
| 0.0908 | 9.9837 | 1228 | 0.8937 | -0.1786 | 0.8937 | 0.9453 |
| 0.0908 | 10.0 | 1230 | 0.8936 | -0.1786 | 0.8936 | 0.9453 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
deadman44/Flux_Photoreal_LoRA | deadman44 | 2024-11-25T11:50:46Z | 406 | 17 | null | [
"gguf",
"text-to-image",
"stable-diffusion",
"safetensors",
"stable-diffusion-xl",
"en",
"license:other",
"region:us"
] | text-to-image | 2024-08-23T06:22:53Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- text-to-image
- stable-diffusion
- safetensors
- stable-diffusion-xl
---
<style>
.title{
font-size: 2.5em;
letter-spacing: 0.01em;
padding: 0.5em 0;
}
.thumbwidth{
max-width: 180px;
}
.font_red{
color:red
}
</style>
## Recent Updates
24/11/25 [Add New Model](https://huggingface.co/deadman44/Flux_Photoreal_Models)<br>
24/10/01 (Check Point) [myjc_flux_v2 Finetune Test](#myjc)<br>
24/09/26 (LoRA) [myjc_flux_lora_v2-test](#myjc)<br>
---
<a id="myjk"></a>
<h1 class="title">
<span>myjk flux</span>
</h1>
-Trained on 2852+1316 images.<br/>
-The trigger word does not appear to have much effect...<br/>
<br/>
<br/>
[Download: myjk_flux_lora_v1](https://huggingface.co/deadman44/Flux_Photoreal_LoRA/resolve/main/myjk_flux_lora_v1.safetensors?download=true) (LoRA)<br/>
[Download: myjk_flux-Q5_K_M.gguf](https://huggingface.co/deadman44/Flux_Photoreal_LoRA/resolve/main/myjk_flux-Q5_K_M.gguf?download=true) (checkpoint)<br/>
[Download: version b](https://huggingface.co/deadman44/Flux_Photoreal_LoRA/resolve/main/myjk_flux_b-Q5_K_M.gguf?download=true) (+109 images, trained with the AdamWScheduleFree optimizer)<br/>
<br/>
## Recommended:<br/>
The model used for the test is [Flux Fusion DS v0 GGUF Q5_K_M](https://civitai.com/models/630820?modelVersionId=765575).
<br/>
VAE / Text Encoder: ae, clip_l, t5-v1_1-xxl-encoder-Q5_K_M<br/>
<table>
<tr>
<td>
<a href="https://img99.pixhost.to/images/705/514290586_20240920163149_myjk_flux-q5_k_m_1769369977.jpg" target=”_blank”>
<div>
<img src="https://t99.pixhost.to/thumbs/705/514290586_20240920163149_myjk_flux-q5_k_m_1769369977.jpg" alt="sample1" class="thumbwidth" >
</div>
</td>
<td>
<a href="https://img99.pixhost.to/images/705/514290592_20240920170058_myjk_flux-q5_k_m_872243841.jpg" target=”_blank”>
<div>
<img src="https://t99.pixhost.to/thumbs/705/514290592_20240920170058_myjk_flux-q5_k_m_872243841.jpg" alt="sample1" class="thumbwidth" >
</div>
</td>
<td>
<a href="https://img99.pixhost.to/images/705/514293472_20240920174336_myjk_flux-q5_k_m_2913518537.jpg" target=”_blank”>
<div>
<img src="https://t99.pixhost.to/thumbs/705/514293472_20240920174336_myjk_flux-q5_k_m_2913518537.jpg" alt="sample1" class="thumbwidth" >
</div>
</td>
</tr>
</table>
-refer to png info
<br />
## - sample prompt
[<img src=https://t99.pixhost.to/thumbs/705/514290595_20240920171937_myjk_flux-q5_k_m_3220485898.jpg />](https://img99.pixhost.to/images/705/514290595_20240920171937_myjk_flux-q5_k_m_3220485898.jpg)
```bash
japanese, 18yo, myjk, smile,
photograph of Two girls in idol costumes singing. The girl on the left has black ponytail hair and a guitar. The girl on the right has long black hair and a microphone. The stage at night is illuminated with lights and neon “myjk” signage.
Steps: 12, Sampler: Euler, Schedule type: Simple, CFG scale: 1, Seed: 3220485898, Size: 768x1024, Model hash: 33c0966fb8, Model: myjk_flux-Q5_K_M, Denoising strength: 0.3, Hires CFG Scale: 1, Hires upscale: 2, Hires upscaler: 4x-UltraSharp, Version: f2.0.1v1.10.1-previous-535-gb20cb4bf0, Diffusion in Low Bits: Automatic (fp16 LoRA), Module 1: ae, Module 2: t5-v1_1-xxl-encoder-Q5_K_M, Module 3: clip_l
```
<br />
## - trigger
```bash
myjk, japanese, european,
and 16-18 yo,
and native english (recommended) or danbooru tags
```
<br/>
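The prompts above target a WebUI (Forge) workflow. For `diffusers` users, a minimal sketch follows; it is not part of the original workflow, and it assumes the full-precision FLUX.1-dev base model (rather than the recommended GGUF Fusion checkpoint) plus illustrative sampling settings.

```python
# Hedged sketch: loading the myjk LoRA with diffusers' FluxPipeline.
# Assumptions: FLUX.1-dev as the base model, bf16 weights, and illustrative
# sampling settings; the original card instead uses Flux Fusion DS GGUF in Forge.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights(
    "deadman44/Flux_Photoreal_LoRA",
    weight_name="myjk_flux_lora_v1.safetensors",
)
pipe.enable_model_cpu_offload()  # optional, reduces VRAM usage

image = pipe(
    "japanese, 18yo, myjk, smile, photograph of a girl singing on a stage at night",
    num_inference_steps=20,
    guidance_scale=3.5,
    width=768,
    height=1024,
).images[0]
image.save("myjk_sample.png")
```
<br/>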
<a id="myjc"></a>
<h1 class="title">
<span>myjc flux</span>
</h1>
-Trained on 1543+1309 images.<br/>
-The trigger word does not appear to have much effect...<br/>
<br/>
<br/>
v2<br />
[Download: myjc_flux_lora_v2-test](https://huggingface.co/deadman44/Flux_Photoreal_LoRA/resolve/main/myjc_flux_lora_v2-test.safetensors?download=true) (LoRA) #Flux-Dev2_Pro training Test
<span class="font_red">If the image is blurred, increase Sampling steps</span><br>
[Download: myjc_flux_v2_FTtest-Q5_K_M](https://huggingface.co/deadman44/Flux_Photoreal_LoRA/resolve/main/myjc_flux_v2_FTtest-Q5_K_M.gguf?download=true) (Check Point) #Flux dev1 Finetune + LoRA Test <br/>
<br />
v1<br />
[Download: myjc_flux_lora_v1](https://huggingface.co/deadman44/Flux_Photoreal_LoRA/resolve/main/myjc_flux_lora_v1.safetensors?download=true) (LoRA)<br/>
[Download: myjc_flux-Q5_K_M.gguf](https://huggingface.co/deadman44/Flux_Photoreal_LoRA/resolve/main/myjc_flux-Q5_K_M.gguf?download=true) (checkpoint)<br/>
<br/>
## Recommended:<br/>
The models used for the test are [Flux Fusion DS v0 GGUF Q4_0 (UNET)](https://civitai.com/models/630820?modelVersionId=736086) and [v0 GGUF Q5_K_M](https://civitai.com/models/630820?modelVersionId=765575).
<br/>
VAE / Text Encoder: ae, clip_l, t5-v1_1-xxl-encoder-Q5_K_M<br/>
<table>
<tr>
<td>
<a href="https://img99.pixhost.to/images/338/509944057_20240904212108_myjc_flux-q5_k_m_803013794.jpg" target=”_blank”>
<div>
<img src="https://t99.pixhost.to/thumbs/338/509944057_20240904212108_myjc_flux-q5_k_m_803013794.jpg" alt="sample1" class="thumbwidth" >
</div>
</td>
<td>
<a href="https://img99.pixhost.to/images/338/509944058_20240904214557_myjc_flux-q5_k_m_2287512062.jpg" target=”_blank”>
<div>
<img src="https://t99.pixhost.to/thumbs/338/509944058_20240904214557_myjc_flux-q5_k_m_2287512062.jpg" alt="sample1" class="thumbwidth" >
</div>
</td>
<td>
<a href="https://img99.pixhost.to/images/338/509944061_20240904220631_myjc_flux-q5_k_m_3636763026.jpg" target=”_blank”>
<div>
<img src="https://t99.pixhost.to/thumbs/338/509944061_20240904220631_myjc_flux-q5_k_m_3636763026.jpg" alt="sample1" class="thumbwidth" >
</div>
</td>
</tr>
</table>
-refer to png info
<br />
## - sample prompt
[<img src=https://t99.pixhost.to/thumbs/338/509944238_20240905080824_myjc_flux-q5_k_m_1298706659.jpg />](https://img99.pixhost.to/images/338/509944238_20240905080824_myjc_flux-q5_k_m_1298706659.jpg)
```bash
14yo, myjc, japanese, medium breasts,
This photograph captures a girl sitting on a grassy field at night. She has a light complexion and straight long black hair with bangs styled with a black bow. Her expression is cheerful with a slight smile. She is wearing a loose oversized shirt in a pastel gradient of pink yellow and blue which is slightly oversized giving it a cozy casual look. Her shirt is paired with white shorts and knee-high black socks with a small white bow on the top. The socks are adorned with a subtle pattern. She sits on a blanket with a white background featuring small amo,e characters. The grass is lush and green indicating a well-maintained lawn. The background is dark suggesting it is nighttime and the lighting is soft creating a warm and intimate atmosphere. The overall mood of the image is relaxed and playful with the subject's youthful and cheerful demeanor complementing the serene outdoor setting.
Steps: 12, Sampler: Euler, Schedule type: Simple, CFG scale: 1, Seed: 1298706659, Size: 768x1024, Model hash: c6b19f170d, Model: myjc_flux-Q5_K_M, Denoising strength: 0.3, Hires upscale: 2, Hires upscaler: 4x-UltraSharp, Version: f2.0.1v1.10.1-previous-501-g668e87f92, Diffusion in Low Bits: Automatic (fp16 LoRA), Module 1: ae, Module 2: clip_l, Module 3: t5-v1_1-xxl-encoder-Q5_K_M
```
<br />
## - trigger
```bash
myjc, japanese, european,
and 13-15 yo,
and native english (recommended) or danbooru tags
```
<br/>
---
<a id="test03"></a>
<h1 class="title">
<span>lora_zipang_flux_test</span>
</h1>
-Training was based on a merged model of dev1 and the test LoRA.<br/>
<br/>
### -Trigger
```bash
japanese, european
```
<br/>
* [test04](https://huggingface.co/deadman44/Flux_Photoreal_LoRA/resolve/main/lora_zipang_flux_test04.safetensors?download=true) +350 images
```bash
myjc, 13yo
```
* [test03](https://huggingface.co/deadman44/Flux_Photoreal_LoRA/resolve/main/lora_zipang_flux_test03.safetensors?download=true) +920 images
```bash
myjsh, 12yo
```
<br/>
<a id="test02"></a>
<h1 class="title">
<span>myjsm_flux_test02</span>
</h1>
-This is a low-quality test LoRA trained on only a small number of images.<br/>
-Trained on 273 images.<br/>
<br/>
Found a slightly better training setting, but it is still hard to bring out concepts that Flux does not already produce.
<br/>
<br/>
[Download:test02](https://huggingface.co/deadman44/Flux_Photoreal_LoRA/resolve/main/myjsm_flux_test02.safetensors?download=true) <br/>
<br/>
The models used for the test are [Flux Fusion DS v0 GGUF Q4_0 (UNET)](https://civitai.com/models/630820?modelVersionId=736086) and [v0 GGUF Q5_K_M](https://civitai.com/models/630820?modelVersionId=765575).
<table>
<tr>
<td colspan="3">
<div>
GGUF Q4_0 + t5xxl_fp8_e4m3fn : 4 steps
</div>
</td>
</tr>
<tr>
<td>
<a href="https://img99.pixhost.to/images/126/507626249_20240827094724_fusionds_v0_q4_456078958.jpg" target=”_blank”>
<div>
<img src="https://t99.pixhost.to/thumbs/126/507626249_20240827094724_fusionds_v0_q4_456078958.jpg" alt="sample1" class="thumbwidth" >
</div>
</td>
<td>
<a href="https://img99.pixhost.to/images/126/507626251_20240827103511_fusionds_v0_q4_482040669.jpg" target=”_blank”>
<div>
<img src="https://t99.pixhost.to/thumbs/126/507626251_20240827103511_fusionds_v0_q4_482040669.jpg" alt="sample1" class="thumbwidth" >
</div>
</td>
<td>
<a href="https://img99.pixhost.to/images/126/507626253_20240827112528_fusionds_v0_q4_1816421730.jpg" target=”_blank”>
<div>
<img src="https://t99.pixhost.to/thumbs/126/507626253_20240827112528_fusionds_v0_q4_1816421730.jpg" alt="sample1" class="thumbwidth" >
</div>
</td>
</tr>
<tr>
<td colspan="3">
<div>
GGUF Q5_K_M + t5-v1_1-xxl-encoder-Q5_K_M : 12 steps
</div>
</td>
</tr>
<tr>
<td>
<a href="https://img99.pixhost.to/images/126/507626250_20240827102458_fluxfusionds_v0_q5_k_m_2418428235.jpg" target=”_blank”>
<div>
<img src="https://t99.pixhost.to/thumbs/126/507626250_20240827102458_fluxfusionds_v0_q5_k_m_2418428235.jpg" alt="sample1" class="thumbwidth" >
</div>
</td>
<td>
<a href="https://img99.pixhost.to/images/126/507626252_20240827110802_fluxfusionds_v0_q5_k_m_3216545735.jpg" target=”_blank”>
<div>
<img src="https://t99.pixhost.to/thumbs/126/507626252_20240827110802_fluxfusionds_v0_q5_k_m_3216545735.jpg" alt="sample1" class="thumbwidth" >
</div>
</td>
<td>
<a href="https://img99.pixhost.to/images/126/507626256_20240827121409_fluxfusionds_v0_q5_k_m_2982180625.jpg" target=”_blank”>
<div>
<img src="https://t99.pixhost.to/thumbs/126/507626256_20240827121409_fluxfusionds_v0_q5_k_m_2982180625.jpg" alt="sample1" class="thumbwidth" >
</div>
</td>
</tr>
</table>
-refer to png info
<br />
## - sample prompt
[<img src=https://t99.pixhost.to/thumbs/126/507626257_20240827124249_fusionds_v0_q4_642879771.jpg />](https://img99.pixhost.to/images/126/507626257_20240827124249_fusionds_v0_q4_642879771.jpg)
```bash
9yo, myjsm, japanese,
photograph of a girl sitting on a brick pavement with a pink umbrella in front of her. She is wearing a white camisole and a blue skirt with a anime print. She has shoulder-length dark hair and is smiling at the camera.
bangs, black eyes, skirt, rain
<lora:myjsm_flux_test02:1>
Steps: 4, Sampler: Euler, Schedule type: Simple, CFG scale: 1, Seed: 642879771, Size: 792x1056, Model hash: 5e21feb505, Model: FusionDS_v0_Q4, Lora hashes: "myjsm_flux_test02: 3fdff20b7d65", Version: f2.0.1v1.10.1-previous-419-gf82029c5c, Module 1: ae, Module 2: clip_l, Module 3: t5xxl_fp8_e4m3fn
```
<br />
## - trigger
```bash
myjsm, japanese, 9yo,
and native english
```
<br />
## -Train Settings
```bash
base model: flux1-dev.safetensors
vae/text encoder: clip_l.safetensors, t5xxl_fp8_e4m3fn.safetensors, ae.safetensors
tag: caption (native eng) + tags (danbooru)
--network_module "networks.lora_flux"
--gradient_checkpointing
--cache_latents
--cache_latents_to_disk
--cache_text_encoder_outputs
--cache_text_encoder_outputs_to_disk
--enable_bucket
--bucket_no_upscale
--optimizer_type "AdamW8bit"
--optimizer_args "weight_decay=0.01" "betas=0.9,0.999"
--learning_rate=0.0002
--network_dim=32
--network_alpha=4
--network_train_unet_only
--mixed_precision "bf16"
--save_precision "bf16"
--full_bf16
--loss_type "l2"
--huber_schedule "snr"
--model_prediction_type "raw"
--discrete_flow_shift 3
--timestep_sampling "sigma"
--max_grad_norm=1
--max_timestep=1000
--min_snr_gamma=5
--min_timestep=100
--noise_offset=0.0375
--adaptive_noise_scale=0.00375
--apply_t5_attn_mask
--split_mode
--network_args "loraplus_unet_lr_ratio=16" "train_blocks=single"
```
<br />
|
QuantFactory/Llama-3.1-Tulu-3-8B-GGUF | QuantFactory | 2024-11-25T11:48:38Z | 224 | 4 | transformers | [
"transformers",
"gguf",
"text-generation",
"en",
"dataset:allenai/RLVR-GSM-MATH-IF-Mixed-Constraints",
"base_model:allenai/Llama-3.1-Tulu-3-8B-DPO",
"base_model:quantized:allenai/Llama-3.1-Tulu-3-8B-DPO",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-25T10:40:50Z |
---
license: llama3.1
language:
- en
pipeline_tag: text-generation
datasets:
- allenai/RLVR-GSM-MATH-IF-Mixed-Constraints
base_model:
- allenai/Llama-3.1-Tulu-3-8B-DPO
library_name: transformers
---
[](https://hf.co/QuantFactory)
# QuantFactory/Llama-3.1-Tulu-3-8B-GGUF
This is a quantized version of [allenai/Llama-3.1-Tulu-3-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B), created using llama.cpp.
# Original Model Card
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu3/Tulu3-logo.png" alt="Tulu 3 banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Llama-3.1-Tulu-3-8B
Tülu3 is a leading instruction following model family, offering fully open-source data, code, and recipes designed to serve as a comprehensive guide for modern post-training techniques.
Tülu3 is designed for state-of-the-art performance on a diversity of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
## Model description
- **Model type:** A model trained on a mix of publicly available, synthetic and human-created datasets.
- **Language(s) (NLP):** Primarily English
- **License:** Llama 3.1 Community License Agreement
- **Finetuned from model:** allenai/Llama-3.1-Tulu-3-8B-DPO
### Model Sources
- **Training Repository:** https://github.com/allenai/open-instruct
- **Eval Repository:** https://github.com/allenai/olmes
- **Paper:** https://allenai.org/papers/tulu-3-report.pdf (arXiv soon)
- **Demo:** https://playground.allenai.org/
### Model Family
| **Stage** | **Llama 3.1 8B** | **Llama 3.1 70B** |
|----------------------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|
| **Base Model** | [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [meta-llama/Llama-3.1-70B](https://huggingface.co/meta-llama/Llama-3.1-70B) |
| **SFT** | [allenai/Llama-3.1-Tulu-3-8B-SFT](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-SFT) | [allenai/Llama-3.1-Tulu-3-70B-SFT](https://huggingface.co/allenai/Llama-3.1-Tulu-3-70B-SFT) |
| **DPO** | [allenai/Llama-3.1-Tulu-3-8B-DPO](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-DPO) | [allenai/Llama-3.1-Tulu-3-70B-DPO](https://huggingface.co/allenai/Llama-3.1-Tulu-3-70B-DPO) |
| **Final Models (RLVR)** | [allenai/Llama-3.1-Tulu-3-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B) | [allenai/Llama-3.1-Tulu-3-70B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-70B) |
| **Reward Model (RM)**| [allenai/Llama-3.1-Tulu-3-8B-RM](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-RM) | (Same as 8B) |
## Using the model
### Loading with HuggingFace
To load the model with HuggingFace, use the following snippet:
```
from transformers import AutoModelForCausalLM
tulu_model = AutoModelForCausalLM.from_pretrained("allenai/Llama-3.1-Tulu-3-8B")
```
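
The snippet above only loads the model. The following sketch is an illustration (not part of the original card) that adds the tokenizer, applies the chat template described below, and runs a short generation; the generation settings are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/Llama-3.1-Tulu-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "How are you doing?"},
]
# Render the prompt with the chat template embedded in the tokenizer.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```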
### VLLM
As a Llama base model, the model can be easily served with:
```
vllm serve allenai/Llama-3.1-Tulu-3-8B
```
Note that given the long chat template of Llama, you may want to use `--max_model_len=8192`.
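
Once the server is running, it can be queried through vLLM's OpenAI-compatible endpoint. The sketch below is an assumption-based illustration (default host/port, `openai` Python client), not something specified in the original card.

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API; localhost:8000 is the default address.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="allenai/Llama-3.1-Tulu-3-8B",
    messages=[{"role": "user", "content": "How are you doing?"}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```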
### Chat template
The chat template for our models is formatted as:
```
<|user|>\nHow are you doing?\n<|assistant|>\nI'm just a computer program, so I don't have feelings, but I'm functioning as expected. How can I assist you today?<|endoftext|>
```
Or with new lines expanded:
```
<|user|>
How are you doing?
<|assistant|>
I'm just a computer program, so I don't have feelings, but I'm functioning as expected. How can I assist you today?<|endoftext|>
```
It is embedded within the tokenizer as well, for `tokenizer.apply_chat_template`.
### System prompt
In Ai2 demos, we use this system prompt by default:
```
You are Tulu 3, a helpful and harmless AI Assistant built by the Allen Institute for AI.
```
The model has not been trained with a specific system prompt in mind.
### Bias, Risks, and Limitations
The Tülu3 models have limited safety training, but are not deployed automatically with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base Llama 3.1 models are also unknown; however, it likely included a mix of web data and technical sources such as books and code.
See the Falcon 180B model card for an example of this.
## Performance
| Benchmark (eval) | Tülu 3 SFT 8B | Tülu 3 DPO 8B | Tülu 3 8B | Llama 3.1 8B Instruct | Qwen 2.5 7B Instruct | Magpie 8B | Gemma 2 9B Instruct | Ministral 8B Instruct |
|---------------------------------|----------------|----------------|------------|------------------------|----------------------|-----------|---------------------|-----------------------|
| **Avg.** | 60.4 | 64.4 | **64.8** | 62.2 | 57.8 | 44.7 | 55.2 | 58.3 |
| **MMLU (0 shot, CoT)** | 65.9 | 68.7 | 68.2 | 71.2 | **76.6** | 62.0 | 74.6 | 68.5 |
| **PopQA (15 shot)** | **29.3** | 29.3 | 29.1 | 20.2 | 18.1 | 22.5 | 28.3 | 20.2 |
| **TruthfulQA (6 shot)** | 46.8 | 56.1 | 55.0 | 55.1 | **63.1** | 57.0 | 61.4 | 55.5 |
| **BigBenchHard (3 shot, CoT)** | **67.9** | 65.8 | 66.0 | 62.8 | 21.7 | 0.9 | 2.5 | 56.2 |
| **DROP (3 shot)** | 61.3 | 62.5 | **62.6** | 61.5 | 54.4 | 49.4 | 58.8 | 56.2 |
| **MATH (4 shot CoT, Flex)** | 31.5 | 42.0 | **43.7** | 42.5 | 14.8 | 5.1 | 29.8 | 40.0 |
| **GSM8K (8 shot, CoT)** | 76.2 | 84.3 | **87.6** | 83.4 | 83.8 | 61.2 | 79.7 | 80.0 |
| **HumanEval (pass@10)** | 86.2 | 83.9 | 83.9 | 86.3 | **93.1** | 75.4 | 71.7 | 91.0 |
| **HumanEval+ (pass@10)** | 81.4 | 78.6 | 79.2 | 82.9 | **89.7** | 69.1 | 67.0 | 88.5 |
| **IFEval (prompt loose)** | 72.8 | 81.1 | **82.4** | 80.6 | 74.7 | 38.8 | 69.9 | 56.4 |
| **AlpacaEval 2 (LC % win)** | 12.4 | 33.5 | 34.5 | 24.2 | 29.0 | **49.0** | 43.7 | 31.4 |
| **Safety (6 task avg.)** | **93.1** | 87.2 | 85.5 | 75.2 | 75.0 | 46.4 | 75.5 | 56.2 |
| Benchmark (eval) | Tülu 3 70B SFT | Tülu 3 DPO 70B | Tülu 3 70B | Llama 3.1 70B Instruct | Qwen 2.5 72B Instruct | Hermes 3 Llama 3.1 70B | Nemotron Llama 3.1 70B |
|---------------------------------|-----------------|-----------------|-------------|-------------------------|-----------------------|------------------------|-------------------------|
| **Avg.** | 72.6 | 75.9 | **76.0** | 73.4 | 71.5 | 68.3 | 65.5 |
| **MMLU (0 shot, CoT)** | 78.9 | 83.3 | 83.1 | 85.3 | **85.5** | 80.4 | 83.8 |
| **PopQA (15 shot)** | **48.6** | 46.3 | 46.5 | 46.4 | 30.6 | 48.1 | 36.4 |
| **TruthfulQA (6 shot)** | 55.7 | 67.9 | 67.6 | 66.8 | **69.9** | 66.5 | 62.6 |
| **BigBenchHard (3 shot, CoT)** | **82.7** | 81.8 | 82.0 | 73.8 | 67.2 | 82.1 | 0.7 |
| **DROP (3 shot)** | **77.2** | 74.1 | 74.3 | 77.0 | 34.2 | 73.2 | 68.8 |
| **MATH (4 shot CoT, Flex)** | 53.7 | 62.3 | 63.0 | 56.4 | **74.3** | 41.9 | 55.0 |
| **GSM8K (8 shot, CoT)** | 91.1 | 93.5 | 93.5 | **93.7** | 89.5 | 90.0 | 84.7 |
| **HumanEval (pass@10)** | 92.9 | 92.4 | 92.4 | 93.6 | 94.0 | 89.6 | **94.1** |
| **HumanEval+ (pass@10)** | 87.3 | 88.4 | 88.0 | 89.5 | **90.8** | 85.9 | 85.5 |
| **IFEval (prompt loose)** | 82.1 | 82.6 | 83.2 | **88.0** | 87.6 | 76.0 | 79.9 |
| **AlpacaEval 2 (LC % win)** | 26.3 | 49.6 | 49.8 | 33.4 | 47.7 | 28.4 | **66.1** |
| **Safety (6 task avg.)** | **94.4** | 89.0 | 88.3 | 76.5 | 87.0 | 57.9 | 69.0 |
## Hyperparameters
PPO settings for RLVR:
- **Learning Rate**: 3 × 10⁻⁷
- **Discount Factor (gamma)**: 1.0
- **Generalized Advantage Estimation (lambda)**: 0.95
- **Mini-batches (N_mb)**: 1
- **PPO Update Iterations (K)**: 4
- **PPO's Clipping Coefficient (epsilon)**: 0.2
- **Value Function Coefficient (c1)**: 0.1
- **Gradient Norm Threshold**: 1.0
- **Learning Rate Schedule**: Linear
- **Generation Temperature**: 1.0
- **Batch Size (effective)**: 512
- **Max Token Length**: 2,048
- **Max Prompt Token Length**: 2,048
- **Penalty Reward Value for Responses without an EOS Token**: -10.0
- **Response Length**: 1,024 (but 2,048 for MATH)
- **Total Episodes**: 100,000
- **KL penalty coefficient (beta)**: [0.1, 0.05, 0.03, 0.01]
- **Warm up ratio (omega)**: 0.0
## License and use
All Llama 3.1 Tülu3 models are released under Meta's [Llama 3.1 Community License Agreement](https://www.llama.com/llama3_1/license/).
Llama 3.1 is licensed under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc.
Tülu3 is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
The models have been fine-tuned using a dataset mix with outputs generated from third party models and are subject to additional terms:
[Gemma Terms of Use](https://ai.google.dev/gemma/terms) and [Qwen License Agreement](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE) (models were improved using Qwen 2.5).
## Citation
If Tülu3 or any of the related materials were helpful to your work, please cite:
```
@article{lambert2024tulu3,
title = {Tülu 3: Pushing Frontiers in Open Language Model Post-Training},
author = {
Nathan Lambert and
Jacob Morrison and
Valentina Pyatkin and
Shengyi Huang and
Hamish Ivison and
Faeze Brahman and
Lester James V. Miranda and
Alisa Liu and
Nouha Dziri and
Shane Lyu and
Yuling Gu and
Saumya Malik and
Victoria Graf and
Jena D. Hwang and
Jiangjiang Yang and
Ronan Le Bras and
Oyvind Tafjord and
Chris Wilhelm and
Luca Soldaini and
Noah A. Smith and
Yizhong Wang and
Pradeep Dasigi and
Hannaneh Hajishirzi
},
year = {2024},
email = {[email protected]}
}
```
|
MoonKih/fashion-lora-model_2 | MoonKih | 2024-11-25T11:45:32Z | 5 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"dataset:MoonKih/fassion_2",
"base_model:helenai/runwayml-stable-diffusion-v1-5-ov",
"base_model:adapter:helenai/runwayml-stable-diffusion-v1-5-ov",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-11-25T11:44:58Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
A photo of a young Korean woman in her 20s to 30s, dressed in a chic and
minimalistic summer outfit perfect for casual urban outings. The outfit
consists of a pastel pink blouse with rolled-up sleeves paired with a fitted
white mini skirt, creating an elegant yet relaxed vibe. The ensemble is
completed with matching white flats and a small crossbody bag. The photo
captures a full-body view in a bright and modern indoor setting with
neutral-toned furniture and decor, emphasizing a clean and sophisticated
atmosphere.
parameters:
negative_prompt: >-
Male, partial body, face visible, cropped shoes, unrealistic details,
cluttered background, low quality, overly exaggerated poses
output:
url: images/KakaoTalk_Photo_2024-11-25-20-43-53.jpeg
base_model: helenai/runwayml-stable-diffusion-v1-5-ov
instance_prompt: null
license: apache-2.0
datasets:
- MoonKih/fassion_2
---
# fashion-lora-model
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
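A minimal `diffusers` sketch for loading this adapter is shown below. It assumes a standard Stable Diffusion 1.5 pipeline (the base model listed above is an OpenVINO export, which would instead require `optimum-intel`); the base checkpoint id and the prompt are illustrative only.
```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative base checkpoint; the card lists an OpenVINO export as the base,
# so this assumes an equivalent standard SD 1.5 checkpoint is used instead.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the Safetensors LoRA weights from this repository.
pipe.load_lora_weights("MoonKih/fashion-lora-model_2")

prompt = (
    "A photo of a young Korean woman in a pastel pink blouse and a fitted "
    "white mini skirt, full-body view, bright modern indoor setting"
)
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("fashion_lora_sample.png")
```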
[Download](/MoonKih/fashion-lora-model_2/tree/main) them in the Files & versions tab. |
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k30_task3_organization_fold0 | MayBashendy | 2024-11-25T11:43:56Z | 165 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T11:30:40Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k30_task3_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k30_task3_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1529
- Qwk: -0.0927
- Mse: 1.1529
- Rmse: 1.0737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
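As a rough reconstruction (not the actual training script), these settings map onto the 🤗 `Trainer` API roughly as follows; the dataset variables are placeholders since the fine-tuning data is not published, and the single-label regression head is an inference from the MSE/RMSE metrics reported above.
```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "aubmindlab/bert-base-arabertv02"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=1 (regression-style head) is inferred from the MSE/RMSE metrics.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)

args = TrainingArguments(
    output_dir="arabert_task3_organization_fold0",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)

# train_dataset / eval_dataset are placeholders for the unpublished data:
# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```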
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0138 | 2 | 4.3800 | -0.0072 | 4.3800 | 2.0928 |
| No log | 0.0276 | 4 | 2.3540 | -0.0722 | 2.3540 | 1.5343 |
| No log | 0.0414 | 6 | 1.4238 | -0.0087 | 1.4238 | 1.1932 |
| No log | 0.0552 | 8 | 2.4354 | 0.0 | 2.4354 | 1.5606 |
| No log | 0.0690 | 10 | 3.3928 | -0.0815 | 3.3928 | 1.8420 |
| No log | 0.0828 | 12 | 2.1186 | 0.0 | 2.1186 | 1.4555 |
| No log | 0.0966 | 14 | 1.5550 | 0.0 | 1.5550 | 1.2470 |
| No log | 0.1103 | 16 | 1.4808 | -0.1048 | 1.4808 | 1.2169 |
| No log | 0.1241 | 18 | 1.8740 | 0.0 | 1.8740 | 1.3690 |
| No log | 0.1379 | 20 | 1.6607 | 0.0 | 1.6607 | 1.2887 |
| No log | 0.1517 | 22 | 1.5322 | 0.0 | 1.5322 | 1.2378 |
| No log | 0.1655 | 24 | 1.2767 | 0.0873 | 1.2767 | 1.1299 |
| No log | 0.1793 | 26 | 1.0164 | 0.0 | 1.0164 | 1.0082 |
| No log | 0.1931 | 28 | 0.8642 | 0.0 | 0.8642 | 0.9296 |
| No log | 0.2069 | 30 | 0.9709 | 0.2143 | 0.9709 | 0.9854 |
| No log | 0.2207 | 32 | 0.9821 | 0.2143 | 0.9821 | 0.9910 |
| No log | 0.2345 | 34 | 1.3583 | 0.0 | 1.3583 | 1.1655 |
| No log | 0.2483 | 36 | 1.5084 | 0.0 | 1.5084 | 1.2282 |
| No log | 0.2621 | 38 | 1.2965 | 0.0873 | 1.2965 | 1.1386 |
| No log | 0.2759 | 40 | 1.1208 | -0.1159 | 1.1208 | 1.0587 |
| No log | 0.2897 | 42 | 1.1061 | -0.1159 | 1.1061 | 1.0517 |
| No log | 0.3034 | 44 | 1.1524 | -0.1159 | 1.1524 | 1.0735 |
| No log | 0.3172 | 46 | 1.3171 | 0.0 | 1.3171 | 1.1477 |
| No log | 0.3310 | 48 | 1.5476 | 0.0 | 1.5476 | 1.2440 |
| No log | 0.3448 | 50 | 1.7155 | 0.0 | 1.7155 | 1.3098 |
| No log | 0.3586 | 52 | 1.6192 | 0.0 | 1.6192 | 1.2725 |
| No log | 0.3724 | 54 | 1.5089 | 0.0 | 1.5089 | 1.2284 |
| No log | 0.3862 | 56 | 1.5269 | 0.0 | 1.5269 | 1.2357 |
| No log | 0.4 | 58 | 1.8045 | 0.0 | 1.8045 | 1.3433 |
| No log | 0.4138 | 60 | 1.9218 | 0.0 | 1.9218 | 1.3863 |
| No log | 0.4276 | 62 | 1.7503 | 0.0 | 1.7503 | 1.3230 |
| No log | 0.4414 | 64 | 1.4438 | 0.0 | 1.4438 | 1.2016 |
| No log | 0.4552 | 66 | 1.1743 | -0.1159 | 1.1743 | 1.0836 |
| No log | 0.4690 | 68 | 1.0968 | -0.1159 | 1.0968 | 1.0473 |
| No log | 0.4828 | 70 | 1.1649 | 0.0530 | 1.1649 | 1.0793 |
| No log | 0.4966 | 72 | 1.3465 | 0.0873 | 1.3465 | 1.1604 |
| No log | 0.5103 | 74 | 1.6624 | 0.0 | 1.6624 | 1.2893 |
| No log | 0.5241 | 76 | 1.7077 | 0.0 | 1.7077 | 1.3068 |
| No log | 0.5379 | 78 | 1.5125 | 0.0 | 1.5125 | 1.2299 |
| No log | 0.5517 | 80 | 1.2846 | 0.0737 | 1.2846 | 1.1334 |
| No log | 0.5655 | 82 | 1.0513 | 0.0530 | 1.0513 | 1.0253 |
| No log | 0.5793 | 84 | 0.9464 | 0.0 | 0.9464 | 0.9728 |
| No log | 0.5931 | 86 | 0.9699 | 0.0 | 0.9699 | 0.9848 |
| No log | 0.6069 | 88 | 1.0345 | 0.0320 | 1.0345 | 1.0171 |
| No log | 0.6207 | 90 | 1.0179 | 0.0435 | 1.0179 | 1.0089 |
| No log | 0.6345 | 92 | 0.9228 | 0.3623 | 0.9228 | 0.9606 |
| No log | 0.6483 | 94 | 0.8921 | 0.3623 | 0.8921 | 0.9445 |
| No log | 0.6621 | 96 | 0.9123 | 0.3623 | 0.9123 | 0.9551 |
| No log | 0.6759 | 98 | 0.9269 | 0.3623 | 0.9269 | 0.9628 |
| No log | 0.6897 | 100 | 0.9427 | 0.3623 | 0.9427 | 0.9709 |
| No log | 0.7034 | 102 | 0.9845 | 0.3623 | 0.9845 | 0.9922 |
| No log | 0.7172 | 104 | 1.0126 | 0.0530 | 1.0126 | 1.0063 |
| No log | 0.7310 | 106 | 0.9905 | 0.0530 | 0.9905 | 0.9952 |
| No log | 0.7448 | 108 | 0.9382 | 0.3623 | 0.9382 | 0.9686 |
| No log | 0.7586 | 110 | 0.8424 | 0.2080 | 0.8424 | 0.9178 |
| No log | 0.7724 | 112 | 0.8013 | 0.2143 | 0.8013 | 0.8952 |
| No log | 0.7862 | 114 | 0.7891 | 0.0 | 0.7891 | 0.8883 |
| No log | 0.8 | 116 | 0.7868 | 0.0 | 0.7868 | 0.8870 |
| No log | 0.8138 | 118 | 0.7913 | 0.0 | 0.7913 | 0.8896 |
| No log | 0.8276 | 120 | 0.9041 | -0.1159 | 0.9041 | 0.9508 |
| No log | 0.8414 | 122 | 1.0737 | 0.0530 | 1.0737 | 1.0362 |
| No log | 0.8552 | 124 | 1.0001 | 0.0530 | 1.0001 | 1.0001 |
| No log | 0.8690 | 126 | 0.8899 | 0.0 | 0.8899 | 0.9433 |
| No log | 0.8828 | 128 | 0.8810 | 0.0 | 0.8810 | 0.9386 |
| No log | 0.8966 | 130 | 0.8820 | 0.0 | 0.8820 | 0.9391 |
| No log | 0.9103 | 132 | 0.8868 | 0.0 | 0.8868 | 0.9417 |
| No log | 0.9241 | 134 | 0.9192 | -0.1159 | 0.9192 | 0.9587 |
| No log | 0.9379 | 136 | 0.9018 | -0.1159 | 0.9018 | 0.9496 |
| No log | 0.9517 | 138 | 0.8307 | 0.2143 | 0.8307 | 0.9114 |
| No log | 0.9655 | 140 | 0.8261 | 0.2143 | 0.8261 | 0.9089 |
| No log | 0.9793 | 142 | 0.8286 | 0.2143 | 0.8286 | 0.9103 |
| No log | 0.9931 | 144 | 0.9230 | -0.1159 | 0.9230 | 0.9607 |
| No log | 1.0069 | 146 | 1.0254 | 0.0530 | 1.0254 | 1.0126 |
| No log | 1.0207 | 148 | 0.9932 | -0.1159 | 0.9932 | 0.9966 |
| No log | 1.0345 | 150 | 0.8704 | 0.2080 | 0.8704 | 0.9329 |
| No log | 1.0483 | 152 | 0.8232 | 0.2143 | 0.8232 | 0.9073 |
| No log | 1.0621 | 154 | 0.8140 | 0.3636 | 0.8140 | 0.9022 |
| No log | 1.0759 | 156 | 0.8406 | 0.2143 | 0.8406 | 0.9168 |
| No log | 1.0897 | 158 | 0.9177 | -0.1440 | 0.9177 | 0.9580 |
| No log | 1.1034 | 160 | 0.8723 | 0.3231 | 0.8723 | 0.9340 |
| No log | 1.1172 | 162 | 0.9079 | 0.1270 | 0.9079 | 0.9528 |
| No log | 1.1310 | 164 | 0.9634 | -0.0593 | 0.9634 | 0.9815 |
| No log | 1.1448 | 166 | 0.9578 | -0.0593 | 0.9578 | 0.9787 |
| No log | 1.1586 | 168 | 0.9055 | 0.2878 | 0.9055 | 0.9516 |
| No log | 1.1724 | 170 | 1.0659 | -0.0927 | 1.0659 | 1.0324 |
| No log | 1.1862 | 172 | 1.2559 | -0.0927 | 1.2559 | 1.1207 |
| No log | 1.2 | 174 | 1.0521 | -0.0927 | 1.0521 | 1.0257 |
| No log | 1.2138 | 176 | 0.8741 | 0.3231 | 0.8741 | 0.9349 |
| No log | 1.2276 | 178 | 0.8703 | 0.3231 | 0.8703 | 0.9329 |
| No log | 1.2414 | 180 | 0.9103 | 0.2080 | 0.9103 | 0.9541 |
| No log | 1.2552 | 182 | 0.8567 | 0.3231 | 0.8567 | 0.9256 |
| No log | 1.2690 | 184 | 0.9008 | 0.2080 | 0.9008 | 0.9491 |
| No log | 1.2828 | 186 | 0.9253 | 0.2080 | 0.9253 | 0.9619 |
| No log | 1.2966 | 188 | 0.8861 | 0.2080 | 0.8861 | 0.9413 |
| No log | 1.3103 | 190 | 0.7654 | 0.0179 | 0.7654 | 0.8749 |
| No log | 1.3241 | 192 | 0.7661 | 0.2080 | 0.7661 | 0.8753 |
| No log | 1.3379 | 194 | 0.8050 | 0.2080 | 0.8050 | 0.8972 |
| No log | 1.3517 | 196 | 0.9767 | -0.1224 | 0.9767 | 0.9883 |
| No log | 1.3655 | 198 | 0.8673 | 0.1791 | 0.8673 | 0.9313 |
| No log | 1.3793 | 200 | 0.7721 | 0.2878 | 0.7721 | 0.8787 |
| No log | 1.3931 | 202 | 0.7914 | 0.2878 | 0.7914 | 0.8896 |
| No log | 1.4069 | 204 | 0.8957 | 0.2763 | 0.8957 | 0.9464 |
| No log | 1.4207 | 206 | 1.3483 | 0.1921 | 1.3483 | 1.1611 |
| No log | 1.4345 | 208 | 1.2996 | 0.1720 | 1.2996 | 1.1400 |
| No log | 1.4483 | 210 | 0.9352 | 0.2667 | 0.9352 | 0.9671 |
| No log | 1.4621 | 212 | 0.9428 | 0.1037 | 0.9428 | 0.9710 |
| No log | 1.4759 | 214 | 0.9893 | 0.0833 | 0.9893 | 0.9947 |
| No log | 1.4897 | 216 | 0.7525 | 0.1270 | 0.7525 | 0.8675 |
| No log | 1.5034 | 218 | 0.8924 | 0.3444 | 0.8924 | 0.9447 |
| No log | 1.5172 | 220 | 1.2542 | 0.0788 | 1.2542 | 1.1199 |
| No log | 1.5310 | 222 | 1.0697 | 0.1895 | 1.0697 | 1.0343 |
| No log | 1.5448 | 224 | 0.8301 | 0.2080 | 0.8301 | 0.9111 |
| No log | 1.5586 | 226 | 0.7147 | 0.2143 | 0.7147 | 0.8454 |
| No log | 1.5724 | 228 | 0.7109 | 0.3231 | 0.7109 | 0.8431 |
| No log | 1.5862 | 230 | 0.7460 | 0.1270 | 0.7460 | 0.8637 |
| No log | 1.6 | 232 | 0.7309 | 0.1538 | 0.7309 | 0.8549 |
| No log | 1.6138 | 234 | 0.7751 | 0.2143 | 0.7751 | 0.8804 |
| No log | 1.6276 | 236 | 0.9248 | 0.2029 | 0.9248 | 0.9616 |
| No log | 1.6414 | 238 | 0.8749 | 0.2029 | 0.8749 | 0.9354 |
| No log | 1.6552 | 240 | 0.7377 | 0.2143 | 0.7377 | 0.8589 |
| No log | 1.6690 | 242 | 0.7468 | 0.1538 | 0.7468 | 0.8642 |
| No log | 1.6828 | 244 | 0.7312 | -0.0185 | 0.7312 | 0.8551 |
| No log | 1.6966 | 246 | 0.7481 | 0.2080 | 0.7481 | 0.8649 |
| No log | 1.7103 | 248 | 0.9317 | 0.2029 | 0.9317 | 0.9652 |
| No log | 1.7241 | 250 | 0.8827 | 0.2029 | 0.8827 | 0.9395 |
| No log | 1.7379 | 252 | 0.7142 | 0.2143 | 0.7142 | 0.8451 |
| No log | 1.7517 | 254 | 0.7266 | 0.0 | 0.7266 | 0.8524 |
| No log | 1.7655 | 256 | 0.7337 | 0.0 | 0.7337 | 0.8566 |
| No log | 1.7793 | 258 | 0.7137 | 0.0 | 0.7137 | 0.8448 |
| No log | 1.7931 | 260 | 0.7089 | 0.2143 | 0.7089 | 0.8419 |
| No log | 1.8069 | 262 | 0.7040 | 0.0 | 0.7040 | 0.8390 |
| No log | 1.8207 | 264 | 0.7337 | -0.0185 | 0.7337 | 0.8566 |
| No log | 1.8345 | 266 | 0.7630 | -0.0185 | 0.7630 | 0.8735 |
| No log | 1.8483 | 268 | 0.7739 | -0.0185 | 0.7739 | 0.8797 |
| No log | 1.8621 | 270 | 0.7534 | -0.0185 | 0.7534 | 0.8680 |
| No log | 1.8759 | 272 | 0.7483 | -0.0185 | 0.7483 | 0.8651 |
| No log | 1.8897 | 274 | 0.8482 | 0.2029 | 0.8482 | 0.9210 |
| No log | 1.9034 | 276 | 0.8476 | 0.2029 | 0.8476 | 0.9207 |
| No log | 1.9172 | 278 | 0.7575 | 0.0 | 0.7575 | 0.8703 |
| No log | 1.9310 | 280 | 0.7471 | 0.1270 | 0.7471 | 0.8644 |
| No log | 1.9448 | 282 | 0.7628 | 0.1270 | 0.7628 | 0.8734 |
| No log | 1.9586 | 284 | 0.7451 | 0.1270 | 0.7451 | 0.8632 |
| No log | 1.9724 | 286 | 0.7444 | 0.3433 | 0.7444 | 0.8628 |
| No log | 1.9862 | 288 | 0.7753 | 0.2080 | 0.7753 | 0.8805 |
| No log | 2.0 | 290 | 0.7421 | 0.2080 | 0.7421 | 0.8614 |
| No log | 2.0138 | 292 | 0.7299 | 0.2143 | 0.7299 | 0.8544 |
| No log | 2.0276 | 294 | 0.7200 | 0.0 | 0.7200 | 0.8486 |
| No log | 2.0414 | 296 | 0.7244 | 0.0 | 0.7244 | 0.8511 |
| No log | 2.0552 | 298 | 0.7613 | 0.0179 | 0.7613 | 0.8725 |
| No log | 2.0690 | 300 | 0.8788 | 0.2029 | 0.8788 | 0.9374 |
| No log | 2.0828 | 302 | 0.8615 | 0.2029 | 0.8615 | 0.9282 |
| No log | 2.0966 | 304 | 0.7672 | 0.0 | 0.7672 | 0.8759 |
| No log | 2.1103 | 306 | 0.7401 | 0.0 | 0.7401 | 0.8603 |
| No log | 2.1241 | 308 | 0.7582 | 0.0 | 0.7582 | 0.8708 |
| No log | 2.1379 | 310 | 0.7641 | 0.0 | 0.7641 | 0.8741 |
| No log | 2.1517 | 312 | 0.7679 | 0.0 | 0.7679 | 0.8763 |
| No log | 2.1655 | 314 | 0.9326 | 0.2029 | 0.9326 | 0.9657 |
| No log | 2.1793 | 316 | 1.0661 | 0.2029 | 1.0661 | 1.0325 |
| No log | 2.1931 | 318 | 1.0467 | 0.2029 | 1.0467 | 1.0231 |
| No log | 2.2069 | 320 | 0.9104 | 0.0320 | 0.9104 | 0.9542 |
| No log | 2.2207 | 322 | 0.8685 | 0.0179 | 0.8685 | 0.9320 |
| No log | 2.2345 | 324 | 0.8486 | 0.1538 | 0.8486 | 0.9212 |
| No log | 2.2483 | 326 | 0.8777 | 0.1270 | 0.8777 | 0.9369 |
| No log | 2.2621 | 328 | 0.9891 | 0.0435 | 0.9891 | 0.9946 |
| No log | 2.2759 | 330 | 1.1034 | -0.1808 | 1.1034 | 1.0504 |
| No log | 2.2897 | 332 | 1.1085 | 0.1987 | 1.1085 | 1.0528 |
| No log | 2.3034 | 334 | 1.0092 | 0.2584 | 1.0092 | 1.0046 |
| No log | 2.3172 | 336 | 0.8689 | 0.1270 | 0.8689 | 0.9322 |
| No log | 2.3310 | 338 | 0.8353 | 0.1270 | 0.8353 | 0.9139 |
| No log | 2.3448 | 340 | 0.8611 | 0.1270 | 0.8611 | 0.9280 |
| No log | 2.3586 | 342 | 0.8332 | 0.1538 | 0.8332 | 0.9128 |
| No log | 2.3724 | 344 | 0.7891 | -0.0185 | 0.7891 | 0.8883 |
| No log | 2.3862 | 346 | 0.8823 | 0.2080 | 0.8823 | 0.9393 |
| No log | 2.4 | 348 | 0.9395 | 0.2080 | 0.9395 | 0.9693 |
| No log | 2.4138 | 350 | 0.9209 | 0.2080 | 0.9209 | 0.9597 |
| No log | 2.4276 | 352 | 0.8515 | 0.0 | 0.8515 | 0.9227 |
| No log | 2.4414 | 354 | 0.8957 | 0.0 | 0.8957 | 0.9464 |
| No log | 2.4552 | 356 | 0.9649 | 0.2029 | 0.9649 | 0.9823 |
| No log | 2.4690 | 358 | 1.0001 | 0.0610 | 1.0001 | 1.0000 |
| No log | 2.4828 | 360 | 0.9548 | 0.0 | 0.9548 | 0.9772 |
| No log | 2.4966 | 362 | 0.9418 | -0.1493 | 0.9418 | 0.9705 |
| No log | 2.5103 | 364 | 0.9893 | 0.0610 | 0.9893 | 0.9946 |
| No log | 2.5241 | 366 | 1.0219 | 0.0610 | 1.0219 | 1.0109 |
| No log | 2.5379 | 368 | 1.1871 | 0.1921 | 1.1871 | 1.0895 |
| No log | 2.5517 | 370 | 1.2125 | 0.1921 | 1.2125 | 1.1011 |
| No log | 2.5655 | 372 | 1.0953 | 0.0610 | 1.0953 | 1.0466 |
| No log | 2.5793 | 374 | 0.9460 | 0.0530 | 0.9460 | 0.9726 |
| No log | 2.5931 | 376 | 0.8645 | 0.0179 | 0.8645 | 0.9298 |
| No log | 2.6069 | 378 | 0.8526 | 0.0179 | 0.8526 | 0.9234 |
| No log | 2.6207 | 380 | 0.8915 | 0.0320 | 0.8915 | 0.9442 |
| No log | 2.6345 | 382 | 0.8761 | 0.0320 | 0.8761 | 0.9360 |
| No log | 2.6483 | 384 | 0.8407 | 0.0179 | 0.8407 | 0.9169 |
| No log | 2.6621 | 386 | 0.8172 | 0.0 | 0.8172 | 0.9040 |
| No log | 2.6759 | 388 | 0.8118 | -0.0185 | 0.8118 | 0.9010 |
| No log | 2.6897 | 390 | 0.8103 | 0.0 | 0.8103 | 0.9002 |
| No log | 2.7034 | 392 | 0.8052 | 0.0 | 0.8052 | 0.8973 |
| No log | 2.7172 | 394 | 0.8037 | -0.0185 | 0.8037 | 0.8965 |
| No log | 2.7310 | 396 | 0.8075 | 0.0 | 0.8075 | 0.8986 |
| No log | 2.7448 | 398 | 0.8103 | 0.0149 | 0.8103 | 0.9002 |
| No log | 2.7586 | 400 | 0.7931 | 0.1538 | 0.7931 | 0.8905 |
| No log | 2.7724 | 402 | 0.8138 | 0.1270 | 0.8138 | 0.9021 |
| No log | 2.7862 | 404 | 0.7799 | 0.1538 | 0.7799 | 0.8831 |
| No log | 2.8 | 406 | 0.7632 | -0.0185 | 0.7632 | 0.8736 |
| No log | 2.8138 | 408 | 0.7631 | -0.0185 | 0.7631 | 0.8736 |
| No log | 2.8276 | 410 | 0.7823 | -0.0185 | 0.7823 | 0.8845 |
| No log | 2.8414 | 412 | 0.8210 | 0.1538 | 0.8210 | 0.9061 |
| No log | 2.8552 | 414 | 0.7917 | -0.0185 | 0.7917 | 0.8898 |
| No log | 2.8690 | 416 | 0.7705 | 0.0 | 0.7705 | 0.8778 |
| No log | 2.8828 | 418 | 0.7814 | 0.0 | 0.7814 | 0.8839 |
| No log | 2.8966 | 420 | 0.7809 | 0.0 | 0.7809 | 0.8837 |
| No log | 2.9103 | 422 | 0.7812 | 0.0 | 0.7812 | 0.8838 |
| No log | 2.9241 | 424 | 0.8022 | -0.0185 | 0.8022 | 0.8956 |
| No log | 2.9379 | 426 | 0.7877 | 0.0 | 0.7877 | 0.8875 |
| No log | 2.9517 | 428 | 0.7603 | 0.0 | 0.7603 | 0.8720 |
| No log | 2.9655 | 430 | 0.7605 | 0.0 | 0.7605 | 0.8721 |
| No log | 2.9793 | 432 | 0.7640 | 0.0 | 0.7640 | 0.8741 |
| No log | 2.9931 | 434 | 0.7500 | 0.0 | 0.7500 | 0.8660 |
| No log | 3.0069 | 436 | 0.7652 | 0.0 | 0.7652 | 0.8748 |
| No log | 3.0207 | 438 | 0.7688 | 0.0 | 0.7688 | 0.8768 |
| No log | 3.0345 | 440 | 0.7830 | 0.0 | 0.7830 | 0.8848 |
| No log | 3.0483 | 442 | 0.8788 | 0.0320 | 0.8788 | 0.9374 |
| No log | 3.0621 | 444 | 0.9149 | 0.0320 | 0.9149 | 0.9565 |
| No log | 3.0759 | 446 | 0.9018 | 0.0320 | 0.9018 | 0.9496 |
| No log | 3.0897 | 448 | 0.9284 | 0.0320 | 0.9284 | 0.9635 |
| No log | 3.1034 | 450 | 0.9529 | 0.0320 | 0.9529 | 0.9762 |
| No log | 3.1172 | 452 | 0.9590 | 0.0320 | 0.9590 | 0.9793 |
| No log | 3.1310 | 454 | 0.8779 | 0.0320 | 0.8779 | 0.9369 |
| No log | 3.1448 | 456 | 0.8619 | 0.0320 | 0.8619 | 0.9284 |
| No log | 3.1586 | 458 | 0.8329 | 0.0 | 0.8329 | 0.9126 |
| No log | 3.1724 | 460 | 0.8360 | 0.0 | 0.8360 | 0.9143 |
| No log | 3.1862 | 462 | 0.9018 | 0.0320 | 0.9018 | 0.9496 |
| No log | 3.2 | 464 | 1.0618 | 0.0610 | 1.0618 | 1.0304 |
| No log | 3.2138 | 466 | 1.2150 | 0.1921 | 1.2150 | 1.1023 |
| No log | 3.2276 | 468 | 1.1807 | 0.1921 | 1.1807 | 1.0866 |
| No log | 3.2414 | 470 | 1.0041 | 0.0610 | 1.0041 | 1.0020 |
| No log | 3.2552 | 472 | 0.8783 | 0.0179 | 0.8783 | 0.9372 |
| No log | 3.2690 | 474 | 0.8454 | 0.1270 | 0.8454 | 0.9195 |
| No log | 3.2828 | 476 | 0.8460 | 0.1270 | 0.8460 | 0.9198 |
| No log | 3.2966 | 478 | 0.8682 | -0.0185 | 0.8682 | 0.9318 |
| No log | 3.3103 | 480 | 0.9532 | 0.0610 | 0.9532 | 0.9763 |
| No log | 3.3241 | 482 | 1.0132 | 0.0610 | 1.0132 | 1.0066 |
| No log | 3.3379 | 484 | 0.9957 | 0.0610 | 0.9957 | 0.9978 |
| No log | 3.3517 | 486 | 0.8740 | 0.1791 | 0.8740 | 0.9349 |
| No log | 3.3655 | 488 | 0.8196 | 0.1270 | 0.8196 | 0.9053 |
| No log | 3.3793 | 490 | 0.8066 | -0.0185 | 0.8066 | 0.8981 |
| No log | 3.3931 | 492 | 0.8170 | 0.1818 | 0.8170 | 0.9039 |
| No log | 3.4069 | 494 | 0.7965 | 0.1818 | 0.7965 | 0.8924 |
| No log | 3.4207 | 496 | 0.7852 | 0.3231 | 0.7852 | 0.8861 |
| No log | 3.4345 | 498 | 0.7950 | 0.1791 | 0.7950 | 0.8916 |
| 0.4279 | 3.4483 | 500 | 0.8033 | 0.2949 | 0.8033 | 0.8963 |
| 0.4279 | 3.4621 | 502 | 0.7989 | 0.2667 | 0.7989 | 0.8938 |
| 0.4279 | 3.4759 | 504 | 0.7886 | 0.2763 | 0.7886 | 0.8880 |
| 0.4279 | 3.4897 | 506 | 0.7890 | 0.2949 | 0.7890 | 0.8882 |
| 0.4279 | 3.5034 | 508 | 0.7856 | 0.1769 | 0.7856 | 0.8864 |
| 0.4279 | 3.5172 | 510 | 0.8125 | 0.2029 | 0.8125 | 0.9014 |
| 0.4279 | 3.5310 | 512 | 0.8669 | 0.2029 | 0.8669 | 0.9311 |
| 0.4279 | 3.5448 | 514 | 0.9419 | 0.2029 | 0.9419 | 0.9705 |
| 0.4279 | 3.5586 | 516 | 1.0561 | 0.1987 | 1.0561 | 1.0276 |
| 0.4279 | 3.5724 | 518 | 1.0222 | 0.0530 | 1.0222 | 1.0110 |
| 0.4279 | 3.5862 | 520 | 0.9673 | 0.0530 | 0.9673 | 0.9835 |
| 0.4279 | 3.6 | 522 | 0.9547 | 0.0530 | 0.9547 | 0.9771 |
| 0.4279 | 3.6138 | 524 | 1.0174 | 0.0530 | 1.0174 | 1.0087 |
| 0.4279 | 3.6276 | 526 | 1.0821 | 0.0610 | 1.0821 | 1.0403 |
| 0.4279 | 3.6414 | 528 | 1.1288 | 0.0610 | 1.1288 | 1.0625 |
| 0.4279 | 3.6552 | 530 | 1.2216 | 0.1921 | 1.2216 | 1.1052 |
| 0.4279 | 3.6690 | 532 | 1.1583 | 0.0610 | 1.1583 | 1.0762 |
| 0.4279 | 3.6828 | 534 | 1.0739 | 0.0530 | 1.0739 | 1.0363 |
| 0.4279 | 3.6966 | 536 | 1.1053 | 0.0610 | 1.1053 | 1.0513 |
| 0.4279 | 3.7103 | 538 | 1.1206 | 0.0610 | 1.1206 | 1.0586 |
| 0.4279 | 3.7241 | 540 | 1.1629 | 0.0610 | 1.1629 | 1.0784 |
| 0.4279 | 3.7379 | 542 | 1.1923 | 0.1921 | 1.1923 | 1.0919 |
| 0.4279 | 3.7517 | 544 | 1.0950 | 0.0610 | 1.0950 | 1.0464 |
| 0.4279 | 3.7655 | 546 | 1.0142 | 0.0530 | 1.0142 | 1.0071 |
| 0.4279 | 3.7793 | 548 | 1.0166 | 0.0530 | 1.0166 | 1.0082 |
| 0.4279 | 3.7931 | 550 | 0.9908 | 0.0375 | 0.9908 | 0.9954 |
| 0.4279 | 3.8069 | 552 | 0.9837 | 0.0272 | 0.9837 | 0.9918 |
| 0.4279 | 3.8207 | 554 | 1.0083 | 0.0375 | 1.0083 | 1.0041 |
| 0.4279 | 3.8345 | 556 | 1.0294 | 0.0530 | 1.0294 | 1.0146 |
| 0.4279 | 3.8483 | 558 | 1.0268 | 0.0610 | 1.0268 | 1.0133 |
| 0.4279 | 3.8621 | 560 | 0.9950 | -0.1159 | 0.9950 | 0.9975 |
| 0.4279 | 3.8759 | 562 | 1.0077 | -0.1159 | 1.0077 | 1.0038 |
| 0.4279 | 3.8897 | 564 | 1.0652 | 0.0610 | 1.0652 | 1.0321 |
| 0.4279 | 3.9034 | 566 | 1.0442 | 0.0610 | 1.0442 | 1.0219 |
| 0.4279 | 3.9172 | 568 | 0.9749 | -0.1786 | 0.9749 | 0.9874 |
| 0.4279 | 3.9310 | 570 | 0.9621 | -0.1818 | 0.9621 | 0.9809 |
| 0.4279 | 3.9448 | 572 | 0.9697 | -0.1818 | 0.9697 | 0.9848 |
| 0.4279 | 3.9586 | 574 | 1.0112 | -0.1159 | 1.0112 | 1.0056 |
| 0.4279 | 3.9724 | 576 | 1.1413 | 0.0610 | 1.1413 | 1.0683 |
| 0.4279 | 3.9862 | 578 | 1.2202 | 0.0610 | 1.2202 | 1.1046 |
| 0.4279 | 4.0 | 580 | 1.1648 | 0.0610 | 1.1648 | 1.0792 |
| 0.4279 | 4.0138 | 582 | 1.0804 | 0.0610 | 1.0804 | 1.0394 |
| 0.4279 | 4.0276 | 584 | 1.0558 | 0.0610 | 1.0558 | 1.0275 |
| 0.4279 | 4.0414 | 586 | 1.0692 | 0.0610 | 1.0692 | 1.0340 |
| 0.4279 | 4.0552 | 588 | 1.0836 | 0.0610 | 1.0836 | 1.0410 |
| 0.4279 | 4.0690 | 590 | 1.0455 | 0.0610 | 1.0455 | 1.0225 |
| 0.4279 | 4.0828 | 592 | 1.0127 | 0.0375 | 1.0127 | 1.0063 |
| 0.4279 | 4.0966 | 594 | 1.0098 | 0.0375 | 1.0098 | 1.0049 |
| 0.4279 | 4.1103 | 596 | 1.0262 | 0.0375 | 1.0262 | 1.0130 |
| 0.4279 | 4.1241 | 598 | 0.9955 | 0.0375 | 0.9955 | 0.9977 |
| 0.4279 | 4.1379 | 600 | 0.9736 | 0.0375 | 0.9736 | 0.9867 |
| 0.4279 | 4.1517 | 602 | 0.9789 | 0.0530 | 0.9789 | 0.9894 |
| 0.4279 | 4.1655 | 604 | 0.9582 | 0.0530 | 0.9582 | 0.9789 |
| 0.4279 | 4.1793 | 606 | 0.9390 | 0.0530 | 0.9390 | 0.9690 |
| 0.4279 | 4.1931 | 608 | 0.9236 | 0.0530 | 0.9236 | 0.9610 |
| 0.4279 | 4.2069 | 610 | 0.9379 | 0.0530 | 0.9379 | 0.9685 |
| 0.4279 | 4.2207 | 612 | 0.9450 | 0.0530 | 0.9450 | 0.9721 |
| 0.4279 | 4.2345 | 614 | 0.8983 | 0.0435 | 0.8983 | 0.9478 |
| 0.4279 | 4.2483 | 616 | 0.8724 | -0.1440 | 0.8724 | 0.9340 |
| 0.4279 | 4.2621 | 618 | 0.8848 | 0.0435 | 0.8848 | 0.9406 |
| 0.4279 | 4.2759 | 620 | 0.9571 | 0.0435 | 0.9571 | 0.9783 |
| 0.4279 | 4.2897 | 622 | 1.0785 | 0.0610 | 1.0785 | 1.0385 |
| 0.4279 | 4.3034 | 624 | 1.1633 | 0.1921 | 1.1633 | 1.0786 |
| 0.4279 | 4.3172 | 626 | 1.2829 | 0.1921 | 1.2829 | 1.1327 |
| 0.4279 | 4.3310 | 628 | 1.2322 | 0.1921 | 1.2322 | 1.1101 |
| 0.4279 | 4.3448 | 630 | 1.0387 | 0.0530 | 1.0387 | 1.0192 |
| 0.4279 | 4.3586 | 632 | 0.9101 | 0.0 | 0.9101 | 0.9540 |
| 0.4279 | 4.3724 | 634 | 0.8971 | 0.0 | 0.8971 | 0.9471 |
| 0.4279 | 4.3862 | 636 | 0.9048 | 0.0 | 0.9048 | 0.9512 |
| 0.4279 | 4.4 | 638 | 0.9448 | 0.0179 | 0.9448 | 0.9720 |
| 0.4279 | 4.4138 | 640 | 1.0068 | 0.0435 | 1.0068 | 1.0034 |
| 0.4279 | 4.4276 | 642 | 1.2055 | -0.0565 | 1.2055 | 1.0979 |
| 0.4279 | 4.4414 | 644 | 1.3086 | 0.0737 | 1.3086 | 1.1440 |
| 0.4279 | 4.4552 | 646 | 1.2052 | 0.0610 | 1.2052 | 1.0978 |
| 0.4279 | 4.4690 | 648 | 1.0426 | -0.1493 | 1.0426 | 1.0211 |
| 0.4279 | 4.4828 | 650 | 1.0263 | -0.1493 | 1.0263 | 1.0130 |
| 0.4279 | 4.4966 | 652 | 1.0440 | -0.1493 | 1.0440 | 1.0218 |
| 0.4279 | 4.5103 | 654 | 1.1652 | 0.0610 | 1.1652 | 1.0794 |
| 0.4279 | 4.5241 | 656 | 1.1864 | 0.0610 | 1.1864 | 1.0892 |
| 0.4279 | 4.5379 | 658 | 1.1910 | 0.0610 | 1.1910 | 1.0913 |
| 0.4279 | 4.5517 | 660 | 1.1122 | 0.0530 | 1.1122 | 1.0546 |
| 0.4279 | 4.5655 | 662 | 1.0285 | -0.1440 | 1.0285 | 1.0142 |
| 0.4279 | 4.5793 | 664 | 1.0361 | -0.1440 | 1.0361 | 1.0179 |
| 0.4279 | 4.5931 | 666 | 1.0381 | -0.1440 | 1.0381 | 1.0189 |
| 0.4279 | 4.6069 | 668 | 1.0336 | -0.1440 | 1.0336 | 1.0167 |
| 0.4279 | 4.6207 | 670 | 1.0454 | -0.1440 | 1.0454 | 1.0224 |
| 0.4279 | 4.6345 | 672 | 1.1337 | 0.0610 | 1.1337 | 1.0648 |
| 0.4279 | 4.6483 | 674 | 1.2372 | 0.0610 | 1.2372 | 1.1123 |
| 0.4279 | 4.6621 | 676 | 1.1774 | 0.0610 | 1.1774 | 1.0851 |
| 0.4279 | 4.6759 | 678 | 1.0993 | 0.0610 | 1.0993 | 1.0485 |
| 0.4279 | 4.6897 | 680 | 1.0540 | 0.0530 | 1.0540 | 1.0266 |
| 0.4279 | 4.7034 | 682 | 1.0649 | 0.0530 | 1.0649 | 1.0320 |
| 0.4279 | 4.7172 | 684 | 1.1260 | 0.0610 | 1.1260 | 1.0611 |
| 0.4279 | 4.7310 | 686 | 1.1500 | 0.0610 | 1.1500 | 1.0724 |
| 0.4279 | 4.7448 | 688 | 1.1008 | 0.0610 | 1.1008 | 1.0492 |
| 0.4279 | 4.7586 | 690 | 1.0401 | -0.1440 | 1.0401 | 1.0199 |
| 0.4279 | 4.7724 | 692 | 1.0396 | -0.1440 | 1.0396 | 1.0196 |
| 0.4279 | 4.7862 | 694 | 1.0924 | 0.0610 | 1.0924 | 1.0452 |
| 0.4279 | 4.8 | 696 | 1.1834 | 0.0610 | 1.1834 | 1.0878 |
| 0.4279 | 4.8138 | 698 | 1.2486 | 0.1921 | 1.2486 | 1.1174 |
| 0.4279 | 4.8276 | 700 | 1.2306 | 0.1921 | 1.2306 | 1.1093 |
| 0.4279 | 4.8414 | 702 | 1.1124 | 0.0610 | 1.1124 | 1.0547 |
| 0.4279 | 4.8552 | 704 | 0.9866 | -0.1440 | 0.9866 | 0.9933 |
| 0.4279 | 4.8690 | 706 | 0.9543 | -0.1440 | 0.9543 | 0.9769 |
| 0.4279 | 4.8828 | 708 | 0.9494 | -0.1818 | 0.9494 | 0.9743 |
| 0.4279 | 4.8966 | 710 | 0.9468 | -0.1440 | 0.9468 | 0.9730 |
| 0.4279 | 4.9103 | 712 | 0.9562 | -0.1440 | 0.9562 | 0.9779 |
| 0.4279 | 4.9241 | 714 | 1.0045 | 0.0530 | 1.0045 | 1.0023 |
| 0.4279 | 4.9379 | 716 | 1.0262 | 0.0530 | 1.0262 | 1.0130 |
| 0.4279 | 4.9517 | 718 | 1.0023 | 0.0530 | 1.0023 | 1.0012 |
| 0.4279 | 4.9655 | 720 | 0.9611 | -0.1440 | 0.9611 | 0.9803 |
| 0.4279 | 4.9793 | 722 | 0.9324 | -0.1440 | 0.9324 | 0.9656 |
| 0.4279 | 4.9931 | 724 | 0.9350 | -0.1440 | 0.9350 | 0.9669 |
| 0.4279 | 5.0069 | 726 | 0.9487 | -0.1440 | 0.9487 | 0.9740 |
| 0.4279 | 5.0207 | 728 | 0.9748 | -0.1440 | 0.9748 | 0.9873 |
| 0.4279 | 5.0345 | 730 | 0.9894 | -0.1440 | 0.9894 | 0.9947 |
| 0.4279 | 5.0483 | 732 | 1.0042 | -0.1440 | 1.0042 | 1.0021 |
| 0.4279 | 5.0621 | 734 | 1.0204 | 0.0435 | 1.0204 | 1.0102 |
| 0.4279 | 5.0759 | 736 | 1.0062 | -0.1440 | 1.0062 | 1.0031 |
| 0.4279 | 5.0897 | 738 | 0.9826 | -0.1440 | 0.9826 | 0.9912 |
| 0.4279 | 5.1034 | 740 | 0.9786 | -0.1440 | 0.9786 | 0.9893 |
| 0.4279 | 5.1172 | 742 | 0.9762 | -0.1440 | 0.9762 | 0.9880 |
| 0.4279 | 5.1310 | 744 | 0.9975 | -0.1440 | 0.9975 | 0.9987 |
| 0.4279 | 5.1448 | 746 | 1.0379 | 0.0435 | 1.0379 | 1.0188 |
| 0.4279 | 5.1586 | 748 | 1.0334 | -0.1440 | 1.0334 | 1.0166 |
| 0.4279 | 5.1724 | 750 | 1.0010 | -0.1440 | 1.0010 | 1.0005 |
| 0.4279 | 5.1862 | 752 | 0.9827 | -0.1440 | 0.9827 | 0.9913 |
| 0.4279 | 5.2 | 754 | 0.9917 | -0.1440 | 0.9917 | 0.9958 |
| 0.4279 | 5.2138 | 756 | 1.0245 | -0.1786 | 1.0245 | 1.0122 |
| 0.4279 | 5.2276 | 758 | 1.0635 | -0.1818 | 1.0635 | 1.0313 |
| 0.4279 | 5.2414 | 760 | 1.0781 | -0.1818 | 1.0781 | 1.0383 |
| 0.4279 | 5.2552 | 762 | 1.0971 | -0.1493 | 1.0971 | 1.0474 |
| 0.4279 | 5.2690 | 764 | 1.1353 | -0.0927 | 1.1353 | 1.0655 |
| 0.4279 | 5.2828 | 766 | 1.1191 | 0.0610 | 1.1191 | 1.0579 |
| 0.4279 | 5.2966 | 768 | 1.0578 | -0.1493 | 1.0578 | 1.0285 |
| 0.4279 | 5.3103 | 770 | 1.0286 | -0.1493 | 1.0286 | 1.0142 |
| 0.4279 | 5.3241 | 772 | 1.0394 | -0.1818 | 1.0394 | 1.0195 |
| 0.4279 | 5.3379 | 774 | 1.0420 | -0.1818 | 1.0420 | 1.0208 |
| 0.4279 | 5.3517 | 776 | 1.0448 | -0.1818 | 1.0448 | 1.0222 |
| 0.4279 | 5.3655 | 778 | 1.0291 | -0.1440 | 1.0291 | 1.0144 |
| 0.4279 | 5.3793 | 780 | 1.0268 | -0.1440 | 1.0268 | 1.0133 |
| 0.4279 | 5.3931 | 782 | 1.0151 | -0.1440 | 1.0151 | 1.0075 |
| 0.4279 | 5.4069 | 784 | 1.0024 | -0.1440 | 1.0024 | 1.0012 |
| 0.4279 | 5.4207 | 786 | 1.0041 | -0.1786 | 1.0041 | 1.0020 |
| 0.4279 | 5.4345 | 788 | 1.0093 | -0.1786 | 1.0093 | 1.0047 |
| 0.4279 | 5.4483 | 790 | 1.0284 | -0.1440 | 1.0284 | 1.0141 |
| 0.4279 | 5.4621 | 792 | 1.0376 | -0.1440 | 1.0376 | 1.0186 |
| 0.4279 | 5.4759 | 794 | 1.0424 | -0.1440 | 1.0424 | 1.0210 |
| 0.4279 | 5.4897 | 796 | 1.1031 | -0.1159 | 1.1031 | 1.0503 |
| 0.4279 | 5.5034 | 798 | 1.1558 | 0.0610 | 1.1558 | 1.0751 |
| 0.4279 | 5.5172 | 800 | 1.2270 | 0.0610 | 1.2270 | 1.1077 |
| 0.4279 | 5.5310 | 802 | 1.3159 | 0.0737 | 1.3159 | 1.1471 |
| 0.4279 | 5.5448 | 804 | 1.3091 | -0.0565 | 1.3091 | 1.1441 |
| 0.4279 | 5.5586 | 806 | 1.2158 | 0.0610 | 1.2158 | 1.1027 |
| 0.4279 | 5.5724 | 808 | 1.1623 | 0.0610 | 1.1623 | 1.0781 |
| 0.4279 | 5.5862 | 810 | 1.1674 | 0.0610 | 1.1674 | 1.0804 |
| 0.4279 | 5.6 | 812 | 1.2021 | 0.0610 | 1.2021 | 1.0964 |
| 0.4279 | 5.6138 | 814 | 1.2120 | 0.0610 | 1.2120 | 1.1009 |
| 0.4279 | 5.6276 | 816 | 1.1683 | 0.0610 | 1.1683 | 1.0809 |
| 0.4279 | 5.6414 | 818 | 1.1050 | -0.1159 | 1.1050 | 1.0512 |
| 0.4279 | 5.6552 | 820 | 1.1130 | -0.1493 | 1.1130 | 1.0550 |
| 0.4279 | 5.6690 | 822 | 1.1048 | -0.1538 | 1.1048 | 1.0511 |
| 0.4279 | 5.6828 | 824 | 1.0620 | -0.1493 | 1.0620 | 1.0305 |
| 0.4279 | 5.6966 | 826 | 1.0192 | -0.1440 | 1.0192 | 1.0095 |
| 0.4279 | 5.7103 | 828 | 0.9802 | -0.1440 | 0.9802 | 0.9901 |
| 0.4279 | 5.7241 | 830 | 0.9519 | -0.1440 | 0.9519 | 0.9756 |
| 0.4279 | 5.7379 | 832 | 0.9346 | -0.1440 | 0.9346 | 0.9668 |
| 0.4279 | 5.7517 | 834 | 0.9456 | -0.1440 | 0.9456 | 0.9724 |
| 0.4279 | 5.7655 | 836 | 0.9905 | 0.0435 | 0.9905 | 0.9953 |
| 0.4279 | 5.7793 | 838 | 0.9936 | -0.1440 | 0.9936 | 0.9968 |
| 0.4279 | 5.7931 | 840 | 0.9647 | -0.1440 | 0.9647 | 0.9822 |
| 0.4279 | 5.8069 | 842 | 0.9600 | -0.1786 | 0.9600 | 0.9798 |
| 0.4279 | 5.8207 | 844 | 0.9657 | -0.1786 | 0.9657 | 0.9827 |
| 0.4279 | 5.8345 | 846 | 0.9671 | -0.1786 | 0.9671 | 0.9834 |
| 0.4279 | 5.8483 | 848 | 0.9569 | -0.1786 | 0.9569 | 0.9782 |
| 0.4279 | 5.8621 | 850 | 0.9633 | -0.1786 | 0.9633 | 0.9815 |
| 0.4279 | 5.8759 | 852 | 1.0174 | 0.0435 | 1.0174 | 1.0086 |
| 0.4279 | 5.8897 | 854 | 1.0677 | 0.0530 | 1.0677 | 1.0333 |
| 0.4279 | 5.9034 | 856 | 1.1105 | 0.0530 | 1.1105 | 1.0538 |
| 0.4279 | 5.9172 | 858 | 1.1009 | 0.0530 | 1.1009 | 1.0492 |
| 0.4279 | 5.9310 | 860 | 1.0843 | 0.0530 | 1.0843 | 1.0413 |
| 0.4279 | 5.9448 | 862 | 1.0668 | 0.0435 | 1.0668 | 1.0329 |
| 0.4279 | 5.9586 | 864 | 1.0377 | -0.1440 | 1.0377 | 1.0187 |
| 0.4279 | 5.9724 | 866 | 1.0193 | -0.1440 | 1.0193 | 1.0096 |
| 0.4279 | 5.9862 | 868 | 1.0270 | 0.0435 | 1.0270 | 1.0134 |
| 0.4279 | 6.0 | 870 | 1.0947 | 0.0530 | 1.0947 | 1.0463 |
| 0.4279 | 6.0138 | 872 | 1.2172 | 0.0610 | 1.2172 | 1.1033 |
| 0.4279 | 6.0276 | 874 | 1.3239 | 0.1921 | 1.3239 | 1.1506 |
| 0.4279 | 6.0414 | 876 | 1.3344 | 0.1921 | 1.3344 | 1.1552 |
| 0.4279 | 6.0552 | 878 | 1.2525 | 0.0610 | 1.2525 | 1.1191 |
| 0.4279 | 6.0690 | 880 | 1.1448 | 0.0610 | 1.1448 | 1.0700 |
| 0.4279 | 6.0828 | 882 | 1.0660 | -0.1224 | 1.0660 | 1.0325 |
| 0.4279 | 6.0966 | 884 | 1.0663 | -0.1818 | 1.0663 | 1.0326 |
| 0.4279 | 6.1103 | 886 | 1.0706 | -0.1818 | 1.0706 | 1.0347 |
| 0.4279 | 6.1241 | 888 | 1.0622 | -0.1818 | 1.0622 | 1.0306 |
| 0.4279 | 6.1379 | 890 | 1.0718 | -0.1224 | 1.0718 | 1.0353 |
| 0.4279 | 6.1517 | 892 | 1.1340 | 0.0610 | 1.1340 | 1.0649 |
| 0.4279 | 6.1655 | 894 | 1.2181 | 0.0610 | 1.2181 | 1.1037 |
| 0.4279 | 6.1793 | 896 | 1.2489 | 0.0610 | 1.2489 | 1.1175 |
| 0.4279 | 6.1931 | 898 | 1.2195 | 0.0610 | 1.2195 | 1.1043 |
| 0.4279 | 6.2069 | 900 | 1.1229 | 0.0610 | 1.1229 | 1.0597 |
| 0.4279 | 6.2207 | 902 | 1.0686 | -0.1159 | 1.0686 | 1.0337 |
| 0.4279 | 6.2345 | 904 | 1.0403 | -0.1440 | 1.0403 | 1.0199 |
| 0.4279 | 6.2483 | 906 | 1.0342 | -0.1786 | 1.0342 | 1.0169 |
| 0.4279 | 6.2621 | 908 | 1.0386 | -0.1440 | 1.0386 | 1.0191 |
| 0.4279 | 6.2759 | 910 | 1.0331 | -0.1440 | 1.0331 | 1.0164 |
| 0.4279 | 6.2897 | 912 | 1.0286 | -0.1440 | 1.0286 | 1.0142 |
| 0.4279 | 6.3034 | 914 | 1.0228 | -0.1440 | 1.0228 | 1.0114 |
| 0.4279 | 6.3172 | 916 | 1.0056 | -0.1440 | 1.0056 | 1.0028 |
| 0.4279 | 6.3310 | 918 | 1.0032 | 0.0435 | 1.0032 | 1.0016 |
| 0.4279 | 6.3448 | 920 | 1.0345 | 0.0530 | 1.0345 | 1.0171 |
| 0.4279 | 6.3586 | 922 | 1.0439 | 0.0530 | 1.0439 | 1.0217 |
| 0.4279 | 6.3724 | 924 | 1.0241 | 0.0530 | 1.0241 | 1.0120 |
| 0.4279 | 6.3862 | 926 | 1.0174 | 0.0435 | 1.0174 | 1.0086 |
| 0.4279 | 6.4 | 928 | 0.9842 | 0.0435 | 0.9842 | 0.9921 |
| 0.4279 | 6.4138 | 930 | 0.9827 | -0.1818 | 0.9827 | 0.9913 |
| 0.4279 | 6.4276 | 932 | 0.9984 | -0.1818 | 0.9984 | 0.9992 |
| 0.4279 | 6.4414 | 934 | 0.9969 | -0.1818 | 0.9969 | 0.9985 |
| 0.4279 | 6.4552 | 936 | 0.9949 | -0.1818 | 0.9949 | 0.9975 |
| 0.4279 | 6.4690 | 938 | 1.0179 | 0.0272 | 1.0179 | 1.0089 |
| 0.4279 | 6.4828 | 940 | 1.1051 | 0.0610 | 1.1051 | 1.0513 |
| 0.4279 | 6.4966 | 942 | 1.1573 | 0.0610 | 1.1573 | 1.0758 |
| 0.4279 | 6.5103 | 944 | 1.1640 | 0.0610 | 1.1640 | 1.0789 |
| 0.4279 | 6.5241 | 946 | 1.1388 | 0.0610 | 1.1388 | 1.0671 |
| 0.4279 | 6.5379 | 948 | 1.1299 | 0.0610 | 1.1299 | 1.0630 |
| 0.4279 | 6.5517 | 950 | 1.0863 | 0.0610 | 1.0863 | 1.0423 |
| 0.4279 | 6.5655 | 952 | 1.0630 | 0.0435 | 1.0630 | 1.0310 |
| 0.4279 | 6.5793 | 954 | 1.0679 | 0.0530 | 1.0679 | 1.0334 |
| 0.4279 | 6.5931 | 956 | 1.0580 | 0.0530 | 1.0580 | 1.0286 |
| 0.4279 | 6.6069 | 958 | 1.0535 | 0.0530 | 1.0535 | 1.0264 |
| 0.4279 | 6.6207 | 960 | 1.0390 | -0.1440 | 1.0390 | 1.0193 |
| 0.4279 | 6.6345 | 962 | 1.0608 | -0.1493 | 1.0608 | 1.0299 |
| 0.4279 | 6.6483 | 964 | 1.0795 | -0.1493 | 1.0795 | 1.0390 |
| 0.4279 | 6.6621 | 966 | 1.0476 | -0.1493 | 1.0476 | 1.0235 |
| 0.4279 | 6.6759 | 968 | 1.0245 | -0.1440 | 1.0245 | 1.0122 |
| 0.4279 | 6.6897 | 970 | 1.0246 | -0.1440 | 1.0246 | 1.0122 |
| 0.4279 | 6.7034 | 972 | 1.0457 | 0.0530 | 1.0457 | 1.0226 |
| 0.4279 | 6.7172 | 974 | 1.1087 | 0.0610 | 1.1087 | 1.0530 |
| 0.4279 | 6.7310 | 976 | 1.1413 | 0.0610 | 1.1413 | 1.0683 |
| 0.4279 | 6.7448 | 978 | 1.1621 | 0.0610 | 1.1621 | 1.0780 |
| 0.4279 | 6.7586 | 980 | 1.1025 | 0.0610 | 1.1025 | 1.0500 |
| 0.4279 | 6.7724 | 982 | 1.0438 | -0.1159 | 1.0438 | 1.0216 |
| 0.4279 | 6.7862 | 984 | 1.0376 | -0.1440 | 1.0376 | 1.0186 |
| 0.4279 | 6.8 | 986 | 1.0707 | -0.1493 | 1.0707 | 1.0348 |
| 0.4279 | 6.8138 | 988 | 1.0874 | -0.1493 | 1.0874 | 1.0428 |
| 0.4279 | 6.8276 | 990 | 1.0876 | -0.1440 | 1.0876 | 1.0429 |
| 0.4279 | 6.8414 | 992 | 1.0842 | -0.1440 | 1.0842 | 1.0412 |
| 0.4279 | 6.8552 | 994 | 1.0908 | 0.0530 | 1.0908 | 1.0444 |
| 0.4279 | 6.8690 | 996 | 1.0923 | 0.0530 | 1.0923 | 1.0451 |
| 0.4279 | 6.8828 | 998 | 1.0932 | 0.0530 | 1.0932 | 1.0455 |
| 0.0807 | 6.8966 | 1000 | 1.0831 | -0.1440 | 1.0831 | 1.0407 |
| 0.0807 | 6.9103 | 1002 | 1.0721 | -0.1493 | 1.0721 | 1.0354 |
| 0.0807 | 6.9241 | 1004 | 1.0762 | -0.1493 | 1.0762 | 1.0374 |
| 0.0807 | 6.9379 | 1006 | 1.0732 | -0.1493 | 1.0732 | 1.0359 |
| 0.0807 | 6.9517 | 1008 | 1.0457 | -0.1493 | 1.0457 | 1.0226 |
| 0.0807 | 6.9655 | 1010 | 1.0240 | -0.1440 | 1.0240 | 1.0119 |
| 0.0807 | 6.9793 | 1012 | 1.0134 | -0.1440 | 1.0134 | 1.0067 |
| 0.0807 | 6.9931 | 1014 | 1.0105 | -0.1440 | 1.0105 | 1.0052 |
| 0.0807 | 7.0069 | 1016 | 0.9944 | -0.1440 | 0.9944 | 0.9972 |
| 0.0807 | 7.0207 | 1018 | 0.9817 | -0.1440 | 0.9817 | 0.9908 |
| 0.0807 | 7.0345 | 1020 | 0.9850 | -0.1440 | 0.9850 | 0.9925 |
| 0.0807 | 7.0483 | 1022 | 0.9967 | -0.1440 | 0.9967 | 0.9984 |
| 0.0807 | 7.0621 | 1024 | 1.0125 | -0.1440 | 1.0125 | 1.0062 |
| 0.0807 | 7.0759 | 1026 | 1.0373 | 0.0435 | 1.0373 | 1.0185 |
| 0.0807 | 7.0897 | 1028 | 1.0423 | 0.0435 | 1.0423 | 1.0209 |
| 0.0807 | 7.1034 | 1030 | 1.0392 | -0.1440 | 1.0392 | 1.0194 |
| 0.0807 | 7.1172 | 1032 | 1.0450 | -0.1440 | 1.0450 | 1.0222 |
| 0.0807 | 7.1310 | 1034 | 1.0501 | -0.1440 | 1.0501 | 1.0247 |
| 0.0807 | 7.1448 | 1036 | 1.0538 | -0.1440 | 1.0538 | 1.0266 |
| 0.0807 | 7.1586 | 1038 | 1.0645 | -0.1493 | 1.0645 | 1.0318 |
| 0.0807 | 7.1724 | 1040 | 1.0757 | -0.1493 | 1.0757 | 1.0371 |
| 0.0807 | 7.1862 | 1042 | 1.0844 | -0.1440 | 1.0844 | 1.0414 |
| 0.0807 | 7.2 | 1044 | 1.0786 | -0.1440 | 1.0786 | 1.0386 |
| 0.0807 | 7.2138 | 1046 | 1.0795 | -0.1440 | 1.0795 | 1.0390 |
| 0.0807 | 7.2276 | 1048 | 1.0947 | -0.1159 | 1.0947 | 1.0463 |
| 0.0807 | 7.2414 | 1050 | 1.1047 | -0.1159 | 1.1047 | 1.0510 |
| 0.0807 | 7.2552 | 1052 | 1.1191 | -0.1159 | 1.1191 | 1.0579 |
| 0.0807 | 7.2690 | 1054 | 1.1254 | -0.1159 | 1.1254 | 1.0609 |
| 0.0807 | 7.2828 | 1056 | 1.1163 | -0.1159 | 1.1163 | 1.0565 |
| 0.0807 | 7.2966 | 1058 | 1.1063 | -0.1159 | 1.1063 | 1.0518 |
| 0.0807 | 7.3103 | 1060 | 1.0800 | -0.1440 | 1.0800 | 1.0393 |
| 0.0807 | 7.3241 | 1062 | 1.0765 | -0.1440 | 1.0765 | 1.0375 |
| 0.0807 | 7.3379 | 1064 | 1.0840 | -0.1440 | 1.0840 | 1.0412 |
| 0.0807 | 7.3517 | 1066 | 1.0890 | -0.1440 | 1.0890 | 1.0435 |
| 0.0807 | 7.3655 | 1068 | 1.0944 | -0.1440 | 1.0944 | 1.0461 |
| 0.0807 | 7.3793 | 1070 | 1.0955 | -0.1440 | 1.0955 | 1.0467 |
| 0.0807 | 7.3931 | 1072 | 1.1013 | -0.1440 | 1.1013 | 1.0494 |
| 0.0807 | 7.4069 | 1074 | 1.1038 | -0.1159 | 1.1038 | 1.0506 |
| 0.0807 | 7.4207 | 1076 | 1.1031 | -0.1159 | 1.1031 | 1.0503 |
| 0.0807 | 7.4345 | 1078 | 1.1161 | -0.1159 | 1.1161 | 1.0565 |
| 0.0807 | 7.4483 | 1080 | 1.1348 | -0.0927 | 1.1348 | 1.0653 |
| 0.0807 | 7.4621 | 1082 | 1.1579 | 0.0610 | 1.1579 | 1.0760 |
| 0.0807 | 7.4759 | 1084 | 1.1706 | 0.0610 | 1.1706 | 1.0819 |
| 0.0807 | 7.4897 | 1086 | 1.1588 | 0.0610 | 1.1588 | 1.0765 |
| 0.0807 | 7.5034 | 1088 | 1.1294 | 0.0610 | 1.1294 | 1.0628 |
| 0.0807 | 7.5172 | 1090 | 1.1176 | -0.0927 | 1.1176 | 1.0572 |
| 0.0807 | 7.5310 | 1092 | 1.0846 | -0.1159 | 1.0846 | 1.0414 |
| 0.0807 | 7.5448 | 1094 | 1.0641 | -0.1159 | 1.0641 | 1.0316 |
| 0.0807 | 7.5586 | 1096 | 1.0542 | -0.1440 | 1.0542 | 1.0268 |
| 0.0807 | 7.5724 | 1098 | 1.0698 | -0.1159 | 1.0698 | 1.0343 |
| 0.0807 | 7.5862 | 1100 | 1.1147 | -0.0927 | 1.1147 | 1.0558 |
| 0.0807 | 7.6 | 1102 | 1.1684 | 0.0610 | 1.1684 | 1.0809 |
| 0.0807 | 7.6138 | 1104 | 1.1939 | 0.0610 | 1.1939 | 1.0926 |
| 0.0807 | 7.6276 | 1106 | 1.1863 | 0.0610 | 1.1863 | 1.0892 |
| 0.0807 | 7.6414 | 1108 | 1.1653 | 0.0610 | 1.1653 | 1.0795 |
| 0.0807 | 7.6552 | 1110 | 1.1337 | 0.0610 | 1.1337 | 1.0648 |
| 0.0807 | 7.6690 | 1112 | 1.0856 | -0.1159 | 1.0856 | 1.0419 |
| 0.0807 | 7.6828 | 1114 | 1.0694 | -0.1440 | 1.0694 | 1.0341 |
| 0.0807 | 7.6966 | 1116 | 1.0740 | -0.1440 | 1.0740 | 1.0363 |
| 0.0807 | 7.7103 | 1118 | 1.0852 | -0.1440 | 1.0852 | 1.0417 |
| 0.0807 | 7.7241 | 1120 | 1.1062 | -0.1159 | 1.1062 | 1.0518 |
| 0.0807 | 7.7379 | 1122 | 1.1509 | -0.0927 | 1.1509 | 1.0728 |
| 0.0807 | 7.7517 | 1124 | 1.1957 | 0.0610 | 1.1957 | 1.0935 |
| 0.0807 | 7.7655 | 1126 | 1.2182 | 0.0610 | 1.2182 | 1.1037 |
| 0.0807 | 7.7793 | 1128 | 1.2071 | 0.0610 | 1.2071 | 1.0987 |
| 0.0807 | 7.7931 | 1130 | 1.1667 | -0.0927 | 1.1667 | 1.0802 |
| 0.0807 | 7.8069 | 1132 | 1.1306 | -0.0927 | 1.1306 | 1.0633 |
| 0.0807 | 7.8207 | 1134 | 1.1191 | -0.1159 | 1.1191 | 1.0579 |
| 0.0807 | 7.8345 | 1136 | 1.1074 | -0.1159 | 1.1074 | 1.0523 |
| 0.0807 | 7.8483 | 1138 | 1.1071 | -0.1159 | 1.1071 | 1.0522 |
| 0.0807 | 7.8621 | 1140 | 1.0970 | -0.1159 | 1.0970 | 1.0474 |
| 0.0807 | 7.8759 | 1142 | 1.0945 | -0.1159 | 1.0945 | 1.0462 |
| 0.0807 | 7.8897 | 1144 | 1.1025 | -0.1159 | 1.1025 | 1.0500 |
| 0.0807 | 7.9034 | 1146 | 1.1058 | -0.1159 | 1.1058 | 1.0516 |
| 0.0807 | 7.9172 | 1148 | 1.1227 | -0.1159 | 1.1227 | 1.0596 |
| 0.0807 | 7.9310 | 1150 | 1.1403 | -0.0927 | 1.1403 | 1.0679 |
| 0.0807 | 7.9448 | 1152 | 1.1669 | -0.0927 | 1.1669 | 1.0802 |
| 0.0807 | 7.9586 | 1154 | 1.1849 | 0.0610 | 1.1849 | 1.0886 |
| 0.0807 | 7.9724 | 1156 | 1.1777 | -0.0927 | 1.1777 | 1.0852 |
| 0.0807 | 7.9862 | 1158 | 1.1583 | -0.0927 | 1.1583 | 1.0763 |
| 0.0807 | 8.0 | 1160 | 1.1464 | -0.0927 | 1.1464 | 1.0707 |
| 0.0807 | 8.0138 | 1162 | 1.1386 | -0.1159 | 1.1386 | 1.0671 |
| 0.0807 | 8.0276 | 1164 | 1.1489 | -0.0927 | 1.1489 | 1.0719 |
| 0.0807 | 8.0414 | 1166 | 1.1721 | -0.0927 | 1.1721 | 1.0826 |
| 0.0807 | 8.0552 | 1168 | 1.2001 | 0.0610 | 1.2001 | 1.0955 |
| 0.0807 | 8.0690 | 1170 | 1.2277 | 0.0610 | 1.2277 | 1.1080 |
| 0.0807 | 8.0828 | 1172 | 1.2333 | 0.0610 | 1.2333 | 1.1106 |
| 0.0807 | 8.0966 | 1174 | 1.2172 | 0.0610 | 1.2172 | 1.1033 |
| 0.0807 | 8.1103 | 1176 | 1.1787 | 0.0610 | 1.1787 | 1.0857 |
| 0.0807 | 8.1241 | 1178 | 1.1523 | -0.0927 | 1.1523 | 1.0735 |
| 0.0807 | 8.1379 | 1180 | 1.1445 | -0.0927 | 1.1445 | 1.0698 |
| 0.0807 | 8.1517 | 1182 | 1.1265 | -0.0927 | 1.1265 | 1.0614 |
| 0.0807 | 8.1655 | 1184 | 1.1150 | -0.1159 | 1.1150 | 1.0559 |
| 0.0807 | 8.1793 | 1186 | 1.1089 | -0.1159 | 1.1089 | 1.0530 |
| 0.0807 | 8.1931 | 1188 | 1.1043 | -0.1159 | 1.1043 | 1.0509 |
| 0.0807 | 8.2069 | 1190 | 1.1049 | -0.1159 | 1.1049 | 1.0511 |
| 0.0807 | 8.2207 | 1192 | 1.1070 | -0.1159 | 1.1070 | 1.0522 |
| 0.0807 | 8.2345 | 1194 | 1.1087 | -0.1159 | 1.1087 | 1.0529 |
| 0.0807 | 8.2483 | 1196 | 1.1233 | -0.0927 | 1.1233 | 1.0599 |
| 0.0807 | 8.2621 | 1198 | 1.1496 | 0.0610 | 1.1496 | 1.0722 |
| 0.0807 | 8.2759 | 1200 | 1.1541 | 0.0610 | 1.1541 | 1.0743 |
| 0.0807 | 8.2897 | 1202 | 1.1734 | 0.0610 | 1.1734 | 1.0832 |
| 0.0807 | 8.3034 | 1204 | 1.1827 | 0.0610 | 1.1827 | 1.0875 |
| 0.0807 | 8.3172 | 1206 | 1.1995 | 0.0610 | 1.1995 | 1.0952 |
| 0.0807 | 8.3310 | 1208 | 1.1992 | 0.0610 | 1.1992 | 1.0951 |
| 0.0807 | 8.3448 | 1210 | 1.1980 | 0.0610 | 1.1980 | 1.0945 |
| 0.0807 | 8.3586 | 1212 | 1.1879 | 0.0610 | 1.1879 | 1.0899 |
| 0.0807 | 8.3724 | 1214 | 1.1620 | 0.0610 | 1.1620 | 1.0779 |
| 0.0807 | 8.3862 | 1216 | 1.1350 | -0.0927 | 1.1350 | 1.0654 |
| 0.0807 | 8.4 | 1218 | 1.1349 | -0.0927 | 1.1349 | 1.0653 |
| 0.0807 | 8.4138 | 1220 | 1.1253 | -0.0927 | 1.1253 | 1.0608 |
| 0.0807 | 8.4276 | 1222 | 1.1259 | -0.0927 | 1.1259 | 1.0611 |
| 0.0807 | 8.4414 | 1224 | 1.1374 | -0.0927 | 1.1374 | 1.0665 |
| 0.0807 | 8.4552 | 1226 | 1.1542 | -0.0927 | 1.1542 | 1.0743 |
| 0.0807 | 8.4690 | 1228 | 1.1801 | -0.0927 | 1.1801 | 1.0863 |
| 0.0807 | 8.4828 | 1230 | 1.2021 | 0.0610 | 1.2021 | 1.0964 |
| 0.0807 | 8.4966 | 1232 | 1.2145 | 0.0610 | 1.2145 | 1.1020 |
| 0.0807 | 8.5103 | 1234 | 1.2147 | 0.0610 | 1.2147 | 1.1021 |
| 0.0807 | 8.5241 | 1236 | 1.2030 | -0.0927 | 1.2030 | 1.0968 |
| 0.0807 | 8.5379 | 1238 | 1.1971 | -0.0927 | 1.1971 | 1.0941 |
| 0.0807 | 8.5517 | 1240 | 1.1930 | -0.0927 | 1.1930 | 1.0923 |
| 0.0807 | 8.5655 | 1242 | 1.1858 | -0.0927 | 1.1858 | 1.0890 |
| 0.0807 | 8.5793 | 1244 | 1.1834 | -0.0927 | 1.1834 | 1.0879 |
| 0.0807 | 8.5931 | 1246 | 1.1862 | -0.0927 | 1.1862 | 1.0891 |
| 0.0807 | 8.6069 | 1248 | 1.2022 | 0.0610 | 1.2022 | 1.0965 |
| 0.0807 | 8.6207 | 1250 | 1.2124 | 0.0610 | 1.2124 | 1.1011 |
| 0.0807 | 8.6345 | 1252 | 1.2074 | 0.0610 | 1.2074 | 1.0988 |
| 0.0807 | 8.6483 | 1254 | 1.1765 | -0.0927 | 1.1765 | 1.0847 |
| 0.0807 | 8.6621 | 1256 | 1.1527 | -0.0927 | 1.1527 | 1.0736 |
| 0.0807 | 8.6759 | 1258 | 1.1468 | -0.0927 | 1.1468 | 1.0709 |
| 0.0807 | 8.6897 | 1260 | 1.1470 | -0.0927 | 1.1470 | 1.0710 |
| 0.0807 | 8.7034 | 1262 | 1.1664 | -0.0927 | 1.1664 | 1.0800 |
| 0.0807 | 8.7172 | 1264 | 1.1891 | 0.0610 | 1.1891 | 1.0905 |
| 0.0807 | 8.7310 | 1266 | 1.2036 | 0.0610 | 1.2036 | 1.0971 |
| 0.0807 | 8.7448 | 1268 | 1.2067 | 0.0610 | 1.2067 | 1.0985 |
| 0.0807 | 8.7586 | 1270 | 1.2270 | 0.0610 | 1.2270 | 1.1077 |
| 0.0807 | 8.7724 | 1272 | 1.2446 | 0.0610 | 1.2446 | 1.1156 |
| 0.0807 | 8.7862 | 1274 | 1.2811 | -0.0565 | 1.2811 | 1.1319 |
| 0.0807 | 8.8 | 1276 | 1.3301 | -0.0565 | 1.3301 | 1.1533 |
| 0.0807 | 8.8138 | 1278 | 1.3543 | -0.0565 | 1.3543 | 1.1638 |
| 0.0807 | 8.8276 | 1280 | 1.3651 | -0.0565 | 1.3651 | 1.1684 |
| 0.0807 | 8.8414 | 1282 | 1.3519 | -0.0565 | 1.3519 | 1.1627 |
| 0.0807 | 8.8552 | 1284 | 1.3356 | -0.0565 | 1.3356 | 1.1557 |
| 0.0807 | 8.8690 | 1286 | 1.3055 | -0.0565 | 1.3055 | 1.1426 |
| 0.0807 | 8.8828 | 1288 | 1.2773 | -0.0565 | 1.2773 | 1.1302 |
| 0.0807 | 8.8966 | 1290 | 1.2399 | 0.0610 | 1.2399 | 1.1135 |
| 0.0807 | 8.9103 | 1292 | 1.2003 | -0.0927 | 1.2003 | 1.0956 |
| 0.0807 | 8.9241 | 1294 | 1.1766 | -0.0927 | 1.1766 | 1.0847 |
| 0.0807 | 8.9379 | 1296 | 1.1574 | -0.1440 | 1.1574 | 1.0758 |
| 0.0807 | 8.9517 | 1298 | 1.1473 | -0.1440 | 1.1473 | 1.0711 |
| 0.0807 | 8.9655 | 1300 | 1.1457 | -0.1440 | 1.1457 | 1.0704 |
| 0.0807 | 8.9793 | 1302 | 1.1523 | -0.1159 | 1.1523 | 1.0735 |
| 0.0807 | 8.9931 | 1304 | 1.1633 | -0.0927 | 1.1633 | 1.0786 |
| 0.0807 | 9.0069 | 1306 | 1.1803 | -0.0927 | 1.1803 | 1.0864 |
| 0.0807 | 9.0207 | 1308 | 1.2019 | 0.0610 | 1.2019 | 1.0963 |
| 0.0807 | 9.0345 | 1310 | 1.2253 | 0.0610 | 1.2253 | 1.1070 |
| 0.0807 | 9.0483 | 1312 | 1.2489 | -0.0565 | 1.2489 | 1.1175 |
| 0.0807 | 9.0621 | 1314 | 1.2659 | -0.0565 | 1.2659 | 1.1251 |
| 0.0807 | 9.0759 | 1316 | 1.2696 | -0.0565 | 1.2696 | 1.1268 |
| 0.0807 | 9.0897 | 1318 | 1.2704 | -0.0565 | 1.2704 | 1.1271 |
| 0.0807 | 9.1034 | 1320 | 1.2617 | -0.0565 | 1.2617 | 1.1232 |
| 0.0807 | 9.1172 | 1322 | 1.2451 | 0.0610 | 1.2451 | 1.1158 |
| 0.0807 | 9.1310 | 1324 | 1.2381 | 0.0610 | 1.2381 | 1.1127 |
| 0.0807 | 9.1448 | 1326 | 1.2326 | 0.0610 | 1.2326 | 1.1102 |
| 0.0807 | 9.1586 | 1328 | 1.2392 | 0.0610 | 1.2392 | 1.1132 |
| 0.0807 | 9.1724 | 1330 | 1.2540 | -0.0565 | 1.2540 | 1.1198 |
| 0.0807 | 9.1862 | 1332 | 1.2589 | -0.0565 | 1.2589 | 1.1220 |
| 0.0807 | 9.2 | 1334 | 1.2588 | -0.0565 | 1.2588 | 1.1220 |
| 0.0807 | 9.2138 | 1336 | 1.2526 | -0.0565 | 1.2526 | 1.1192 |
| 0.0807 | 9.2276 | 1338 | 1.2369 | 0.0610 | 1.2369 | 1.1121 |
| 0.0807 | 9.2414 | 1340 | 1.2222 | 0.0610 | 1.2222 | 1.1055 |
| 0.0807 | 9.2552 | 1342 | 1.2107 | 0.0610 | 1.2107 | 1.1003 |
| 0.0807 | 9.2690 | 1344 | 1.2047 | 0.0610 | 1.2047 | 1.0976 |
| 0.0807 | 9.2828 | 1346 | 1.1996 | 0.0610 | 1.1996 | 1.0953 |
| 0.0807 | 9.2966 | 1348 | 1.1922 | 0.0610 | 1.1922 | 1.0919 |
| 0.0807 | 9.3103 | 1350 | 1.1881 | -0.0927 | 1.1881 | 1.0900 |
| 0.0807 | 9.3241 | 1352 | 1.1862 | -0.0927 | 1.1862 | 1.0892 |
| 0.0807 | 9.3379 | 1354 | 1.1882 | -0.0927 | 1.1882 | 1.0900 |
| 0.0807 | 9.3517 | 1356 | 1.1883 | 0.0610 | 1.1883 | 1.0901 |
| 0.0807 | 9.3655 | 1358 | 1.1953 | 0.0610 | 1.1953 | 1.0933 |
| 0.0807 | 9.3793 | 1360 | 1.2063 | 0.0610 | 1.2063 | 1.0983 |
| 0.0807 | 9.3931 | 1362 | 1.2066 | 0.0610 | 1.2066 | 1.0985 |
| 0.0807 | 9.4069 | 1364 | 1.2079 | 0.0610 | 1.2079 | 1.0991 |
| 0.0807 | 9.4207 | 1366 | 1.2071 | 0.0610 | 1.2071 | 1.0987 |
| 0.0807 | 9.4345 | 1368 | 1.2055 | 0.0610 | 1.2055 | 1.0980 |
| 0.0807 | 9.4483 | 1370 | 1.1973 | 0.0610 | 1.1973 | 1.0942 |
| 0.0807 | 9.4621 | 1372 | 1.1921 | 0.0610 | 1.1921 | 1.0918 |
| 0.0807 | 9.4759 | 1374 | 1.1872 | 0.0610 | 1.1872 | 1.0896 |
| 0.0807 | 9.4897 | 1376 | 1.1837 | 0.0610 | 1.1837 | 1.0880 |
| 0.0807 | 9.5034 | 1378 | 1.1867 | 0.0610 | 1.1867 | 1.0894 |
| 0.0807 | 9.5172 | 1380 | 1.1843 | 0.0610 | 1.1843 | 1.0883 |
| 0.0807 | 9.5310 | 1382 | 1.1856 | 0.0610 | 1.1856 | 1.0889 |
| 0.0807 | 9.5448 | 1384 | 1.1820 | 0.0610 | 1.1820 | 1.0872 |
| 0.0807 | 9.5586 | 1386 | 1.1776 | 0.0610 | 1.1776 | 1.0852 |
| 0.0807 | 9.5724 | 1388 | 1.1738 | 0.0610 | 1.1738 | 1.0834 |
| 0.0807 | 9.5862 | 1390 | 1.1688 | 0.0610 | 1.1688 | 1.0811 |
| 0.0807 | 9.6 | 1392 | 1.1633 | -0.0927 | 1.1633 | 1.0786 |
| 0.0807 | 9.6138 | 1394 | 1.1562 | -0.0927 | 1.1562 | 1.0753 |
| 0.0807 | 9.6276 | 1396 | 1.1498 | -0.0927 | 1.1498 | 1.0723 |
| 0.0807 | 9.6414 | 1398 | 1.1416 | -0.0927 | 1.1416 | 1.0685 |
| 0.0807 | 9.6552 | 1400 | 1.1359 | -0.0927 | 1.1359 | 1.0658 |
| 0.0807 | 9.6690 | 1402 | 1.1326 | -0.0927 | 1.1326 | 1.0642 |
| 0.0807 | 9.6828 | 1404 | 1.1326 | -0.0927 | 1.1326 | 1.0642 |
| 0.0807 | 9.6966 | 1406 | 1.1313 | -0.0927 | 1.1313 | 1.0636 |
| 0.0807 | 9.7103 | 1408 | 1.1283 | -0.1159 | 1.1283 | 1.0622 |
| 0.0807 | 9.7241 | 1410 | 1.1251 | -0.1159 | 1.1251 | 1.0607 |
| 0.0807 | 9.7379 | 1412 | 1.1243 | -0.1159 | 1.1243 | 1.0603 |
| 0.0807 | 9.7517 | 1414 | 1.1233 | -0.1159 | 1.1233 | 1.0598 |
| 0.0807 | 9.7655 | 1416 | 1.1234 | -0.1159 | 1.1234 | 1.0599 |
| 0.0807 | 9.7793 | 1418 | 1.1234 | -0.1159 | 1.1234 | 1.0599 |
| 0.0807 | 9.7931 | 1420 | 1.1235 | -0.1159 | 1.1235 | 1.0599 |
| 0.0807 | 9.8069 | 1422 | 1.1246 | -0.1159 | 1.1246 | 1.0605 |
| 0.0807 | 9.8207 | 1424 | 1.1265 | -0.1159 | 1.1265 | 1.0614 |
| 0.0807 | 9.8345 | 1426 | 1.1292 | -0.1159 | 1.1292 | 1.0626 |
| 0.0807 | 9.8483 | 1428 | 1.1330 | -0.1159 | 1.1330 | 1.0644 |
| 0.0807 | 9.8621 | 1430 | 1.1363 | -0.1159 | 1.1363 | 1.0660 |
| 0.0807 | 9.8759 | 1432 | 1.1402 | -0.0927 | 1.1402 | 1.0678 |
| 0.0807 | 9.8897 | 1434 | 1.1433 | -0.0927 | 1.1433 | 1.0692 |
| 0.0807 | 9.9034 | 1436 | 1.1461 | -0.0927 | 1.1461 | 1.0706 |
| 0.0807 | 9.9172 | 1438 | 1.1492 | -0.0927 | 1.1492 | 1.0720 |
| 0.0807 | 9.9310 | 1440 | 1.1512 | -0.0927 | 1.1512 | 1.0729 |
| 0.0807 | 9.9448 | 1442 | 1.1525 | -0.0927 | 1.1525 | 1.0735 |
| 0.0807 | 9.9586 | 1444 | 1.1528 | -0.0927 | 1.1528 | 1.0737 |
| 0.0807 | 9.9724 | 1446 | 1.1528 | -0.0927 | 1.1528 | 1.0737 |
| 0.0807 | 9.9862 | 1448 | 1.1529 | -0.0927 | 1.1529 | 1.0737 |
| 0.0807 | 10.0 | 1450 | 1.1529 | -0.0927 | 1.1529 | 1.0737 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
leinad-deinor/Llama3.2-1b-redeIT-XML-GGUF | leinad-deinor | 2024-11-25T11:41:32Z | 24 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T11:33:57Z | ---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** leinad-deinor
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
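Since the repository ships GGUF weights, one common way to run them locally is with `llama-cpp-python`. The sketch below is illustrative only; the quantization filename pattern is a placeholder, so check the Files tab for the exact file names.
```python
from llama_cpp import Llama

# Filename pattern is a placeholder; pick one of the GGUF files in this repo.
llm = Llama.from_pretrained(
    repo_id="leinad-deinor/Llama3.2-1b-redeIT-XML-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! What can you help me with?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```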
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Neural-Hermes-V0.1-test-GGUF | mradermacher | 2024-11-25T11:39:05Z | 11 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TheHierophant/Neural-Hermes-V0.1-test",
"base_model:quantized:TheHierophant/Neural-Hermes-V0.1-test",
"endpoints_compatible",
"region:us"
] | null | 2024-11-25T09:21:20Z | ---
base_model: TheHierophant/Neural-Hermes-V0.1-test
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TheHierophant/Neural-Hermes-V0.1-test
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
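As a minimal, illustrative example, one of the files from the quant table below (Q4_K_M is marked "fast, recommended") could be downloaded and run with `llama-cpp-python`:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is the "fast, recommended" quant in the table below.
path = hf_hub_download(
    repo_id="mradermacher/Neural-Hermes-V0.1-test-GGUF",
    filename="Neural-Hermes-V0.1-test.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write one sentence about model merging.", max_tokens=64)
print(out["choices"][0]["text"])
```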
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Neural-Hermes-V0.1-test-GGUF/resolve/main/Neural-Hermes-V0.1-test.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-Hermes-V0.1-test-GGUF/resolve/main/Neural-Hermes-V0.1-test.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-Hermes-V0.1-test-GGUF/resolve/main/Neural-Hermes-V0.1-test.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Neural-Hermes-V0.1-test-GGUF/resolve/main/Neural-Hermes-V0.1-test.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-Hermes-V0.1-test-GGUF/resolve/main/Neural-Hermes-V0.1-test.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-Hermes-V0.1-test-GGUF/resolve/main/Neural-Hermes-V0.1-test.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Neural-Hermes-V0.1-test-GGUF/resolve/main/Neural-Hermes-V0.1-test.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Neural-Hermes-V0.1-test-GGUF/resolve/main/Neural-Hermes-V0.1-test.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Neural-Hermes-V0.1-test-GGUF/resolve/main/Neural-Hermes-V0.1-test.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-Hermes-V0.1-test-GGUF/resolve/main/Neural-Hermes-V0.1-test.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Neural-Hermes-V0.1-test-GGUF/resolve/main/Neural-Hermes-V0.1-test.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Neural-Hermes-V0.1-test-GGUF/resolve/main/Neural-Hermes-V0.1-test.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Neural-Hermes-V0.1-test-GGUF/resolve/main/Neural-Hermes-V0.1-test.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
yaoG88/PrivateDoctor | yaoG88 | 2024-11-25T11:28:37Z | 53 | 1 | null | [
"gguf",
"qwen2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-25T07:25:27Z | ---
license: apache-2.0
---
|
parler-tts/parler-tts-mini-v1 | parler-tts | 2024-11-25T11:26:20Z | 25,141 | 133 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"text-to-speech",
"annotation",
"en",
"dataset:parler-tts/mls_eng",
"dataset:parler-tts/libritts_r_filtered",
"dataset:parler-tts/libritts-r-filtered-speaker-descriptions",
"dataset:parler-tts/mls-eng-speaker-descriptions",
"arxiv:2402.01912",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2024-06-26T09:14:29Z | ---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
pipeline_tag: text-to-speech
inference: false
datasets:
- parler-tts/mls_eng
- parler-tts/libritts_r_filtered
- parler-tts/libritts-r-filtered-speaker-descriptions
- parler-tts/mls-eng-speaker-descriptions
---
<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Parler-TTS Mini v1
<a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
**Parler-TTS Mini v1** is a lightweight text-to-speech (TTS) model, trained on 45K hours of audio data, that can generate high-quality, natural-sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).
With [Parler-TTS Large v1](https://huggingface.co/parler-tts/parler-tts-large-v1), this is the second set of models published as part of the [Parler-TTS](https://github.com/huggingface/parler-tts) project, which aims to provide the community with TTS training resources and dataset pre-processing code.
## 📖 Quick Index
* [👨💻 Installation](#👨💻-installation)
* [🎲 Using a random voice](#🎲-random-voice)
* [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)
## 🛠️ Usage
### 👨💻 Installation
Using Parler-TTS is as simple as "bonjour". Simply install the library once:
```sh
pip install git+https://github.com/huggingface/parler-tts.git
```
### 🎲 Random voice
**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
### 🎯 Using a specific speaker
To ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name (e.g. Jon, Lea, Gary, Jenna, Mike, Laura).
To take advantage of this, simply adapt your text description to specify which speaker to use: `Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming! A minimal SDPA loading sketch is shown right after these tips.
* Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt
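The first tip above points to the inference guide; as a minimal sketch of one of those optimizations, the snippet below loads the model with SDPA attention and half precision. The `attn_implementation="sdpa"` and `torch_dtype` arguments are assumptions based on the standard `transformers` loading API, so refer to the guide for the exact recommended settings.
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration

device = "cuda:0" if torch.cuda.is_available() else "cpu"
# Half precision on GPU is an assumption for speed; fall back to float32 on CPU.
torch_dtype = torch.float16 if device != "cpu" else torch.float32

# Assumption: SDPA attention is selected via the standard `attn_implementation` argument.
model = ParlerTTSForConditionalGeneration.from_pretrained(
    "parler-tts/parler-tts-mini-v1",
    attn_implementation="sdpa",
    torch_dtype=torch_dtype,
).to(device)
```
From here, generation works exactly as in the examples above.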
## Motivation
Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
In contrast to other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.
Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.
## Citation
If you found this repository useful, please consider citing this work and also the original Stability AI paper:
```
@misc{lacombe-etal-2024-parler-tts,
author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
title = {Parler-TTS},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```
```
@misc{lyth2024natural,
title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
author={Dan Lyth and Simon King},
year={2024},
eprint={2402.01912},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
## License
This model is permissively licensed under the Apache 2.0 license. |
asthaaa300/results | asthaaa300 | 2024-11-25T11:24:01Z | 108 | 1 | transformers | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:finetune:EleutherAI/gpt-neo-125m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T11:10:02Z | ---
library_name: transformers
license: mit
base_model: EleutherAI/gpt-neo-125M
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 20 | 0.1595 |
| No log | 2.0 | 40 | 0.0672 |
| No log | 3.0 | 60 | 0.0515 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mini1013/master_cate_ac16 | mini1013 | 2024-11-25T11:20:04Z | 257 | 0 | setfit | [
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] | text-classification | 2024-11-25T11:19:40Z | ---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 얌뚱이 칼라고무밴드 머리끈 헤어밴드 고무줄 유아 아동 여아 어린이집 검정 색 대용량 대핑크30g 얌뚱이
- text: 파티 벨벳 심플 왕리본핀 반묶음핀 30칼라 와인_납작핀대 릴리트리
- text: 넓은 여자 머리띠 윤아 와이드 귀안아픈 면 니트 터번 T-도톰쫀득_핑크 모스블랑
- text: 얼굴소멸 히메컷 가발 앞머리 사이드뱅 옆머리 부분 가발 애교머리 풀뱅 규리 민니 옆2p-라이트브라운 굿모닝리테일
- text: 13cm 빅사이즈 대왕 숱많은 긴 머리 꼬임 올림머리 집게핀 3/ 그라데이션 매트_브라운 블렌디드
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.9541466176054345
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 5 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2.0 | <ul><li>'(신세계김해점)에트로 프로푸미 헤어밴드 01046 05 1099 ONE SIZE 신세계백화점'</li><li>'Baby scrunchie 3set (White/Beige/Black) 빌라드실크 곱창밴드 미니 실크 스크런치 세트 주식회사 실크랩'</li><li>'간단 헤어밴드 미키마우스 머리띠 왕 리본 남자 캐릭터 플라스틱 반짝이 1-4. 글리터 / 블랙 아이드림'</li></ul> |
| 1.0 | <ul><li>'위즈템 헤어밴드 진주 크리스탈 머리끈 연핑크 파파닐'</li><li>'둥근고무줄 (대용량) 칼라 금 은 천고무줄 벌크 탄성끈 가는줄 /굵은줄 02. 대용량 굵은줄(2.5mmx60M)_금색 마이1004(MY1004)'</li><li>'천연 컬러 고무 끈 고무줄 생활용품 3M 하늘색 제이앤제이웍스'</li></ul> |
| 0.0 | <ul><li>'인모 남자가발 정수리 커버 자연스러운 O형 커버가발 마오_인모14X14 하이윤'</li><li>'얼굴소멸 히메컷 가발 앞머리 사이드뱅 옆머리 부분 히메컷 사이드뱅 옆2p-내츄럴브라운 와우마켓'</li><li>'얼굴소멸 히메컷 가발 앞머리 사이드뱅 옆머리 부분 옆2p-라이트브라운 이지구'</li></ul> |
| 4.0 | <ul><li>'무지 12컬러 심플 리본 바나나핀 핫핑크 하얀당나귀'</li><li>'네임핀/이름핀/네임브로치/어린이집선물/유치원선물 5글자(영어6자~8자)_별_브로치 쭈스타'</li><li>'메탈 셀룰로오스 꼬임 올림머리 집게핀 사각4170_아이스옐로우 엑스엔서'</li></ul> |
| 3.0 | <ul><li>'웨딩 드레스 유니크 베일 셀프 촬영 소품 대형 리본 잡지 모델 패션쇼 장식 액세서리 머리 04.파란 (핸드메이드) 더비공이(TheB02)'</li><li>'슈퍼 요정 흰색 보석 웨딩 헤어 타워 공연 여행 T15-a_선택하세요 아토버디'</li><li>'뿌리볼륨집게3p 건강드림'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.9541 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_ac16")
# Run inference
preds = model("파티 벨벳 심플 왕리본핀 반묶음핀 30칼라 와인_납작핀대 릴리트리")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 3 | 9.956 | 24 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
| 2.0 | 50 |
| 3.0 | 50 |
| 4.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
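The hyperparameters above correspond to SetFit's `TrainingArguments`. Below is a minimal, illustrative training sketch using a subset of those values; the toy dataset and the exact argument set are assumptions for illustration, not the original training script.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical labelled data with "text" and "label" columns (placeholder rows).
train_dataset = Dataset.from_dict({
    "text": ["placeholder title A", "placeholder title B", "placeholder title C", "placeholder title D"],
    "label": [0, 0, 1, 1],
})

model = SetFitModel.from_pretrained("mini1013/master_domain")

args = TrainingArguments(
    batch_size=(512, 512),
    num_epochs=(20, 20),
    num_iterations=40,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```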
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.025 | 1 | 0.4499 | - |
| 1.25 | 50 | 0.2065 | - |
| 2.5 | 100 | 0.0446 | - |
| 3.75 | 150 | 0.0001 | - |
| 5.0 | 200 | 0.0 | - |
| 6.25 | 250 | 0.0001 | - |
| 7.5 | 300 | 0.0 | - |
| 8.75 | 350 | 0.0 | - |
| 10.0 | 400 | 0.0 | - |
| 11.25 | 450 | 0.0 | - |
| 12.5 | 500 | 0.0 | - |
| 13.75 | 550 | 0.0 | - |
| 15.0 | 600 | 0.0 | - |
| 16.25 | 650 | 0.0 | - |
| 17.5 | 700 | 0.0 | - |
| 18.75 | 750 | 0.0 | - |
| 20.0 | 800 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mini1013/master_cate_ac15 | mini1013 | 2024-11-25T11:16:19Z | 147 | 0 | setfit | [
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] | text-classification | 2024-11-25T11:15:56Z | ---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 스마일뱃지 제작 브로치 다양한 크기 문구 삽입가능 별빛(+300원)_뱃지 중(45mm)_200개~399개 맘스뱃지
- text: 고급 골지압박 타이즈 스타킹 유발 면 겨울 베이지 버징가마켓
- text: 겨울 목도리 여자 남자 캐시미어 니트 쁘띠 울 머플러 1_솜사탕-MS47 에스랑제이
- text: 손수건/무지손수건/등산손수건/스카프/등산손수건/두건/KC인증/인쇄가능/개별OPP 무지손수건 [무지손수건] 무지손수건(옐로우) 답돌이월드
- text: 동백꽃 부토니에 머리핀 코사지(K28) K28-06_머리핀 까만당나귀
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.8556701030927835
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 20 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 19.0 | <ul><li>'하복 여름용 시원한 베이직 네이비 정장 시선집중 봄 35_38 예이몰'</li><li>'하복 여름용 시원한 베이직 네이비 정장 시선집중 봄 35_42 예이몰'</li><li>'하복 여름용 시원한 베이직 네이비 정장 시선집중 봄 36B_40 예이몰'</li></ul> |
| 18.0 | <ul><li>'전통 십장생 금은사 금룡 오복 돌띠 남아 여아 돌 백일 여자 남자 애기 아기 한복 돌띠 5번 십장생돌띠(장색) 이제한복'</li><li>'여아 한복 머리띠 족두리 호박 홍시 배씨 장신구 여자 아기 돌 어린이 전통 자수 머리 띠 핀 호박 핀머리띠 레드 청아'</li><li>'엄마옷 삼베 리본 생활한복 두건 KD304133 중년여성 40 50 60대 마담 빅사이즈 할머니 미시 벽돌:FREE AKmall'</li></ul> |
| 17.0 | <ul><li>'[무료각인]자동차/캐리어 가죽키링/스마트키홀더/네임택/ 남자친구 여자친구 부모님 선물 T자형 (핑크)_골드(유광)_나눔손글씨펜체 더위드블루'</li><li>'귀여운 곰인형 키링 열쇠고리 스웨터 모자 최여시'</li><li>'[무료각인]자동차/캐리어 가죽키링/스마트키홀더/네임택/ 남자친구 여자친구 부모님 선물 T자형 (콰이즈블루)_골드(무광)_나눔바른고딕체 더위드블루'</li></ul> |
| 2.0 | <ul><li>'셔츠 넥 카라 레이어드 페이크 케이프 2type 둥근카라/화이트 도비77마켓'</li><li>'셔츠카라 넥케이프 페이크카라 레이어드카라 넥커프스 1-카라-화이트 오니온스'</li><li>'넥케이프 스카프 머플러 레이스 페이크 카라 작은 잎 화이트 모멘트1'</li></ul> |
| 12.0 | <ul><li>'[닥스](광주신세계) 양산 가드닝 PBU003Q 블랙(01) 주식회사 에스에스지닷컴'</li><li>'고급 우양산 남자 초경량 양산 우산 자외선차단 암막 그린 하트 쿨로미-네이비-커뮤니케이션 한정판 서민스토어'</li><li>'암막 하트펀칭코팅 양산 IPLQP40042 스카이 '</li></ul> |
| 5.0 | <ul><li>'캐시미어 머플러 FKU035 블루 롯데백화점1관'</li><li>'여성 겨울 니트 짜임 목도리 머플러 그레이 엠에스씨'</li><li>'엘르/칼린 롱 쁘띠 니트 머플러 SE04MP3000 택1 선택07 바네사끼움SE34MX304 브라운 AK플라자1관'</li></ul> |
| 16.0 | <ul><li>'부토니에 결혼식 무도회 꽃 장미 브로치 핀 진주 나비 신부 신랑 새틴 리본 액세서리 13 포시즌스트레이드'</li><li>'부토니에 결혼식용 인공 수국 꽃 실크 머리 50 개 Off White Leaves White Stems_50 PCS 포시즌스트레이드'</li><li>'코르사주 맞춤형 터키 깃털 머리장식 클립 닭 꼬리 코스튬 모자 DS230441 DS230442 포시즌스트레이드'</li></ul> |
| 11.0 | <ul><li>'BYC 본사 종아리압박밴드 SWG1300 BK(블랙)/F 홈앤쇼핑몰'</li><li>'5묶음 기모 고카바 넌슬립 (바닥실리콘) 아소트 코썸'</li><li>'팬티스타킹/덧신/양말/기모/속바지/판타롱/학생 2_9부 블랙 2매 규리몰'</li></ul> |
| 9.0 | <ul><li>'[1만원인하]에디티드 브리즈 썸머 원피스+니트숄 076/티파니블루/88 AKmall'</li><li>'어깨숄 어깨에 페이크니트 니트 두르는 가디건 여성 캐주얼 여성스런 숄망토 X. 캐러멜 구루미상회'</li><li>'[헬렌카민스키](신세계타임스퀘어점패션관)[공식] 헬렌카민스키 메르시에 판초 코트 LUWRCT00020 멜란지그레이_OS 주식회사 에스에스지닷컴'</li></ul> |
| 1.0 | <ul><li>'내셔널지오그래픽 악세사리 악세서리 메쉬쿨토시 1163327 BLACK_M(002) koreamk2'</li><li>'[LAP](신세계김해점)골지 베이직 워머 AP7AYA01 BK(블랙)_FF 주식회사 에스에스지닷컴'</li><li>'가을 겨울 페이크 후드 넥워머 니트 레이어드 바라클라바 모자 방한용품 베이지 다온마켓'</li></ul> |
| 8.0 | <ul><li>'여성 에티켓 손수건 레이스 무릎 덮개 대형 면 꽃무늬 에티켓03 서울타임즈'</li><li>'일본 수입 스누피 반다나 손수건 6종 스카우트아이보리 키티야'</li><li>'(지나산업)등산손수건/반다나/페이즐리/opp개별포장 블루 infnet16'</li></ul> |
| 15.0 | <ul><li>'카메라렌즈 커프스 남성 패션 소품 실버 행복세일웃음'</li><li>'블루 넥타이핀 커프스 버튼 831 와이셔츠 타이바 링크 소매 젠틀 안트 넥타이핀 엠에프샵'</li><li>'[6월남자] 카메라 렌즈 커프스 버튼 남자 정장 소품 골드 베라콘'</li></ul> |
| 0.0 | <ul><li>'미우미우 벨벳 헤어클립 헤어핀 MIUMIU Velvet hair clip 35.5 어리버리샵'</li><li>'허리늘리기 밴딩탭 슬랙스탭 허리조절 셀프수선고무줄 청바지탭 3P 호메르'</li><li>'수도동파방지 덮개 한파방지커버 수도계량기 보온재 비(B)'</li></ul> |
| 6.0 | <ul><li>'휴대용 접이식 캐릭터 부채 미니사이즈 KK99 6.딸기토끼 안미현'</li><li>'어린이부채만들기 04. ZIG캘리그라피펜 MS3400_42. MS-3400 / 070_PURE ORANGE 포장지세상'</li><li>'발롱 VF1 한국 무용 워십 너슬 부채춤 부채 팬베일 VF111 WR 우_L(35cm) 발롱'</li></ul> |
| 10.0 | <ul><li>'EP 모드 남성용 겨울 스카프 캐시미어 느낌 매우 부드럽고 따뜻함 다이아몬드 그리드 네이비 윈나인'</li><li>'개/대 남성 여성 가을 겨울 양면 컬러 매칭 스카프 모자 장갑 M89E DG 글로벌 엠에스 컴퍼니'</li><li>'하태하태 기념일선물 아마존 크로스 보더 남성 캐주얼 믹스매치하기좋은 여자들이좋아하는선물 Navy blue 리마110'</li></ul> |
| 4.0 | <ul><li>'에쎄 케이스 파우치 슬림형 20개비 빈티지 갑 메탈 보관함 남자친구선물 자동 A 다온마켓'</li><li>'소품보관 악세사리 가죽 전자 케이스 수납 네이비 갑자네'</li><li>'에스티듀퐁 뉴 라인2 전용 리필 가스 CNA000435 레드 주식회사 스타필드하남'</li></ul> |
| 13.0 | <ul><li>'강철 특수 부대 패치 와펜 707 UDT UDU SSU SART HID SEAL 해병대 L.SOU 밀리터리코리아'</li><li>'[NFL] F214ATO040 부클 복조리 크로스백 블랙_Free 롯데쇼핑(주) 프리미엄 아울렛 김해점'</li><li>'와팬 와펜 열접접착 자수 스티커 브러치 패치 마크 견장 51번부터 100번까지_55번 TNT몰'</li></ul> |
| 3.0 | <ul><li>'BTIE_102 그레이체크 니트 보타이(그레이 품절) 차콜 건강드림'</li><li>'다이아몬드컷팅된넥타이핀VMRTP1006 화이트 '</li><li>'푸르티민트향 치약 키즈세이프 60g 충치케어 키즈 4입 치아관리 잇몸냄새 입냄새제거 주식회사제이케이이노베이션'</li></ul> |
| 14.0 | <ul><li>'코지트리 반대로 접고 펴는 거꾸로 우산 거꾸로우산--스카이블루 투게이트'</li><li>'튼튼한 자동 3단우산 거꾸로 우산 반전 네이비 블루 패킹팩토리'</li><li>'[무료 각인서비스] 크로반 대형 자동장우산 KR3 파스텔브라운_폰트02 주식회사 크로반'</li></ul> |
| 7.0 | <ul><li>'브로치 옷핀브로치 진주브로치 브롯지 2_장미 부토니에(진주)-자주 조은상점'</li><li>'릭 오웬스 남성 블랙 클래식 플라이트 가죽 재킷 가죽 자켓 232232M175011 IT 44 주식회사 스마일벤처스'</li><li>'23FW 카사데이 드레스 슈즈 1F920W100M C14449000 42 주식회사 구하다'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.8557 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_ac15")
# Run inference
preds = model("고급 골지압박 타이즈 스타킹 유발 면 겨울 베이지 버징가마켓")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 3 | 10.322 | 25 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
| 2.0 | 50 |
| 3.0 | 50 |
| 4.0 | 50 |
| 5.0 | 50 |
| 6.0 | 50 |
| 7.0 | 50 |
| 8.0 | 50 |
| 9.0 | 50 |
| 10.0 | 50 |
| 11.0 | 50 |
| 12.0 | 50 |
| 13.0 | 50 |
| 14.0 | 50 |
| 15.0 | 50 |
| 16.0 | 50 |
| 17.0 | 50 |
| 18.0 | 50 |
| 19.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0064 | 1 | 0.3967 | - |
| 0.3185 | 50 | 0.3383 | - |
| 0.6369 | 100 | 0.2365 | - |
| 0.9554 | 150 | 0.1145 | - |
| 1.2739 | 200 | 0.0563 | - |
| 1.5924 | 250 | 0.0414 | - |
| 1.9108 | 300 | 0.0377 | - |
| 2.2293 | 350 | 0.0159 | - |
| 2.5478 | 400 | 0.0297 | - |
| 2.8662 | 450 | 0.0258 | - |
| 3.1847 | 500 | 0.0194 | - |
| 3.5032 | 550 | 0.0113 | - |
| 3.8217 | 600 | 0.0108 | - |
| 4.1401 | 650 | 0.0059 | - |
| 4.4586 | 700 | 0.0009 | - |
| 4.7771 | 750 | 0.0059 | - |
| 5.0955 | 800 | 0.0044 | - |
| 5.4140 | 850 | 0.004 | - |
| 5.7325 | 900 | 0.0023 | - |
| 6.0510 | 950 | 0.0004 | - |
| 6.3694 | 1000 | 0.0024 | - |
| 6.6879 | 1050 | 0.0007 | - |
| 7.0064 | 1100 | 0.0004 | - |
| 7.3248 | 1150 | 0.0002 | - |
| 7.6433 | 1200 | 0.0002 | - |
| 7.9618 | 1250 | 0.0003 | - |
| 8.2803 | 1300 | 0.0002 | - |
| 8.5987 | 1350 | 0.0001 | - |
| 8.9172 | 1400 | 0.0001 | - |
| 9.2357 | 1450 | 0.0001 | - |
| 9.5541 | 1500 | 0.0001 | - |
| 9.8726 | 1550 | 0.0001 | - |
| 10.1911 | 1600 | 0.0001 | - |
| 10.5096 | 1650 | 0.0001 | - |
| 10.8280 | 1700 | 0.0001 | - |
| 11.1465 | 1750 | 0.0001 | - |
| 11.4650 | 1800 | 0.0001 | - |
| 11.7834 | 1850 | 0.0001 | - |
| 12.1019 | 1900 | 0.0001 | - |
| 12.4204 | 1950 | 0.0001 | - |
| 12.7389 | 2000 | 0.0001 | - |
| 13.0573 | 2050 | 0.0001 | - |
| 13.3758 | 2100 | 0.0001 | - |
| 13.6943 | 2150 | 0.0001 | - |
| 14.0127 | 2200 | 0.0001 | - |
| 14.3312 | 2250 | 0.0001 | - |
| 14.6497 | 2300 | 0.0001 | - |
| 14.9682 | 2350 | 0.0001 | - |
| 15.2866 | 2400 | 0.0001 | - |
| 15.6051 | 2450 | 0.0001 | - |
| 15.9236 | 2500 | 0.0001 | - |
| 16.2420 | 2550 | 0.0001 | - |
| 16.5605 | 2600 | 0.0001 | - |
| 16.8790 | 2650 | 0.0001 | - |
| 17.1975 | 2700 | 0.0001 | - |
| 17.5159 | 2750 | 0.0001 | - |
| 17.8344 | 2800 | 0.0001 | - |
| 18.1529 | 2850 | 0.0001 | - |
| 18.4713 | 2900 | 0.0001 | - |
| 18.7898 | 2950 | 0.0001 | - |
| 19.1083 | 3000 | 0.0001 | - |
| 19.4268 | 3050 | 0.0001 | - |
| 19.7452 | 3100 | 0.0001 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mradermacher/RP-Naughty-v1.0b-8b-GGUF | mradermacher | 2024-11-25T11:14:37Z | 9 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-11-25T10:36:50Z | ---
base_model: MrRobotoAI/RP-Naughty-v1.0b-8b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/MrRobotoAI/RP-Naughty-v1.0b-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
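For a concrete starting point, the sketch below loads one of the quants listed in the table below with `llama-cpp-python`; the file name matches the Q4_K_M entry, but the local path and generation parameters are only examples, and llama.cpp-based CLIs work just as well.
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Path assumes the Q4_K_M file from the table has been downloaded locally.
llm = Llama(model_path="RP-Naughty-v1.0b-8b.Q4_K_M.gguf", n_ctx=4096)

out = llm("Write a short scene description.", max_tokens=128)
print(out["choices"][0]["text"])
```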
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0b-8b-GGUF/resolve/main/RP-Naughty-v1.0b-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0b-8b-GGUF/resolve/main/RP-Naughty-v1.0b-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0b-8b-GGUF/resolve/main/RP-Naughty-v1.0b-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0b-8b-GGUF/resolve/main/RP-Naughty-v1.0b-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0b-8b-GGUF/resolve/main/RP-Naughty-v1.0b-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0b-8b-GGUF/resolve/main/RP-Naughty-v1.0b-8b.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0b-8b-GGUF/resolve/main/RP-Naughty-v1.0b-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0b-8b-GGUF/resolve/main/RP-Naughty-v1.0b-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0b-8b-GGUF/resolve/main/RP-Naughty-v1.0b-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0b-8b-GGUF/resolve/main/RP-Naughty-v1.0b-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0b-8b-GGUF/resolve/main/RP-Naughty-v1.0b-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0b-8b-GGUF/resolve/main/RP-Naughty-v1.0b-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0b-8b-GGUF/resolve/main/RP-Naughty-v1.0b-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
dankalin/ruadapt_qwen2.5_3B_finetuned_v2 | dankalin | 2024-11-25T11:13:36Z | 133 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T11:09:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated | huihui-ai | 2024-11-25T11:09:22Z | 786 | 26 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"abliterated",
"uncensored",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-12T13:46:05Z | ---
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterate/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- abliterated
- uncensored
---
# huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated
This is an uncensored version of [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it).
Qwen2.5-Coder uncensored version has covered six mainstream model sizes,
[0.5](https://huggingface.co/huihui-ai/Qwen2.5-Coder-0.5B-Instruct-abliterated),
[1.5](https://huggingface.co/huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated),
[3](https://huggingface.co/huihui-ai/Qwen2.5-Coder-3B-Instruct-abliterated),
[7](https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated),
[14](https://huggingface.co/huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated),
[32](https://huggingface.co/huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated) billion parameters.
If the desired result is not achieved, you can clear the conversation and try again.
## ollama
You can use [huihui_ai/qwen2.5-coder-abliterate:32b](https://ollama.com/huihui_ai/qwen2.5-coder-abliterate:32b) directly,
```
ollama run huihui_ai/qwen2.5-coder-abliterate:32b
```
## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-Code-32B-Instruct-abliterated"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Initialize conversation context
initial_messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy() # Copy the initial conversation context
# Enter conversation loop
while True:
# Get user input
user_input = input("User: ").strip() # Strip leading and trailing spaces
# If the user types '/exit', end the conversation
if user_input.lower() == "/exit":
print("Exiting chat.")
break
# If the user types '/clean', reset the conversation context
if user_input.lower() == "/clean":
messages = initial_messages.copy() # Reset conversation context
print("Chat history cleared. Starting a new conversation.")
continue
# If input is empty, prompt the user and continue
if not user_input:
print("Input cannot be empty. Please enter something.")
continue
# Add user input to the conversation
messages.append({"role": "user", "content": user_input})
# Build the chat template
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# Tokenize input and prepare it for the model
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# Generate a response from the model
generated_ids = model.generate(
**model_inputs,
max_new_tokens=8192
)
# Extract model output, removing special tokens
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
# Add the model's response to the conversation
messages.append({"role": "assistant", "content": response})
# Print the model's response
print(f"Qwen: {response}")
```
|
huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated | huihui-ai | 2024-11-25T11:09:09Z | 142 | 6 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"abliterated",
"uncensored",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-06T12:16:05Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- chat
- abliterated
- uncensored
---
# huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated
This is an uncensored version of [Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to know more about it).
Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
Qwen2.5-Coder uncensored version has covered six mainstream model sizes,
[0.5](https://huggingface.co/huihui-ai/Qwen2.5-Coder-0.5B-Instruct-abliterated),
[1.5](https://huggingface.co/huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated),
[3](https://huggingface.co/huihui-ai/Qwen2.5-Coder-3B-Instruct-abliterated),
[7](https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated),
[14](https://huggingface.co/huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated),
[32](https://huggingface.co/huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated) billion parameters.
## ollama
You can use [huihui_ai/qwen2.5-coder-abliterate](https://ollama.com/huihui_ai/qwen2.5-coder-abliterate) directly,
```
ollama run huihui_ai/qwen2.5-coder-abliterate
```
## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Initialize conversation context
initial_messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy() # Copy the initial conversation context
# Enter conversation loop
while True:
# Get user input
user_input = input("User: ").strip() # Strip leading and trailing spaces
# If the user types '/exit', end the conversation
if user_input.lower() == "/exit":
print("Exiting chat.")
break
# If the user types '/clean', reset the conversation context
if user_input.lower() == "/clean":
messages = initial_messages.copy() # Reset conversation context
print("Chat history cleared. Starting a new conversation.")
continue
# If input is empty, prompt the user and continue
if not user_input:
print("Input cannot be empty. Please enter something.")
continue
# Add user input to the conversation
messages.append({"role": "user", "content": user_input})
# Build the chat template
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# Tokenize input and prepare it for the model
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# Generate a response from the model
generated_ids = model.generate(
**model_inputs,
max_new_tokens=8192
)
# Extract model output, removing special tokens
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
# Add the model's response to the conversation
messages.append({"role": "assistant", "content": response})
# Print the model's response
print(f"Qwen: {response}")
```
## Evaluations
The following benchmark results have been re-evaluated, with each score calculated as the average for that test.
| Benchmark | Qwen2.5-Coder-7B-Instruct | Qwen2.5-Coder-7B-Instruct-abliterated |
|-------------|---------------------------|---------------------------------------|
| IF_Eval | **63.14** | 61.90 |
| MMLU Pro | 33.54 | **33.56** |
| TruthfulQA | **51.804** | 48.8 |
| BBH | 46.98 | **47.17** |
| GPQA | **32.85** | 32.63 |
The script used for evaluation can be found inside this repository under /eval.sh, or click [here](https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated/blob/main/eval.sh)
|
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k20_task3_organization_fold1 | MayBashendy | 2024-11-25T11:08:34Z | 166 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T11:01:06Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k20_task3_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k20_task3_organization_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0299
- Qwk: -0.0708
- Mse: 1.0299
- Rmse: 1.0148
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0238 | 2 | 2.4408 | 0.0363 | 2.4408 | 1.5623 |
| No log | 0.0476 | 4 | 0.7719 | 0.6405 | 0.7719 | 0.8786 |
| No log | 0.0714 | 6 | 2.3169 | 0.1586 | 2.3169 | 1.5221 |
| No log | 0.0952 | 8 | 4.5369 | 0.0041 | 4.5369 | 2.1300 |
| No log | 0.1190 | 10 | 2.2692 | 0.1492 | 2.2692 | 1.5064 |
| No log | 0.1429 | 12 | 0.8609 | 0.2143 | 0.8609 | 0.9279 |
| No log | 0.1667 | 14 | 0.5764 | 0.2326 | 0.5764 | 0.7592 |
| No log | 0.1905 | 16 | 0.6724 | -0.0421 | 0.6724 | 0.8200 |
| No log | 0.2143 | 18 | 1.0843 | 0.0 | 1.0843 | 1.0413 |
| No log | 0.2381 | 20 | 1.2793 | 0.0 | 1.2793 | 1.1311 |
| No log | 0.2619 | 22 | 1.2726 | 0.0 | 1.2726 | 1.1281 |
| No log | 0.2857 | 24 | 1.0409 | 0.0 | 1.0409 | 1.0203 |
| No log | 0.3095 | 26 | 1.0621 | 0.0 | 1.0621 | 1.0306 |
| No log | 0.3333 | 28 | 1.1491 | 0.0 | 1.1491 | 1.0720 |
| No log | 0.3571 | 30 | 1.2791 | 0.0 | 1.2791 | 1.1310 |
| No log | 0.3810 | 32 | 1.4207 | 0.0 | 1.4207 | 1.1919 |
| No log | 0.4048 | 34 | 1.3388 | 0.0 | 1.3388 | 1.1571 |
| No log | 0.4286 | 36 | 1.2552 | 0.0 | 1.2552 | 1.1204 |
| No log | 0.4524 | 38 | 1.2824 | 0.0 | 1.2824 | 1.1324 |
| No log | 0.4762 | 40 | 1.0797 | 0.0 | 1.0797 | 1.0391 |
| No log | 0.5 | 42 | 1.1020 | 0.0 | 1.1020 | 1.0498 |
| No log | 0.5238 | 44 | 1.1193 | 0.0 | 1.1193 | 1.0580 |
| No log | 0.5476 | 46 | 1.0337 | 0.0 | 1.0337 | 1.0167 |
| No log | 0.5714 | 48 | 1.0299 | 0.0 | 1.0299 | 1.0149 |
| No log | 0.5952 | 50 | 0.9371 | 0.0 | 0.9371 | 0.9680 |
| No log | 0.6190 | 52 | 0.8548 | 0.0120 | 0.8548 | 0.9245 |
| No log | 0.6429 | 54 | 0.8541 | 0.0253 | 0.8541 | 0.9242 |
| No log | 0.6667 | 56 | 0.8003 | -0.2655 | 0.8003 | 0.8946 |
| No log | 0.6905 | 58 | 0.8749 | 0.0253 | 0.8749 | 0.9354 |
| No log | 0.7143 | 60 | 0.9057 | 0.0253 | 0.9057 | 0.9517 |
| No log | 0.7381 | 62 | 1.2178 | 0.0 | 1.2178 | 1.1036 |
| No log | 0.7619 | 64 | 1.4789 | 0.0 | 1.4789 | 1.2161 |
| No log | 0.7857 | 66 | 1.5653 | 0.0 | 1.5653 | 1.2511 |
| No log | 0.8095 | 68 | 1.4045 | 0.0 | 1.4045 | 1.1851 |
| No log | 0.8333 | 70 | 1.3236 | 0.0 | 1.3236 | 1.1505 |
| No log | 0.8571 | 72 | 1.2265 | 0.0 | 1.2265 | 1.1075 |
| No log | 0.8810 | 74 | 1.0435 | 0.0 | 1.0435 | 1.0215 |
| No log | 0.9048 | 76 | 0.8471 | -0.0820 | 0.8471 | 0.9204 |
| No log | 0.9286 | 78 | 0.7717 | -0.2737 | 0.7717 | 0.8785 |
| No log | 0.9524 | 80 | 0.7421 | -0.2791 | 0.7421 | 0.8615 |
| No log | 0.9762 | 82 | 0.7633 | -0.2791 | 0.7633 | 0.8737 |
| No log | 1.0 | 84 | 0.8790 | -0.4808 | 0.8790 | 0.9376 |
| No log | 1.0238 | 86 | 1.2850 | 0.0253 | 1.2850 | 1.1336 |
| No log | 1.0476 | 88 | 1.4705 | 0.0120 | 1.4705 | 1.2127 |
| No log | 1.0714 | 90 | 1.4255 | 0.0120 | 1.4255 | 1.1939 |
| No log | 1.0952 | 92 | 1.1068 | 0.0403 | 1.1068 | 1.0521 |
| No log | 1.1190 | 94 | 0.9656 | -0.2692 | 0.9656 | 0.9826 |
| No log | 1.1429 | 96 | 1.0669 | 0.0571 | 1.0669 | 1.0329 |
| No log | 1.1667 | 98 | 0.9090 | -0.2623 | 0.9090 | 0.9534 |
| No log | 1.1905 | 100 | 0.6427 | -0.0421 | 0.6427 | 0.8017 |
| No log | 1.2143 | 102 | 0.7432 | 0.1239 | 0.7432 | 0.8621 |
| No log | 1.2381 | 104 | 0.6904 | -0.0421 | 0.6904 | 0.8309 |
| No log | 1.2619 | 106 | 0.6379 | 0.0 | 0.6379 | 0.7987 |
| No log | 1.2857 | 108 | 0.6442 | 0.0 | 0.6442 | 0.8026 |
| No log | 1.3095 | 110 | 0.6169 | 0.0 | 0.6169 | 0.7855 |
| No log | 1.3333 | 112 | 0.8070 | 0.0763 | 0.8070 | 0.8983 |
| No log | 1.3571 | 114 | 1.2333 | 0.0 | 1.2333 | 1.1105 |
| No log | 1.3810 | 116 | 1.1418 | 0.0253 | 1.1418 | 1.0685 |
| No log | 1.4048 | 118 | 0.7193 | 0.0763 | 0.7193 | 0.8481 |
| No log | 1.4286 | 120 | 0.5925 | 0.0 | 0.5925 | 0.7698 |
| No log | 1.4524 | 122 | 0.5883 | 0.0 | 0.5883 | 0.7670 |
| No log | 1.4762 | 124 | 0.6245 | 0.4211 | 0.6245 | 0.7903 |
| No log | 1.5 | 126 | 0.8013 | 0.0571 | 0.8013 | 0.8952 |
| No log | 1.5238 | 128 | 0.9190 | 0.0571 | 0.9190 | 0.9586 |
| No log | 1.5476 | 130 | 0.7998 | 0.0763 | 0.7998 | 0.8943 |
| No log | 1.5714 | 132 | 0.6704 | -0.0233 | 0.6704 | 0.8188 |
| No log | 1.5952 | 134 | 0.7139 | 0.0 | 0.7139 | 0.8450 |
| No log | 1.6190 | 136 | 0.7536 | 0.0222 | 0.7536 | 0.8681 |
| No log | 1.6429 | 138 | 0.7898 | 0.0222 | 0.7898 | 0.8887 |
| No log | 1.6667 | 140 | 0.9175 | -0.0154 | 0.9175 | 0.9579 |
| No log | 1.6905 | 142 | 1.0658 | 0.0763 | 1.0658 | 1.0324 |
| No log | 1.7143 | 144 | 1.3366 | 0.0571 | 1.3366 | 1.1561 |
| No log | 1.7381 | 146 | 1.3149 | 0.0571 | 1.3149 | 1.1467 |
| No log | 1.7619 | 148 | 1.0729 | -0.2623 | 1.0729 | 1.0358 |
| No log | 1.7857 | 150 | 0.9215 | -0.0577 | 0.9215 | 0.9599 |
| No log | 1.8095 | 152 | 0.8677 | -0.2737 | 0.8677 | 0.9315 |
| No log | 1.8333 | 154 | 0.9060 | -0.0577 | 0.9060 | 0.9518 |
| No log | 1.8571 | 156 | 0.8206 | -0.2737 | 0.8206 | 0.9059 |
| No log | 1.8810 | 158 | 0.7161 | -0.0233 | 0.7161 | 0.8462 |
| No log | 1.9048 | 160 | 0.6828 | 0.0 | 0.6828 | 0.8263 |
| No log | 1.9286 | 162 | 0.7079 | -0.0233 | 0.7079 | 0.8414 |
| No log | 1.9524 | 164 | 0.8325 | 0.1895 | 0.8325 | 0.9124 |
| No log | 1.9762 | 166 | 0.9309 | -0.0708 | 0.9309 | 0.9648 |
| No log | 2.0 | 168 | 0.8407 | 0.1538 | 0.8407 | 0.9169 |
| No log | 2.0238 | 170 | 0.7086 | 0.1895 | 0.7086 | 0.8418 |
| No log | 2.0476 | 172 | 0.6871 | 0.1895 | 0.6871 | 0.8289 |
| No log | 2.0714 | 174 | 0.6977 | 0.1895 | 0.6977 | 0.8353 |
| No log | 2.0952 | 176 | 0.7223 | 0.1895 | 0.7223 | 0.8499 |
| No log | 2.1190 | 178 | 0.8651 | 0.1895 | 0.8651 | 0.9301 |
| No log | 2.1429 | 180 | 0.9079 | -0.0577 | 0.9079 | 0.9528 |
| No log | 2.1667 | 182 | 0.8224 | 0.1895 | 0.8224 | 0.9068 |
| No log | 2.1905 | 184 | 0.7224 | 0.1895 | 0.7224 | 0.8499 |
| No log | 2.2143 | 186 | 0.6403 | 0.1895 | 0.6403 | 0.8002 |
| No log | 2.2381 | 188 | 0.6995 | 0.1895 | 0.6995 | 0.8364 |
| No log | 2.2619 | 190 | 0.9390 | -0.0820 | 0.9390 | 0.9690 |
| No log | 2.2857 | 192 | 1.5264 | 0.0873 | 1.5264 | 1.2355 |
| No log | 2.3095 | 194 | 1.5570 | 0.0873 | 1.5570 | 1.2478 |
| No log | 2.3333 | 196 | 1.1148 | 0.0253 | 1.1148 | 1.0559 |
| No log | 2.3571 | 198 | 0.7730 | -0.0577 | 0.7730 | 0.8792 |
| No log | 2.3810 | 200 | 0.6416 | 0.1895 | 0.6416 | 0.8010 |
| No log | 2.4048 | 202 | 0.6195 | 0.1895 | 0.6195 | 0.7871 |
| No log | 2.4286 | 204 | 0.6287 | -0.0233 | 0.6287 | 0.7929 |
| No log | 2.4524 | 206 | 0.6514 | 0.1895 | 0.6514 | 0.8071 |
| No log | 2.4762 | 208 | 0.6680 | 0.1895 | 0.6680 | 0.8173 |
| No log | 2.5 | 210 | 0.6328 | -0.0233 | 0.6328 | 0.7955 |
| No log | 2.5238 | 212 | 0.6459 | -0.0233 | 0.6459 | 0.8037 |
| No log | 2.5476 | 214 | 0.7335 | 0.1895 | 0.7335 | 0.8564 |
| No log | 2.5714 | 216 | 0.8351 | 0.1270 | 0.8351 | 0.9138 |
| No log | 2.5952 | 218 | 1.0127 | -0.0593 | 1.0127 | 1.0063 |
| No log | 2.6190 | 220 | 0.9720 | -0.0593 | 0.9720 | 0.9859 |
| No log | 2.6429 | 222 | 0.8506 | 0.1270 | 0.8506 | 0.9223 |
| No log | 2.6667 | 224 | 0.8260 | 0.1852 | 0.8260 | 0.9089 |
| No log | 2.6905 | 226 | 0.7444 | -0.0233 | 0.7444 | 0.8628 |
| No log | 2.7143 | 228 | 0.7270 | -0.0233 | 0.7270 | 0.8526 |
| No log | 2.7381 | 230 | 0.7485 | 0.1895 | 0.7485 | 0.8652 |
| No log | 2.7619 | 232 | 0.8024 | 0.1895 | 0.8024 | 0.8957 |
| No log | 2.7857 | 234 | 0.7359 | 0.1895 | 0.7359 | 0.8578 |
| No log | 2.8095 | 236 | 0.6246 | -0.0233 | 0.6246 | 0.7903 |
| No log | 2.8333 | 238 | 0.6041 | 0.0 | 0.6041 | 0.7773 |
| No log | 2.8571 | 240 | 0.6421 | -0.0233 | 0.6421 | 0.8013 |
| No log | 2.8810 | 242 | 0.7927 | 0.1895 | 0.7927 | 0.8903 |
| No log | 2.9048 | 244 | 1.0349 | -0.0577 | 1.0349 | 1.0173 |
| No log | 2.9286 | 246 | 1.3748 | -0.0686 | 1.3748 | 1.1725 |
| No log | 2.9524 | 248 | 1.4972 | -0.1748 | 1.4972 | 1.2236 |
| No log | 2.9762 | 250 | 1.2263 | -0.0577 | 1.2263 | 1.1074 |
| No log | 3.0 | 252 | 1.1699 | -0.0577 | 1.1699 | 1.0816 |
| No log | 3.0238 | 254 | 1.1834 | -0.0577 | 1.1834 | 1.0878 |
| No log | 3.0476 | 256 | 1.3036 | 0.0833 | 1.3036 | 1.1418 |
| No log | 3.0714 | 258 | 1.3534 | -0.0829 | 1.3534 | 1.1634 |
| No log | 3.0952 | 260 | 1.1091 | -0.0577 | 1.1091 | 1.0531 |
| No log | 3.1190 | 262 | 0.8620 | 0.1852 | 0.8620 | 0.9284 |
| No log | 3.1429 | 264 | 0.7880 | 0.2143 | 0.7880 | 0.8877 |
| No log | 3.1667 | 266 | 0.6451 | 0.1895 | 0.6451 | 0.8032 |
| No log | 3.1905 | 268 | 0.6522 | 0.1895 | 0.6522 | 0.8076 |
| No log | 3.2143 | 270 | 0.6687 | 0.1895 | 0.6687 | 0.8177 |
| No log | 3.2381 | 272 | 0.6456 | 0.1895 | 0.6456 | 0.8035 |
| No log | 3.2619 | 274 | 0.5733 | 0.1895 | 0.5733 | 0.7572 |
| No log | 3.2857 | 276 | 0.6235 | 0.1895 | 0.6235 | 0.7896 |
| No log | 3.3095 | 278 | 0.6383 | 0.1895 | 0.6383 | 0.7989 |
| No log | 3.3333 | 280 | 0.6786 | 0.1895 | 0.6786 | 0.8238 |
| No log | 3.3571 | 282 | 0.6602 | 0.1895 | 0.6602 | 0.8125 |
| No log | 3.3810 | 284 | 0.7220 | 0.1895 | 0.7220 | 0.8497 |
| No log | 3.4048 | 286 | 0.7167 | 0.2326 | 0.7167 | 0.8466 |
| No log | 3.4286 | 288 | 0.6068 | 0.1895 | 0.6068 | 0.7790 |
| No log | 3.4524 | 290 | 0.6393 | 0.1895 | 0.6393 | 0.7996 |
| No log | 3.4762 | 292 | 0.8190 | 0.0984 | 0.8190 | 0.9050 |
| No log | 3.5 | 294 | 0.9751 | 0.0984 | 0.9751 | 0.9875 |
| No log | 3.5238 | 296 | 0.9127 | 0.0984 | 0.9127 | 0.9553 |
| No log | 3.5476 | 298 | 0.7281 | 0.1895 | 0.7281 | 0.8533 |
| No log | 3.5714 | 300 | 0.8366 | -0.1224 | 0.8366 | 0.9147 |
| No log | 3.5952 | 302 | 0.8395 | -0.1224 | 0.8395 | 0.9162 |
| No log | 3.6190 | 304 | 0.7047 | 0.1895 | 0.7047 | 0.8395 |
| No log | 3.6429 | 306 | 0.7205 | 0.1895 | 0.7205 | 0.8488 |
| No log | 3.6667 | 308 | 0.7547 | 0.1895 | 0.7547 | 0.8688 |
| No log | 3.6905 | 310 | 0.7087 | 0.1895 | 0.7087 | 0.8418 |
| No log | 3.7143 | 312 | 0.6537 | 0.1895 | 0.6537 | 0.8085 |
| No log | 3.7381 | 314 | 0.6616 | 0.1895 | 0.6616 | 0.8134 |
| No log | 3.7619 | 316 | 0.7274 | 0.1895 | 0.7274 | 0.8529 |
| No log | 3.7857 | 318 | 0.7490 | 0.1895 | 0.7490 | 0.8655 |
| No log | 3.8095 | 320 | 0.6904 | 0.1895 | 0.6904 | 0.8309 |
| No log | 3.8333 | 322 | 0.6779 | 0.1895 | 0.6779 | 0.8233 |
| No log | 3.8571 | 324 | 0.6646 | 0.1895 | 0.6646 | 0.8152 |
| No log | 3.8810 | 326 | 0.7118 | 0.1895 | 0.7118 | 0.8437 |
| No log | 3.9048 | 328 | 0.8149 | -0.0577 | 0.8149 | 0.9027 |
| No log | 3.9286 | 330 | 0.9550 | -0.0916 | 0.9550 | 0.9772 |
| No log | 3.9524 | 332 | 0.9220 | -0.0708 | 0.9220 | 0.9602 |
| No log | 3.9762 | 334 | 0.9188 | -0.0708 | 0.9188 | 0.9585 |
| No log | 4.0 | 336 | 0.9137 | -0.0708 | 0.9137 | 0.9559 |
| No log | 4.0238 | 338 | 0.8713 | -0.0577 | 0.8713 | 0.9334 |
| No log | 4.0476 | 340 | 0.9617 | -0.1074 | 0.9617 | 0.9807 |
| No log | 4.0714 | 342 | 0.8883 | -0.0708 | 0.8883 | 0.9425 |
| No log | 4.0952 | 344 | 0.8646 | -0.0708 | 0.8646 | 0.9298 |
| No log | 4.1190 | 346 | 0.8873 | -0.1000 | 0.8873 | 0.9420 |
| No log | 4.1429 | 348 | 0.8191 | -0.0577 | 0.8191 | 0.9050 |
| No log | 4.1667 | 350 | 0.7325 | 0.1895 | 0.7325 | 0.8559 |
| No log | 4.1905 | 352 | 0.6920 | 0.1895 | 0.6920 | 0.8319 |
| No log | 4.2143 | 354 | 0.6994 | 0.1895 | 0.6994 | 0.8363 |
| No log | 4.2381 | 356 | 0.7181 | 0.1895 | 0.7181 | 0.8474 |
| No log | 4.2619 | 358 | 0.7879 | 0.1538 | 0.7879 | 0.8876 |
| No log | 4.2857 | 360 | 0.7800 | 0.1538 | 0.7800 | 0.8831 |
| No log | 4.3095 | 362 | 0.6914 | 0.1895 | 0.6914 | 0.8315 |
| No log | 4.3333 | 364 | 0.6843 | 0.1895 | 0.6843 | 0.8272 |
| No log | 4.3571 | 366 | 0.7442 | 0.1895 | 0.7442 | 0.8627 |
| No log | 4.3810 | 368 | 0.7732 | -0.0577 | 0.7732 | 0.8793 |
| No log | 4.4048 | 370 | 0.8039 | -0.0577 | 0.8039 | 0.8966 |
| No log | 4.4286 | 372 | 0.7379 | 0.1895 | 0.7379 | 0.8590 |
| No log | 4.4524 | 374 | 0.7521 | 0.1895 | 0.7521 | 0.8672 |
| No log | 4.4762 | 376 | 0.8622 | -0.0577 | 0.8622 | 0.9286 |
| No log | 4.5 | 378 | 1.0460 | 0.0654 | 1.0460 | 1.0227 |
| No log | 4.5238 | 380 | 1.0705 | 0.0654 | 1.0705 | 1.0347 |
| No log | 4.5476 | 382 | 1.0044 | 0.0654 | 1.0044 | 1.0022 |
| No log | 4.5714 | 384 | 0.8652 | -0.0577 | 0.8652 | 0.9302 |
| No log | 4.5952 | 386 | 0.7997 | 0.1895 | 0.7997 | 0.8942 |
| No log | 4.6190 | 388 | 0.7859 | 0.1852 | 0.7859 | 0.8865 |
| No log | 4.6429 | 390 | 0.7632 | 0.1895 | 0.7632 | 0.8736 |
| No log | 4.6667 | 392 | 0.8143 | -0.0577 | 0.8143 | 0.9024 |
| No log | 4.6905 | 394 | 0.8783 | -0.0708 | 0.8783 | 0.9372 |
| No log | 4.7143 | 396 | 0.9830 | -0.0708 | 0.9830 | 0.9915 |
| No log | 4.7381 | 398 | 0.9788 | -0.0708 | 0.9788 | 0.9893 |
| No log | 4.7619 | 400 | 0.8891 | -0.0577 | 0.8891 | 0.9429 |
| No log | 4.7857 | 402 | 0.8682 | 0.1895 | 0.8682 | 0.9318 |
| No log | 4.8095 | 404 | 0.9194 | -0.0577 | 0.9194 | 0.9589 |
| No log | 4.8333 | 406 | 1.0533 | -0.0708 | 1.0533 | 1.0263 |
| No log | 4.8571 | 408 | 1.2333 | -0.0864 | 1.2333 | 1.1106 |
| No log | 4.8810 | 410 | 1.2617 | -0.0936 | 1.2617 | 1.1233 |
| No log | 4.9048 | 412 | 1.1690 | 0.0654 | 1.1690 | 1.0812 |
| No log | 4.9286 | 414 | 0.9699 | -0.0708 | 0.9699 | 0.9848 |
| No log | 4.9524 | 416 | 0.8089 | 0.1895 | 0.8089 | 0.8994 |
| No log | 4.9762 | 418 | 0.7304 | 0.1895 | 0.7304 | 0.8547 |
| No log | 5.0 | 420 | 0.7373 | 0.1895 | 0.7373 | 0.8587 |
| No log | 5.0238 | 422 | 0.7842 | 0.1895 | 0.7842 | 0.8856 |
| No log | 5.0476 | 424 | 0.8816 | -0.0708 | 0.8816 | 0.9389 |
| No log | 5.0714 | 426 | 0.9446 | -0.0708 | 0.9446 | 0.9719 |
| No log | 5.0952 | 428 | 0.9600 | -0.0708 | 0.9600 | 0.9798 |
| No log | 5.1190 | 430 | 0.9416 | -0.0577 | 0.9416 | 0.9704 |
| No log | 5.1429 | 432 | 0.9540 | -0.0577 | 0.9540 | 0.9767 |
| No log | 5.1667 | 434 | 0.9770 | -0.0577 | 0.9770 | 0.9884 |
| No log | 5.1905 | 436 | 1.0307 | -0.0577 | 1.0307 | 1.0152 |
| No log | 5.2143 | 438 | 1.0846 | -0.2623 | 1.0846 | 1.0414 |
| No log | 5.2381 | 440 | 1.1897 | -0.2595 | 1.1897 | 1.0907 |
| No log | 5.2619 | 442 | 1.1705 | -0.2595 | 1.1705 | 1.0819 |
| No log | 5.2857 | 444 | 1.0453 | -0.2623 | 1.0453 | 1.0224 |
| No log | 5.3095 | 446 | 1.0004 | -0.2655 | 1.0004 | 1.0002 |
| No log | 5.3333 | 448 | 0.9626 | -0.0577 | 0.9626 | 0.9811 |
| No log | 5.3571 | 450 | 0.9618 | -0.0577 | 0.9618 | 0.9807 |
| No log | 5.3810 | 452 | 0.9812 | -0.0577 | 0.9812 | 0.9906 |
| No log | 5.4048 | 454 | 0.9915 | -0.0577 | 0.9915 | 0.9957 |
| No log | 5.4286 | 456 | 0.9729 | -0.0577 | 0.9729 | 0.9864 |
| No log | 5.4524 | 458 | 0.9266 | -0.0708 | 0.9266 | 0.9626 |
| No log | 5.4762 | 460 | 0.9070 | -0.0708 | 0.9070 | 0.9523 |
| No log | 5.5 | 462 | 0.8438 | -0.0577 | 0.8438 | 0.9186 |
| No log | 5.5238 | 464 | 0.8292 | -0.0708 | 0.8292 | 0.9106 |
| No log | 5.5476 | 466 | 0.8522 | -0.0708 | 0.8522 | 0.9231 |
| No log | 5.5714 | 468 | 0.9341 | -0.0820 | 0.9341 | 0.9665 |
| No log | 5.5952 | 470 | 0.9539 | -0.0820 | 0.9539 | 0.9767 |
| No log | 5.6190 | 472 | 0.8792 | -0.0708 | 0.8792 | 0.9377 |
| No log | 5.6429 | 474 | 0.9084 | -0.0708 | 0.9084 | 0.9531 |
| No log | 5.6667 | 476 | 0.9351 | -0.0708 | 0.9351 | 0.9670 |
| No log | 5.6905 | 478 | 1.0915 | -0.0820 | 1.0915 | 1.0447 |
| No log | 5.7143 | 480 | 1.2434 | -0.2550 | 1.2434 | 1.1151 |
| No log | 5.7381 | 482 | 1.2569 | -0.2550 | 1.2569 | 1.1211 |
| No log | 5.7619 | 484 | 1.1550 | -0.2571 | 1.1550 | 1.0747 |
| No log | 5.7857 | 486 | 1.0461 | -0.0820 | 1.0461 | 1.0228 |
| No log | 5.8095 | 488 | 0.9263 | -0.0708 | 0.9263 | 0.9624 |
| No log | 5.8333 | 490 | 0.9228 | -0.0708 | 0.9228 | 0.9606 |
| No log | 5.8571 | 492 | 0.9763 | -0.0708 | 0.9763 | 0.9881 |
| No log | 5.8810 | 494 | 1.0909 | -0.0708 | 1.0909 | 1.0444 |
| No log | 5.9048 | 496 | 1.1359 | -0.0708 | 1.1359 | 1.0658 |
| No log | 5.9286 | 498 | 1.0691 | -0.0708 | 1.0691 | 1.0340 |
| 0.4043 | 5.9524 | 500 | 0.9757 | -0.0708 | 0.9757 | 0.9878 |
| 0.4043 | 5.9762 | 502 | 0.8897 | -0.0708 | 0.8897 | 0.9433 |
| 0.4043 | 6.0 | 504 | 0.8610 | -0.0577 | 0.8610 | 0.9279 |
| 0.4043 | 6.0238 | 506 | 0.8492 | -0.0708 | 0.8492 | 0.9215 |
| 0.4043 | 6.0476 | 508 | 0.8873 | -0.0708 | 0.8873 | 0.9419 |
| 0.4043 | 6.0714 | 510 | 0.9019 | -0.0708 | 0.9019 | 0.9497 |
| 0.4043 | 6.0952 | 512 | 0.9810 | -0.0820 | 0.9810 | 0.9905 |
| 0.4043 | 6.1190 | 514 | 1.0104 | -0.0820 | 1.0104 | 1.0052 |
| 0.4043 | 6.1429 | 516 | 1.0803 | -0.0820 | 1.0803 | 1.0394 |
| 0.4043 | 6.1667 | 518 | 1.1271 | -0.0820 | 1.1271 | 1.0617 |
| 0.4043 | 6.1905 | 520 | 1.0852 | -0.0820 | 1.0852 | 1.0417 |
| 0.4043 | 6.2143 | 522 | 1.0063 | -0.0820 | 1.0063 | 1.0032 |
| 0.4043 | 6.2381 | 524 | 0.9277 | -0.0708 | 0.9277 | 0.9632 |
| 0.4043 | 6.2619 | 526 | 0.8538 | -0.0577 | 0.8538 | 0.9240 |
| 0.4043 | 6.2857 | 528 | 0.8712 | -0.0577 | 0.8712 | 0.9334 |
| 0.4043 | 6.3095 | 530 | 0.9539 | -0.0820 | 0.9539 | 0.9767 |
| 0.4043 | 6.3333 | 532 | 1.0729 | -0.0916 | 1.0729 | 1.0358 |
| 0.4043 | 6.3571 | 534 | 1.1774 | -0.1000 | 1.1774 | 1.0851 |
| 0.4043 | 6.3810 | 536 | 1.2147 | -0.1000 | 1.2147 | 1.1022 |
| 0.4043 | 6.4048 | 538 | 1.1667 | -0.1000 | 1.1667 | 1.0801 |
| 0.4043 | 6.4286 | 540 | 1.1143 | 0.0654 | 1.1143 | 1.0556 |
| 0.4043 | 6.4524 | 542 | 1.1052 | 0.0833 | 1.1052 | 1.0513 |
| 0.4043 | 6.4762 | 544 | 1.1786 | 0.0833 | 1.1786 | 1.0857 |
| 0.4043 | 6.5 | 546 | 1.1690 | 0.0833 | 1.1690 | 1.0812 |
| 0.4043 | 6.5238 | 548 | 1.0900 | 0.0833 | 1.0900 | 1.0440 |
| 0.4043 | 6.5476 | 550 | 0.9957 | -0.0577 | 0.9957 | 0.9979 |
| 0.4043 | 6.5714 | 552 | 0.9590 | -0.0577 | 0.9590 | 0.9793 |
| 0.4043 | 6.5952 | 554 | 0.9400 | -0.0577 | 0.9400 | 0.9695 |
| 0.4043 | 6.6190 | 556 | 0.9413 | -0.0708 | 0.9413 | 0.9702 |
| 0.4043 | 6.6429 | 558 | 0.9567 | -0.0820 | 0.9567 | 0.9781 |
| 0.4043 | 6.6667 | 560 | 0.9771 | -0.0820 | 0.9771 | 0.9885 |
| 0.4043 | 6.6905 | 562 | 1.0007 | -0.0820 | 1.0007 | 1.0003 |
| 0.4043 | 6.7143 | 564 | 0.9945 | -0.0820 | 0.9945 | 0.9973 |
| 0.4043 | 6.7381 | 566 | 0.9594 | -0.0577 | 0.9594 | 0.9795 |
| 0.4043 | 6.7619 | 568 | 0.9403 | -0.0577 | 0.9403 | 0.9697 |
| 0.4043 | 6.7857 | 570 | 0.9544 | -0.0577 | 0.9544 | 0.9769 |
| 0.4043 | 6.8095 | 572 | 1.0105 | -0.0577 | 1.0105 | 1.0053 |
| 0.4043 | 6.8333 | 574 | 1.0013 | -0.0577 | 1.0013 | 1.0006 |
| 0.4043 | 6.8571 | 576 | 0.9724 | -0.0577 | 0.9724 | 0.9861 |
| 0.4043 | 6.8810 | 578 | 0.9248 | -0.0577 | 0.9248 | 0.9617 |
| 0.4043 | 6.9048 | 580 | 0.9033 | -0.0577 | 0.9033 | 0.9504 |
| 0.4043 | 6.9286 | 582 | 0.8999 | -0.0577 | 0.8999 | 0.9486 |
| 0.4043 | 6.9524 | 584 | 0.9143 | -0.0708 | 0.9143 | 0.9562 |
| 0.4043 | 6.9762 | 586 | 0.9303 | -0.0708 | 0.9303 | 0.9645 |
| 0.4043 | 7.0 | 588 | 0.9220 | -0.0708 | 0.9220 | 0.9602 |
| 0.4043 | 7.0238 | 590 | 0.9006 | -0.0708 | 0.9006 | 0.9490 |
| 0.4043 | 7.0476 | 592 | 0.8779 | 0.1895 | 0.8779 | 0.9369 |
| 0.4043 | 7.0714 | 594 | 0.8867 | 0.1895 | 0.8867 | 0.9417 |
| 0.4043 | 7.0952 | 596 | 0.9163 | -0.2222 | 0.9163 | 0.9572 |
| 0.4043 | 7.1190 | 598 | 0.9445 | -0.2222 | 0.9445 | 0.9719 |
| 0.4043 | 7.1429 | 600 | 0.9870 | -0.0708 | 0.9870 | 0.9935 |
| 0.4043 | 7.1667 | 602 | 1.0473 | -0.0820 | 1.0473 | 1.0234 |
| 0.4043 | 7.1905 | 604 | 1.0717 | -0.0820 | 1.0717 | 1.0352 |
| 0.4043 | 7.2143 | 606 | 1.0772 | -0.0820 | 1.0772 | 1.0379 |
| 0.4043 | 7.2381 | 608 | 1.0895 | -0.0820 | 1.0895 | 1.0438 |
| 0.4043 | 7.2619 | 610 | 1.1125 | -0.0916 | 1.1125 | 1.0548 |
| 0.4043 | 7.2857 | 612 | 1.0709 | -0.0820 | 1.0709 | 1.0349 |
| 0.4043 | 7.3095 | 614 | 1.0060 | -0.0708 | 1.0060 | 1.0030 |
| 0.4043 | 7.3333 | 616 | 0.9766 | -0.0708 | 0.9766 | 0.9882 |
| 0.4043 | 7.3571 | 618 | 0.9681 | -0.0577 | 0.9681 | 0.9839 |
| 0.4043 | 7.3810 | 620 | 0.9710 | -0.0577 | 0.9710 | 0.9854 |
| 0.4043 | 7.4048 | 622 | 1.0013 | -0.0708 | 1.0013 | 1.0007 |
| 0.4043 | 7.4286 | 624 | 1.0799 | -0.0708 | 1.0799 | 1.0392 |
| 0.4043 | 7.4524 | 626 | 1.1841 | -0.2595 | 1.1841 | 1.0882 |
| 0.4043 | 7.4762 | 628 | 1.2081 | -0.2550 | 1.2081 | 1.0991 |
| 0.4043 | 7.5 | 630 | 1.2303 | -0.2550 | 1.2303 | 1.1092 |
| 0.4043 | 7.5238 | 632 | 1.2125 | -0.2550 | 1.2125 | 1.1011 |
| 0.4043 | 7.5476 | 634 | 1.1389 | -0.2595 | 1.1389 | 1.0672 |
| 0.4043 | 7.5714 | 636 | 1.0885 | -0.0820 | 1.0885 | 1.0433 |
| 0.4043 | 7.5952 | 638 | 1.0465 | -0.0820 | 1.0465 | 1.0230 |
| 0.4043 | 7.6190 | 640 | 1.0406 | -0.0820 | 1.0406 | 1.0201 |
| 0.4043 | 7.6429 | 642 | 1.0742 | -0.0820 | 1.0742 | 1.0364 |
| 0.4043 | 7.6667 | 644 | 1.1050 | -0.0820 | 1.1050 | 1.0512 |
| 0.4043 | 7.6905 | 646 | 1.1073 | -0.0820 | 1.1073 | 1.0523 |
| 0.4043 | 7.7143 | 648 | 1.1002 | -0.0820 | 1.1002 | 1.0489 |
| 0.4043 | 7.7381 | 650 | 1.0965 | -0.0708 | 1.0965 | 1.0472 |
| 0.4043 | 7.7619 | 652 | 1.0260 | -0.0708 | 1.0260 | 1.0129 |
| 0.4043 | 7.7857 | 654 | 0.9671 | -0.0708 | 0.9671 | 0.9834 |
| 0.4043 | 7.8095 | 656 | 0.9216 | -0.0577 | 0.9216 | 0.9600 |
| 0.4043 | 7.8333 | 658 | 0.8723 | -0.0577 | 0.8723 | 0.9339 |
| 0.4043 | 7.8571 | 660 | 0.8533 | -0.0577 | 0.8533 | 0.9238 |
| 0.4043 | 7.8810 | 662 | 0.8569 | -0.0577 | 0.8569 | 0.9257 |
| 0.4043 | 7.9048 | 664 | 0.8866 | -0.0708 | 0.8866 | 0.9416 |
| 0.4043 | 7.9286 | 666 | 0.8788 | -0.0708 | 0.8788 | 0.9374 |
| 0.4043 | 7.9524 | 668 | 0.8810 | -0.0708 | 0.8810 | 0.9386 |
| 0.4043 | 7.9762 | 670 | 0.8930 | -0.0708 | 0.8930 | 0.9450 |
| 0.4043 | 8.0 | 672 | 0.9147 | -0.0708 | 0.9147 | 0.9564 |
| 0.4043 | 8.0238 | 674 | 0.9497 | -0.0708 | 0.9497 | 0.9745 |
| 0.4043 | 8.0476 | 676 | 0.9555 | -0.0708 | 0.9555 | 0.9775 |
| 0.4043 | 8.0714 | 678 | 0.9611 | -0.0708 | 0.9611 | 0.9804 |
| 0.4043 | 8.0952 | 680 | 0.9538 | -0.0708 | 0.9538 | 0.9766 |
| 0.4043 | 8.1190 | 682 | 0.9465 | -0.0708 | 0.9465 | 0.9729 |
| 0.4043 | 8.1429 | 684 | 0.9693 | -0.0708 | 0.9693 | 0.9845 |
| 0.4043 | 8.1667 | 686 | 1.0155 | -0.0708 | 1.0155 | 1.0077 |
| 0.4043 | 8.1905 | 688 | 1.0399 | -0.0820 | 1.0399 | 1.0197 |
| 0.4043 | 8.2143 | 690 | 1.0386 | -0.0820 | 1.0386 | 1.0191 |
| 0.4043 | 8.2381 | 692 | 1.0024 | -0.0708 | 1.0024 | 1.0012 |
| 0.4043 | 8.2619 | 694 | 0.9724 | -0.0708 | 0.9724 | 0.9861 |
| 0.4043 | 8.2857 | 696 | 0.9520 | -0.0708 | 0.9520 | 0.9757 |
| 0.4043 | 8.3095 | 698 | 0.9782 | -0.0708 | 0.9782 | 0.9890 |
| 0.4043 | 8.3333 | 700 | 0.9992 | -0.0708 | 0.9992 | 0.9996 |
| 0.4043 | 8.3571 | 702 | 0.9987 | -0.0820 | 0.9987 | 0.9993 |
| 0.4043 | 8.3810 | 704 | 0.9905 | -0.0708 | 0.9905 | 0.9952 |
| 0.4043 | 8.4048 | 706 | 0.9760 | -0.0708 | 0.9760 | 0.9879 |
| 0.4043 | 8.4286 | 708 | 0.9739 | -0.0708 | 0.9739 | 0.9868 |
| 0.4043 | 8.4524 | 710 | 0.9744 | -0.0708 | 0.9744 | 0.9871 |
| 0.4043 | 8.4762 | 712 | 0.9791 | -0.0708 | 0.9791 | 0.9895 |
| 0.4043 | 8.5 | 714 | 0.9978 | -0.0820 | 0.9978 | 0.9989 |
| 0.4043 | 8.5238 | 716 | 1.0020 | -0.0820 | 1.0020 | 1.0010 |
| 0.4043 | 8.5476 | 718 | 1.0064 | -0.0820 | 1.0064 | 1.0032 |
| 0.4043 | 8.5714 | 720 | 1.0366 | -0.0820 | 1.0366 | 1.0181 |
| 0.4043 | 8.5952 | 722 | 1.0446 | -0.0820 | 1.0446 | 1.0221 |
| 0.4043 | 8.6190 | 724 | 1.0506 | -0.0820 | 1.0506 | 1.0250 |
| 0.4043 | 8.6429 | 726 | 1.0406 | -0.0820 | 1.0406 | 1.0201 |
| 0.4043 | 8.6667 | 728 | 1.0288 | -0.0820 | 1.0288 | 1.0143 |
| 0.4043 | 8.6905 | 730 | 1.0226 | -0.0708 | 1.0226 | 1.0112 |
| 0.4043 | 8.7143 | 732 | 1.0186 | -0.0708 | 1.0186 | 1.0092 |
| 0.4043 | 8.7381 | 734 | 1.0278 | -0.0708 | 1.0278 | 1.0138 |
| 0.4043 | 8.7619 | 736 | 1.0339 | -0.0708 | 1.0339 | 1.0168 |
| 0.4043 | 8.7857 | 738 | 1.0329 | -0.0708 | 1.0329 | 1.0163 |
| 0.4043 | 8.8095 | 740 | 1.0340 | -0.0708 | 1.0340 | 1.0169 |
| 0.4043 | 8.8333 | 742 | 1.0276 | -0.0708 | 1.0276 | 1.0137 |
| 0.4043 | 8.8571 | 744 | 1.0371 | -0.0708 | 1.0371 | 1.0184 |
| 0.4043 | 8.8810 | 746 | 1.0721 | -0.0708 | 1.0721 | 1.0354 |
| 0.4043 | 8.9048 | 748 | 1.1299 | -0.0820 | 1.1299 | 1.0630 |
| 0.4043 | 8.9286 | 750 | 1.1670 | -0.2595 | 1.1670 | 1.0803 |
| 0.4043 | 8.9524 | 752 | 1.1760 | -0.2595 | 1.1760 | 1.0844 |
| 0.4043 | 8.9762 | 754 | 1.1600 | -0.2595 | 1.1600 | 1.0770 |
| 0.4043 | 9.0 | 756 | 1.1248 | -0.0820 | 1.1248 | 1.0606 |
| 0.4043 | 9.0238 | 758 | 1.0874 | -0.0708 | 1.0874 | 1.0428 |
| 0.4043 | 9.0476 | 760 | 1.0429 | -0.0708 | 1.0429 | 1.0212 |
| 0.4043 | 9.0714 | 762 | 1.0067 | -0.0708 | 1.0067 | 1.0034 |
| 0.4043 | 9.0952 | 764 | 0.9867 | -0.0708 | 0.9867 | 0.9933 |
| 0.4043 | 9.1190 | 766 | 0.9743 | -0.0708 | 0.9743 | 0.9871 |
| 0.4043 | 9.1429 | 768 | 0.9790 | -0.0708 | 0.9790 | 0.9895 |
| 0.4043 | 9.1667 | 770 | 0.9933 | -0.0708 | 0.9933 | 0.9966 |
| 0.4043 | 9.1905 | 772 | 0.9940 | -0.0708 | 0.9940 | 0.9970 |
| 0.4043 | 9.2143 | 774 | 0.9897 | -0.0708 | 0.9897 | 0.9948 |
| 0.4043 | 9.2381 | 776 | 0.9750 | -0.0708 | 0.9750 | 0.9874 |
| 0.4043 | 9.2619 | 778 | 0.9698 | -0.0708 | 0.9698 | 0.9848 |
| 0.4043 | 9.2857 | 780 | 0.9683 | -0.0708 | 0.9683 | 0.9840 |
| 0.4043 | 9.3095 | 782 | 0.9835 | -0.0708 | 0.9835 | 0.9917 |
| 0.4043 | 9.3333 | 784 | 1.0027 | -0.0708 | 1.0027 | 1.0013 |
| 0.4043 | 9.3571 | 786 | 1.0183 | -0.0708 | 1.0183 | 1.0091 |
| 0.4043 | 9.3810 | 788 | 1.0274 | -0.0708 | 1.0274 | 1.0136 |
| 0.4043 | 9.4048 | 790 | 1.0286 | -0.0708 | 1.0286 | 1.0142 |
| 0.4043 | 9.4286 | 792 | 1.0175 | -0.0708 | 1.0175 | 1.0087 |
| 0.4043 | 9.4524 | 794 | 1.0147 | -0.0708 | 1.0147 | 1.0073 |
| 0.4043 | 9.4762 | 796 | 1.0267 | -0.0708 | 1.0267 | 1.0132 |
| 0.4043 | 9.5 | 798 | 1.0294 | -0.0708 | 1.0294 | 1.0146 |
| 0.4043 | 9.5238 | 800 | 1.0220 | -0.0708 | 1.0220 | 1.0109 |
| 0.4043 | 9.5476 | 802 | 1.0134 | -0.0708 | 1.0134 | 1.0067 |
| 0.4043 | 9.5714 | 804 | 1.0043 | -0.0708 | 1.0043 | 1.0021 |
| 0.4043 | 9.5952 | 806 | 0.9992 | -0.0708 | 0.9992 | 0.9996 |
| 0.4043 | 9.6190 | 808 | 0.9980 | -0.0708 | 0.9980 | 0.9990 |
| 0.4043 | 9.6429 | 810 | 0.9947 | -0.0708 | 0.9947 | 0.9973 |
| 0.4043 | 9.6667 | 812 | 0.9918 | -0.0708 | 0.9918 | 0.9959 |
| 0.4043 | 9.6905 | 814 | 0.9907 | -0.0708 | 0.9907 | 0.9953 |
| 0.4043 | 9.7143 | 816 | 0.9955 | -0.0708 | 0.9955 | 0.9977 |
| 0.4043 | 9.7381 | 818 | 1.0040 | -0.0708 | 1.0040 | 1.0020 |
| 0.4043 | 9.7619 | 820 | 1.0101 | -0.0708 | 1.0101 | 1.0051 |
| 0.4043 | 9.7857 | 822 | 1.0194 | -0.0708 | 1.0194 | 1.0096 |
| 0.4043 | 9.8095 | 824 | 1.0274 | -0.0708 | 1.0274 | 1.0136 |
| 0.4043 | 9.8333 | 826 | 1.0370 | -0.0708 | 1.0370 | 1.0183 |
| 0.4043 | 9.8571 | 828 | 1.0409 | -0.0708 | 1.0409 | 1.0203 |
| 0.4043 | 9.8810 | 830 | 1.0394 | -0.0708 | 1.0394 | 1.0195 |
| 0.4043 | 9.9048 | 832 | 1.0355 | -0.0708 | 1.0355 | 1.0176 |
| 0.4043 | 9.9286 | 834 | 1.0326 | -0.0708 | 1.0326 | 1.0162 |
| 0.4043 | 9.9524 | 836 | 1.0307 | -0.0708 | 1.0307 | 1.0152 |
| 0.4043 | 9.9762 | 838 | 1.0301 | -0.0708 | 1.0301 | 1.0149 |
| 0.4043 | 10.0 | 840 | 1.0299 | -0.0708 | 1.0299 | 1.0148 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-512-final | avsolatorio | 2024-11-25T11:03:24Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1943715",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-11-25T11:03:19Z | ---
base_model: sentence-transformers/all-MiniLM-L6-v2
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1943715
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: percentage of irrigated land in india is about
sentences:
- Irrigation in India Irrigation in India Irrigation in India includes a network
of major and minor canals from Indian rivers, groundwater well based systems,
tanks, and other rainwater harvesting projects for agricultural activities. Of
these groundwater system is the largest. In 2013-14, only about 47.7% of total
agricultural land in India was reliably irrigated. The largest canal in India
is Indira Gandhi Canal, which is about 650 km long. About 2/3rd cultivated land
in India is dependent on monsoons. Irrigation in India helps improve food security,
reduce dependence on monsoons, improve agricultural productivity and create rural
job opportunities. Dams used for irrigation projects
- Waiting for a Girl Like You Waiting for a Girl Like You "Waiting for a Girl Like
You" is a 1981 power ballad by the British-American rock band Foreigner. The distinctive
synthesizer theme was performed by the then-little-known Thomas Dolby, and this
song also marked a major departure from their earlier singles because their previous
singles were mid to upper tempo rock songs while this song was a softer love song
with the energy of a power ballad. It was the second single released from the
album "4" (1981) and was co-written by Lou Gramm and Mick Jones. It has become
one of the band's most
- Agriculture in India 2010, only about 35% of agricultural land in India was reliably
irrigated. About 2/3rd cultivated land in India is dependent on monsoons. The
improvements in irrigation infrastructure in the last 50 years have helped India
improve food security, reduce dependence on monsoons, improve agricultural productivity
and create rural job opportunities. Dams used for irrigation projects have helped
provide drinking water to a growing rural population, control flood and prevent
drought-related damage to agriculture. , India had a large and diverse agricultural
sector, accounting, on average, for about 16% of GDP and 10% of export earnings.
India's arable land area of
- source_sentence: 'Use of multiple antimicrobial drugs by clinical patients: a prognostic
index of hospital mortality?'
sentences:
- Recent reports have suggested that extramedullary (EM) relapse of acute myeloid
leukemia (AML) post-hematopoietic stem cell transplantation (HSCT), unlike isolated
bone marrow (BM) relapse, is associated with improved prognosis. We reviewed the
outcomes of relapsed AML post-HSCT at our institution to determine whether survival
for patients with EM relapse was truly improved in comparison to patients suffering
BM relapses treated in a similar (active) way.Outcomes of all 274 allogeneic HSCT
performed for adult AML between 2000 and 2010 at our institution were retrospectively
reviewed.As of January 2011, 72 relapses post-HSCT had occurred, including 64
BM relapses (89%), two concomitant BM and EM relapses (3%), and six EM relapses
alone (8%). EM relapses occurred significantly later post-HSCT than BM relapses
(median 25.2 vs 3.9 months, respectively; P = 0.001). Patients suffering an EM
relapse were significantly more likely to receive active therapy at relapse (7/8;
88%) than those suffering a BM relapse alone (28/64; 44%; P = 0.026). When survival
analysis was restricted to outcomes of patients treated actively (i.e., with curative
intent), no difference in outcome between EM and BM relapses was observed (median
survival 13.5 vs 8 months for EM vs BM relapses, respectively, P = 0.44).
- 'Laparoscopic box model trainers have been used in training curricula for a long
time, however data on their impact on skills acquisition is still limited. Our
aim was to validate a low cost box model trainer as a tool for the training of
skills relevant to laparoscopic surgery.Randomised, controlled trial (Canadian
Task Force Classification I).University Hospital.Sixteen gynaecologic residents
with limited laparoscopic experience were randomised to a group that received
a structured box model training curriculum, and a control group. Performance before
and after the training was assessed in a virtual reality laparoscopic trainer
(LapSim and was based on objective parameters, registered by the computer system
(time, error, and economy of motion scores). Group A showed significantly greater
improvement in all performance parameters compared with the control group: economy
of movement (p=0.001), time (p=0.001) and tissue damage (p=0.036), confirming
the positive impact of box-trainer curriculum on laparoscopic skills acquisition.'
- To quantify the use of multiple and prolonged antibiotics and anti-infective drug
therapy in clinical patients in a 144-bed hospital.Adult patients (2,790 patients
with 3,706 admissions over a period of 19 months) were investigated prospectively
regarding treatment with anti-infective agents. The mean age was 57.4 (range,
18.8-97 years), and 54.3% were females (2012).Hospital stay was 5.5 (6.7 days
(range, 2-226 days), with duration up to 10 days for 91.9% of the subjects. Antibiotics
or other agents were administered to 1,166 subjects (31.5%), 325 (8.8%) required
assistance in the ICU, and a total of 141 (3.8%) died. The association between
anti-infective drug therapy and hospital mortality was statistically significant
(P<.01) with a strong linear correlation (r = 0.902, P = .014). The quantity of
prescribed antimicrobial drugs, age, and need for ICU assistance were independent
variables for death by logistic regression analysis. The odds ratio for anti-infective
drug therapy was 1.341 (1.043 to 1.725); for age, 1.042 ( 1.026 to 1.058); and
for stay in the ICU, 11.226 ( 6.648 to 18.957).
- source_sentence: who is notre dame de paris dedicated to
sentences:
- Musée de Notre Dame de Paris paintings; and historical documents including a
petition to restore the cathedral signed by, among others, Victor Hugo and Jean
Auguste Dominique Ingres. The museum closed in November 2008. [and opened again
in 2013] Musée de Notre Dame de Paris The Musée de Notre Dame de Paris was a small
museum dedicated to the cathedral of Notre Dame de Paris and its archaeology.
It stands at 10 Rue du Cloître Notre Dame, Paris, France, and was open to the
public several afternoons per week; an admission fee was charged. The museum was
established in 1951 to present the cathedral's history, as
- 'Smoking serves different functions for men and women. Thus, we wanted to investigate
the association between smoking behaviour and intakes of selected healthy foods
in men and women with special focus on differences and similarities between the
two genders.In 1993-1997, a random sample of 80 996 men and 79 729 women aged
50-64 y was invited to participate in the study ''Diet, Cancer and Health''. In
all, 27 179 men and 29 876 women attended a health examination and completed a
192-item food-frequency questionnaire (FFQ). The association between smoking status
and low, median and high intakes of selected foods was examined among 25 821 men
and 28 596 women.The greater Copenhagen and Aarhus area, Denmark.For both men
and women, smoking status group was associated with diet, such that increasing
level of smoking status ranging from never smokers over ex-smokers to currently
heavy smokers was associated with a lower intake of the healthy foods: fresh fruit,
cooked vegetables, raw vegetables/salad, and olive oil. For wine, increasing level
of smoking status category was associated with a higher fraction of abstainers
and heavy drinkers. The difference between the extreme smoking status categories
was larger than the difference between men and women within smoking status categories
such that never smoking men in general had a higher intake of healthy foods than
heavy smoking women. Correction for age, educational level, and body mass index
(BMI) did not affect the results.'
- Notre-Dame de Paris rededicated to the Cult of Reason, and then to the Cult of
the Supreme Being. During this time, many of the treasures of the cathedral were
either destroyed or plundered. The twenty-eight statues of biblical kings located
at the west facade, mistaken for statues of French kings, were beheaded. Many
of the heads were found during a 1977 excavation nearby, and are on display at
the Musée de Cluny. For a time the Goddess of Liberty replaced the Virgin Mary
on several altars. The cathedral's great bells escaped being melted down. All
of the other large statues on the facade,
- source_sentence: who sang schoolhouse rock i 'm just a bill
sentences:
- Grand Hotel (Mackinac Island) In 1886, the Michigan Central Railroad, Grand Rapids
and Indiana Railroad, and Detroit and Cleveland Steamship Navigation Company formed
the Mackinac Island Hotel Company. The group purchased the land on which the hotel
was built and construction began, based upon the design by Detroit architects
Mason and Rice. When it opened the following year, the hotel was advertised to
Chicago, Erie, Montreal and Detroit residents as a summer retreat for vacationers
who arrived by lake steamer and by rail from across the continent. The hotel opened
on July 10, 1887 and took a mere 93 days to complete. At its
- Jack Sheldon He was Griffin's sidekick for many years. His voice is perhaps best
known from the "Schoolhouse Rock!" cartoons of the 1970s, such as "Conjunction
Junction" and "I'm Just a Bill." He appeared in one episode of "Johnny Bravo"
as the Sensitive Man. He sang a few songs in the episode similar to the "Schoolhouse
Rock!" style. Sheldon returned to the "Schoolhouse Rock!" series for a 2002 episode
titled "I'm Gonna Send Your Vote to College," explaining the electoral college
process, and distributed on the series' DVD collection that same year. Sheldon
sang and played trumpet for the new segment. Sheldon
- I'm Just a Bill I'm Just a Bill "I'm Just a Bill" is a 1976 "Schoolhouse Rock!"
segment, featuring a song of the same title written by Dave Frishberg. The segment
debuted as part of "America Rock", the third season of the Schoolhouse Rock series.
The song featured in the segment is sung by Jack Sheldon (the voice of the Bill),
with dialogue by Sheldon's son John as the boy learning the process. It is about
how a bill becomes a law, how it must go through Congress, and how it can be vetoed,
etc. The Bill is for the law that school buses
- source_sentence: who does the chief risk officer report to
sentences:
- Chief risk officer a company's executive chief officer and chief financial officer
to clarify the precision of its financial reports. Moreover, to ensure the mentioned
accuracy of financial reports, internal controls are required. Accordingly, each
financial report required an internal control report to prevent fraud. Furthermore,
the CRO has to be aware of everything occurring in his company on a daily basis,
but he must also be current on all of the requirements from the SEC. In addition,
the CRO restrains corporate risk by managing compliance. Why is a CRO so important
in financial institutions? There is a report of having a CRO
- Chief risk officer Chief risk officer The chief risk officer (CRO) or chief risk
management officer (CRMO) of a firm or corporation is the executive accountable
for enabling the efficient and effective governance of significant risks, and
related opportunities, to a business and its various segments. Risks are commonly
categorized as strategic, reputational, operational, financial, or compliance-related.
CROs are accountable to the Executive Committee and The Board for enabling the
business to balance risk and reward. In more complex organizations, they are generally
responsible for coordinating the organization's Enterprise Risk Management (ERM)
approach. The CRO is responsible for assessing and mitigating significant competitive,
- Foundations of Constraint Satisfaction
model-index:
- name: all-MiniLM-L6-v2 trained on MEDI-MTEB triplets
results:
- task:
type: triplet
name: Triplet
dataset:
name: medi mteb dev
type: medi-mteb-dev
metrics:
- type: cosine_accuracy
value: 0.9156494608352947
name: Cosine Accuracy
---
# all-MiniLM-L6-v2 trained on MEDI-MTEB triplets
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the NQ, pubmed, specter_train_triples, S2ORC_citations_abstracts, fever, gooaq_pairs, codesearchnet, wikihow, WikiAnswers, eli5_question_answer, amazon-qa, medmcqa, zeroshot, TriviaQA_pairs, PAQ_pairs, stackexchange_duplicate_questions_title-body_title-body, trex, flickr30k_captions, hotpotqa, task671_ambigqa_text_generation, task061_ropes_answer_generation, task285_imdb_answer_generation, task905_hate_speech_offensive_classification, task566_circa_classification, task184_snli_entailment_to_neutral_text_modification, task280_stereoset_classification_stereotype_type, task1599_smcalflow_classification, task1384_deal_or_no_dialog_classification, task591_sciq_answer_generation, task823_peixian-rtgender_sentiment_analysis, task023_cosmosqa_question_generation, task900_freebase_qa_category_classification, task924_event2mind_word_generation, task152_tomqa_find_location_easy_noise, task1368_healthfact_sentence_generation, task1661_super_glue_classification, task1187_politifact_classification, task1728_web_nlg_data_to_text, task112_asset_simple_sentence_identification, task1340_msr_text_compression_compression, task072_abductivenli_answer_generation, task1504_hatexplain_answer_generation, task684_online_privacy_policy_text_information_type_generation, task1290_xsum_summarization, task075_squad1.1_answer_generation, task1587_scifact_classification, task384_socialiqa_question_classification, task1555_scitail_answer_generation, task1532_daily_dialog_emotion_classification, task239_tweetqa_answer_generation, task596_mocha_question_generation, task1411_dart_subject_identification, task1359_numer_sense_answer_generation, task329_gap_classification, task220_rocstories_title_classification, task316_crows-pairs_classification_stereotype, task495_semeval_headline_classification, task1168_brown_coarse_pos_tagging, task348_squad2.0_unanswerable_question_generation, task049_multirc_questions_needed_to_answer, task1534_daily_dialog_question_classification, task322_jigsaw_classification_threat, task295_semeval_2020_task4_commonsense_reasoning, task186_snli_contradiction_to_entailment_text_modification, task034_winogrande_question_modification_object, task160_replace_letter_in_a_sentence, task469_mrqa_answer_generation, task105_story_cloze-rocstories_sentence_generation, task649_race_blank_question_generation, task1536_daily_dialog_happiness_classification, task683_online_privacy_policy_text_purpose_answer_generation, task024_cosmosqa_answer_generation, task584_udeps_eng_fine_pos_tagging, task066_timetravel_binary_consistency_classification, task413_mickey_en_sentence_perturbation_generation, task182_duorc_question_generation, task028_drop_answer_generation, task1601_webquestions_answer_generation, task1295_adversarial_qa_question_answering, task201_mnli_neutral_classification, task038_qasc_combined_fact, task293_storycommonsense_emotion_text_generation, task572_recipe_nlg_text_generation, task517_emo_classify_emotion_of_dialogue, task382_hybridqa_answer_generation, task176_break_decompose_questions, task1291_multi_news_summarization, task155_count_nouns_verbs, task031_winogrande_question_generation_object, task279_stereoset_classification_stereotype, task1336_peixian_equity_evaluation_corpus_gender_classifier, task508_scruples_dilemmas_more_ethical_isidentifiable, task518_emo_different_dialogue_emotions, 
task077_splash_explanation_to_sql, task923_event2mind_classifier, task470_mrqa_question_generation, task638_multi_woz_classification, task1412_web_questions_question_answering, task847_pubmedqa_question_generation, task678_ollie_actual_relationship_answer_generation, task290_tellmewhy_question_answerability, task575_air_dialogue_classification, task189_snli_neutral_to_contradiction_text_modification, task026_drop_question_generation, task162_count_words_starting_with_letter, task079_conala_concat_strings, task610_conllpp_ner, task046_miscellaneous_question_typing, task197_mnli_domain_answer_generation, task1325_qa_zre_question_generation_on_subject_relation, task430_senteval_subject_count, task672_nummersense, task402_grailqa_paraphrase_generation, task904_hate_speech_offensive_classification, task192_hotpotqa_sentence_generation, task069_abductivenli_classification, task574_air_dialogue_sentence_generation, task187_snli_entailment_to_contradiction_text_modification, task749_glucose_reverse_cause_emotion_detection, task1552_scitail_question_generation, task750_aqua_multiple_choice_answering, task327_jigsaw_classification_toxic, task1502_hatexplain_classification, task328_jigsaw_classification_insult, task304_numeric_fused_head_resolution, task1293_kilt_tasks_hotpotqa_question_answering, task216_rocstories_correct_answer_generation, task1326_qa_zre_question_generation_from_answer, task1338_peixian_equity_evaluation_corpus_sentiment_classifier, task1729_personachat_generate_next, task1202_atomic_classification_xneed, task400_paws_paraphrase_classification, task502_scruples_anecdotes_whoiswrong_verification, task088_identify_typo_verification, task221_rocstories_two_choice_classification, task200_mnli_entailment_classification, task074_squad1.1_question_generation, task581_socialiqa_question_generation, task1186_nne_hrngo_classification, task898_freebase_qa_answer_generation, task1408_dart_similarity_classification, task168_strategyqa_question_decomposition, task1357_xlsum_summary_generation, task390_torque_text_span_selection, task165_mcscript_question_answering_commonsense, task1533_daily_dialog_formal_classification, task002_quoref_answer_generation, task1297_qasc_question_answering, task305_jeopardy_answer_generation_normal, task029_winogrande_full_object, task1327_qa_zre_answer_generation_from_question, task326_jigsaw_classification_obscene, task1542_every_ith_element_from_starting, task570_recipe_nlg_ner_generation, task1409_dart_text_generation, task401_numeric_fused_head_reference, task846_pubmedqa_classification, task1712_poki_classification, task344_hybridqa_answer_generation, task875_emotion_classification, task1214_atomic_classification_xwant, task106_scruples_ethical_judgment, task238_iirc_answer_from_passage_answer_generation, task1391_winogrande_easy_answer_generation, task195_sentiment140_classification, task163_count_words_ending_with_letter, task579_socialiqa_classification, task569_recipe_nlg_text_generation, task1602_webquestion_question_genreation, task747_glucose_cause_emotion_detection, task219_rocstories_title_answer_generation, task178_quartz_question_answering, task103_facts2story_long_text_generation, task301_record_question_generation, task1369_healthfact_sentence_generation, task515_senteval_odd_word_out, task496_semeval_answer_generation, task1658_billsum_summarization, task1204_atomic_classification_hinderedby, task1392_superglue_multirc_answer_verification, task306_jeopardy_answer_generation_double, task1286_openbookqa_question_answering, 
task159_check_frequency_of_words_in_sentence_pair, task151_tomqa_find_location_easy_clean, task323_jigsaw_classification_sexually_explicit, task037_qasc_generate_related_fact, task027_drop_answer_type_generation, task1596_event2mind_text_generation_2, task141_odd-man-out_classification_category, task194_duorc_answer_generation, task679_hope_edi_english_text_classification, task246_dream_question_generation, task1195_disflqa_disfluent_to_fluent_conversion, task065_timetravel_consistent_sentence_classification, task351_winomt_classification_gender_identifiability_anti, task580_socialiqa_answer_generation, task583_udeps_eng_coarse_pos_tagging, task202_mnli_contradiction_classification, task222_rocstories_two_chioce_slotting_classification, task498_scruples_anecdotes_whoiswrong_classification, task067_abductivenli_answer_generation, task616_cola_classification, task286_olid_offense_judgment, task188_snli_neutral_to_entailment_text_modification, task223_quartz_explanation_generation, task820_protoqa_answer_generation, task196_sentiment140_answer_generation, task1678_mathqa_answer_selection, task349_squad2.0_answerable_unanswerable_question_classification, task154_tomqa_find_location_hard_noise, task333_hateeval_classification_hate_en, task235_iirc_question_from_subtext_answer_generation, task1554_scitail_classification, task210_logic2text_structured_text_generation, task035_winogrande_question_modification_person, task230_iirc_passage_classification, task1356_xlsum_title_generation, task1726_mathqa_correct_answer_generation, task302_record_classification, task380_boolq_yes_no_question, task212_logic2text_classification, task748_glucose_reverse_cause_event_detection, task834_mathdataset_classification, task350_winomt_classification_gender_identifiability_pro, task191_hotpotqa_question_generation, task236_iirc_question_from_passage_answer_generation, task217_rocstories_ordering_answer_generation, task568_circa_question_generation, task614_glucose_cause_event_detection, task361_spolin_yesand_prompt_response_classification, task421_persent_sentence_sentiment_classification, task203_mnli_sentence_generation, task420_persent_document_sentiment_classification, task153_tomqa_find_location_hard_clean, task346_hybridqa_classification, task1211_atomic_classification_hassubevent, task360_spolin_yesand_response_generation, task510_reddit_tifu_title_summarization, task511_reddit_tifu_long_text_summarization, task345_hybridqa_answer_generation, task270_csrg_counterfactual_context_generation, task307_jeopardy_answer_generation_final, task001_quoref_question_generation, task089_swap_words_verification, task1196_atomic_classification_oeffect, task080_piqa_answer_generation, task1598_nyc_long_text_generation, task240_tweetqa_question_generation, task615_moviesqa_answer_generation, task1347_glue_sts-b_similarity_classification, task114_is_the_given_word_longest, task292_storycommonsense_character_text_generation, task115_help_advice_classification, task431_senteval_object_count, task1360_numer_sense_multiple_choice_qa_generation, task177_para-nmt_paraphrasing, task132_dais_text_modification, task269_csrg_counterfactual_story_generation, task233_iirc_link_exists_classification, task161_count_words_containing_letter, task1205_atomic_classification_isafter, task571_recipe_nlg_ner_generation, task1292_yelp_review_full_text_categorization, task428_senteval_inversion, task311_race_question_generation, task429_senteval_tense, task403_creak_commonsense_inference, task929_products_reviews_classification, 
task582_naturalquestion_answer_generation, task237_iirc_answer_from_subtext_answer_generation, task050_multirc_answerability, task184_break_generate_question, task669_ambigqa_answer_generation, task169_strategyqa_sentence_generation, task500_scruples_anecdotes_title_generation, task241_tweetqa_classification, task1345_glue_qqp_question_paraprashing, task218_rocstories_swap_order_answer_generation, task613_politifact_text_generation, task1167_penn_treebank_coarse_pos_tagging, task1422_mathqa_physics, task247_dream_answer_generation, task199_mnli_classification, task164_mcscript_question_answering_text, task1541_agnews_classification, task516_senteval_conjoints_inversion, task294_storycommonsense_motiv_text_generation, task501_scruples_anecdotes_post_type_verification, task213_rocstories_correct_ending_classification, task821_protoqa_question_generation, task493_review_polarity_classification, task308_jeopardy_answer_generation_all, task1595_event2mind_text_generation_1, task040_qasc_question_generation, task231_iirc_link_classification, task1727_wiqa_what_is_the_effect, task578_curiosity_dialogs_answer_generation, task310_race_classification, task309_race_answer_generation, task379_agnews_topic_classification, task030_winogrande_full_person, task1540_parsed_pdfs_summarization, task039_qasc_find_overlapping_words, task1206_atomic_classification_isbefore, task157_count_vowels_and_consonants, task339_record_answer_generation, task453_swag_answer_generation, task848_pubmedqa_classification, task673_google_wellformed_query_classification, task676_ollie_relationship_answer_generation, task268_casehold_legal_answer_generation, task844_financial_phrasebank_classification, task330_gap_answer_generation, task595_mocha_answer_generation, task1285_kpa_keypoint_matching, task234_iirc_passage_line_answer_generation, task494_review_polarity_answer_generation, task670_ambigqa_question_generation, task289_gigaword_summarization, npr, nli, SimpleWiki, amazon_review_2018, ccnews_title_text, agnews, xsum, msmarco, yahoo_answers_title_answer, squad_pairs, wow, mteb-amazon_counterfactual-avs_triplets, mteb-amazon_massive_intent-avs_triplets, mteb-amazon_massive_scenario-avs_triplets, mteb-amazon_reviews_multi-avs_triplets, mteb-banking77-avs_triplets, mteb-emotion-avs_triplets, mteb-imdb-avs_triplets, mteb-mtop_domain-avs_triplets, mteb-mtop_intent-avs_triplets, mteb-toxic_conversations_50k-avs_triplets, mteb-tweet_sentiment_extraction-avs_triplets and covid-bing-query-gpt4-avs_triplets datasets. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- NQ
- pubmed
- specter_train_triples
- S2ORC_citations_abstracts
- fever
- gooaq_pairs
- codesearchnet
- wikihow
- WikiAnswers
- eli5_question_answer
- amazon-qa
- medmcqa
- zeroshot
- TriviaQA_pairs
- PAQ_pairs
- stackexchange_duplicate_questions_title-body_title-body
- trex
- flickr30k_captions
- hotpotqa
- task671_ambigqa_text_generation
- task061_ropes_answer_generation
- task285_imdb_answer_generation
- task905_hate_speech_offensive_classification
- task566_circa_classification
- task184_snli_entailment_to_neutral_text_modification
- task280_stereoset_classification_stereotype_type
- task1599_smcalflow_classification
- task1384_deal_or_no_dialog_classification
- task591_sciq_answer_generation
- task823_peixian-rtgender_sentiment_analysis
- task023_cosmosqa_question_generation
- task900_freebase_qa_category_classification
- task924_event2mind_word_generation
- task152_tomqa_find_location_easy_noise
- task1368_healthfact_sentence_generation
- task1661_super_glue_classification
- task1187_politifact_classification
- task1728_web_nlg_data_to_text
- task112_asset_simple_sentence_identification
- task1340_msr_text_compression_compression
- task072_abductivenli_answer_generation
- task1504_hatexplain_answer_generation
- task684_online_privacy_policy_text_information_type_generation
- task1290_xsum_summarization
- task075_squad1.1_answer_generation
- task1587_scifact_classification
- task384_socialiqa_question_classification
- task1555_scitail_answer_generation
- task1532_daily_dialog_emotion_classification
- task239_tweetqa_answer_generation
- task596_mocha_question_generation
- task1411_dart_subject_identification
- task1359_numer_sense_answer_generation
- task329_gap_classification
- task220_rocstories_title_classification
- task316_crows-pairs_classification_stereotype
- task495_semeval_headline_classification
- task1168_brown_coarse_pos_tagging
- task348_squad2.0_unanswerable_question_generation
- task049_multirc_questions_needed_to_answer
- task1534_daily_dialog_question_classification
- task322_jigsaw_classification_threat
- task295_semeval_2020_task4_commonsense_reasoning
- task186_snli_contradiction_to_entailment_text_modification
- task034_winogrande_question_modification_object
- task160_replace_letter_in_a_sentence
- task469_mrqa_answer_generation
- task105_story_cloze-rocstories_sentence_generation
- task649_race_blank_question_generation
- task1536_daily_dialog_happiness_classification
- task683_online_privacy_policy_text_purpose_answer_generation
- task024_cosmosqa_answer_generation
- task584_udeps_eng_fine_pos_tagging
- task066_timetravel_binary_consistency_classification
- task413_mickey_en_sentence_perturbation_generation
- task182_duorc_question_generation
- task028_drop_answer_generation
- task1601_webquestions_answer_generation
- task1295_adversarial_qa_question_answering
- task201_mnli_neutral_classification
- task038_qasc_combined_fact
- task293_storycommonsense_emotion_text_generation
- task572_recipe_nlg_text_generation
- task517_emo_classify_emotion_of_dialogue
- task382_hybridqa_answer_generation
- task176_break_decompose_questions
- task1291_multi_news_summarization
- task155_count_nouns_verbs
- task031_winogrande_question_generation_object
- task279_stereoset_classification_stereotype
- task1336_peixian_equity_evaluation_corpus_gender_classifier
- task508_scruples_dilemmas_more_ethical_isidentifiable
- task518_emo_different_dialogue_emotions
- task077_splash_explanation_to_sql
- task923_event2mind_classifier
- task470_mrqa_question_generation
- task638_multi_woz_classification
- task1412_web_questions_question_answering
- task847_pubmedqa_question_generation
- task678_ollie_actual_relationship_answer_generation
- task290_tellmewhy_question_answerability
- task575_air_dialogue_classification
- task189_snli_neutral_to_contradiction_text_modification
- task026_drop_question_generation
- task162_count_words_starting_with_letter
- task079_conala_concat_strings
- task610_conllpp_ner
- task046_miscellaneous_question_typing
- task197_mnli_domain_answer_generation
- task1325_qa_zre_question_generation_on_subject_relation
- task430_senteval_subject_count
- task672_nummersense
- task402_grailqa_paraphrase_generation
- task904_hate_speech_offensive_classification
- task192_hotpotqa_sentence_generation
- task069_abductivenli_classification
- task574_air_dialogue_sentence_generation
- task187_snli_entailment_to_contradiction_text_modification
- task749_glucose_reverse_cause_emotion_detection
- task1552_scitail_question_generation
- task750_aqua_multiple_choice_answering
- task327_jigsaw_classification_toxic
- task1502_hatexplain_classification
- task328_jigsaw_classification_insult
- task304_numeric_fused_head_resolution
- task1293_kilt_tasks_hotpotqa_question_answering
- task216_rocstories_correct_answer_generation
- task1326_qa_zre_question_generation_from_answer
- task1338_peixian_equity_evaluation_corpus_sentiment_classifier
- task1729_personachat_generate_next
- task1202_atomic_classification_xneed
- task400_paws_paraphrase_classification
- task502_scruples_anecdotes_whoiswrong_verification
- task088_identify_typo_verification
- task221_rocstories_two_choice_classification
- task200_mnli_entailment_classification
- task074_squad1.1_question_generation
- task581_socialiqa_question_generation
- task1186_nne_hrngo_classification
- task898_freebase_qa_answer_generation
- task1408_dart_similarity_classification
- task168_strategyqa_question_decomposition
- task1357_xlsum_summary_generation
- task390_torque_text_span_selection
- task165_mcscript_question_answering_commonsense
- task1533_daily_dialog_formal_classification
- task002_quoref_answer_generation
- task1297_qasc_question_answering
- task305_jeopardy_answer_generation_normal
- task029_winogrande_full_object
- task1327_qa_zre_answer_generation_from_question
- task326_jigsaw_classification_obscene
- task1542_every_ith_element_from_starting
- task570_recipe_nlg_ner_generation
- task1409_dart_text_generation
- task401_numeric_fused_head_reference
- task846_pubmedqa_classification
- task1712_poki_classification
- task344_hybridqa_answer_generation
- task875_emotion_classification
- task1214_atomic_classification_xwant
- task106_scruples_ethical_judgment
- task238_iirc_answer_from_passage_answer_generation
- task1391_winogrande_easy_answer_generation
- task195_sentiment140_classification
- task163_count_words_ending_with_letter
- task579_socialiqa_classification
- task569_recipe_nlg_text_generation
- task1602_webquestion_question_genreation
- task747_glucose_cause_emotion_detection
- task219_rocstories_title_answer_generation
- task178_quartz_question_answering
- task103_facts2story_long_text_generation
- task301_record_question_generation
- task1369_healthfact_sentence_generation
- task515_senteval_odd_word_out
- task496_semeval_answer_generation
- task1658_billsum_summarization
- task1204_atomic_classification_hinderedby
- task1392_superglue_multirc_answer_verification
- task306_jeopardy_answer_generation_double
- task1286_openbookqa_question_answering
- task159_check_frequency_of_words_in_sentence_pair
- task151_tomqa_find_location_easy_clean
- task323_jigsaw_classification_sexually_explicit
- task037_qasc_generate_related_fact
- task027_drop_answer_type_generation
- task1596_event2mind_text_generation_2
- task141_odd-man-out_classification_category
- task194_duorc_answer_generation
- task679_hope_edi_english_text_classification
- task246_dream_question_generation
- task1195_disflqa_disfluent_to_fluent_conversion
- task065_timetravel_consistent_sentence_classification
- task351_winomt_classification_gender_identifiability_anti
- task580_socialiqa_answer_generation
- task583_udeps_eng_coarse_pos_tagging
- task202_mnli_contradiction_classification
- task222_rocstories_two_chioce_slotting_classification
- task498_scruples_anecdotes_whoiswrong_classification
- task067_abductivenli_answer_generation
- task616_cola_classification
- task286_olid_offense_judgment
- task188_snli_neutral_to_entailment_text_modification
- task223_quartz_explanation_generation
- task820_protoqa_answer_generation
- task196_sentiment140_answer_generation
- task1678_mathqa_answer_selection
- task349_squad2.0_answerable_unanswerable_question_classification
- task154_tomqa_find_location_hard_noise
- task333_hateeval_classification_hate_en
- task235_iirc_question_from_subtext_answer_generation
- task1554_scitail_classification
- task210_logic2text_structured_text_generation
- task035_winogrande_question_modification_person
- task230_iirc_passage_classification
- task1356_xlsum_title_generation
- task1726_mathqa_correct_answer_generation
- task302_record_classification
- task380_boolq_yes_no_question
- task212_logic2text_classification
- task748_glucose_reverse_cause_event_detection
- task834_mathdataset_classification
- task350_winomt_classification_gender_identifiability_pro
- task191_hotpotqa_question_generation
- task236_iirc_question_from_passage_answer_generation
- task217_rocstories_ordering_answer_generation
- task568_circa_question_generation
- task614_glucose_cause_event_detection
- task361_spolin_yesand_prompt_response_classification
- task421_persent_sentence_sentiment_classification
- task203_mnli_sentence_generation
- task420_persent_document_sentiment_classification
- task153_tomqa_find_location_hard_clean
- task346_hybridqa_classification
- task1211_atomic_classification_hassubevent
- task360_spolin_yesand_response_generation
- task510_reddit_tifu_title_summarization
- task511_reddit_tifu_long_text_summarization
- task345_hybridqa_answer_generation
- task270_csrg_counterfactual_context_generation
- task307_jeopardy_answer_generation_final
- task001_quoref_question_generation
- task089_swap_words_verification
- task1196_atomic_classification_oeffect
- task080_piqa_answer_generation
- task1598_nyc_long_text_generation
- task240_tweetqa_question_generation
- task615_moviesqa_answer_generation
- task1347_glue_sts-b_similarity_classification
- task114_is_the_given_word_longest
- task292_storycommonsense_character_text_generation
- task115_help_advice_classification
- task431_senteval_object_count
- task1360_numer_sense_multiple_choice_qa_generation
- task177_para-nmt_paraphrasing
- task132_dais_text_modification
- task269_csrg_counterfactual_story_generation
- task233_iirc_link_exists_classification
- task161_count_words_containing_letter
- task1205_atomic_classification_isafter
- task571_recipe_nlg_ner_generation
- task1292_yelp_review_full_text_categorization
- task428_senteval_inversion
- task311_race_question_generation
- task429_senteval_tense
- task403_creak_commonsense_inference
- task929_products_reviews_classification
- task582_naturalquestion_answer_generation
- task237_iirc_answer_from_subtext_answer_generation
- task050_multirc_answerability
- task184_break_generate_question
- task669_ambigqa_answer_generation
- task169_strategyqa_sentence_generation
- task500_scruples_anecdotes_title_generation
- task241_tweetqa_classification
- task1345_glue_qqp_question_paraprashing
- task218_rocstories_swap_order_answer_generation
- task613_politifact_text_generation
- task1167_penn_treebank_coarse_pos_tagging
- task1422_mathqa_physics
- task247_dream_answer_generation
- task199_mnli_classification
- task164_mcscript_question_answering_text
- task1541_agnews_classification
- task516_senteval_conjoints_inversion
- task294_storycommonsense_motiv_text_generation
- task501_scruples_anecdotes_post_type_verification
- task213_rocstories_correct_ending_classification
- task821_protoqa_question_generation
- task493_review_polarity_classification
- task308_jeopardy_answer_generation_all
- task1595_event2mind_text_generation_1
- task040_qasc_question_generation
- task231_iirc_link_classification
- task1727_wiqa_what_is_the_effect
- task578_curiosity_dialogs_answer_generation
- task310_race_classification
- task309_race_answer_generation
- task379_agnews_topic_classification
- task030_winogrande_full_person
- task1540_parsed_pdfs_summarization
- task039_qasc_find_overlapping_words
- task1206_atomic_classification_isbefore
- task157_count_vowels_and_consonants
- task339_record_answer_generation
- task453_swag_answer_generation
- task848_pubmedqa_classification
- task673_google_wellformed_query_classification
- task676_ollie_relationship_answer_generation
- task268_casehold_legal_answer_generation
- task844_financial_phrasebank_classification
- task330_gap_answer_generation
- task595_mocha_answer_generation
- task1285_kpa_keypoint_matching
- task234_iirc_passage_line_answer_generation
- task494_review_polarity_answer_generation
- task670_ambigqa_question_generation
- task289_gigaword_summarization
- npr
- nli
- SimpleWiki
- amazon_review_2018
- ccnews_title_text
- agnews
- xsum
- msmarco
- yahoo_answers_title_answer
- squad_pairs
- wow
- mteb-amazon_counterfactual-avs_triplets
- mteb-amazon_massive_intent-avs_triplets
- mteb-amazon_massive_scenario-avs_triplets
- mteb-amazon_reviews_multi-avs_triplets
- mteb-banking77-avs_triplets
- mteb-emotion-avs_triplets
- mteb-imdb-avs_triplets
- mteb-mtop_domain-avs_triplets
- mteb-mtop_intent-avs_triplets
- mteb-toxic_conversations_50k-avs_triplets
- mteb-tweet_sentiment_extraction-avs_triplets
- covid-bing-query-gpt4-avs_triplets
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): RandomProjection({'in_features': 384, 'out_features': 768, 'seed': 42})
)
```
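The `RandomProjection` module listed above is a custom component rather than part of the standard sentence-transformers module set. As a rough illustration only, a module of this kind could be implemented as a fixed, seeded linear map from the 384-dimensional MiniLM embeddings to 768 dimensions; the sketch below is an assumption about its behaviour, not the model's actual source code.
```python
import torch
from torch import nn

class RandomProjection(nn.Module):
    """Illustrative fixed random projection (hypothetical sketch, not the released module)."""

    def __init__(self, in_features: int = 384, out_features: int = 768, seed: int = 42):
        super().__init__()
        generator = torch.Generator().manual_seed(seed)
        # Non-trainable Gaussian projection matrix; the 1/sqrt(out_features) scaling keeps
        # the projected vectors at roughly the same scale as the inputs.
        weight = torch.randn(in_features, out_features, generator=generator) / out_features ** 0.5
        self.register_buffer("weight", weight)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # (batch, 384) -> (batch, 768)
        return embeddings @ self.weight

if __name__ == "__main__":
    proj = RandomProjection()
    print(proj(torch.randn(3, 384)).shape)  # torch.Size([3, 768])
```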
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-512-final")
# Run inference
sentences = [
'who does the chief risk officer report to',
"Chief risk officer Chief risk officer The chief risk officer (CRO) or chief risk management officer (CRMO) of a firm or corporation is the executive accountable for enabling the efficient and effective governance of significant risks, and related opportunities, to a business and its various segments. Risks are commonly categorized as strategic, reputational, operational, financial, or compliance-related. CROs are accountable to the Executive Committee and The Board for enabling the business to balance risk and reward. In more complex organizations, they are generally responsible for coordinating the organization's Enterprise Risk Management (ERM) approach. The CRO is responsible for assessing and mitigating significant competitive,",
"Chief risk officer a company's executive chief officer and chief financial officer to clarify the precision of its financial reports. Moreover, to ensure the mentioned accuracy of financial reports, internal controls are required. Accordingly, each financial report required an internal control report to prevent fraud. Furthermore, the CRO has to be aware of everything occurring in his company on a daily basis, but he must also be current on all of the requirements from the SEC. In addition, the CRO restrains corporate risk by managing compliance. Why is a CRO so important in financial institutions? There is a report of having a CRO",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
## Evaluation
### Metrics
#### Triplet
* Dataset: `medi-mteb-dev`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9156** |
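As a rough guide (not the exact evaluation script used for the number above), a comparable triplet-accuracy check can be run with the `TripletEvaluator`; the triplets below are placeholders, whereas in practice they come from the MEDI/MTEB dev split:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-512-final")

# Placeholder (anchor, positive, negative) examples
anchors = ["who does the chief risk officer report to"]
positives = ["The CRO is accountable to the Executive Committee and the Board."]
negatives = ["The CFO prepares the company's financial statements."]

evaluator = TripletEvaluator(anchors=anchors, positives=positives, negatives=negatives, name="medi-mteb-dev")
results = evaluator(model)
print(results)  # reports cosine accuracy; exact return format depends on the library version
```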
## Training Details
### Training Datasets
#### NQ
* Dataset: NQ
* Size: 49,676 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.86 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 111 tokens</li><li>mean: 137.85 tokens</li><li>max: 212 tokens</li></ul> | <ul><li>min: 110 tokens</li><li>mean: 138.8 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
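Every training dataset listed below uses this same loss configuration (scale 20.0, cosine similarity). A minimal sketch of wiring it up with the standard Sentence Transformers fit loop is shown here; the base checkpoint, example triplet, and training hyperparameters are placeholders, not the values actually used:
```python
from sentence_transformers import SentenceTransformer, InputExample, losses, util
from torch.utils.data import DataLoader

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # assumed base model

# Placeholder (anchor, positive, negative) triplet
train_examples = [
    InputExample(texts=[
        "what is the capital of france",
        "Paris is the capital of France.",
        "Berlin is the capital of Germany.",
    ]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)

# Same parameters as the loss configuration above
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```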
#### pubmed
* Dataset: pubmed
* Size: 29,908 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 22.62 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 77 tokens</li><li>mean: 240.7 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 77 tokens</li><li>mean: 239.5 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### specter_train_triples
* Dataset: specter_train_triples
* Size: 49,676 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 15.41 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.07 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 15.69 tokens</li><li>max: 50 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### S2ORC_citations_abstracts
* Dataset: S2ORC_citations_abstracts
* Size: 99,352 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 19 tokens</li><li>mean: 198.24 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 207.17 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 204.86 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### fever
* Dataset: fever
* Size: 74,514 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 12.51 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 112.46 tokens</li><li>max: 139 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 113.69 tokens</li><li>max: 155 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### gooaq_pairs
* Dataset: gooaq_pairs
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 11.96 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 59.94 tokens</li><li>max: 144 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 63.02 tokens</li><li>max: 150 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### codesearchnet
* Dataset: codesearchnet
* Size: 15,210 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 29.65 tokens</li><li>max: 156 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 134.78 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 164.44 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### wikihow
* Dataset: wikihow
* Size: 5,070 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 8.03 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 44.2 tokens</li><li>max: 117 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 36.49 tokens</li><li>max: 104 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### WikiAnswers
* Dataset: WikiAnswers
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 12.77 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.89 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.36 tokens</li><li>max: 42 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### eli5_question_answer
* Dataset: eli5_question_answer
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 21.24 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 98.62 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 108.48 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### amazon-qa
* Dataset: amazon-qa
* Size: 99,352 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 22.57 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 54.48 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 62.82 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### medmcqa
* Dataset: medmcqa
* Size: 29,908 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 20.68 tokens</li><li>max: 174 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 112.58 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 110.9 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### zeroshot
* Dataset: zeroshot
* Size: 15,210 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 8.55 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 111.81 tokens</li><li>max: 170 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 116.53 tokens</li><li>max: 239 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### TriviaQA_pairs
* Dataset: TriviaQA_pairs
* Size: 49,676 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 19.77 tokens</li><li>max: 77 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 245.04 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 50 tokens</li><li>mean: 233.43 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### PAQ_pairs
* Dataset: PAQ_pairs
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 12.55 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 109 tokens</li><li>mean: 136.21 tokens</li><li>max: 212 tokens</li></ul> | <ul><li>min: 112 tokens</li><li>mean: 135.15 tokens</li><li>max: 223 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### stackexchange_duplicate_questions_title-body_title-body
* Dataset: stackexchange_duplicate_questions_title-body_title-body
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 20 tokens</li><li>mean: 147.41 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 144.01 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 201.86 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### trex
* Dataset: trex
* Size: 29,908 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 9.53 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 102.65 tokens</li><li>max: 190 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 117.98 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### flickr30k_captions
* Dataset: flickr30k_captions
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 15.72 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 15.93 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.11 tokens</li><li>max: 52 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### hotpotqa
* Dataset: hotpotqa
* Size: 40,048 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 24.11 tokens</li><li>max: 130 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 113.67 tokens</li><li>max: 160 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 114.74 tokens</li><li>max: 189 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task671_ambigqa_text_generation
* Dataset: task671_ambigqa_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 12.72 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.53 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.24 tokens</li><li>max: 19 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task061_ropes_answer_generation
* Dataset: task061_ropes_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 117 tokens</li><li>mean: 210.74 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 117 tokens</li><li>mean: 210.15 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 119 tokens</li><li>mean: 212.51 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task285_imdb_answer_generation
* Dataset: task285_imdb_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 46 tokens</li><li>mean: 209.59 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 49 tokens</li><li>mean: 204.57 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 209.59 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task905_hate_speech_offensive_classification
* Dataset: task905_hate_speech_offensive_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 15 tokens</li><li>mean: 41.93 tokens</li><li>max: 164 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 41.02 tokens</li><li>max: 198 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 32.41 tokens</li><li>max: 135 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task566_circa_classification
* Dataset: task566_circa_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 20 tokens</li><li>mean: 27.86 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 27.24 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 27.52 tokens</li><li>max: 47 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task184_snli_entailment_to_neutral_text_modification
* Dataset: task184_snli_entailment_to_neutral_text_modification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 29.87 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 28.89 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 30.34 tokens</li><li>max: 100 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task280_stereoset_classification_stereotype_type
* Dataset: task280_stereoset_classification_stereotype_type
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 18.47 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 16.93 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 16.85 tokens</li><li>max: 51 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1599_smcalflow_classification
* Dataset: task1599_smcalflow_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 11.31 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.56 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.28 tokens</li><li>max: 45 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1384_deal_or_no_dialog_classification
* Dataset: task1384_deal_or_no_dialog_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 59.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 59.78 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 58.71 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task591_sciq_answer_generation
* Dataset: task591_sciq_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 17.59 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.13 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.72 tokens</li><li>max: 75 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task823_peixian-rtgender_sentiment_analysis
* Dataset: task823_peixian-rtgender_sentiment_analysis
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 56.98 tokens</li><li>max: 179 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 59.75 tokens</li><li>max: 153 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 60.1 tokens</li><li>max: 169 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task023_cosmosqa_question_generation
* Dataset: task023_cosmosqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 35 tokens</li><li>mean: 78.99 tokens</li><li>max: 159 tokens</li></ul> | <ul><li>min: 34 tokens</li><li>mean: 80.06 tokens</li><li>max: 165 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 79.04 tokens</li><li>max: 161 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task900_freebase_qa_category_classification
* Dataset: task900_freebase_qa_category_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 20.52 tokens</li><li>max: 88 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 18.26 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 19.06 tokens</li><li>max: 69 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task924_event2mind_word_generation
* Dataset: task924_event2mind_word_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 32.1 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 32.18 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 31.42 tokens</li><li>max: 68 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task152_tomqa_find_location_easy_noise
* Dataset: task152_tomqa_find_location_easy_noise
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 37 tokens</li><li>mean: 52.82 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 52.35 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 52.73 tokens</li><li>max: 82 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1368_healthfact_sentence_generation
* Dataset: task1368_healthfact_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 91 tokens</li><li>mean: 240.74 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 84 tokens</li><li>mean: 239.62 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 97 tokens</li><li>mean: 245.07 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1661_super_glue_classification
* Dataset: task1661_super_glue_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 35 tokens</li><li>mean: 140.97 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 143.09 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 142.81 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1187_politifact_classification
* Dataset: task1187_politifact_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 33.14 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 31.38 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 32.0 tokens</li><li>max: 71 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1728_web_nlg_data_to_text
* Dataset: task1728_web_nlg_data_to_text
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 43.18 tokens</li><li>max: 152 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 46.4 tokens</li><li>max: 152 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 43.15 tokens</li><li>max: 152 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task112_asset_simple_sentence_identification
* Dataset: task112_asset_simple_sentence_identification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 52.11 tokens</li><li>max: 136 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 51.9 tokens</li><li>max: 144 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 52.06 tokens</li><li>max: 114 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1340_msr_text_compression_compression
* Dataset: task1340_msr_text_compression_compression
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 41.91 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 44.3 tokens</li><li>max: 133 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 40.09 tokens</li><li>max: 141 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task072_abductivenli_answer_generation
* Dataset: task072_abductivenli_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 26.79 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 26.15 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 26.43 tokens</li><li>max: 55 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1504_hatexplain_answer_generation
* Dataset: task1504_hatexplain_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 28.83 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 24.33 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 28.06 tokens</li><li>max: 67 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task684_online_privacy_policy_text_information_type_generation
* Dataset: task684_online_privacy_policy_text_information_type_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 29.89 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 30.11 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 30.07 tokens</li><li>max: 68 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1290_xsum_summarization
* Dataset: task1290_xsum_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 39 tokens</li><li>mean: 226.61 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 50 tokens</li><li>mean: 229.94 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 34 tokens</li><li>mean: 229.42 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task075_squad1.1_answer_generation
* Dataset: task075_squad1.1_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 48 tokens</li><li>mean: 167.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 172.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 179.84 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1587_scifact_classification
* Dataset: task1587_scifact_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 88 tokens</li><li>mean: 242.78 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 90 tokens</li><li>mean: 246.97 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 86 tokens</li><li>mean: 244.62 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task384_socialiqa_question_classification
* Dataset: task384_socialiqa_question_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 35.43 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 34.43 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 34.63 tokens</li><li>max: 57 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1555_scitail_answer_generation
* Dataset: task1555_scitail_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 36.85 tokens</li><li>max: 90 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 36.15 tokens</li><li>max: 80 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 36.55 tokens</li><li>max: 92 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1532_daily_dialog_emotion_classification
* Dataset: task1532_daily_dialog_emotion_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 136.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 140.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 134.53 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task239_tweetqa_answer_generation
* Dataset: task239_tweetqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 28 tokens</li><li>mean: 55.93 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 56.54 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 55.95 tokens</li><li>max: 81 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task596_mocha_question_generation
* Dataset: task596_mocha_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 34 tokens</li><li>mean: 80.84 tokens</li><li>max: 163 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 95.19 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 45.62 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1411_dart_subject_identification
* Dataset: task1411_dart_subject_identification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 14.95 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.05 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.34 tokens</li><li>max: 38 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1359_numer_sense_answer_generation
* Dataset: task1359_numer_sense_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 18.74 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 18.39 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 18.29 tokens</li><li>max: 30 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task329_gap_classification
* Dataset: task329_gap_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 40 tokens</li><li>mean: 123.73 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 62 tokens</li><li>mean: 127.36 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 128.32 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task220_rocstories_title_classification
* Dataset: task220_rocstories_title_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 53 tokens</li><li>mean: 80.74 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 81.05 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 55 tokens</li><li>mean: 79.84 tokens</li><li>max: 115 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task316_crows-pairs_classification_stereotype
* Dataset: task316_crows-pairs_classification_stereotype
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 19.78 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 18.21 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 19.83 tokens</li><li>max: 52 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task495_semeval_headline_classification
* Dataset: task495_semeval_headline_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 24.49 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 24.19 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 24.2 tokens</li><li>max: 38 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1168_brown_coarse_pos_tagging
* Dataset: task1168_brown_coarse_pos_tagging
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 43.8 tokens</li><li>max: 142 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 43.34 tokens</li><li>max: 197 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 44.88 tokens</li><li>max: 197 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task348_squad2.0_unanswerable_question_generation
* Dataset: task348_squad2.0_unanswerable_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 30 tokens</li><li>mean: 152.57 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 161.4 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 165.55 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task049_multirc_questions_needed_to_answer
* Dataset: task049_multirc_questions_needed_to_answer
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 174 tokens</li><li>mean: 252.61 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 169 tokens</li><li>mean: 252.72 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 178 tokens</li><li>mean: 252.82 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1534_daily_dialog_question_classification
* Dataset: task1534_daily_dialog_question_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 125.62 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 130.54 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 135.15 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task322_jigsaw_classification_threat
* Dataset: task322_jigsaw_classification_threat
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 54.41 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 61.29 tokens</li><li>max: 249 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 61.83 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task295_semeval_2020_task4_commonsense_reasoning
* Dataset: task295_semeval_2020_task4_commonsense_reasoning
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 25 tokens</li><li>mean: 45.19 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 45.14 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 44.6 tokens</li><li>max: 88 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task186_snli_contradiction_to_entailment_text_modification
* Dataset: task186_snli_contradiction_to_entailment_text_modification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 31.16 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 30.23 tokens</li><li>max: 65 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 32.18 tokens</li><li>max: 67 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task034_winogrande_question_modification_object
* Dataset: task034_winogrande_question_modification_object
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 29 tokens</li><li>mean: 36.34 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 35.6 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 34.88 tokens</li><li>max: 55 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task160_replace_letter_in_a_sentence
* Dataset: task160_replace_letter_in_a_sentence
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 29 tokens</li><li>mean: 31.98 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.78 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 31.79 tokens</li><li>max: 48 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task469_mrqa_answer_generation
* Dataset: task469_mrqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 27 tokens</li><li>mean: 182.73 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 181.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 184.86 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task105_story_cloze-rocstories_sentence_generation
* Dataset: task105_story_cloze-rocstories_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 36 tokens</li><li>mean: 55.59 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 54.88 tokens</li><li>max: 76 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 55.93 tokens</li><li>max: 76 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task649_race_blank_question_generation
* Dataset: task649_race_blank_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 36 tokens</li><li>mean: 253.15 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 252.81 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 157 tokens</li><li>mean: 253.95 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1536_daily_dialog_happiness_classification
* Dataset: task1536_daily_dialog_happiness_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 128.45 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 135.05 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 143.71 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task683_online_privacy_policy_text_purpose_answer_generation
* Dataset: task683_online_privacy_policy_text_purpose_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 29.98 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 30.36 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 29.89 tokens</li><li>max: 68 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task024_cosmosqa_answer_generation
* Dataset: task024_cosmosqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 45 tokens</li><li>mean: 92.42 tokens</li><li>max: 176 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 93.6 tokens</li><li>max: 174 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 94.42 tokens</li><li>max: 183 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task584_udeps_eng_fine_pos_tagging
* Dataset: task584_udeps_eng_fine_pos_tagging
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 40.27 tokens</li><li>max: 120 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 39.65 tokens</li><li>max: 186 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 40.61 tokens</li><li>max: 148 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task066_timetravel_binary_consistency_classification
* Dataset: task066_timetravel_binary_consistency_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 42 tokens</li><li>mean: 66.76 tokens</li><li>max: 93 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 67.45 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 66.98 tokens</li><li>max: 92 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task413_mickey_en_sentence_perturbation_generation
* Dataset: task413_mickey_en_sentence_perturbation_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 13.75 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 13.81 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 13.31 tokens</li><li>max: 20 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task182_duorc_question_generation
* Dataset: task182_duorc_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 99 tokens</li><li>mean: 242.3 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 120 tokens</li><li>mean: 246.33 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 99 tokens</li><li>mean: 246.42 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task028_drop_answer_generation
* Dataset: task028_drop_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 76 tokens</li><li>mean: 230.65 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 86 tokens</li><li>mean: 234.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 81 tokens</li><li>mean: 235.81 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1601_webquestions_answer_generation
* Dataset: task1601_webquestions_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 16.51 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 16.69 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 16.73 tokens</li><li>max: 27 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1295_adversarial_qa_question_answering
* Dataset: task1295_adversarial_qa_question_answering
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 45 tokens</li><li>mean: 164.89 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 54 tokens</li><li>mean: 166.37 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 166.85 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task201_mnli_neutral_classification
* Dataset: task201_mnli_neutral_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 73.03 tokens</li><li>max: 218 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 73.42 tokens</li><li>max: 170 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 72.64 tokens</li><li>max: 205 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task038_qasc_combined_fact
* Dataset: task038_qasc_combined_fact
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 31.27 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 30.52 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 30.84 tokens</li><li>max: 53 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task293_storycommonsense_emotion_text_generation
* Dataset: task293_storycommonsense_emotion_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 40.0 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 40.18 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 37.66 tokens</li><li>max: 85 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task572_recipe_nlg_text_generation
* Dataset: task572_recipe_nlg_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 114.49 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 119.68 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 124.27 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task517_emo_classify_emotion_of_dialogue
* Dataset: task517_emo_classify_emotion_of_dialogue
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 18.12 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.16 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 18.4 tokens</li><li>max: 67 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task382_hybridqa_answer_generation
* Dataset: task382_hybridqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 29 tokens</li><li>mean: 42.31 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 41.59 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 41.75 tokens</li><li>max: 75 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task176_break_decompose_questions
* Dataset: task176_break_decompose_questions
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 17.43 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.21 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 15.73 tokens</li><li>max: 38 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1291_multi_news_summarization
* Dataset: task1291_multi_news_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 116 tokens</li><li>mean: 255.36 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 146 tokens</li><li>mean: 255.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 68 tokens</li><li>mean: 252.32 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task155_count_nouns_verbs
* Dataset: task155_count_nouns_verbs
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 23 tokens</li><li>mean: 27.02 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 26.8 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 26.96 tokens</li><li>max: 46 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task031_winogrande_question_generation_object
* Dataset: task031_winogrande_question_generation_object
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 7.43 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.31 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.25 tokens</li><li>max: 11 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task279_stereoset_classification_stereotype
* Dataset: task279_stereoset_classification_stereotype
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 17.86 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 15.52 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.39 tokens</li><li>max: 50 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1336_peixian_equity_evaluation_corpus_gender_classifier
* Dataset: task1336_peixian_equity_evaluation_corpus_gender_classifier
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.59 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.58 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.64 tokens</li><li>max: 16 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task508_scruples_dilemmas_more_ethical_isidentifiable
* Dataset: task508_scruples_dilemmas_more_ethical_isidentifiable
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 29.67 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 28.64 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 28.71 tokens</li><li>max: 86 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task518_emo_different_dialogue_emotions
* Dataset: task518_emo_different_dialogue_emotions
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 28 tokens</li><li>mean: 47.83 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 45.5 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 45.83 tokens</li><li>max: 123 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task077_splash_explanation_to_sql
* Dataset: task077_splash_explanation_to_sql
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 39.84 tokens</li><li>max: 126 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 39.9 tokens</li><li>max: 126 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 35.84 tokens</li><li>max: 111 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task923_event2mind_classifier
* Dataset: task923_event2mind_classifier
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 20.63 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 18.63 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 19.5 tokens</li><li>max: 46 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task470_mrqa_question_generation
* Dataset: task470_mrqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 171.07 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 173.67 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 179.34 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task638_multi_woz_classification
* Dataset: task638_multi_woz_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 78 tokens</li><li>mean: 223.21 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 76 tokens</li><li>mean: 220.32 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 64 tokens</li><li>mean: 219.78 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1412_web_questions_question_answering
* Dataset: task1412_web_questions_question_answering
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.32 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.18 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.07 tokens</li><li>max: 16 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task847_pubmedqa_question_generation
* Dataset: task847_pubmedqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 21 tokens</li><li>mean: 249.18 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 249.32 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 249.01 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task678_ollie_actual_relationship_answer_generation
* Dataset: task678_ollie_actual_relationship_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 20 tokens</li><li>mean: 40.91 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 38.11 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 41.31 tokens</li><li>max: 104 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task290_tellmewhy_question_answerability
* Dataset: task290_tellmewhy_question_answerability
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 37 tokens</li><li>mean: 62.72 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 62.32 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 62.95 tokens</li><li>max: 95 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task575_air_dialogue_classification
* Dataset: task575_air_dialogue_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 14.19 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 13.59 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.31 tokens</li><li>max: 42 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task189_snli_neutral_to_contradiction_text_modification
* Dataset: task189_snli_neutral_to_contradiction_text_modification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 31.84 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 30.73 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 33.22 tokens</li><li>max: 105 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task026_drop_question_generation
* Dataset: task026_drop_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 82 tokens</li><li>mean: 219.35 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 57 tokens</li><li>mean: 222.81 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 96 tokens</li><li>mean: 232.0 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task162_count_words_starting_with_letter
* Dataset: task162_count_words_starting_with_letter
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 28 tokens</li><li>mean: 32.17 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.76 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.63 tokens</li><li>max: 46 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task079_conala_concat_strings
* Dataset: task079_conala_concat_strings
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 39.49 tokens</li><li>max: 76 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 34.22 tokens</li><li>max: 80 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 33.51 tokens</li><li>max: 76 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task610_conllpp_ner
* Dataset: task610_conllpp_ner
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 19.53 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 20.3 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.15 tokens</li><li>max: 54 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task046_miscellaneous_question_typing
* Dataset: task046_miscellaneous_question_typing
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 25.34 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 24.92 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 25.11 tokens</li><li>max: 57 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task197_mnli_domain_answer_generation
* Dataset: task197_mnli_domain_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 15 tokens</li><li>mean: 43.91 tokens</li><li>max: 197 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 45.21 tokens</li><li>max: 211 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 39.5 tokens</li><li>max: 115 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1325_qa_zre_question_generation_on_subject_relation
* Dataset: task1325_qa_zre_question_generation_on_subject_relation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 50.72 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 49.76 tokens</li><li>max: 180 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 54.01 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task430_senteval_subject_count
* Dataset: task430_senteval_subject_count
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 17.36 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.41 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 16.16 tokens</li><li>max: 34 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task672_nummersense
* Dataset: task672_nummersense
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 15.72 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.34 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.28 tokens</li><li>max: 30 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task402_grailqa_paraphrase_generation
* Dataset: task402_grailqa_paraphrase_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 23 tokens</li><li>mean: 130.03 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 139.65 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 136.9 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task904_hate_speech_offensive_classification
* Dataset: task904_hate_speech_offensive_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 34.87 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 34.42 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 27.88 tokens</li><li>max: 148 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task192_hotpotqa_sentence_generation
* Dataset: task192_hotpotqa_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 37 tokens</li><li>mean: 125.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 124.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 134.28 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task069_abductivenli_classification
* Dataset: task069_abductivenli_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 33 tokens</li><li>mean: 52.09 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 52.07 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 51.91 tokens</li><li>max: 95 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task574_air_dialogue_sentence_generation
* Dataset: task574_air_dialogue_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 54 tokens</li><li>mean: 144.27 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 57 tokens</li><li>mean: 143.51 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 66 tokens</li><li>mean: 147.62 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task187_snli_entailment_to_contradiction_text_modification
* Dataset: task187_snli_entailment_to_contradiction_text_modification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 30.26 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 30.08 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 29.35 tokens</li><li>max: 71 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task749_glucose_reverse_cause_emotion_detection
* Dataset: task749_glucose_reverse_cause_emotion_detection
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 38 tokens</li><li>mean: 67.95 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 67.23 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 68.79 tokens</li><li>max: 107 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1552_scitail_question_generation
* Dataset: task1552_scitail_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 18.34 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.57 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.86 tokens</li><li>max: 54 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task750_aqua_multiple_choice_answering
* Dataset: task750_aqua_multiple_choice_answering
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 33 tokens</li><li>mean: 70.17 tokens</li><li>max: 194 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 68.58 tokens</li><li>max: 194 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 68.28 tokens</li><li>max: 165 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task327_jigsaw_classification_toxic
* Dataset: task327_jigsaw_classification_toxic
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 36.97 tokens</li><li>max: 234 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 41.55 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 46.13 tokens</li><li>max: 244 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1502_hatexplain_classification
* Dataset: task1502_hatexplain_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 28.81 tokens</li><li>max: 73 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 26.8 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 27.25 tokens</li><li>max: 90 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task328_jigsaw_classification_insult
* Dataset: task328_jigsaw_classification_insult
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 50.85 tokens</li><li>max: 247 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 60.44 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 63.9 tokens</li><li>max: 249 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task304_numeric_fused_head_resolution
* Dataset: task304_numeric_fused_head_resolution
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 15 tokens</li><li>mean: 121.08 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 122.16 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 135.09 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1293_kilt_tasks_hotpotqa_question_answering
* Dataset: task1293_kilt_tasks_hotpotqa_question_answering
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 24.85 tokens</li><li>max: 114 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 24.21 tokens</li><li>max: 114 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 23.81 tokens</li><li>max: 84 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task216_rocstories_correct_answer_generation
* Dataset: task216_rocstories_correct_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 39 tokens</li><li>mean: 59.48 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 58.43 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 58.2 tokens</li><li>max: 95 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1326_qa_zre_question_generation_from_answer
* Dataset: task1326_qa_zre_question_generation_from_answer
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 46.64 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 45.58 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 49.45 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1338_peixian_equity_evaluation_corpus_sentiment_classifier
* Dataset: task1338_peixian_equity_evaluation_corpus_sentiment_classifier
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.69 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.7 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.59 tokens</li><li>max: 17 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1729_personachat_generate_next
* Dataset: task1729_personachat_generate_next
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 44 tokens</li><li>mean: 146.83 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 142.94 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 50 tokens</li><li>mean: 144.69 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1202_atomic_classification_xneed
* Dataset: task1202_atomic_classification_xneed
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 19.56 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.38 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.24 tokens</li><li>max: 28 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task400_paws_paraphrase_classification
* Dataset: task400_paws_paraphrase_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 19 tokens</li><li>mean: 52.16 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 51.75 tokens</li><li>max: 98 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 52.95 tokens</li><li>max: 97 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task502_scruples_anecdotes_whoiswrong_verification
* Dataset: task502_scruples_anecdotes_whoiswrong_verification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 229.88 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 236.97 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 235.34 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task088_identify_typo_verification
* Dataset: task088_identify_typo_verification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 15.1 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 15.06 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 15.41 tokens</li><li>max: 47 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task221_rocstories_two_choice_classification
* Dataset: task221_rocstories_two_choice_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 47 tokens</li><li>mean: 72.64 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 72.56 tokens</li><li>max: 109 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 73.23 tokens</li><li>max: 108 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task200_mnli_entailment_classification
* Dataset: task200_mnli_entailment_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 72.66 tokens</li><li>max: 198 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 72.92 tokens</li><li>max: 224 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 73.48 tokens</li><li>max: 226 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task074_squad1.1_question_generation
* Dataset: task074_squad1.1_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 30 tokens</li><li>mean: 149.61 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 160.64 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 164.94 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task581_socialiqa_question_generation
* Dataset: task581_socialiqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 26.47 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 25.5 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 25.89 tokens</li><li>max: 48 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1186_nne_hrngo_classification
* Dataset: task1186_nne_hrngo_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 19 tokens</li><li>mean: 33.83 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 33.53 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 33.3 tokens</li><li>max: 77 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task898_freebase_qa_answer_generation
* Dataset: task898_freebase_qa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 19.18 tokens</li><li>max: 125 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.45 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.4 tokens</li><li>max: 79 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1408_dart_similarity_classification
* Dataset: task1408_dart_similarity_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 22 tokens</li><li>mean: 59.53 tokens</li><li>max: 147 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 61.93 tokens</li><li>max: 154 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 48.83 tokens</li><li>max: 124 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task168_strategyqa_question_decomposition
* Dataset: task168_strategyqa_question_decomposition
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 42 tokens</li><li>mean: 80.63 tokens</li><li>max: 181 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 78.98 tokens</li><li>max: 179 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 77.19 tokens</li><li>max: 166 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1357_xlsum_summary_generation
* Dataset: task1357_xlsum_summary_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 67 tokens</li><li>mean: 241.86 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 69 tokens</li><li>mean: 242.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 67 tokens</li><li>mean: 247.11 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task390_torque_text_span_selection
* Dataset: task390_torque_text_span_selection
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 47 tokens</li><li>mean: 110.01 tokens</li><li>max: 196 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 110.44 tokens</li><li>max: 195 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 110.66 tokens</li><li>max: 196 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task165_mcscript_question_answering_commonsense
* Dataset: task165_mcscript_question_answering_commonsense
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 147 tokens</li><li>mean: 197.75 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 145 tokens</li><li>mean: 196.42 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 147 tokens</li><li>mean: 198.04 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1533_daily_dialog_formal_classification
* Dataset: task1533_daily_dialog_formal_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 130.14 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 136.79 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 136.81 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task002_quoref_answer_generation
* Dataset: task002_quoref_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 214 tokens</li><li>mean: 255.53 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 214 tokens</li><li>mean: 255.54 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 224 tokens</li><li>mean: 255.61 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1297_qasc_question_answering
* Dataset: task1297_qasc_question_answering
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 61 tokens</li><li>mean: 84.74 tokens</li><li>max: 134 tokens</li></ul> | <ul><li>min: 59 tokens</li><li>mean: 85.41 tokens</li><li>max: 130 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 84.83 tokens</li><li>max: 125 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task305_jeopardy_answer_generation_normal
* Dataset: task305_jeopardy_answer_generation_normal
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 27.67 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 27.39 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 27.41 tokens</li><li>max: 46 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task029_winogrande_full_object
* Dataset: task029_winogrande_full_object
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 7.37 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.33 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.24 tokens</li><li>max: 10 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1327_qa_zre_answer_generation_from_question
* Dataset: task1327_qa_zre_answer_generation_from_question
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 54.91 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 52.08 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 55.5 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task326_jigsaw_classification_obscene
* Dataset: task326_jigsaw_classification_obscene
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 65.2 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 77.26 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 73.17 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1542_every_ith_element_from_starting
* Dataset: task1542_every_ith_element_from_starting
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 127.39 tokens</li><li>max: 245 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 125.92 tokens</li><li>max: 244 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 123.75 tokens</li><li>max: 238 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task570_recipe_nlg_ner_generation
* Dataset: task570_recipe_nlg_ner_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 73.94 tokens</li><li>max: 250 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 73.35 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 75.51 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1409_dart_text_generation
* Dataset: task1409_dart_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 68.05 tokens</li><li>max: 174 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 72.93 tokens</li><li>max: 170 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 68.0 tokens</li><li>max: 164 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task401_numeric_fused_head_reference
* Dataset: task401_numeric_fused_head_reference
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 109.26 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 117.92 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 119.84 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task846_pubmedqa_classification
* Dataset: task846_pubmedqa_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 32 tokens</li><li>mean: 85.64 tokens</li><li>max: 246 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 85.03 tokens</li><li>max: 225 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 93.96 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1712_poki_classification
* Dataset: task1712_poki_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 52.23 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 55.08 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 63.09 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task344_hybridqa_answer_generation
* Dataset: task344_hybridqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 22.26 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 22.14 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 22.01 tokens</li><li>max: 55 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task875_emotion_classification
* Dataset: task875_emotion_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 23.04 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 18.43 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 20.33 tokens</li><li>max: 68 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1214_atomic_classification_xwant
* Dataset: task1214_atomic_classification_xwant
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 19.65 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.44 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.51 tokens</li><li>max: 31 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task106_scruples_ethical_judgment
* Dataset: task106_scruples_ethical_judgment
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 30.0 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 28.93 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 28.69 tokens</li><li>max: 58 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task238_iirc_answer_from_passage_answer_generation
* Dataset: task238_iirc_answer_from_passage_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 138 tokens</li><li>mean: 242.84 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 165 tokens</li><li>mean: 242.64 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 173 tokens</li><li>mean: 243.38 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1391_winogrande_easy_answer_generation
* Dataset: task1391_winogrande_easy_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 26 tokens</li><li>mean: 31.7 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 31.3 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 31.2 tokens</li><li>max: 49 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task195_sentiment140_classification
* Dataset: task195_sentiment140_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 22.51 tokens</li><li>max: 118 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 18.98 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 21.42 tokens</li><li>max: 51 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task163_count_words_ending_with_letter
* Dataset: task163_count_words_ending_with_letter
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 28 tokens</li><li>mean: 31.97 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.7 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.57 tokens</li><li>max: 43 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task579_socialiqa_classification
* Dataset: task579_socialiqa_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 39 tokens</li><li>mean: 54.15 tokens</li><li>max: 132 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 53.63 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 40 tokens</li><li>mean: 54.12 tokens</li><li>max: 84 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task569_recipe_nlg_text_generation
* Dataset: task569_recipe_nlg_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 25 tokens</li><li>mean: 192.7 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 55 tokens</li><li>mean: 194.02 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 198.01 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1602_webquestion_question_genreation
* Dataset: task1602_webquestion_question_genreation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 23.59 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 24.18 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 22.52 tokens</li><li>max: 120 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task747_glucose_cause_emotion_detection
* Dataset: task747_glucose_cause_emotion_detection
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 35 tokens</li><li>mean: 67.95 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 68.16 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 68.84 tokens</li><li>max: 99 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task219_rocstories_title_answer_generation
* Dataset: task219_rocstories_title_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 42 tokens</li><li>mean: 67.65 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 66.72 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 41 tokens</li><li>mean: 66.88 tokens</li><li>max: 96 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task178_quartz_question_answering
* Dataset: task178_quartz_question_answering
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 28 tokens</li><li>mean: 57.99 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 57.21 tokens</li><li>max: 111 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 56.85 tokens</li><li>max: 102 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task103_facts2story_long_text_generation
* Dataset: task103_facts2story_long_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 52 tokens</li><li>mean: 80.5 tokens</li><li>max: 143 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 82.19 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 49 tokens</li><li>mean: 78.93 tokens</li><li>max: 145 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task301_record_question_generation
* Dataset: task301_record_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 140 tokens</li><li>mean: 210.92 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 139 tokens</li><li>mean: 209.8 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 143 tokens</li><li>mean: 208.87 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1369_healthfact_sentence_generation
* Dataset: task1369_healthfact_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 110 tokens</li><li>mean: 243.09 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 101 tokens</li><li>mean: 243.16 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 113 tokens</li><li>mean: 251.69 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task515_senteval_odd_word_out
* Dataset: task515_senteval_odd_word_out
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 19.82 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 19.22 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 19.02 tokens</li><li>max: 35 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task496_semeval_answer_generation
* Dataset: task496_semeval_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 28.16 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.78 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 27.71 tokens</li><li>max: 45 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1658_billsum_summarization
* Dataset: task1658_billsum_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1204_atomic_classification_hinderedby
* Dataset: task1204_atomic_classification_hinderedby
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 22.08 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 22.05 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 21.51 tokens</li><li>max: 38 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1392_superglue_multirc_answer_verification
* Dataset: task1392_superglue_multirc_answer_verification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 128 tokens</li><li>mean: 241.67 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 127 tokens</li><li>mean: 241.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 136 tokens</li><li>mean: 242.0 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task306_jeopardy_answer_generation_double
* Dataset: task306_jeopardy_answer_generation_double
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 27.86 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 27.16 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 27.47 tokens</li><li>max: 47 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1286_openbookqa_question_answering
* Dataset: task1286_openbookqa_question_answering
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 22 tokens</li><li>mean: 39.61 tokens</li><li>max: 85 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 38.96 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 38.35 tokens</li><li>max: 89 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task159_check_frequency_of_words_in_sentence_pair
* Dataset: task159_check_frequency_of_words_in_sentence_pair
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 44 tokens</li><li>mean: 50.41 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 50.35 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 50.59 tokens</li><li>max: 66 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task151_tomqa_find_location_easy_clean
* Dataset: task151_tomqa_find_location_easy_clean
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 37 tokens</li><li>mean: 50.74 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 50.23 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 50.66 tokens</li><li>max: 74 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task323_jigsaw_classification_sexually_explicit
* Dataset: task323_jigsaw_classification_sexually_explicit
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 66.2 tokens</li><li>max: 248 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 76.82 tokens</li><li>max: 248 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 75.6 tokens</li><li>max: 251 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task037_qasc_generate_related_fact
* Dataset: task037_qasc_generate_related_fact
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 22.08 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 22.07 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 21.88 tokens</li><li>max: 40 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task027_drop_answer_type_generation
* Dataset: task027_drop_answer_type_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 87 tokens</li><li>mean: 229.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 74 tokens</li><li>mean: 230.61 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 71 tokens</li><li>mean: 232.72 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1596_event2mind_text_generation_2
* Dataset: task1596_event2mind_text_generation_2
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.0 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.04 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.04 tokens</li><li>max: 18 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task141_odd-man-out_classification_category
* Dataset: task141_odd-man-out_classification_category
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 18.43 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 18.37 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 18.45 tokens</li><li>max: 25 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task194_duorc_answer_generation
* Dataset: task194_duorc_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 149 tokens</li><li>mean: 251.8 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 147 tokens</li><li>mean: 252.1 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 148 tokens</li><li>mean: 251.81 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task679_hope_edi_english_text_classification
* Dataset: task679_hope_edi_english_text_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 27.62 tokens</li><li>max: 199 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 27.01 tokens</li><li>max: 205 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 29.68 tokens</li><li>max: 194 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task246_dream_question_generation
* Dataset: task246_dream_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 80.01 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 80.34 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 86.98 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1195_disflqa_disfluent_to_fluent_conversion
* Dataset: task1195_disflqa_disfluent_to_fluent_conversion
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 19.79 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 19.84 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 20.05 tokens</li><li>max: 44 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task065_timetravel_consistent_sentence_classification
* Dataset: task065_timetravel_consistent_sentence_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 55 tokens</li><li>mean: 79.44 tokens</li><li>max: 117 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 79.28 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 53 tokens</li><li>mean: 80.05 tokens</li><li>max: 110 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task351_winomt_classification_gender_identifiability_anti
* Dataset: task351_winomt_classification_gender_identifiability_anti
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 21.8 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.7 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.83 tokens</li><li>max: 30 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task580_socialiqa_answer_generation
* Dataset: task580_socialiqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 35 tokens</li><li>mean: 52.36 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 51.03 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 51.01 tokens</li><li>max: 87 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task583_udeps_eng_coarse_pos_tagging
* Dataset: task583_udeps_eng_coarse_pos_tagging
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 40.75 tokens</li><li>max: 185 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 39.87 tokens</li><li>max: 185 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 40.43 tokens</li><li>max: 185 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task202_mnli_contradiction_classification
* Dataset: task202_mnli_contradiction_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 73.61 tokens</li><li>max: 190 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 76.12 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 74.47 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task222_rocstories_two_chioce_slotting_classification
* Dataset: task222_rocstories_two_chioce_slotting_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 48 tokens</li><li>mean: 73.08 tokens</li><li>max: 105 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 73.29 tokens</li><li>max: 100 tokens</li></ul> | <ul><li>min: 49 tokens</li><li>mean: 71.96 tokens</li><li>max: 102 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task498_scruples_anecdotes_whoiswrong_classification
* Dataset: task498_scruples_anecdotes_whoiswrong_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 225.81 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 231.81 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 231.0 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task067_abductivenli_answer_generation
* Dataset: task067_abductivenli_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 26.76 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 26.09 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 26.35 tokens</li><li>max: 38 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task616_cola_classification
* Dataset: task616_cola_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 12.44 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 12.29 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.16 tokens</li><li>max: 29 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task286_olid_offense_judgment
* Dataset: task286_olid_offense_judgment
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 32.73 tokens</li><li>max: 145 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 30.79 tokens</li><li>max: 171 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 30.27 tokens</li><li>max: 169 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task188_snli_neutral_to_entailment_text_modification
* Dataset: task188_snli_neutral_to_entailment_text_modification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 31.76 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 31.25 tokens</li><li>max: 84 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 33.02 tokens</li><li>max: 84 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task223_quartz_explanation_generation
* Dataset: task223_quartz_explanation_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 31.41 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 31.77 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 28.98 tokens</li><li>max: 96 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task820_protoqa_answer_generation
* Dataset: task820_protoqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.71 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 14.49 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.15 tokens</li><li>max: 29 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task196_sentiment140_answer_generation
* Dataset: task196_sentiment140_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 36.21 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 32.8 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 36.21 tokens</li><li>max: 72 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1678_mathqa_answer_selection
* Dataset: task1678_mathqa_answer_selection
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 33 tokens</li><li>mean: 70.5 tokens</li><li>max: 177 tokens</li></ul> | <ul><li>min: 30 tokens</li><li>mean: 69.11 tokens</li><li>max: 146 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 69.75 tokens</li><li>max: 160 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task349_squad2.0_answerable_unanswerable_question_classification
* Dataset: task349_squad2.0_answerable_unanswerable_question_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 53 tokens</li><li>mean: 175.5 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 57 tokens</li><li>mean: 175.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 53 tokens</li><li>mean: 175.37 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task154_tomqa_find_location_hard_noise
* Dataset: task154_tomqa_find_location_hard_noise
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 129 tokens</li><li>mean: 176.0 tokens</li><li>max: 253 tokens</li></ul> | <ul><li>min: 126 tokens</li><li>mean: 176.09 tokens</li><li>max: 249 tokens</li></ul> | <ul><li>min: 128 tokens</li><li>mean: 177.44 tokens</li><li>max: 254 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task333_hateeval_classification_hate_en
* Dataset: task333_hateeval_classification_hate_en
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 38.53 tokens</li><li>max: 117 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 37.38 tokens</li><li>max: 109 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 36.64 tokens</li><li>max: 113 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task235_iirc_question_from_subtext_answer_generation
* Dataset: task235_iirc_question_from_subtext_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 52.74 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 50.73 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 55.69 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1554_scitail_classification
* Dataset: task1554_scitail_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 16.69 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 25.79 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 24.42 tokens</li><li>max: 59 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task210_logic2text_structured_text_generation
* Dataset: task210_logic2text_structured_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 31.62 tokens</li><li>max: 101 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 30.74 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 32.72 tokens</li><li>max: 89 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task035_winogrande_question_modification_person
* Dataset: task035_winogrande_question_modification_person
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 31 tokens</li><li>mean: 36.19 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 35.74 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 35.48 tokens</li><li>max: 48 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task230_iirc_passage_classification
* Dataset: task230_iirc_passage_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1356_xlsum_title_generation
* Dataset: task1356_xlsum_title_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 59 tokens</li><li>mean: 240.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 241.02 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 64 tokens</li><li>mean: 248.67 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1726_mathqa_correct_answer_generation
* Dataset: task1726_mathqa_correct_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 44.19 tokens</li><li>max: 156 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 42.51 tokens</li><li>max: 129 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 43.3 tokens</li><li>max: 133 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task302_record_classification
* Dataset: task302_record_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 194 tokens</li><li>mean: 253.34 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 198 tokens</li><li>mean: 252.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 195 tokens</li><li>mean: 252.92 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task380_boolq_yes_no_question
* Dataset: task380_boolq_yes_no_question
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 26 tokens</li><li>mean: 133.82 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 138.28 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 137.7 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task212_logic2text_classification
* Dataset: task212_logic2text_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 33.08 tokens</li><li>max: 146 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 32.04 tokens</li><li>max: 146 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 33.02 tokens</li><li>max: 127 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task748_glucose_reverse_cause_event_detection
* Dataset: task748_glucose_reverse_cause_event_detection
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 35 tokens</li><li>mean: 67.7 tokens</li><li>max: 105 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 67.03 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 68.84 tokens</li><li>max: 105 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task834_mathdataset_classification
* Dataset: task834_mathdataset_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 27.58 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 27.78 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 26.82 tokens</li><li>max: 93 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task350_winomt_classification_gender_identifiability_pro
* Dataset: task350_winomt_classification_gender_identifiability_pro
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 21.79 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.63 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.79 tokens</li><li>max: 30 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task191_hotpotqa_question_generation
* Dataset: task191_hotpotqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 198 tokens</li><li>mean: 255.88 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 238 tokens</li><li>mean: 255.93 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 255 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task236_iirc_question_from_passage_answer_generation
* Dataset: task236_iirc_question_from_passage_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 135 tokens</li><li>mean: 238.2 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 155 tokens</li><li>mean: 237.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 154 tokens</li><li>mean: 239.59 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task217_rocstories_ordering_answer_generation
* Dataset: task217_rocstories_ordering_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 45 tokens</li><li>mean: 72.45 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 72.26 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 71.03 tokens</li><li>max: 105 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task568_circa_question_generation
* Dataset: task568_circa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 9.57 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.53 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 8.93 tokens</li><li>max: 20 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task614_glucose_cause_event_detection
* Dataset: task614_glucose_cause_event_detection
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 39 tokens</li><li>mean: 67.7 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 67.16 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 68.55 tokens</li><li>max: 103 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task361_spolin_yesand_prompt_response_classification
* Dataset: task361_spolin_yesand_prompt_response_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 47.04 tokens</li><li>max: 137 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 45.97 tokens</li><li>max: 119 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 47.1 tokens</li><li>max: 128 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task421_persent_sentence_sentiment_classification
* Dataset: task421_persent_sentence_sentiment_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 22 tokens</li><li>mean: 67.68 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 71.41 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 72.33 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task203_mnli_sentence_generation
* Dataset: task203_mnli_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 39.1 tokens</li><li>max: 175 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 35.55 tokens</li><li>max: 175 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 34.25 tokens</li><li>max: 170 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task420_persent_document_sentiment_classification
* Dataset: task420_persent_document_sentiment_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 22 tokens</li><li>mean: 221.62 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 233.37 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 227.57 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task153_tomqa_find_location_hard_clean
* Dataset: task153_tomqa_find_location_hard_clean
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 39 tokens</li><li>mean: 161.41 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 160.84 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 164.12 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task346_hybridqa_classification
* Dataset: task346_hybridqa_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 32.88 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 31.94 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 31.91 tokens</li><li>max: 75 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1211_atomic_classification_hassubevent
* Dataset: task1211_atomic_classification_hassubevent
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 16.28 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 16.08 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 16.83 tokens</li><li>max: 29 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task360_spolin_yesand_response_generation
* Dataset: task360_spolin_yesand_response_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 22.53 tokens</li><li>max: 89 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 21.05 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 20.8 tokens</li><li>max: 67 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task510_reddit_tifu_title_summarization
* Dataset: task510_reddit_tifu_title_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 217.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 218.18 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 222.62 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task511_reddit_tifu_long_text_summarization
* Dataset: task511_reddit_tifu_long_text_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 29 tokens</li><li>mean: 239.27 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 76 tokens</li><li>mean: 238.8 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 245.19 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task345_hybridqa_answer_generation
* Dataset: task345_hybridqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 22.16 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 21.62 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 20.91 tokens</li><li>max: 47 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task270_csrg_counterfactual_context_generation
* Dataset: task270_csrg_counterfactual_context_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 63 tokens</li><li>mean: 100.09 tokens</li><li>max: 158 tokens</li></ul> | <ul><li>min: 63 tokens</li><li>mean: 98.76 tokens</li><li>max: 142 tokens</li></ul> | <ul><li>min: 62 tokens</li><li>mean: 100.29 tokens</li><li>max: 141 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task307_jeopardy_answer_generation_final
* Dataset: task307_jeopardy_answer_generation_final
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 15 tokens</li><li>mean: 29.55 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 29.3 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 29.25 tokens</li><li>max: 43 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task001_quoref_question_generation
* Dataset: task001_quoref_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 201 tokens</li><li>mean: 254.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 99 tokens</li><li>mean: 254.24 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 173 tokens</li><li>mean: 255.09 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task089_swap_words_verification
* Dataset: task089_swap_words_verification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 12.86 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 12.63 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 12.25 tokens</li><li>max: 22 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1196_atomic_classification_oeffect
* Dataset: task1196_atomic_classification_oeffect
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 18.78 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 18.57 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 18.51 tokens</li><li>max: 29 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task080_piqa_answer_generation
* Dataset: task080_piqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 10.85 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.75 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.12 tokens</li><li>max: 26 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1598_nyc_long_text_generation
* Dataset: task1598_nyc_long_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 35.49 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 35.61 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 36.63 tokens</li><li>max: 55 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task240_tweetqa_question_generation
* Dataset: task240_tweetqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 27 tokens</li><li>mean: 51.08 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 50.61 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 51.58 tokens</li><li>max: 95 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task615_moviesqa_answer_generation
* Dataset: task615_moviesqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 11.45 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 11.43 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 11.37 tokens</li><li>max: 21 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1347_glue_sts-b_similarity_classification
* Dataset: task1347_glue_sts-b_similarity_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 31.15 tokens</li><li>max: 88 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 31.1 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 30.97 tokens</li><li>max: 92 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task114_is_the_given_word_longest
* Dataset: task114_is_the_given_word_longest
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 25 tokens</li><li>mean: 28.84 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 28.47 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 28.72 tokens</li><li>max: 47 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task292_storycommonsense_character_text_generation
* Dataset: task292_storycommonsense_character_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 43 tokens</li><li>mean: 67.9 tokens</li><li>max: 98 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 67.11 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 69.09 tokens</li><li>max: 96 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task115_help_advice_classification
* Dataset: task115_help_advice_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 19.92 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 18.28 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 19.23 tokens</li><li>max: 137 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task431_senteval_object_count
* Dataset: task431_senteval_object_count
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 16.77 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.16 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.77 tokens</li><li>max: 35 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1360_numer_sense_multiple_choice_qa_generation
* Dataset: task1360_numer_sense_multiple_choice_qa_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 32 tokens</li><li>mean: 40.71 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 40.36 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 40.32 tokens</li><li>max: 60 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task177_para-nmt_paraphrasing
* Dataset: task177_para-nmt_paraphrasing
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 19.93 tokens</li><li>max: 82 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 18.97 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 18.26 tokens</li><li>max: 36 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task132_dais_text_modification
* Dataset: task132_dais_text_modification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.33 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.07 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.15 tokens</li><li>max: 15 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task269_csrg_counterfactual_story_generation
* Dataset: task269_csrg_counterfactual_story_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 49 tokens</li><li>mean: 80.0 tokens</li><li>max: 111 tokens</li></ul> | <ul><li>min: 53 tokens</li><li>mean: 79.62 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 79.46 tokens</li><li>max: 114 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task233_iirc_link_exists_classification
* Dataset: task233_iirc_link_exists_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 145 tokens</li><li>mean: 235.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 142 tokens</li><li>mean: 233.26 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 151 tokens</li><li>mean: 234.97 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task161_count_words_containing_letter
* Dataset: task161_count_words_containing_letter
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 27 tokens</li><li>mean: 30.99 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 30.79 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 30.48 tokens</li><li>max: 42 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1205_atomic_classification_isafter
* Dataset: task1205_atomic_classification_isafter
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 20.92 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 20.64 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 21.52 tokens</li><li>max: 37 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task571_recipe_nlg_ner_generation
* Dataset: task571_recipe_nlg_ner_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 118.42 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 118.89 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 111.25 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1292_yelp_review_full_text_categorization
* Dataset: task1292_yelp_review_full_text_categorization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 136.77 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 147.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 146.33 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task428_senteval_inversion
* Dataset: task428_senteval_inversion
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 16.68 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 14.59 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.26 tokens</li><li>max: 34 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task311_race_question_generation
* Dataset: task311_race_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 115 tokens</li><li>mean: 254.61 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 137 tokens</li><li>mean: 254.41 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 171 tokens</li><li>mean: 255.51 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task429_senteval_tense
* Dataset: task429_senteval_tense
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 15.82 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.07 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.3 tokens</li><li>max: 36 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task403_creak_commonsense_inference
* Dataset: task403_creak_commonsense_inference
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 30.14 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 29.54 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 29.26 tokens</li><li>max: 122 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task929_products_reviews_classification
* Dataset: task929_products_reviews_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 69.61 tokens</li><li>max: 126 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 70.61 tokens</li><li>max: 123 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 70.68 tokens</li><li>max: 123 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task582_naturalquestion_answer_generation
* Dataset: task582_naturalquestion_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.7 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 11.63 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 25 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task237_iirc_answer_from_subtext_answer_generation
* Dataset: task237_iirc_answer_from_subtext_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 22 tokens</li><li>mean: 66.3 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 64.95 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 61.31 tokens</li><li>max: 161 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task050_multirc_answerability
* Dataset: task050_multirc_answerability
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 15 tokens</li><li>mean: 32.56 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 31.62 tokens</li><li>max: 93 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 32.26 tokens</li><li>max: 159 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task184_break_generate_question
* Dataset: task184_break_generate_question
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 39.72 tokens</li><li>max: 147 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 39.07 tokens</li><li>max: 149 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 39.81 tokens</li><li>max: 148 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task669_ambigqa_answer_generation
* Dataset: task669_ambigqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 12.91 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 12.84 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.74 tokens</li><li>max: 22 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task169_strategyqa_sentence_generation
* Dataset: task169_strategyqa_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 19 tokens</li><li>mean: 35.06 tokens</li><li>max: 65 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 34.24 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 33.37 tokens</li><li>max: 65 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task500_scruples_anecdotes_title_generation
* Dataset: task500_scruples_anecdotes_title_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 225.48 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 233.04 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 235.04 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task241_tweetqa_classification
* Dataset: task241_tweetqa_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 31 tokens</li><li>mean: 61.77 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 62.17 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 61.71 tokens</li><li>max: 92 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1345_glue_qqp_question_paraprashing
* Dataset: task1345_glue_qqp_question_paraprashing
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 16.8 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.75 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.69 tokens</li><li>max: 51 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task218_rocstories_swap_order_answer_generation
* Dataset: task218_rocstories_swap_order_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 48 tokens</li><li>mean: 72.69 tokens</li><li>max: 118 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 72.72 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 72.12 tokens</li><li>max: 106 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task613_politifact_text_generation
* Dataset: task613_politifact_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 24.85 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 23.4 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 22.9 tokens</li><li>max: 61 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1167_penn_treebank_coarse_pos_tagging
* Dataset: task1167_penn_treebank_coarse_pos_tagging
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 53.87 tokens</li><li>max: 200 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 53.76 tokens</li><li>max: 220 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 55.02 tokens</li><li>max: 202 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1422_mathqa_physics
* Dataset: task1422_mathqa_physics
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 34 tokens</li><li>mean: 72.76 tokens</li><li>max: 164 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 71.89 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 72.78 tokens</li><li>max: 155 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task247_dream_answer_generation
* Dataset: task247_dream_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 38 tokens</li><li>mean: 160.09 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 158.97 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 41 tokens</li><li>mean: 167.84 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task199_mnli_classification
* Dataset: task199_mnli_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 43.48 tokens</li><li>max: 127 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 44.59 tokens</li><li>max: 149 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 44.16 tokens</li><li>max: 113 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task164_mcscript_question_answering_text
* Dataset: task164_mcscript_question_answering_text
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 150 tokens</li><li>mean: 201.24 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 150 tokens</li><li>mean: 201.08 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 142 tokens</li><li>mean: 201.39 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1541_agnews_classification
* Dataset: task1541_agnews_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 21 tokens</li><li>mean: 53.49 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 52.72 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 54.13 tokens</li><li>max: 161 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task516_senteval_conjoints_inversion
* Dataset: task516_senteval_conjoints_inversion
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 20.15 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 18.98 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 18.92 tokens</li><li>max: 34 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task294_storycommonsense_motiv_text_generation
* Dataset: task294_storycommonsense_motiv_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 40.72 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 41.23 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 40.31 tokens</li><li>max: 86 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task501_scruples_anecdotes_post_type_verification
* Dataset: task501_scruples_anecdotes_post_type_verification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 230.72 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 234.85 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 234.19 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task213_rocstories_correct_ending_classification
* Dataset: task213_rocstories_correct_ending_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 62 tokens</li><li>mean: 86.09 tokens</li><li>max: 125 tokens</li></ul> | <ul><li>min: 60 tokens</li><li>mean: 85.37 tokens</li><li>max: 131 tokens</li></ul> | <ul><li>min: 59 tokens</li><li>mean: 85.96 tokens</li><li>max: 131 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task821_protoqa_question_generation
* Dataset: task821_protoqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 14.97 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 15.01 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.99 tokens</li><li>max: 93 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task493_review_polarity_classification
* Dataset: task493_review_polarity_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 100.77 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 106.77 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 112.99 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task308_jeopardy_answer_generation_all
* Dataset: task308_jeopardy_answer_generation_all
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 27.95 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 26.96 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 27.41 tokens</li><li>max: 48 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1595_event2mind_text_generation_1
* Dataset: task1595_event2mind_text_generation_1
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.86 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.95 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.04 tokens</li><li>max: 20 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task040_qasc_question_generation
* Dataset: task040_qasc_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 15.06 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.04 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 13.86 tokens</li><li>max: 32 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task231_iirc_link_classification
* Dataset: task231_iirc_link_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 179 tokens</li><li>mean: 246.11 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 170 tokens</li><li>mean: 246.14 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 161 tokens</li><li>mean: 247.03 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1727_wiqa_what_is_the_effect
* Dataset: task1727_wiqa_what_is_the_effect
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 44 tokens</li><li>mean: 95.88 tokens</li><li>max: 183 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 95.98 tokens</li><li>max: 185 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 96.22 tokens</li><li>max: 183 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task578_curiosity_dialogs_answer_generation
* Dataset: task578_curiosity_dialogs_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 229.94 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 118 tokens</li><li>mean: 235.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 229.13 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task310_race_classification
* Dataset: task310_race_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 101 tokens</li><li>mean: 255.03 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 218 tokens</li><li>mean: 255.8 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 101 tokens</li><li>mean: 255.03 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task309_race_answer_generation
* Dataset: task309_race_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 75 tokens</li><li>mean: 255.04 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 204 tokens</li><li>mean: 255.54 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 75 tokens</li><li>mean: 255.25 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task379_agnews_topic_classification
* Dataset: task379_agnews_topic_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 20 tokens</li><li>mean: 54.82 tokens</li><li>max: 193 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 54.53 tokens</li><li>max: 175 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 54.86 tokens</li><li>max: 187 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task030_winogrande_full_person
* Dataset: task030_winogrande_full_person
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 7.6 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.49 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.37 tokens</li><li>max: 11 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1540_parsed_pdfs_summarization
* Dataset: task1540_parsed_pdfs_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 186.77 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 190.07 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 192.05 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task039_qasc_find_overlapping_words
* Dataset: task039_qasc_find_overlapping_words
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 30.48 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 30.06 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 30.67 tokens</li><li>max: 60 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1206_atomic_classification_isbefore
* Dataset: task1206_atomic_classification_isbefore
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 21.26 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 20.84 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 21.35 tokens</li><li>max: 31 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task157_count_vowels_and_consonants
* Dataset: task157_count_vowels_and_consonants
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 28.03 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 27.93 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 28.34 tokens</li><li>max: 39 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task339_record_answer_generation
* Dataset: task339_record_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 171 tokens</li><li>mean: 234.93 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 171 tokens</li><li>mean: 234.22 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 171 tokens</li><li>mean: 232.25 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task453_swag_answer_generation
* Dataset: task453_swag_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 18.53 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 18.23 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 17.5 tokens</li><li>max: 55 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task848_pubmedqa_classification
* Dataset: task848_pubmedqa_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 21 tokens</li><li>mean: 248.82 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 249.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 84 tokens</li><li>mean: 251.72 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task673_google_wellformed_query_classification
* Dataset: task673_google_wellformed_query_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 11.57 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 11.23 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 11.34 tokens</li><li>max: 22 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task676_ollie_relationship_answer_generation
* Dataset: task676_ollie_relationship_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 29 tokens</li><li>mean: 51.45 tokens</li><li>max: 113 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 49.38 tokens</li><li>max: 134 tokens</li></ul> | <ul><li>min: 30 tokens</li><li>mean: 51.68 tokens</li><li>max: 113 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task268_casehold_legal_answer_generation
* Dataset: task268_casehold_legal_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 235 tokens</li><li>mean: 255.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 156 tokens</li><li>mean: 255.37 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 226 tokens</li><li>mean: 255.94 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task844_financial_phrasebank_classification
* Dataset: task844_financial_phrasebank_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 39.74 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 38.28 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 39.06 tokens</li><li>max: 86 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task330_gap_answer_generation
* Dataset: task330_gap_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 26 tokens</li><li>mean: 107.2 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 108.16 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 110.56 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task595_mocha_answer_generation
* Dataset: task595_mocha_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 44 tokens</li><li>mean: 94.35 tokens</li><li>max: 178 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 96.06 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 118.22 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1285_kpa_keypoint_matching
* Dataset: task1285_kpa_keypoint_matching
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 30 tokens</li><li>mean: 52.36 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 50.15 tokens</li><li>max: 84 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 53.13 tokens</li><li>max: 88 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task234_iirc_passage_line_answer_generation
* Dataset: task234_iirc_passage_line_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 143 tokens</li><li>mean: 234.76 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 155 tokens</li><li>mean: 235.18 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 146 tokens</li><li>mean: 235.94 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task494_review_polarity_answer_generation
* Dataset: task494_review_polarity_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 106.28 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 111.87 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 112.42 tokens</li><li>max: 249 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task670_ambigqa_question_generation
* Dataset: task670_ambigqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 12.66 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.49 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.24 tokens</li><li>max: 18 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task289_gigaword_summarization
* Dataset: task289_gigaword_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 25 tokens</li><li>mean: 51.54 tokens</li><li>max: 87 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 51.94 tokens</li><li>max: 87 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 51.44 tokens</li><li>max: 87 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### npr
* Dataset: npr
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 12.33 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 148.6 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 115.37 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### nli
* Dataset: nli
* Size: 49,676 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 20.98 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.92 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.04 tokens</li><li>max: 32 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### SimpleWiki
* Dataset: SimpleWiki
* Size: 5,070 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 29.18 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 33.55 tokens</li><li>max: 156 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 56.1 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### amazon_review_2018
* Dataset: amazon_review_2018
* Size: 99,352 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 11.43 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 86.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 70.62 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### ccnews_title_text
* Dataset: ccnews_title_text
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 15.63 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 209.51 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 197.07 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### agnews
* Dataset: agnews
* Size: 44,606 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 12.05 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 40.4 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 46.18 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### xsum
* Dataset: xsum
* Size: 10,140 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 27.73 tokens</li><li>max: 73 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 224.87 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 230.01 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### msmarco
* Dataset: msmarco
* Size: 173,354 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 8.96 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 78.76 tokens</li><li>max: 235 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 79.64 tokens</li><li>max: 218 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### yahoo_answers_title_answer
* Dataset: yahoo_answers_title_answer
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 16.99 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 76.97 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 91.49 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### squad_pairs
* Dataset: squad_pairs
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.24 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 152.76 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 163.22 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### wow
* Dataset: wow
* Size: 29,908 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 88.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 100 tokens</li><li>mean: 111.97 tokens</li><li>max: 166 tokens</li></ul> | <ul><li>min: 80 tokens</li><li>mean: 113.24 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_counterfactual-avs_triplets
* Dataset: mteb-amazon_counterfactual-avs_triplets
* Size: 4,055 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 26.99 tokens</li><li>max: 109 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 27.29 tokens</li><li>max: 137 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 26.56 tokens</li><li>max: 83 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_massive_intent-avs_triplets
* Dataset: mteb-amazon_massive_intent-avs_triplets
* Size: 11,661 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 9.43 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.19 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.5 tokens</li><li>max: 28 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_massive_scenario-avs_triplets
* Dataset: mteb-amazon_massive_scenario-avs_triplets
* Size: 11,661 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 9.61 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.01 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.48 tokens</li><li>max: 29 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_reviews_multi-avs_triplets
* Dataset: mteb-amazon_reviews_multi-avs_triplets
* Size: 198,192 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 46.91 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 49.58 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 47.98 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-banking77-avs_triplets
* Dataset: mteb-banking77-avs_triplets
* Size: 10,139 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 16.61 tokens</li><li>max: 98 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 15.78 tokens</li><li>max: 87 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.11 tokens</li><li>max: 83 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-emotion-avs_triplets
* Dataset: mteb-emotion-avs_triplets
* Size: 16,224 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 22.02 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 17.48 tokens</li><li>max: 65 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 22.16 tokens</li><li>max: 72 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-imdb-avs_triplets
* Dataset: mteb-imdb-avs_triplets
* Size: 24,839 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 208.76 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 52 tokens</li><li>mean: 223.82 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 41 tokens</li><li>mean: 210.03 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-mtop_domain-avs_triplets
* Dataset: mteb-mtop_domain-avs_triplets
* Size: 15,715 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 10.11 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.66 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.16 tokens</li><li>max: 29 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-mtop_intent-avs_triplets
* Dataset: mteb-mtop_intent-avs_triplets
* Size: 15,715 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 10.08 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.78 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.11 tokens</li><li>max: 28 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-toxic_conversations_50k-avs_triplets
* Dataset: mteb-toxic_conversations_50k-avs_triplets
* Size: 49,677 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 68.8 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 90.19 tokens</li><li>max: 252 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 64.54 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-tweet_sentiment_extraction-avs_triplets
* Dataset: mteb-tweet_sentiment_extraction-avs_triplets
* Size: 27,373 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 20.82 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 20.02 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 20.66 tokens</li><li>max: 50 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### covid-bing-query-gpt4-avs_triplets
* Dataset: covid-bing-query-gpt4-avs_triplets
* Size: 5,070 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 15.08 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 37.42 tokens</li><li>max: 239 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 37.25 tokens</li><li>max: 100 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
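Every training dataset above uses the same triplet columns (<code>anchor</code>, <code>positive</code>, <code>negative</code>) with <code>MultipleNegativesRankingLoss</code> at <code>scale=20.0</code> and cosine similarity. As a minimal sketch (the base model name and example rows below are placeholders, not taken from this card), the loss is typically instantiated like this:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, losses, util

# Placeholder base model and triplet rows for illustration only; the scale and
# similarity function mirror the loss parameters listed above.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
train_dataset = Dataset.from_dict({
    "anchor": ["how do i reset my password?"],
    "positive": ["Steps for resetting a forgotten account password."],
    "negative": ["Instructions for permanently deleting your account."],
})
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)
```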
### Evaluation Dataset
#### Unnamed Dataset
* Size: 18,269 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 15.81 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 144.25 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 143.7 tokens</li><li>max: 256 tokens</li></ul> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
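The dev split above feeds the <code>medi-mteb-dev_cosine_accuracy</code> column reported in the training logs further down; a metric with that name is what a sentence-transformers <code>TripletEvaluator</code> typically emits for a named triplet dev set. A hedged sketch with placeholder sentences:

```python
from sentence_transformers.evaluation import TripletEvaluator

# Placeholder dev triplets; in practice these would come from the 18,269-sample
# evaluation set described above.
dev_evaluator = TripletEvaluator(
    anchors=["what is the capital of france?"],
    positives=["Paris is the capital and largest city of France."],
    negatives=["Berlin is the capital of Germany."],
    name="medi-mteb-dev",  # yields a metric named medi-mteb-dev_cosine_accuracy
)
```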
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 512
- `learning_rate`: 5.656854249492381e-05
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `fp16`: True
- `gradient_checkpointing`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 512
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5.656854249492381e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
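For reference, the non-default hyperparameters above map onto sentence-transformers training arguments roughly as in the sketch below; the output directory is an assumption rather than something documented in this card:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Mirrors the non-default hyperparameters listed above; output_dir is hypothetical.
args = SentenceTransformerTrainingArguments(
    output_dir="output/mnrl-training",  # hypothetical path
    eval_strategy="steps",
    per_device_train_batch_size=512,
    per_device_eval_batch_size=512,
    learning_rate=5.656854249492381e-05,
    num_train_epochs=10,
    warmup_ratio=0.1,
    fp16=True,
    gradient_checkpointing=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```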
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | medi-mteb-dev_cosine_accuracy |
|:------:|:-----:|:-------------:|:---------------:|:-----------------------------:|
| 0 | 0 | - | - | 0.8358 |
| 0.1308 | 500 | 2.6713 | 1.1708 | 0.8820 |
| 0.2616 | 1000 | 1.9946 | 1.1040 | 0.8890 |
| 0.3925 | 1500 | 2.0138 | 1.0559 | 0.8955 |
| 0.5233 | 2000 | 1.7733 | 1.0154 | 0.8976 |
| 0.6541 | 2500 | 1.8934 | 1.0145 | 0.8990 |
| 0.7849 | 3000 | 1.7916 | 1.0166 | 0.8990 |
| 0.9158 | 3500 | 1.8491 | 0.9818 | 0.8981 |
| 1.0466 | 4000 | 1.7568 | 0.9473 | 0.9031 |
| 1.1774 | 4500 | 1.8666 | 1.0801 | 0.9003 |
| 1.3082 | 5000 | 1.6883 | 0.9535 | 0.9008 |
| 1.4390 | 5500 | 1.7082 | 1.0652 | 0.9028 |
| 1.5699 | 6000 | 1.6634 | 1.0519 | 0.9040 |
| 1.7007 | 6500 | 1.689 | 0.9920 | 0.9039 |
| 1.8315 | 7000 | 1.6129 | 1.0213 | 0.9021 |
| 1.9623 | 7500 | 1.576 | 0.9993 | 0.9033 |
| 2.0931 | 8000 | 1.6392 | 1.0826 | 0.9069 |
| 2.2240 | 8500 | 1.5947 | 1.1802 | 0.9063 |
| 2.3548 | 9000 | 1.6222 | 1.2468 | 0.9075 |
| 2.4856 | 9500 | 1.4471 | 1.0080 | 0.9077 |
| 2.6164 | 10000 | 1.5689 | 1.1530 | 0.9088 |
| 2.7473 | 10500 | 1.4836 | 1.0531 | 0.9080 |
| 2.8781 | 11000 | 1.525 | 1.0097 | 0.9091 |
| 3.0089 | 11500 | 1.4068 | 1.0630 | 0.9071 |
| 3.1397 | 12000 | 1.5666 | 0.9643 | 0.9091 |
| 3.2705 | 12500 | 1.4479 | 1.0455 | 0.9077 |
| 3.4014 | 13000 | 1.5516 | 1.0711 | 0.9109 |
| 3.5322 | 13500 | 1.3551 | 0.9991 | 0.9093 |
| 3.6630 | 14000 | 1.4498 | 1.0136 | 0.9093 |
| 3.7938 | 14500 | 1.3856 | 1.0710 | 0.9097 |
| 3.9246 | 15000 | 1.4329 | 1.0074 | 0.9097 |
| 4.0555 | 15500 | 1.3455 | 1.0328 | 0.9094 |
| 4.1863 | 16000 | 1.4601 | 1.0259 | 0.9078 |
| 4.3171 | 16500 | 1.3684 | 1.0295 | 0.9120 |
| 4.4479 | 17000 | 1.3637 | 1.0637 | 0.9090 |
| 4.5788 | 17500 | 1.3688 | 1.0929 | 0.9100 |
| 4.7096 | 18000 | 1.3419 | 1.1102 | 0.9124 |
| 4.8404 | 18500 | 1.3378 | 0.9625 | 0.9129 |
| 4.9712 | 19000 | 1.3224 | 1.0812 | 0.9126 |
| 5.1020 | 19500 | 1.3579 | 1.0317 | 0.9121 |
| 5.2329 | 20000 | 1.3409 | 1.0622 | 0.9107 |
| 5.3637 | 20500 | 1.3929 | 1.1232 | 0.9113 |
| 5.4945 | 21000 | 1.213 | 1.0926 | 0.9123 |
| 5.6253 | 21500 | 1.313 | 1.0791 | 0.9118 |
| 5.7561 | 22000 | 1.2606 | 1.0581 | 0.9119 |
| 5.8870 | 22500 | 1.3094 | 1.0322 | 0.9134 |
| 6.0178 | 23000 | 1.2102 | 1.0039 | 0.9106 |
| 6.1486 | 23500 | 1.3686 | 1.0815 | 0.9140 |
| 6.2794 | 24000 | 1.2467 | 1.0143 | 0.9126 |
| 6.4103 | 24500 | 1.3445 | 1.0778 | 0.9116 |
| 6.5411 | 25000 | 1.1894 | 0.9941 | 0.9140 |
| 6.6719 | 25500 | 1.2617 | 1.0546 | 0.9121 |
| 6.8027 | 26000 | 1.2042 | 1.0126 | 0.9130 |
| 6.9335 | 26500 | 1.2559 | 1.0516 | 0.9142 |
| 7.0644 | 27000 | 1.2031 | 0.9957 | 0.9146 |
| 7.1952 | 27500 | 1.2866 | 1.0564 | 0.9142 |
| 7.3260 | 28000 | 1.2477 | 1.0420 | 0.9135 |
| 7.4568 | 28500 | 1.1961 | 1.0116 | 0.9151 |
| 7.5877 | 29000 | 1.227 | 1.0091 | 0.9154 |
| 7.7185 | 29500 | 1.1952 | 1.0307 | 0.9146 |
| 7.8493 | 30000 | 1.192 | 0.9344 | 0.9144 |
| 7.9801 | 30500 | 1.1871 | 1.0943 | 0.9151 |
| 8.1109 | 31000 | 1.2267 | 1.0049 | 0.9150 |
| 8.2418 | 31500 | 1.1928 | 1.0673 | 0.9149 |
| 8.3726 | 32000 | 1.2942 | 1.0980 | 0.9148 |
| 8.5034 | 32500 | 1.1099 | 1.0380 | 0.9151 |
| 8.6342 | 33000 | 1.1882 | 1.0734 | 0.9138 |
| 8.7650 | 33500 | 1.1365 | 1.0677 | 0.9144 |
| 8.8959 | 34000 | 1.2215 | 1.0256 | 0.9160 |
| 9.0267 | 34500 | 1.0926 | 1.0198 | 0.9142 |
| 9.1575 | 35000 | 1.269 | 1.0395 | 0.9160 |
| 9.2883 | 35500 | 1.1528 | 1.0306 | 0.9152 |
| 9.4192 | 36000 | 1.2324 | 1.0607 | 0.9158 |
| 9.5500 | 36500 | 1.1187 | 1.0418 | 0.9151 |
| 9.6808 | 37000 | 1.1722 | 1.0443 | 0.9151 |
| 9.8116 | 37500 | 1.1149 | 1.0457 | 0.9152 |
| 9.9424 | 38000 | 1.1751 | 1.0245 | 0.9156 |
### Framework Versions
- Python: 3.10.10
- Sentence Transformers: 3.4.0.dev0
- Transformers: 4.46.3
- PyTorch: 2.5.1+cu124
- Accelerate: 0.34.2
- Datasets: 2.21.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k20_task3_organization_fold0 | MayBashendy | 2024-11-25T11:01:03Z | 181 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T10:52:20Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k20_task3_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k20_task3_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9518
- Qwk: 0.1987
- Mse: 0.9518
- Rmse: 0.9756
## Model description
More information needed
## Intended uses & limitations
More information needed
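Pending fuller documentation, here is a minimal loading sketch; the sequence-classification head configuration is an assumption, so verify it against the checkpoint's config before relying on the outputs:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumes a standard sequence-classification head; check the checkpoint config.
model_name = "MayBashendy/Arabic_FineTuningAraBERT_AugV5_k20_task3_organization_fold0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
inputs = tokenizer("مثال لجملة عربية للتقييم", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)
```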
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
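Expressed as a standard `transformers.TrainingArguments` configuration, the hyperparameters above correspond roughly to the sketch below (the output directory is an assumption):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="arabert_task3_organization_fold0",  # hypothetical path
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```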
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0202 | 2 | 4.6526 | -0.0034 | 4.6526 | 2.1570 |
| No log | 0.0404 | 4 | 2.7591 | -0.0722 | 2.7591 | 1.6610 |
| No log | 0.0606 | 6 | 2.0539 | 0.0 | 2.0539 | 1.4331 |
| No log | 0.0808 | 8 | 1.4661 | 0.0 | 1.4661 | 1.2108 |
| No log | 0.1010 | 10 | 1.2663 | -0.0087 | 1.2663 | 1.1253 |
| No log | 0.1212 | 12 | 1.0946 | 0.0610 | 1.0946 | 1.0462 |
| No log | 0.1414 | 14 | 1.1664 | -0.0565 | 1.1664 | 1.0800 |
| No log | 0.1616 | 16 | 1.2983 | -0.1379 | 1.2983 | 1.1394 |
| No log | 0.1818 | 18 | 1.4576 | -0.1204 | 1.4576 | 1.2073 |
| No log | 0.2020 | 20 | 1.3006 | -0.1579 | 1.3006 | 1.1404 |
| No log | 0.2222 | 22 | 1.2909 | -0.0296 | 1.2909 | 1.1362 |
| No log | 0.2424 | 24 | 1.3276 | -0.1204 | 1.3276 | 1.1522 |
| No log | 0.2626 | 26 | 1.7499 | 0.0 | 1.7499 | 1.3228 |
| No log | 0.2828 | 28 | 2.0100 | 0.0 | 2.0100 | 1.4178 |
| No log | 0.3030 | 30 | 1.6476 | 0.0 | 1.6476 | 1.2836 |
| No log | 0.3232 | 32 | 1.0629 | 0.0 | 1.0629 | 1.0310 |
| No log | 0.3434 | 34 | 0.9430 | 0.0 | 0.9430 | 0.9711 |
| No log | 0.3636 | 36 | 1.1489 | -0.2073 | 1.1489 | 1.0718 |
| No log | 0.3838 | 38 | 1.3328 | 0.0833 | 1.3328 | 1.1545 |
| No log | 0.4040 | 40 | 1.6896 | 0.0 | 1.6896 | 1.2998 |
| No log | 0.4242 | 42 | 1.9170 | 0.0 | 1.9170 | 1.3845 |
| No log | 0.4444 | 44 | 1.6669 | 0.0 | 1.6669 | 1.2911 |
| No log | 0.4646 | 46 | 1.6141 | 0.0 | 1.6141 | 1.2705 |
| No log | 0.4848 | 48 | 2.0316 | 0.0 | 2.0316 | 1.4253 |
| No log | 0.5051 | 50 | 1.7168 | 0.0 | 1.7168 | 1.3102 |
| No log | 0.5253 | 52 | 1.3197 | -0.0087 | 1.3197 | 1.1488 |
| No log | 0.5455 | 54 | 1.1897 | -0.0185 | 1.1897 | 1.0908 |
| No log | 0.5657 | 56 | 1.2664 | -0.0087 | 1.2664 | 1.1253 |
| No log | 0.5859 | 58 | 1.3157 | -0.0087 | 1.3157 | 1.1470 |
| No log | 0.6061 | 60 | 1.3946 | 0.0 | 1.3946 | 1.1809 |
| No log | 0.6263 | 62 | 1.6207 | 0.0 | 1.6207 | 1.2731 |
| No log | 0.6465 | 64 | 1.6594 | 0.0 | 1.6594 | 1.2882 |
| No log | 0.6667 | 66 | 1.6041 | 0.0 | 1.6041 | 1.2665 |
| No log | 0.6869 | 68 | 1.4446 | 0.0 | 1.4446 | 1.2019 |
| No log | 0.7071 | 70 | 1.2474 | 0.0873 | 1.2474 | 1.1169 |
| No log | 0.7273 | 72 | 1.0867 | -0.0927 | 1.0867 | 1.0425 |
| No log | 0.7475 | 74 | 1.1978 | 0.0737 | 1.1978 | 1.0944 |
| No log | 0.7677 | 76 | 1.1383 | 0.0610 | 1.1383 | 1.0669 |
| No log | 0.7879 | 78 | 1.1199 | 0.0610 | 1.1199 | 1.0582 |
| No log | 0.8081 | 80 | 1.1664 | 0.0788 | 1.1664 | 1.0800 |
| No log | 0.8283 | 82 | 1.4168 | -0.0087 | 1.4168 | 1.1903 |
| No log | 0.8485 | 84 | 1.7631 | 0.0 | 1.7631 | 1.3278 |
| No log | 0.8687 | 86 | 1.9420 | 0.0 | 1.9420 | 1.3935 |
| No log | 0.8889 | 88 | 1.6878 | -0.0087 | 1.6878 | 1.2992 |
| No log | 0.9091 | 90 | 1.1111 | 0.0737 | 1.1111 | 1.0541 |
| No log | 0.9293 | 92 | 0.7490 | 0.0 | 0.7490 | 0.8655 |
| No log | 0.9495 | 94 | 0.7523 | 0.0 | 0.7523 | 0.8674 |
| No log | 0.9697 | 96 | 0.8939 | 0.384 | 0.8939 | 0.9455 |
| No log | 0.9899 | 98 | 1.2505 | 0.0833 | 1.2505 | 1.1183 |
| No log | 1.0101 | 100 | 1.4339 | -0.0087 | 1.4339 | 1.1975 |
| No log | 1.0303 | 102 | 1.5430 | -0.0087 | 1.5430 | 1.2422 |
| No log | 1.0505 | 104 | 1.3831 | 0.0833 | 1.3831 | 1.1760 |
| No log | 1.0707 | 106 | 1.0113 | -0.0927 | 1.0113 | 1.0056 |
| No log | 1.0909 | 108 | 0.8329 | 0.384 | 0.8329 | 0.9126 |
| No log | 1.1111 | 110 | 0.8490 | 0.2143 | 0.8490 | 0.9214 |
| No log | 1.1313 | 112 | 0.9445 | 0.2029 | 0.9445 | 0.9718 |
| No log | 1.1515 | 114 | 1.2364 | 0.0788 | 1.2364 | 1.1119 |
| No log | 1.1717 | 116 | 1.3630 | 0.0833 | 1.3630 | 1.1675 |
| No log | 1.1919 | 118 | 1.4137 | 0.0833 | 1.4137 | 1.1890 |
| No log | 1.2121 | 120 | 1.4581 | 0.0833 | 1.4581 | 1.2075 |
| No log | 1.2323 | 122 | 1.1877 | 0.0788 | 1.1877 | 1.0898 |
| No log | 1.2525 | 124 | 0.9243 | 0.0435 | 0.9243 | 0.9614 |
| No log | 1.2727 | 126 | 0.8489 | 0.2143 | 0.8489 | 0.9214 |
| No log | 1.2929 | 128 | 0.8532 | 0.384 | 0.8532 | 0.9237 |
| No log | 1.3131 | 130 | 1.0019 | 0.0435 | 1.0019 | 1.0009 |
| No log | 1.3333 | 132 | 0.9882 | 0.0435 | 0.9882 | 0.9941 |
| No log | 1.3535 | 134 | 0.9380 | 0.0435 | 0.9380 | 0.9685 |
| No log | 1.3737 | 136 | 0.9155 | 0.0435 | 0.9155 | 0.9568 |
| No log | 1.3939 | 138 | 0.8986 | 0.384 | 0.8986 | 0.9479 |
| No log | 1.4141 | 140 | 0.9368 | 0.0435 | 0.9368 | 0.9679 |
| No log | 1.4343 | 142 | 1.0774 | -0.0732 | 1.0774 | 1.0380 |
| No log | 1.4545 | 144 | 1.2654 | 0.0737 | 1.2654 | 1.1249 |
| No log | 1.4747 | 146 | 1.2350 | 0.0737 | 1.2350 | 1.1113 |
| No log | 1.4949 | 148 | 0.9842 | 0.0435 | 0.9842 | 0.9921 |
| No log | 1.5152 | 150 | 0.8831 | 0.384 | 0.8831 | 0.9397 |
| No log | 1.5354 | 152 | 0.8988 | 0.384 | 0.8988 | 0.9481 |
| No log | 1.5556 | 154 | 0.9459 | 0.384 | 0.9459 | 0.9726 |
| No log | 1.5758 | 156 | 0.9607 | 0.1987 | 0.9607 | 0.9802 |
| No log | 1.5960 | 158 | 1.1195 | 0.0435 | 1.1195 | 1.0581 |
| No log | 1.6162 | 160 | 1.3280 | 0.0435 | 1.3280 | 1.1524 |
| No log | 1.6364 | 162 | 1.2664 | 0.1987 | 1.2664 | 1.1253 |
| No log | 1.6566 | 164 | 1.2449 | -0.0732 | 1.2449 | 1.1157 |
| No log | 1.6768 | 166 | 1.1367 | 0.0435 | 1.1367 | 1.0661 |
| No log | 1.6970 | 168 | 1.0158 | 0.0435 | 1.0158 | 1.0079 |
| No log | 1.7172 | 170 | 1.0182 | 0.0435 | 1.0182 | 1.0091 |
| No log | 1.7374 | 172 | 1.1649 | 0.0435 | 1.1649 | 1.0793 |
| No log | 1.7576 | 174 | 1.1151 | 0.0435 | 1.1151 | 1.0560 |
| No log | 1.7778 | 176 | 0.8819 | 0.5075 | 0.8819 | 0.9391 |
| No log | 1.7980 | 178 | 0.8527 | 0.3636 | 0.8527 | 0.9234 |
| No log | 1.8182 | 180 | 0.8497 | 0.5075 | 0.8497 | 0.9218 |
| No log | 1.8384 | 182 | 0.8870 | 0.1538 | 0.8870 | 0.9418 |
| No log | 1.8586 | 184 | 0.8319 | 0.4615 | 0.8319 | 0.9121 |
| No log | 1.8788 | 186 | 0.7856 | 0.4615 | 0.7856 | 0.8864 |
| No log | 1.8990 | 188 | 0.7520 | 0.5075 | 0.7520 | 0.8672 |
| No log | 1.9192 | 190 | 0.7200 | 0.5075 | 0.7200 | 0.8485 |
| No log | 1.9394 | 192 | 0.6791 | 0.5075 | 0.6791 | 0.8241 |
| No log | 1.9596 | 194 | 0.6859 | 0.5075 | 0.6859 | 0.8282 |
| No log | 1.9798 | 196 | 0.6420 | 0.5075 | 0.6420 | 0.8012 |
| No log | 2.0 | 198 | 0.6148 | 0.5075 | 0.6148 | 0.7841 |
| No log | 2.0202 | 200 | 0.6374 | 0.5075 | 0.6374 | 0.7984 |
| No log | 2.0404 | 202 | 0.5905 | 0.5075 | 0.5905 | 0.7685 |
| No log | 2.0606 | 204 | 0.6100 | 0.3231 | 0.6100 | 0.7810 |
| No log | 2.0808 | 206 | 0.5855 | 0.3636 | 0.5855 | 0.7652 |
| No log | 2.1010 | 208 | 0.6067 | 0.384 | 0.6067 | 0.7789 |
| No log | 2.1212 | 210 | 0.6539 | 0.384 | 0.6539 | 0.8087 |
| No log | 2.1414 | 212 | 0.6310 | 0.3231 | 0.6310 | 0.7944 |
| No log | 2.1616 | 214 | 0.8593 | -0.2222 | 0.8593 | 0.9270 |
| No log | 2.1818 | 216 | 0.8253 | -0.0185 | 0.8253 | 0.9084 |
| No log | 2.2020 | 218 | 0.6836 | 0.2143 | 0.6836 | 0.8268 |
| No log | 2.2222 | 220 | 1.1906 | 0.0788 | 1.1906 | 1.0912 |
| No log | 2.2424 | 222 | 1.8905 | 0.2344 | 1.8905 | 1.3749 |
| No log | 2.2626 | 224 | 1.7365 | 0.2344 | 1.7365 | 1.3178 |
| No log | 2.2828 | 226 | 1.1391 | 0.0788 | 1.1391 | 1.0673 |
| No log | 2.3030 | 228 | 0.7826 | 0.384 | 0.7826 | 0.8847 |
| No log | 2.3232 | 230 | 0.7477 | 0.0 | 0.7477 | 0.8647 |
| No log | 2.3434 | 232 | 0.7923 | 0.384 | 0.7923 | 0.8901 |
| No log | 2.3636 | 234 | 0.7817 | 0.384 | 0.7817 | 0.8842 |
| No log | 2.3838 | 236 | 0.8681 | 0.2029 | 0.8681 | 0.9317 |
| No log | 2.4040 | 238 | 0.7886 | 0.384 | 0.7886 | 0.8881 |
| No log | 2.4242 | 240 | 0.7134 | 0.1270 | 0.7134 | 0.8446 |
| No log | 2.4444 | 242 | 0.7032 | 0.1270 | 0.7032 | 0.8386 |
| No log | 2.4646 | 244 | 0.7132 | 0.1270 | 0.7132 | 0.8445 |
| No log | 2.4848 | 246 | 0.6869 | 0.4211 | 0.6869 | 0.8288 |
| No log | 2.5051 | 248 | 0.7723 | 0.2949 | 0.7723 | 0.8788 |
| No log | 2.5253 | 250 | 0.7573 | 0.1769 | 0.7573 | 0.8703 |
| No log | 2.5455 | 252 | 0.6902 | 0.4615 | 0.6902 | 0.8308 |
| No log | 2.5657 | 254 | 0.7689 | 0.1270 | 0.7689 | 0.8769 |
| No log | 2.5859 | 256 | 0.7299 | 0.1270 | 0.7299 | 0.8543 |
| No log | 2.6061 | 258 | 0.7886 | 0.2029 | 0.7886 | 0.8880 |
| No log | 2.6263 | 260 | 0.9831 | 0.1987 | 0.9831 | 0.9915 |
| No log | 2.6465 | 262 | 1.1918 | -0.0421 | 1.1918 | 1.0917 |
| No log | 2.6667 | 264 | 0.9125 | 0.1987 | 0.9125 | 0.9553 |
| No log | 2.6869 | 266 | 0.7091 | 0.384 | 0.7091 | 0.8421 |
| No log | 2.7071 | 268 | 0.6956 | 0.1270 | 0.6956 | 0.8340 |
| No log | 2.7273 | 270 | 0.6873 | 0.3231 | 0.6873 | 0.8290 |
| No log | 2.7475 | 272 | 0.7966 | 0.5075 | 0.7966 | 0.8925 |
| No log | 2.7677 | 274 | 1.1146 | -0.0421 | 1.1146 | 1.0557 |
| No log | 2.7879 | 276 | 0.9790 | 0.1987 | 0.9790 | 0.9894 |
| No log | 2.8081 | 278 | 0.6960 | 0.384 | 0.6960 | 0.8343 |
| No log | 2.8283 | 280 | 0.6773 | 0.0 | 0.6773 | 0.8230 |
| No log | 2.8485 | 282 | 0.6872 | 0.0 | 0.6872 | 0.8290 |
| No log | 2.8687 | 284 | 0.6513 | 0.0 | 0.6513 | 0.8071 |
| No log | 2.8889 | 286 | 0.7678 | 0.384 | 0.7678 | 0.8763 |
| No log | 2.9091 | 288 | 0.8712 | 0.3444 | 0.8712 | 0.9334 |
| No log | 2.9293 | 290 | 0.6873 | 0.384 | 0.6873 | 0.8291 |
| No log | 2.9495 | 292 | 0.6442 | 0.0 | 0.6442 | 0.8026 |
| No log | 2.9697 | 294 | 0.7407 | 0.1538 | 0.7407 | 0.8606 |
| No log | 2.9899 | 296 | 0.7156 | 0.1852 | 0.7156 | 0.8460 |
| No log | 3.0101 | 298 | 0.6347 | 0.0 | 0.6347 | 0.7967 |
| No log | 3.0303 | 300 | 0.6004 | 0.384 | 0.6004 | 0.7748 |
| No log | 3.0505 | 302 | 0.6003 | 0.384 | 0.6003 | 0.7748 |
| No log | 3.0707 | 304 | 0.6332 | 0.384 | 0.6332 | 0.7957 |
| No log | 3.0909 | 306 | 0.5999 | 0.2143 | 0.5999 | 0.7745 |
| No log | 3.1111 | 308 | 0.6406 | -0.0185 | 0.6406 | 0.8004 |
| No log | 3.1313 | 310 | 0.7295 | 0.1270 | 0.7295 | 0.8541 |
| No log | 3.1515 | 312 | 0.7011 | 0.1538 | 0.7011 | 0.8373 |
| No log | 3.1717 | 314 | 0.6232 | 0.1818 | 0.6232 | 0.7894 |
| No log | 3.1919 | 316 | 0.6172 | 0.3433 | 0.6172 | 0.7856 |
| No log | 3.2121 | 318 | 0.6205 | 0.3433 | 0.6205 | 0.7877 |
| No log | 3.2323 | 320 | 0.6281 | 0.3433 | 0.6281 | 0.7925 |
| No log | 3.2525 | 322 | 0.6564 | 0.384 | 0.6564 | 0.8102 |
| No log | 3.2727 | 324 | 0.6655 | 0.2143 | 0.6655 | 0.8158 |
| No log | 3.2929 | 326 | 0.6782 | 0.0 | 0.6782 | 0.8235 |
| No log | 3.3131 | 328 | 0.6804 | 0.0 | 0.6804 | 0.8249 |
| No log | 3.3333 | 330 | 0.6952 | 0.2143 | 0.6952 | 0.8338 |
| No log | 3.3535 | 332 | 0.8226 | 0.2029 | 0.8226 | 0.9070 |
| No log | 3.3737 | 334 | 0.8431 | 0.1987 | 0.8431 | 0.9182 |
| No log | 3.3939 | 336 | 0.6922 | 0.384 | 0.6922 | 0.8320 |
| No log | 3.4141 | 338 | 0.7109 | 0.1270 | 0.7109 | 0.8431 |
| No log | 3.4343 | 340 | 0.7595 | 0.1270 | 0.7595 | 0.8715 |
| No log | 3.4545 | 342 | 0.7124 | 0.1270 | 0.7124 | 0.8440 |
| No log | 3.4747 | 344 | 0.7997 | 0.384 | 0.7997 | 0.8942 |
| No log | 3.4949 | 346 | 1.0784 | 0.1921 | 1.0784 | 1.0385 |
| No log | 3.5152 | 348 | 1.0930 | 0.1921 | 1.0930 | 1.0455 |
| No log | 3.5354 | 350 | 0.9029 | 0.1987 | 0.9029 | 0.9502 |
| No log | 3.5556 | 352 | 0.8295 | 0.384 | 0.8295 | 0.9108 |
| No log | 3.5758 | 354 | 0.8338 | 0.384 | 0.8338 | 0.9131 |
| No log | 3.5960 | 356 | 0.8585 | 0.384 | 0.8585 | 0.9265 |
| No log | 3.6162 | 358 | 0.9370 | 0.1951 | 0.9370 | 0.9680 |
| No log | 3.6364 | 360 | 0.9141 | 0.2747 | 0.9141 | 0.9561 |
| No log | 3.6566 | 362 | 0.8511 | 0.4211 | 0.8511 | 0.9226 |
| No log | 3.6768 | 364 | 0.8672 | 0.4000 | 0.8672 | 0.9312 |
| No log | 3.6970 | 366 | 0.9334 | 0.2840 | 0.9334 | 0.9661 |
| No log | 3.7172 | 368 | 0.9953 | 0.2747 | 0.9953 | 0.9976 |
| No log | 3.7374 | 370 | 0.9750 | 0.1734 | 0.9750 | 0.9874 |
| No log | 3.7576 | 372 | 0.8934 | 0.1750 | 0.8934 | 0.9452 |
| No log | 3.7778 | 374 | 0.8191 | 0.3433 | 0.8191 | 0.9050 |
| No log | 3.7980 | 376 | 0.7768 | 0.384 | 0.7768 | 0.8814 |
| No log | 3.8182 | 378 | 0.8431 | 0.384 | 0.8431 | 0.9182 |
| No log | 3.8384 | 380 | 1.0740 | -0.0565 | 1.0740 | 1.0363 |
| No log | 3.8586 | 382 | 1.3304 | 0.0788 | 1.3304 | 1.1534 |
| No log | 3.8788 | 384 | 1.2628 | 0.0788 | 1.2628 | 1.1237 |
| No log | 3.8990 | 386 | 0.9786 | 0.1951 | 0.9786 | 0.9892 |
| No log | 3.9192 | 388 | 0.8249 | 0.2029 | 0.8249 | 0.9082 |
| No log | 3.9394 | 390 | 0.7578 | 0.0320 | 0.7578 | 0.8705 |
| No log | 3.9596 | 392 | 0.7596 | 0.0320 | 0.7596 | 0.8716 |
| No log | 3.9798 | 394 | 0.7666 | 0.0320 | 0.7666 | 0.8755 |
| No log | 4.0 | 396 | 0.8368 | 0.2029 | 0.8368 | 0.9148 |
| No log | 4.0202 | 398 | 0.9290 | 0.1987 | 0.9290 | 0.9639 |
| No log | 4.0404 | 400 | 0.8786 | 0.1987 | 0.8786 | 0.9374 |
| No log | 4.0606 | 402 | 0.7645 | 0.0320 | 0.7645 | 0.8744 |
| No log | 4.0808 | 404 | 0.7536 | 0.0 | 0.7536 | 0.8681 |
| No log | 4.1010 | 406 | 0.7712 | 0.0320 | 0.7712 | 0.8782 |
| No log | 4.1212 | 408 | 0.9349 | 0.1987 | 0.9349 | 0.9669 |
| No log | 4.1414 | 410 | 1.2310 | 0.1921 | 1.2310 | 1.1095 |
| No log | 4.1616 | 412 | 1.2531 | -0.0421 | 1.2531 | 1.1194 |
| No log | 4.1818 | 414 | 1.0973 | 0.1921 | 1.0973 | 1.0475 |
| No log | 4.2020 | 416 | 0.8580 | 0.0320 | 0.8580 | 0.9263 |
| No log | 4.2222 | 418 | 0.7965 | 0.1538 | 0.7965 | 0.8924 |
| No log | 4.2424 | 420 | 0.8042 | 0.1538 | 0.8042 | 0.8968 |
| No log | 4.2626 | 422 | 0.8183 | 0.1538 | 0.8183 | 0.9046 |
| No log | 4.2828 | 424 | 0.8488 | 0.0320 | 0.8488 | 0.9213 |
| No log | 4.3030 | 426 | 0.9763 | 0.1921 | 0.9763 | 0.9881 |
| No log | 4.3232 | 428 | 1.0226 | 0.1921 | 1.0226 | 1.0112 |
| No log | 4.3434 | 430 | 0.9847 | 0.1921 | 0.9847 | 0.9923 |
| No log | 4.3636 | 432 | 0.8696 | 0.0320 | 0.8696 | 0.9325 |
| No log | 4.3838 | 434 | 0.8647 | 0.0320 | 0.8647 | 0.9299 |
| No log | 4.4040 | 436 | 0.8952 | 0.0320 | 0.8952 | 0.9461 |
| No log | 4.4242 | 438 | 0.8750 | 0.0320 | 0.8750 | 0.9354 |
| No log | 4.4444 | 440 | 0.8135 | 0.0 | 0.8135 | 0.9020 |
| No log | 4.4646 | 442 | 0.8037 | 0.0 | 0.8037 | 0.8965 |
| No log | 4.4848 | 444 | 0.8395 | 0.0320 | 0.8395 | 0.9163 |
| No log | 4.5051 | 446 | 0.9364 | 0.2029 | 0.9364 | 0.9677 |
| No log | 4.5253 | 448 | 0.9262 | 0.2029 | 0.9262 | 0.9624 |
| No log | 4.5455 | 450 | 0.8185 | 0.0320 | 0.8185 | 0.9047 |
| No log | 4.5657 | 452 | 0.7780 | 0.0 | 0.7780 | 0.8820 |
| No log | 4.5859 | 454 | 0.7793 | 0.0 | 0.7793 | 0.8828 |
| No log | 4.6061 | 456 | 0.8350 | 0.2029 | 0.8350 | 0.9138 |
| No log | 4.6263 | 458 | 0.9371 | 0.1987 | 0.9371 | 0.9681 |
| No log | 4.6465 | 460 | 0.8989 | 0.2029 | 0.8989 | 0.9481 |
| No log | 4.6667 | 462 | 0.7672 | 0.2029 | 0.7672 | 0.8759 |
| No log | 4.6869 | 464 | 0.7358 | 0.1852 | 0.7358 | 0.8578 |
| No log | 4.7071 | 466 | 0.7294 | 0.1852 | 0.7294 | 0.8540 |
| No log | 4.7273 | 468 | 0.7245 | 0.384 | 0.7245 | 0.8512 |
| No log | 4.7475 | 470 | 0.7245 | 0.384 | 0.7245 | 0.8512 |
| No log | 4.7677 | 472 | 0.7235 | 0.2029 | 0.7235 | 0.8506 |
| No log | 4.7879 | 474 | 0.6957 | 0.3636 | 0.6957 | 0.8341 |
| No log | 4.8081 | 476 | 0.7275 | 0.1270 | 0.7275 | 0.8529 |
| No log | 4.8283 | 478 | 0.7382 | 0.1270 | 0.7382 | 0.8592 |
| No log | 4.8485 | 480 | 0.6924 | 0.1270 | 0.6924 | 0.8321 |
| No log | 4.8687 | 482 | 0.6771 | 0.5075 | 0.6771 | 0.8228 |
| No log | 4.8889 | 484 | 0.6992 | 0.384 | 0.6992 | 0.8362 |
| No log | 4.9091 | 486 | 0.6943 | 0.384 | 0.6943 | 0.8333 |
| No log | 4.9293 | 488 | 0.6724 | 0.5075 | 0.6724 | 0.8200 |
| No log | 4.9495 | 490 | 0.6705 | 0.5075 | 0.6705 | 0.8188 |
| No log | 4.9697 | 492 | 0.6826 | 0.384 | 0.6826 | 0.8262 |
| No log | 4.9899 | 494 | 0.6784 | 0.5075 | 0.6784 | 0.8237 |
| No log | 5.0101 | 496 | 0.6806 | 0.5075 | 0.6806 | 0.8250 |
| No log | 5.0303 | 498 | 0.6776 | 0.4211 | 0.6776 | 0.8232 |
| 0.4617 | 5.0505 | 500 | 0.6905 | 0.1270 | 0.6905 | 0.8309 |
| 0.4617 | 5.0707 | 502 | 0.6892 | 0.4615 | 0.6892 | 0.8302 |
| 0.4617 | 5.0909 | 504 | 0.7388 | 0.384 | 0.7388 | 0.8595 |
| 0.4617 | 5.1111 | 506 | 0.7651 | 0.2029 | 0.7651 | 0.8747 |
| 0.4617 | 5.1313 | 508 | 0.7350 | 0.384 | 0.7350 | 0.8573 |
| 0.4617 | 5.1515 | 510 | 0.7106 | 0.384 | 0.7106 | 0.8430 |
| 0.4617 | 5.1717 | 512 | 0.7218 | 0.384 | 0.7218 | 0.8496 |
| 0.4617 | 5.1919 | 514 | 0.7384 | 0.384 | 0.7384 | 0.8593 |
| 0.4617 | 5.2121 | 516 | 0.7398 | 0.2143 | 0.7398 | 0.8601 |
| 0.4617 | 5.2323 | 518 | 0.7824 | 0.384 | 0.7824 | 0.8845 |
| 0.4617 | 5.2525 | 520 | 0.8221 | 0.2029 | 0.8221 | 0.9067 |
| 0.4617 | 5.2727 | 522 | 0.7909 | 0.384 | 0.7909 | 0.8893 |
| 0.4617 | 5.2929 | 524 | 0.7673 | 0.384 | 0.7673 | 0.8759 |
| 0.4617 | 5.3131 | 526 | 0.7445 | 0.2143 | 0.7445 | 0.8628 |
| 0.4617 | 5.3333 | 528 | 0.7453 | 0.2143 | 0.7453 | 0.8633 |
| 0.4617 | 5.3535 | 530 | 0.7578 | 0.384 | 0.7578 | 0.8705 |
| 0.4617 | 5.3737 | 532 | 0.7849 | 0.2029 | 0.7849 | 0.8859 |
| 0.4617 | 5.3939 | 534 | 0.7659 | 0.384 | 0.7659 | 0.8752 |
| 0.4617 | 5.4141 | 536 | 0.7437 | 0.3433 | 0.7437 | 0.8624 |
| 0.4617 | 5.4343 | 538 | 0.7375 | 0.2878 | 0.7375 | 0.8588 |
| 0.4617 | 5.4545 | 540 | 0.7432 | 0.4211 | 0.7432 | 0.8621 |
| 0.4617 | 5.4747 | 542 | 0.7894 | 0.2029 | 0.7894 | 0.8885 |
| 0.4617 | 5.4949 | 544 | 0.9132 | 0.2029 | 0.9132 | 0.9556 |
| 0.4617 | 5.5152 | 546 | 0.9850 | 0.1951 | 0.9850 | 0.9925 |
| 0.4617 | 5.5354 | 548 | 0.9236 | 0.2029 | 0.9236 | 0.9611 |
| 0.4617 | 5.5556 | 550 | 0.8349 | 0.384 | 0.8349 | 0.9137 |
| 0.4617 | 5.5758 | 552 | 0.7635 | 0.0 | 0.7635 | 0.8738 |
| 0.4617 | 5.5960 | 554 | 0.7528 | 0.0 | 0.7528 | 0.8677 |
| 0.4617 | 5.6162 | 556 | 0.7691 | 0.0 | 0.7691 | 0.8770 |
| 0.4617 | 5.6364 | 558 | 0.8116 | 0.384 | 0.8116 | 0.9009 |
| 0.4617 | 5.6566 | 560 | 0.8908 | 0.2029 | 0.8908 | 0.9438 |
| 0.4617 | 5.6768 | 562 | 0.9508 | 0.1987 | 0.9508 | 0.9751 |
| 0.4617 | 5.6970 | 564 | 1.0473 | 0.1921 | 1.0473 | 1.0234 |
| 0.4617 | 5.7172 | 566 | 1.0473 | 0.1921 | 1.0473 | 1.0234 |
| 0.4617 | 5.7374 | 568 | 0.9289 | 0.1987 | 0.9289 | 0.9638 |
| 0.4617 | 5.7576 | 570 | 0.8104 | 0.3433 | 0.8104 | 0.9002 |
| 0.4617 | 5.7778 | 572 | 0.7963 | 0.3433 | 0.7963 | 0.8924 |
| 0.4617 | 5.7980 | 574 | 0.8428 | 0.2029 | 0.8428 | 0.9180 |
| 0.4617 | 5.8182 | 576 | 0.8713 | 0.2029 | 0.8713 | 0.9334 |
| 0.4617 | 5.8384 | 578 | 0.8590 | 0.2029 | 0.8590 | 0.9268 |
| 0.4617 | 5.8586 | 580 | 0.8821 | 0.2029 | 0.8821 | 0.9392 |
| 0.4617 | 5.8788 | 582 | 0.8557 | 0.2029 | 0.8557 | 0.9251 |
| 0.4617 | 5.8990 | 584 | 0.8626 | 0.2029 | 0.8626 | 0.9288 |
| 0.4617 | 5.9192 | 586 | 0.8270 | 0.2029 | 0.8270 | 0.9094 |
| 0.4617 | 5.9394 | 588 | 0.8479 | 0.2029 | 0.8479 | 0.9208 |
| 0.4617 | 5.9596 | 590 | 0.8319 | 0.2029 | 0.8319 | 0.9121 |
| 0.4617 | 5.9798 | 592 | 0.8631 | 0.2029 | 0.8631 | 0.9291 |
| 0.4617 | 6.0 | 594 | 0.9320 | 0.1987 | 0.9320 | 0.9654 |
| 0.4617 | 6.0202 | 596 | 0.9303 | 0.1951 | 0.9303 | 0.9645 |
| 0.4617 | 6.0404 | 598 | 0.8493 | 0.2029 | 0.8493 | 0.9216 |
| 0.4617 | 6.0606 | 600 | 0.8000 | 0.3265 | 0.8000 | 0.8944 |
| 0.4617 | 6.0808 | 602 | 0.7864 | 0.5075 | 0.7864 | 0.8868 |
| 0.4617 | 6.1010 | 604 | 0.8066 | 0.2029 | 0.8066 | 0.8981 |
| 0.4617 | 6.1212 | 606 | 0.8411 | 0.2029 | 0.8411 | 0.9171 |
| 0.4617 | 6.1414 | 608 | 0.7934 | 0.2029 | 0.7934 | 0.8907 |
| 0.4617 | 6.1616 | 610 | 0.7829 | 0.384 | 0.7829 | 0.8848 |
| 0.4617 | 6.1818 | 612 | 0.7933 | 0.384 | 0.7933 | 0.8907 |
| 0.4617 | 6.2020 | 614 | 0.8320 | 0.2029 | 0.8320 | 0.9122 |
| 0.4617 | 6.2222 | 616 | 0.8289 | 0.2029 | 0.8289 | 0.9104 |
| 0.4617 | 6.2424 | 618 | 0.8067 | 0.2029 | 0.8067 | 0.8982 |
| 0.4617 | 6.2626 | 620 | 0.8523 | 0.1987 | 0.8523 | 0.9232 |
| 0.4617 | 6.2828 | 622 | 0.8392 | 0.2029 | 0.8392 | 0.9161 |
| 0.4617 | 6.3030 | 624 | 0.8004 | 0.2029 | 0.8004 | 0.8946 |
| 0.4617 | 6.3232 | 626 | 0.7944 | 0.384 | 0.7944 | 0.8913 |
| 0.4617 | 6.3434 | 628 | 0.8116 | 0.2029 | 0.8116 | 0.9009 |
| 0.4617 | 6.3636 | 630 | 0.8725 | 0.2029 | 0.8725 | 0.9341 |
| 0.4617 | 6.3838 | 632 | 0.9278 | 0.1951 | 0.9278 | 0.9632 |
| 0.4617 | 6.4040 | 634 | 0.8919 | 0.1987 | 0.8919 | 0.9444 |
| 0.4617 | 6.4242 | 636 | 0.8090 | 0.2029 | 0.8090 | 0.8994 |
| 0.4617 | 6.4444 | 638 | 0.7393 | 0.384 | 0.7393 | 0.8598 |
| 0.4617 | 6.4646 | 640 | 0.7157 | 0.3636 | 0.7157 | 0.8460 |
| 0.4617 | 6.4848 | 642 | 0.7261 | 0.384 | 0.7261 | 0.8521 |
| 0.4617 | 6.5051 | 644 | 0.7503 | 0.2029 | 0.7503 | 0.8662 |
| 0.4617 | 6.5253 | 646 | 0.8019 | 0.2029 | 0.8019 | 0.8955 |
| 0.4617 | 6.5455 | 648 | 0.8140 | 0.2029 | 0.8140 | 0.9022 |
| 0.4617 | 6.5657 | 650 | 0.8252 | 0.2029 | 0.8252 | 0.9084 |
| 0.4617 | 6.5859 | 652 | 0.8389 | 0.2029 | 0.8389 | 0.9159 |
| 0.4617 | 6.6061 | 654 | 0.8216 | 0.2029 | 0.8216 | 0.9064 |
| 0.4617 | 6.6263 | 656 | 0.7953 | 0.2029 | 0.7953 | 0.8918 |
| 0.4617 | 6.6465 | 658 | 0.7717 | 0.2029 | 0.7717 | 0.8784 |
| 0.4617 | 6.6667 | 660 | 0.7857 | 0.2029 | 0.7857 | 0.8864 |
| 0.4617 | 6.6869 | 662 | 0.8221 | 0.2029 | 0.8221 | 0.9067 |
| 0.4617 | 6.7071 | 664 | 0.9082 | 0.1987 | 0.9082 | 0.9530 |
| 0.4617 | 6.7273 | 666 | 0.9232 | 0.1987 | 0.9232 | 0.9608 |
| 0.4617 | 6.7475 | 668 | 0.8499 | 0.2029 | 0.8499 | 0.9219 |
| 0.4617 | 6.7677 | 670 | 0.7994 | 0.2029 | 0.7994 | 0.8941 |
| 0.4617 | 6.7879 | 672 | 0.7864 | 0.2029 | 0.7864 | 0.8868 |
| 0.4617 | 6.8081 | 674 | 0.8177 | 0.2029 | 0.8177 | 0.9043 |
| 0.4617 | 6.8283 | 676 | 0.8775 | 0.2029 | 0.8775 | 0.9368 |
| 0.4617 | 6.8485 | 678 | 0.8881 | 0.1987 | 0.8881 | 0.9424 |
| 0.4617 | 6.8687 | 680 | 0.8717 | 0.2029 | 0.8717 | 0.9336 |
| 0.4617 | 6.8889 | 682 | 0.8423 | 0.2029 | 0.8423 | 0.9178 |
| 0.4617 | 6.9091 | 684 | 0.8006 | 0.2029 | 0.8006 | 0.8948 |
| 0.4617 | 6.9293 | 686 | 0.7739 | 0.2029 | 0.7739 | 0.8797 |
| 0.4617 | 6.9495 | 688 | 0.7809 | 0.2029 | 0.7809 | 0.8837 |
| 0.4617 | 6.9697 | 690 | 0.7957 | 0.2029 | 0.7957 | 0.8920 |
| 0.4617 | 6.9899 | 692 | 0.8261 | 0.2029 | 0.8261 | 0.9089 |
| 0.4617 | 7.0101 | 694 | 0.8116 | 0.2029 | 0.8116 | 0.9009 |
| 0.4617 | 7.0303 | 696 | 0.8180 | 0.2029 | 0.8180 | 0.9044 |
| 0.4617 | 7.0505 | 698 | 0.8426 | 0.2029 | 0.8426 | 0.9179 |
| 0.4617 | 7.0707 | 700 | 0.8360 | 0.2029 | 0.8360 | 0.9143 |
| 0.4617 | 7.0909 | 702 | 0.8294 | 0.2029 | 0.8294 | 0.9107 |
| 0.4617 | 7.1111 | 704 | 0.8248 | 0.2029 | 0.8248 | 0.9082 |
| 0.4617 | 7.1313 | 706 | 0.8245 | 0.2029 | 0.8245 | 0.9080 |
| 0.4617 | 7.1515 | 708 | 0.8240 | 0.2029 | 0.8240 | 0.9077 |
| 0.4617 | 7.1717 | 710 | 0.8072 | 0.2029 | 0.8072 | 0.8985 |
| 0.4617 | 7.1919 | 712 | 0.7901 | 0.2029 | 0.7901 | 0.8889 |
| 0.4617 | 7.2121 | 714 | 0.8020 | 0.2029 | 0.8020 | 0.8956 |
| 0.4617 | 7.2323 | 716 | 0.8012 | 0.2029 | 0.8012 | 0.8951 |
| 0.4617 | 7.2525 | 718 | 0.8052 | 0.2029 | 0.8052 | 0.8973 |
| 0.4617 | 7.2727 | 720 | 0.8222 | 0.2029 | 0.8222 | 0.9068 |
| 0.4617 | 7.2929 | 722 | 0.8313 | 0.2029 | 0.8313 | 0.9118 |
| 0.4617 | 7.3131 | 724 | 0.8335 | 0.2029 | 0.8335 | 0.9130 |
| 0.4617 | 7.3333 | 726 | 0.8679 | 0.1987 | 0.8679 | 0.9316 |
| 0.4617 | 7.3535 | 728 | 0.8829 | 0.1987 | 0.8829 | 0.9396 |
| 0.4617 | 7.3737 | 730 | 0.8676 | 0.1987 | 0.8676 | 0.9315 |
| 0.4617 | 7.3939 | 732 | 0.8309 | 0.2029 | 0.8309 | 0.9116 |
| 0.4617 | 7.4141 | 734 | 0.8300 | 0.2029 | 0.8300 | 0.9110 |
| 0.4617 | 7.4343 | 736 | 0.8564 | 0.2029 | 0.8564 | 0.9254 |
| 0.4617 | 7.4545 | 738 | 0.8798 | 0.1987 | 0.8798 | 0.9380 |
| 0.4617 | 7.4747 | 740 | 0.8667 | 0.1987 | 0.8667 | 0.9310 |
| 0.4617 | 7.4949 | 742 | 0.8735 | 0.1987 | 0.8735 | 0.9346 |
| 0.4617 | 7.5152 | 744 | 0.9050 | 0.1987 | 0.9050 | 0.9513 |
| 0.4617 | 7.5354 | 746 | 0.9377 | 0.1951 | 0.9377 | 0.9684 |
| 0.4617 | 7.5556 | 748 | 0.9645 | 0.1951 | 0.9645 | 0.9821 |
| 0.4617 | 7.5758 | 750 | 0.9683 | 0.1951 | 0.9683 | 0.9840 |
| 0.4617 | 7.5960 | 752 | 0.9486 | 0.1951 | 0.9486 | 0.9739 |
| 0.4617 | 7.6162 | 754 | 0.9288 | 0.1987 | 0.9288 | 0.9637 |
| 0.4617 | 7.6364 | 756 | 0.9244 | 0.1987 | 0.9244 | 0.9614 |
| 0.4617 | 7.6566 | 758 | 0.8921 | 0.1987 | 0.8921 | 0.9445 |
| 0.4617 | 7.6768 | 760 | 0.8422 | 0.1987 | 0.8422 | 0.9177 |
| 0.4617 | 7.6970 | 762 | 0.8131 | 0.2029 | 0.8131 | 0.9017 |
| 0.4617 | 7.7172 | 764 | 0.7830 | 0.2029 | 0.7830 | 0.8849 |
| 0.4617 | 7.7374 | 766 | 0.7726 | 0.2029 | 0.7726 | 0.8790 |
| 0.4617 | 7.7576 | 768 | 0.7794 | 0.2029 | 0.7794 | 0.8828 |
| 0.4617 | 7.7778 | 770 | 0.7964 | 0.2029 | 0.7964 | 0.8924 |
| 0.4617 | 7.7980 | 772 | 0.8388 | 0.2029 | 0.8388 | 0.9159 |
| 0.4617 | 7.8182 | 774 | 0.8916 | 0.1987 | 0.8916 | 0.9442 |
| 0.4617 | 7.8384 | 776 | 0.9113 | 0.1987 | 0.9113 | 0.9546 |
| 0.4617 | 7.8586 | 778 | 0.9078 | 0.1987 | 0.9078 | 0.9528 |
| 0.4617 | 7.8788 | 780 | 0.8704 | 0.1987 | 0.8704 | 0.9329 |
| 0.4617 | 7.8990 | 782 | 0.8164 | 0.2029 | 0.8164 | 0.9035 |
| 0.4617 | 7.9192 | 784 | 0.7926 | 0.2029 | 0.7926 | 0.8903 |
| 0.4617 | 7.9394 | 786 | 0.7926 | 0.2029 | 0.7926 | 0.8903 |
| 0.4617 | 7.9596 | 788 | 0.8297 | 0.2029 | 0.8297 | 0.9109 |
| 0.4617 | 7.9798 | 790 | 0.8586 | 0.1987 | 0.8586 | 0.9266 |
| 0.4617 | 8.0 | 792 | 0.8656 | 0.1987 | 0.8656 | 0.9304 |
| 0.4617 | 8.0202 | 794 | 0.9050 | 0.1987 | 0.9050 | 0.9513 |
| 0.4617 | 8.0404 | 796 | 0.9370 | 0.1987 | 0.9370 | 0.9680 |
| 0.4617 | 8.0606 | 798 | 0.9210 | 0.1987 | 0.9210 | 0.9597 |
| 0.4617 | 8.0808 | 800 | 0.9182 | 0.1987 | 0.9182 | 0.9582 |
| 0.4617 | 8.1010 | 802 | 0.9507 | 0.1987 | 0.9507 | 0.9751 |
| 0.4617 | 8.1212 | 804 | 0.9864 | 0.1987 | 0.9864 | 0.9932 |
| 0.4617 | 8.1414 | 806 | 0.9704 | 0.1987 | 0.9704 | 0.9851 |
| 0.4617 | 8.1616 | 808 | 0.9448 | 0.1987 | 0.9448 | 0.9720 |
| 0.4617 | 8.1818 | 810 | 0.9472 | 0.1987 | 0.9472 | 0.9733 |
| 0.4617 | 8.2020 | 812 | 0.9437 | 0.1987 | 0.9437 | 0.9715 |
| 0.4617 | 8.2222 | 814 | 0.9635 | 0.1987 | 0.9635 | 0.9816 |
| 0.4617 | 8.2424 | 816 | 0.9944 | 0.1987 | 0.9944 | 0.9972 |
| 0.4617 | 8.2626 | 818 | 0.9957 | 0.1987 | 0.9957 | 0.9978 |
| 0.4617 | 8.2828 | 820 | 0.9694 | 0.1987 | 0.9694 | 0.9846 |
| 0.4617 | 8.3030 | 822 | 0.9303 | 0.1987 | 0.9303 | 0.9645 |
| 0.4617 | 8.3232 | 824 | 0.9262 | 0.1987 | 0.9262 | 0.9624 |
| 0.4617 | 8.3434 | 826 | 0.9242 | 0.1987 | 0.9242 | 0.9614 |
| 0.4617 | 8.3636 | 828 | 0.9471 | 0.1987 | 0.9471 | 0.9732 |
| 0.4617 | 8.3838 | 830 | 0.9760 | 0.1987 | 0.9760 | 0.9879 |
| 0.4617 | 8.4040 | 832 | 0.9810 | 0.1987 | 0.9810 | 0.9904 |
| 0.4617 | 8.4242 | 834 | 0.9689 | 0.1987 | 0.9689 | 0.9843 |
| 0.4617 | 8.4444 | 836 | 0.9473 | 0.1987 | 0.9473 | 0.9733 |
| 0.4617 | 8.4646 | 838 | 0.9080 | 0.1987 | 0.9080 | 0.9529 |
| 0.4617 | 8.4848 | 840 | 0.8669 | 0.2029 | 0.8669 | 0.9311 |
| 0.4617 | 8.5051 | 842 | 0.8496 | 0.2029 | 0.8496 | 0.9218 |
| 0.4617 | 8.5253 | 844 | 0.8529 | 0.2029 | 0.8529 | 0.9235 |
| 0.4617 | 8.5455 | 846 | 0.8661 | 0.2029 | 0.8661 | 0.9307 |
| 0.4617 | 8.5657 | 848 | 0.9038 | 0.1987 | 0.9038 | 0.9507 |
| 0.4617 | 8.5859 | 850 | 0.9538 | 0.1987 | 0.9538 | 0.9766 |
| 0.4617 | 8.6061 | 852 | 0.9743 | 0.1987 | 0.9743 | 0.9871 |
| 0.4617 | 8.6263 | 854 | 0.9872 | 0.1987 | 0.9872 | 0.9936 |
| 0.4617 | 8.6465 | 856 | 0.9842 | 0.1987 | 0.9842 | 0.9921 |
| 0.4617 | 8.6667 | 858 | 0.9818 | 0.1987 | 0.9818 | 0.9909 |
| 0.4617 | 8.6869 | 860 | 0.9862 | 0.1987 | 0.9862 | 0.9931 |
| 0.4617 | 8.7071 | 862 | 0.9802 | 0.1987 | 0.9802 | 0.9900 |
| 0.4617 | 8.7273 | 864 | 0.9524 | 0.1987 | 0.9524 | 0.9759 |
| 0.4617 | 8.7475 | 866 | 0.9165 | 0.1987 | 0.9165 | 0.9573 |
| 0.4617 | 8.7677 | 868 | 0.8807 | 0.1987 | 0.8807 | 0.9385 |
| 0.4617 | 8.7879 | 870 | 0.8628 | 0.2029 | 0.8628 | 0.9289 |
| 0.4617 | 8.8081 | 872 | 0.8614 | 0.2029 | 0.8614 | 0.9281 |
| 0.4617 | 8.8283 | 874 | 0.8763 | 0.1987 | 0.8763 | 0.9361 |
| 0.4617 | 8.8485 | 876 | 0.8916 | 0.1987 | 0.8916 | 0.9443 |
| 0.4617 | 8.8687 | 878 | 0.9208 | 0.1987 | 0.9208 | 0.9596 |
| 0.4617 | 8.8889 | 880 | 0.9524 | 0.1987 | 0.9524 | 0.9759 |
| 0.4617 | 8.9091 | 882 | 0.9677 | 0.1987 | 0.9677 | 0.9837 |
| 0.4617 | 8.9293 | 884 | 0.9648 | 0.1987 | 0.9648 | 0.9822 |
| 0.4617 | 8.9495 | 886 | 0.9395 | 0.1987 | 0.9395 | 0.9693 |
| 0.4617 | 8.9697 | 888 | 0.9091 | 0.1987 | 0.9091 | 0.9535 |
| 0.4617 | 8.9899 | 890 | 0.8917 | 0.1987 | 0.8917 | 0.9443 |
| 0.4617 | 9.0101 | 892 | 0.8814 | 0.1987 | 0.8814 | 0.9388 |
| 0.4617 | 9.0303 | 894 | 0.8792 | 0.1987 | 0.8792 | 0.9377 |
| 0.4617 | 9.0505 | 896 | 0.8932 | 0.1987 | 0.8932 | 0.9451 |
| 0.4617 | 9.0707 | 898 | 0.9066 | 0.1987 | 0.9066 | 0.9521 |
| 0.4617 | 9.0909 | 900 | 0.9217 | 0.1987 | 0.9217 | 0.9601 |
| 0.4617 | 9.1111 | 902 | 0.9465 | 0.1987 | 0.9465 | 0.9729 |
| 0.4617 | 9.1313 | 904 | 0.9535 | 0.1987 | 0.9535 | 0.9765 |
| 0.4617 | 9.1515 | 906 | 0.9453 | 0.1987 | 0.9453 | 0.9722 |
| 0.4617 | 9.1717 | 908 | 0.9238 | 0.1987 | 0.9238 | 0.9611 |
| 0.4617 | 9.1919 | 910 | 0.9144 | 0.1987 | 0.9144 | 0.9563 |
| 0.4617 | 9.2121 | 912 | 0.9007 | 0.1987 | 0.9007 | 0.9490 |
| 0.4617 | 9.2323 | 914 | 0.8845 | 0.1987 | 0.8845 | 0.9405 |
| 0.4617 | 9.2525 | 916 | 0.8656 | 0.1987 | 0.8656 | 0.9304 |
| 0.4617 | 9.2727 | 918 | 0.8505 | 0.2029 | 0.8505 | 0.9222 |
| 0.4617 | 9.2929 | 920 | 0.8442 | 0.2029 | 0.8442 | 0.9188 |
| 0.4617 | 9.3131 | 922 | 0.8490 | 0.2029 | 0.8490 | 0.9214 |
| 0.4617 | 9.3333 | 924 | 0.8608 | 0.1987 | 0.8608 | 0.9278 |
| 0.4617 | 9.3535 | 926 | 0.8727 | 0.1987 | 0.8727 | 0.9342 |
| 0.4617 | 9.3737 | 928 | 0.8807 | 0.1987 | 0.8807 | 0.9384 |
| 0.4617 | 9.3939 | 930 | 0.8794 | 0.1987 | 0.8794 | 0.9378 |
| 0.4617 | 9.4141 | 932 | 0.8861 | 0.1987 | 0.8861 | 0.9413 |
| 0.4617 | 9.4343 | 934 | 0.8906 | 0.1987 | 0.8906 | 0.9437 |
| 0.4617 | 9.4545 | 936 | 0.8978 | 0.1987 | 0.8978 | 0.9475 |
| 0.4617 | 9.4747 | 938 | 0.9052 | 0.1987 | 0.9052 | 0.9514 |
| 0.4617 | 9.4949 | 940 | 0.9149 | 0.1987 | 0.9149 | 0.9565 |
| 0.4617 | 9.5152 | 942 | 0.9178 | 0.1987 | 0.9178 | 0.9580 |
| 0.4617 | 9.5354 | 944 | 0.9246 | 0.1987 | 0.9246 | 0.9616 |
| 0.4617 | 9.5556 | 946 | 0.9233 | 0.1987 | 0.9233 | 0.9609 |
| 0.4617 | 9.5758 | 948 | 0.9202 | 0.1987 | 0.9202 | 0.9593 |
| 0.4617 | 9.5960 | 950 | 0.9129 | 0.1987 | 0.9129 | 0.9555 |
| 0.4617 | 9.6162 | 952 | 0.9008 | 0.1987 | 0.9008 | 0.9491 |
| 0.4617 | 9.6364 | 954 | 0.8922 | 0.1987 | 0.8922 | 0.9446 |
| 0.4617 | 9.6566 | 956 | 0.8856 | 0.1987 | 0.8856 | 0.9411 |
| 0.4617 | 9.6768 | 958 | 0.8848 | 0.1987 | 0.8848 | 0.9406 |
| 0.4617 | 9.6970 | 960 | 0.8857 | 0.1987 | 0.8857 | 0.9411 |
| 0.4617 | 9.7172 | 962 | 0.8871 | 0.1987 | 0.8871 | 0.9419 |
| 0.4617 | 9.7374 | 964 | 0.8914 | 0.1987 | 0.8914 | 0.9441 |
| 0.4617 | 9.7576 | 966 | 0.8958 | 0.1987 | 0.8958 | 0.9465 |
| 0.4617 | 9.7778 | 968 | 0.9012 | 0.1987 | 0.9012 | 0.9493 |
| 0.4617 | 9.7980 | 970 | 0.9089 | 0.1987 | 0.9089 | 0.9533 |
| 0.4617 | 9.8182 | 972 | 0.9175 | 0.1987 | 0.9175 | 0.9579 |
| 0.4617 | 9.8384 | 974 | 0.9259 | 0.1987 | 0.9259 | 0.9623 |
| 0.4617 | 9.8586 | 976 | 0.9343 | 0.1987 | 0.9343 | 0.9666 |
| 0.4617 | 9.8788 | 978 | 0.9425 | 0.1987 | 0.9425 | 0.9708 |
| 0.4617 | 9.8990 | 980 | 0.9473 | 0.1987 | 0.9473 | 0.9733 |
| 0.4617 | 9.9192 | 982 | 0.9501 | 0.1987 | 0.9501 | 0.9747 |
| 0.4617 | 9.9394 | 984 | 0.9522 | 0.1987 | 0.9522 | 0.9758 |
| 0.4617 | 9.9596 | 986 | 0.9525 | 0.1987 | 0.9525 | 0.9760 |
| 0.4617 | 9.9798 | 988 | 0.9520 | 0.1987 | 0.9520 | 0.9757 |
| 0.4617 | 10.0 | 990 | 0.9518 | 0.1987 | 0.9518 | 0.9756 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/RP-Naughty-v1.0e-8b-GGUF | mradermacher | 2024-11-25T10:59:47Z | 5 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-11-25T10:43:05Z | ---
base_model: MrRobotoAI/RP-Naughty-v1.0e-8b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/MrRobotoAI/RP-Naughty-v1.0e-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
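As a minimal sketch, assuming the `huggingface_hub` and `llama-cpp-python` packages are installed, a single-file quant from this repo (file name taken from the table below) can be downloaded and loaded like this:
```python
# Hedged usage sketch; the file name comes from the "Provided Quants" table below.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the static quants from this repository.
gguf_path = hf_hub_download(
    repo_id="mradermacher/RP-Naughty-v1.0e-8b-GGUF",
    filename="RP-Naughty-v1.0e-8b.Q4_K_M.gguf",
)

# Load it with the llama.cpp bindings; n_ctx is an illustrative choice.
llm = Llama(model_path=gguf_path, n_ctx=4096)
print(llm("Write a short greeting.", max_tokens=64)["choices"][0]["text"])
```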
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0e-8b-GGUF/resolve/main/RP-Naughty-v1.0e-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0e-8b-GGUF/resolve/main/RP-Naughty-v1.0e-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0e-8b-GGUF/resolve/main/RP-Naughty-v1.0e-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0e-8b-GGUF/resolve/main/RP-Naughty-v1.0e-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0e-8b-GGUF/resolve/main/RP-Naughty-v1.0e-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0e-8b-GGUF/resolve/main/RP-Naughty-v1.0e-8b.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0e-8b-GGUF/resolve/main/RP-Naughty-v1.0e-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0e-8b-GGUF/resolve/main/RP-Naughty-v1.0e-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0e-8b-GGUF/resolve/main/RP-Naughty-v1.0e-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0e-8b-GGUF/resolve/main/RP-Naughty-v1.0e-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0e-8b-GGUF/resolve/main/RP-Naughty-v1.0e-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0e-8b-GGUF/resolve/main/RP-Naughty-v1.0e-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/RP-Naughty-v1.0e-8b-GGUF/resolve/main/RP-Naughty-v1.0e-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ahid1/xlm-roberta-base-finetuned-panx-all | ahid1 | 2024-11-25T10:54:35Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-25T10:40:49Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1719
- F1: 0.8568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative sketch follows this list):
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
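As an illustration only (the actual training script is not included here), these values map onto `TrainingArguments` roughly as follows; `output_dir` is a placeholder:
```python
# Illustrative reconstruction of the hyperparameters listed above; treat it as
# a sketch rather than the exact configuration used for this model.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-all",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    optim="adamw_torch",           # AdamW with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```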
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2937 | 1.0 | 835 | 0.1942 | 0.8142 |
| 0.1544 | 2.0 | 1670 | 0.1658 | 0.8460 |
| 0.0991 | 3.0 | 2505 | 0.1719 | 0.8568 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
anhdang000/UniChart-Pretrain | anhdang000 | 2024-11-25T10:52:55Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-11-19T16:26:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
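In the meantime, a hedged sketch is shown below; it assumes the repository ships Donut-style processor files and that UniChart-style task prompts (for example `<extract_data_table> <s_answer>`) apply to this checkpoint, neither of which is confirmed by this card:
```python
# Hedged sketch only: Donut-style processor files and UniChart-style prompts
# are assumptions, not confirmed by this model card.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "anhdang000/UniChart-Pretrain"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("chart.png").convert("RGB")   # placeholder chart image
prompt = "<extract_data_table> <s_answer>"        # assumed task prompt

pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(
    prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=512,
)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```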
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
July-Tokyo/xlm-roberta-base-finetuned-panx-it | July-Tokyo | 2024-11-25T10:48:56Z | 124 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-25T10:38:14Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2623
- F1: 0.8196
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7112 | 1.0 | 70 | 0.3326 | 0.7465 |
| 0.2753 | 2.0 | 140 | 0.2575 | 0.8041 |
| 0.1793 | 3.0 | 210 | 0.2623 | 0.8196 |
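The card does not state how the F1 column is computed. Assuming it is the entity-level span F1 commonly used for PAN-X token classification, a minimal `seqeval` sketch looks like this:
```python
# Hedged illustration: entity-level span F1 via seqeval is assumed, not stated
# in the card.
from seqeval.metrics import f1_score

y_true = [["B-PER", "I-PER", "O", "B-LOC"]]  # toy gold label sequence
y_pred = [["B-PER", "I-PER", "O", "O"]]      # toy predicted label sequence

print(f1_score(y_true, y_pred))  # span-level F1; about 0.67 for this toy pair
```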
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.1
|
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k15_task3_organization_fold0 | MayBashendy | 2024-11-25T10:45:32Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T10:37:59Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k15_task3_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k15_task3_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset.
It achieves the following results on the evaluation set (a metric sketch follows this list):
- Loss: 0.9839
- Qwk: -0.1818
- Mse: 0.9839
- Rmse: 0.9919
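The evaluation code is not included in this card. As a hedged sketch, assuming Qwk is quadratic-weighted Cohen's kappa computed on predictions rounded to the integer score scale, the three metrics can be reproduced like this:
```python
# Hedged illustration of the metric definitions above; the rounding step for
# Qwk is an assumption, not something stated in the card.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([0, 1, 2, 1, 0])            # toy gold scores
y_pred = np.array([0.2, 1.4, 1.1, 1.0, 0.3])  # toy model outputs

mse = mean_squared_error(y_true, y_pred)
rmse = float(np.sqrt(mse))
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")
print(f"Qwk: {qwk:.4f}, Mse: {mse:.4f}, Rmse: {rmse:.4f}")
```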
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0263 | 2 | 4.0892 | -0.0072 | 4.0892 | 2.0222 |
| No log | 0.0526 | 4 | 2.2108 | 0.0 | 2.2108 | 1.4869 |
| No log | 0.0789 | 6 | 1.3794 | 0.0 | 1.3794 | 1.1745 |
| No log | 0.1053 | 8 | 1.8577 | -0.0080 | 1.8577 | 1.3630 |
| No log | 0.1316 | 10 | 2.2145 | 0.0 | 2.2145 | 1.4881 |
| No log | 0.1579 | 12 | 1.5343 | 0.0833 | 1.5343 | 1.2387 |
| No log | 0.1842 | 14 | 0.8763 | 0.0 | 0.8763 | 0.9361 |
| No log | 0.2105 | 16 | 0.8789 | 0.0 | 0.8789 | 0.9375 |
| No log | 0.2368 | 18 | 0.9541 | 0.0 | 0.9541 | 0.9768 |
| No log | 0.2632 | 20 | 1.2096 | 0.1834 | 1.2096 | 1.0998 |
| No log | 0.2895 | 22 | 1.4628 | 0.0 | 1.4628 | 1.2094 |
| No log | 0.3158 | 24 | 1.3886 | 0.0 | 1.3886 | 1.1784 |
| No log | 0.3421 | 26 | 1.2008 | -0.0421 | 1.2008 | 1.0958 |
| No log | 0.3684 | 28 | 1.2140 | -0.0296 | 1.2140 | 1.1018 |
| No log | 0.3947 | 30 | 1.2237 | -0.0296 | 1.2237 | 1.1062 |
| No log | 0.4211 | 32 | 1.1628 | 0.0610 | 1.1628 | 1.0783 |
| No log | 0.4474 | 34 | 1.1713 | 0.0610 | 1.1713 | 1.0823 |
| No log | 0.4737 | 36 | 1.1400 | 0.0530 | 1.1400 | 1.0677 |
| No log | 0.5 | 38 | 1.0800 | 0.0530 | 1.0800 | 1.0392 |
| No log | 0.5263 | 40 | 1.1381 | 0.1895 | 1.1381 | 1.0668 |
| No log | 0.5526 | 42 | 1.2351 | 0.0873 | 1.2351 | 1.1113 |
| No log | 0.5789 | 44 | 1.2973 | 0.0873 | 1.2973 | 1.1390 |
| No log | 0.6053 | 46 | 1.2182 | 0.0873 | 1.2182 | 1.1037 |
| No log | 0.6316 | 48 | 1.2507 | 0.0873 | 1.2507 | 1.1184 |
| No log | 0.6579 | 50 | 1.4212 | 0.0 | 1.4212 | 1.1921 |
| No log | 0.6842 | 52 | 1.2621 | 0.0873 | 1.2621 | 1.1234 |
| No log | 0.7105 | 54 | 0.9785 | 0.0 | 0.9785 | 0.9892 |
| No log | 0.7368 | 56 | 0.8989 | 0.0 | 0.8989 | 0.9481 |
| No log | 0.7632 | 58 | 0.8790 | 0.0 | 0.8790 | 0.9376 |
| No log | 0.7895 | 60 | 1.0438 | 0.0610 | 1.0438 | 1.0217 |
| No log | 0.8158 | 62 | 1.6691 | 0.1834 | 1.6691 | 1.2919 |
| No log | 0.8421 | 64 | 1.8192 | 0.1834 | 1.8192 | 1.3488 |
| No log | 0.8684 | 66 | 1.3396 | 0.2870 | 1.3396 | 1.1574 |
| No log | 0.8947 | 68 | 1.1613 | 0.2870 | 1.1613 | 1.0776 |
| No log | 0.9211 | 70 | 1.0976 | 0.4407 | 1.0976 | 1.0477 |
| No log | 0.9474 | 72 | 0.9985 | 0.0 | 0.9985 | 0.9992 |
| No log | 0.9737 | 74 | 1.0107 | 0.2143 | 1.0107 | 1.0054 |
| No log | 1.0 | 76 | 1.1095 | 0.2956 | 1.1095 | 1.0533 |
| No log | 1.0263 | 78 | 1.1555 | 0.2870 | 1.1555 | 1.0750 |
| No log | 1.0526 | 80 | 1.2358 | 0.2870 | 1.2358 | 1.1117 |
| No log | 1.0789 | 82 | 1.0509 | 0.1987 | 1.0509 | 1.0251 |
| No log | 1.1053 | 84 | 0.9037 | 0.0 | 0.9037 | 0.9507 |
| No log | 1.1316 | 86 | 0.9111 | 0.0 | 0.9111 | 0.9545 |
| No log | 1.1579 | 88 | 0.9610 | 0.5217 | 0.9610 | 0.9803 |
| No log | 1.1842 | 90 | 1.0089 | 0.3293 | 1.0089 | 1.0044 |
| No log | 1.2105 | 92 | 1.1377 | 0.3053 | 1.1377 | 1.0666 |
| No log | 1.2368 | 94 | 1.1146 | 0.3164 | 1.1146 | 1.0557 |
| No log | 1.2632 | 96 | 1.1290 | 0.3164 | 1.1290 | 1.0625 |
| No log | 1.2895 | 98 | 1.1543 | 0.3053 | 1.1543 | 1.0744 |
| No log | 1.3158 | 100 | 1.1472 | 0.3053 | 1.1472 | 1.0711 |
| No log | 1.3421 | 102 | 1.0622 | 0.384 | 1.0622 | 1.0306 |
| No log | 1.3684 | 104 | 1.0403 | 0.384 | 1.0403 | 1.0200 |
| No log | 1.3947 | 106 | 0.9481 | 0.2143 | 0.9481 | 0.9737 |
| No log | 1.4211 | 108 | 0.8899 | 0.0 | 0.8899 | 0.9434 |
| No log | 1.4474 | 110 | 0.8953 | 0.0435 | 0.8953 | 0.9462 |
| No log | 1.4737 | 112 | 0.9627 | 0.0530 | 0.9627 | 0.9812 |
| No log | 1.5 | 114 | 0.9246 | 0.0320 | 0.9246 | 0.9616 |
| No log | 1.5263 | 116 | 0.9677 | 0.0320 | 0.9677 | 0.9837 |
| No log | 1.5526 | 118 | 1.2496 | 0.0530 | 1.2496 | 1.1179 |
| No log | 1.5789 | 120 | 1.8048 | -0.1204 | 1.8048 | 1.3434 |
| No log | 1.6053 | 122 | 1.6866 | -0.1579 | 1.6866 | 1.2987 |
| No log | 1.6316 | 124 | 1.0973 | -0.1440 | 1.0973 | 1.0475 |
| No log | 1.6579 | 126 | 0.8907 | 0.0 | 0.8907 | 0.9438 |
| No log | 1.6842 | 128 | 0.9543 | -0.1786 | 0.9543 | 0.9769 |
| No log | 1.7105 | 130 | 1.1611 | 0.0530 | 1.1611 | 1.0776 |
| No log | 1.7368 | 132 | 1.2968 | -0.1808 | 1.2968 | 1.1388 |
| No log | 1.7632 | 134 | 1.1335 | 0.0530 | 1.1335 | 1.0647 |
| No log | 1.7895 | 136 | 0.8705 | 0.0 | 0.8705 | 0.9330 |
| No log | 1.8158 | 138 | 0.8212 | 0.0 | 0.8212 | 0.9062 |
| No log | 1.8421 | 140 | 0.8200 | 0.0 | 0.8200 | 0.9056 |
| No log | 1.8684 | 142 | 0.8513 | 0.0 | 0.8513 | 0.9226 |
| No log | 1.8947 | 144 | 0.8344 | 0.0 | 0.8344 | 0.9134 |
| No log | 1.9211 | 146 | 0.8187 | 0.0 | 0.8187 | 0.9048 |
| No log | 1.9474 | 148 | 0.8065 | 0.0 | 0.8065 | 0.8981 |
| No log | 1.9737 | 150 | 0.8022 | 0.0 | 0.8022 | 0.8957 |
| No log | 2.0 | 152 | 0.8028 | 0.0 | 0.8028 | 0.8960 |
| No log | 2.0263 | 154 | 0.8087 | 0.0 | 0.8087 | 0.8993 |
| No log | 2.0526 | 156 | 0.8594 | 0.0320 | 0.8594 | 0.9271 |
| No log | 2.0789 | 158 | 0.8974 | 0.0320 | 0.8974 | 0.9473 |
| No log | 2.1053 | 160 | 0.9366 | -0.1786 | 0.9366 | 0.9678 |
| No log | 2.1316 | 162 | 0.9992 | 0.0 | 0.9992 | 0.9996 |
| No log | 2.1579 | 164 | 0.9885 | 0.0 | 0.9885 | 0.9943 |
| No log | 2.1842 | 166 | 0.9751 | -0.1786 | 0.9751 | 0.9875 |
| No log | 2.2105 | 168 | 0.9895 | 0.0320 | 0.9895 | 0.9948 |
| No log | 2.2368 | 170 | 0.9678 | 0.0 | 0.9678 | 0.9838 |
| No log | 2.2632 | 172 | 0.9735 | 0.0 | 0.9735 | 0.9867 |
| No log | 2.2895 | 174 | 0.9635 | 0.0 | 0.9635 | 0.9816 |
| No log | 2.3158 | 176 | 0.9552 | -0.1786 | 0.9552 | 0.9773 |
| No log | 2.3421 | 178 | 1.0213 | 0.0320 | 1.0213 | 1.0106 |
| No log | 2.3684 | 180 | 0.9275 | -0.1786 | 0.9275 | 0.9631 |
| No log | 2.3947 | 182 | 0.8691 | 0.0 | 0.8691 | 0.9323 |
| No log | 2.4211 | 184 | 0.8557 | 0.0 | 0.8557 | 0.9251 |
| No log | 2.4474 | 186 | 0.9158 | 0.0320 | 0.9158 | 0.9570 |
| No log | 2.4737 | 188 | 0.9736 | 0.0320 | 0.9736 | 0.9867 |
| No log | 2.5 | 190 | 0.8911 | 0.0320 | 0.8911 | 0.9440 |
| No log | 2.5263 | 192 | 0.8465 | 0.0 | 0.8465 | 0.9201 |
| No log | 2.5526 | 194 | 0.8384 | 0.0 | 0.8384 | 0.9157 |
| No log | 2.5789 | 196 | 0.8276 | 0.0 | 0.8276 | 0.9097 |
| No log | 2.6053 | 198 | 0.8163 | 0.0 | 0.8163 | 0.9035 |
| No log | 2.6316 | 200 | 0.8433 | 0.0320 | 0.8433 | 0.9183 |
| No log | 2.6579 | 202 | 0.9686 | 0.0435 | 0.9686 | 0.9841 |
| No log | 2.6842 | 204 | 0.8415 | 0.0320 | 0.8415 | 0.9174 |
| No log | 2.7105 | 206 | 0.8848 | -0.0342 | 0.8848 | 0.9407 |
| No log | 2.7368 | 208 | 0.9090 | -0.0342 | 0.9090 | 0.9534 |
| No log | 2.7632 | 210 | 0.9599 | -0.1818 | 0.9599 | 0.9797 |
| No log | 2.7895 | 212 | 1.1007 | -0.0927 | 1.1007 | 1.0491 |
| No log | 2.8158 | 214 | 1.1171 | -0.2721 | 1.1171 | 1.0569 |
| No log | 2.8421 | 216 | 1.1683 | -0.2721 | 1.1683 | 1.0809 |
| No log | 2.8684 | 218 | 1.0611 | -0.0342 | 1.0611 | 1.0301 |
| No log | 2.8947 | 220 | 1.0798 | -0.0342 | 1.0798 | 1.0391 |
| No log | 2.9211 | 222 | 1.1384 | -0.1818 | 1.1384 | 1.0670 |
| No log | 2.9474 | 224 | 1.2284 | -0.2721 | 1.2284 | 1.1083 |
| No log | 2.9737 | 226 | 1.3290 | -0.2754 | 1.3290 | 1.1528 |
| No log | 3.0 | 228 | 1.1862 | -0.2754 | 1.1862 | 1.0891 |
| No log | 3.0263 | 230 | 1.0458 | -0.0185 | 1.0458 | 1.0227 |
| No log | 3.0526 | 232 | 1.0431 | -0.0185 | 1.0431 | 1.0213 |
| No log | 3.0789 | 234 | 1.1033 | -0.2721 | 1.1033 | 1.0504 |
| No log | 3.1053 | 236 | 1.2969 | -0.0927 | 1.2969 | 1.1388 |
| No log | 3.1316 | 238 | 1.1937 | -0.2754 | 1.1937 | 1.0926 |
| No log | 3.1579 | 240 | 1.1124 | -0.3134 | 1.1124 | 1.0547 |
| No log | 3.1842 | 242 | 1.0661 | -0.0185 | 1.0661 | 1.0325 |
| No log | 3.2105 | 244 | 1.0806 | -0.1818 | 1.0806 | 1.0395 |
| No log | 3.2368 | 246 | 1.2697 | -0.2754 | 1.2697 | 1.1268 |
| No log | 3.2632 | 248 | 1.2498 | -0.3200 | 1.2498 | 1.1179 |
| No log | 3.2895 | 250 | 1.0874 | -0.0185 | 1.0874 | 1.0428 |
| No log | 3.3158 | 252 | 1.0650 | -0.0185 | 1.0650 | 1.0320 |
| No log | 3.3421 | 254 | 1.0471 | 0.0 | 1.0471 | 1.0233 |
| No log | 3.3684 | 256 | 1.0269 | 0.0 | 1.0269 | 1.0134 |
| No log | 3.3947 | 258 | 1.0936 | -0.1786 | 1.0936 | 1.0458 |
| No log | 3.4211 | 260 | 1.1736 | -0.3200 | 1.1736 | 1.0833 |
| No log | 3.4474 | 262 | 1.1173 | -0.1786 | 1.1173 | 1.0570 |
| No log | 3.4737 | 264 | 1.0277 | 0.0 | 1.0277 | 1.0137 |
| No log | 3.5 | 266 | 1.0238 | 0.0 | 1.0238 | 1.0118 |
| No log | 3.5263 | 268 | 1.0382 | 0.0 | 1.0382 | 1.0189 |
| No log | 3.5526 | 270 | 1.0396 | 0.0 | 1.0396 | 1.0196 |
| No log | 3.5789 | 272 | 0.9955 | -0.0185 | 0.9955 | 0.9978 |
| No log | 3.6053 | 274 | 0.9625 | -0.0185 | 0.9625 | 0.9811 |
| No log | 3.6316 | 276 | 0.9554 | -0.0185 | 0.9554 | 0.9775 |
| No log | 3.6579 | 278 | 1.0233 | -0.1786 | 1.0233 | 1.0116 |
| No log | 3.6842 | 280 | 1.1608 | -0.0927 | 1.1608 | 1.0774 |
| No log | 3.7105 | 282 | 1.0983 | -0.1786 | 1.0983 | 1.0480 |
| No log | 3.7368 | 284 | 0.9925 | 0.0 | 0.9925 | 0.9962 |
| No log | 3.7632 | 286 | 0.9892 | -0.0185 | 0.9892 | 0.9946 |
| No log | 3.7895 | 288 | 1.0001 | 0.0 | 1.0001 | 1.0000 |
| No log | 3.8158 | 290 | 1.0232 | 0.0 | 1.0232 | 1.0115 |
| No log | 3.8421 | 292 | 1.0635 | -0.1786 | 1.0635 | 1.0313 |
| No log | 3.8684 | 294 | 1.0803 | -0.1786 | 1.0803 | 1.0394 |
| No log | 3.8947 | 296 | 1.0007 | 0.0 | 1.0007 | 1.0003 |
| No log | 3.9211 | 298 | 1.0005 | 0.0 | 1.0005 | 1.0002 |
| No log | 3.9474 | 300 | 1.0713 | -0.1786 | 1.0713 | 1.0350 |
| No log | 3.9737 | 302 | 1.1065 | -0.1786 | 1.1065 | 1.0519 |
| No log | 4.0 | 304 | 1.0267 | -0.1786 | 1.0267 | 1.0133 |
| No log | 4.0263 | 306 | 0.9583 | 0.0 | 0.9583 | 0.9790 |
| No log | 4.0526 | 308 | 0.9391 | 0.0 | 0.9391 | 0.9691 |
| No log | 4.0789 | 310 | 0.9676 | -0.0185 | 0.9676 | 0.9836 |
| No log | 4.1053 | 312 | 0.9584 | -0.0185 | 0.9584 | 0.9790 |
| No log | 4.1316 | 314 | 0.9644 | -0.0185 | 0.9644 | 0.9821 |
| No log | 4.1579 | 316 | 0.9533 | -0.0185 | 0.9533 | 0.9764 |
| No log | 4.1842 | 318 | 0.9262 | -0.0185 | 0.9262 | 0.9624 |
| No log | 4.2105 | 320 | 0.9269 | -0.0185 | 0.9269 | 0.9627 |
| No log | 4.2368 | 322 | 1.0355 | -0.0927 | 1.0355 | 1.0176 |
| No log | 4.2632 | 324 | 1.1289 | -0.0927 | 1.1289 | 1.0625 |
| No log | 4.2895 | 326 | 1.0491 | -0.2754 | 1.0491 | 1.0243 |
| No log | 4.3158 | 328 | 0.9662 | -0.0185 | 0.9662 | 0.9829 |
| No log | 4.3421 | 330 | 0.9762 | -0.0185 | 0.9762 | 0.9880 |
| No log | 4.3684 | 332 | 0.9838 | -0.0342 | 0.9838 | 0.9919 |
| No log | 4.3947 | 334 | 0.9999 | -0.1818 | 0.9999 | 0.9999 |
| No log | 4.4211 | 336 | 1.0234 | -0.1786 | 1.0234 | 1.0116 |
| No log | 4.4474 | 338 | 1.0085 | -0.1786 | 1.0085 | 1.0043 |
| No log | 4.4737 | 340 | 0.9847 | -0.1786 | 0.9847 | 0.9923 |
| No log | 4.5 | 342 | 0.9909 | -0.1846 | 0.9909 | 0.9954 |
| No log | 4.5263 | 344 | 0.9962 | -0.1846 | 0.9962 | 0.9981 |
| No log | 4.5526 | 346 | 1.0075 | -0.1786 | 1.0075 | 1.0037 |
| No log | 4.5789 | 348 | 1.0734 | 0.0320 | 1.0734 | 1.0360 |
| No log | 4.6053 | 350 | 1.0970 | 0.0320 | 1.0970 | 1.0474 |
| No log | 4.6316 | 352 | 1.0235 | -0.1786 | 1.0235 | 1.0117 |
| No log | 4.6579 | 354 | 0.9307 | -0.1786 | 0.9307 | 0.9647 |
| No log | 4.6842 | 356 | 0.8780 | -0.0185 | 0.8780 | 0.9370 |
| No log | 4.7105 | 358 | 0.8858 | -0.0185 | 0.8858 | 0.9412 |
| No log | 4.7368 | 360 | 0.9476 | -0.1786 | 0.9476 | 0.9734 |
| No log | 4.7632 | 362 | 1.0758 | -0.1786 | 1.0758 | 1.0372 |
| No log | 4.7895 | 364 | 1.2706 | -0.2754 | 1.2706 | 1.1272 |
| No log | 4.8158 | 366 | 1.3525 | -0.2384 | 1.3525 | 1.1630 |
| No log | 4.8421 | 368 | 1.2343 | -0.1440 | 1.2343 | 1.1110 |
| No log | 4.8684 | 370 | 1.1236 | -0.1818 | 1.1236 | 1.0600 |
| No log | 4.8947 | 372 | 1.0980 | -0.1818 | 1.0980 | 1.0479 |
| No log | 4.9211 | 374 | 1.1325 | -0.1846 | 1.1325 | 1.0642 |
| No log | 4.9474 | 376 | 1.2439 | -0.1818 | 1.2439 | 1.1153 |
| No log | 4.9737 | 378 | 1.4046 | -0.2721 | 1.4046 | 1.1852 |
| No log | 5.0 | 380 | 1.4464 | -0.2721 | 1.4464 | 1.2026 |
| No log | 5.0263 | 382 | 1.3744 | -0.2721 | 1.3744 | 1.1724 |
| No log | 5.0526 | 384 | 1.2784 | -0.1846 | 1.2784 | 1.1306 |
| No log | 5.0789 | 386 | 1.2202 | 0.1081 | 1.2202 | 1.1046 |
| No log | 5.1053 | 388 | 1.2053 | -0.1846 | 1.2053 | 1.0979 |
| No log | 5.1316 | 390 | 1.2402 | -0.1846 | 1.2402 | 1.1136 |
| No log | 5.1579 | 392 | 1.3572 | -0.2754 | 1.3572 | 1.1650 |
| No log | 5.1842 | 394 | 1.4381 | -0.2754 | 1.4381 | 1.1992 |
| No log | 5.2105 | 396 | 1.3915 | -0.2754 | 1.3915 | 1.1796 |
| No log | 5.2368 | 398 | 1.2811 | -0.1493 | 1.2811 | 1.1319 |
| No log | 5.2632 | 400 | 1.2246 | -0.1846 | 1.2246 | 1.1066 |
| No log | 5.2895 | 402 | 1.2205 | -0.1846 | 1.2205 | 1.1047 |
| No log | 5.3158 | 404 | 1.2281 | -0.1818 | 1.2281 | 1.1082 |
| No log | 5.3421 | 406 | 1.2375 | -0.1818 | 1.2375 | 1.1124 |
| No log | 5.3684 | 408 | 1.2117 | -0.1818 | 1.2117 | 1.1008 |
| No log | 5.3947 | 410 | 1.1852 | -0.1818 | 1.1852 | 1.0887 |
| No log | 5.4211 | 412 | 1.1673 | 0.1538 | 1.1673 | 1.0804 |
| No log | 5.4474 | 414 | 1.1665 | -0.1818 | 1.1665 | 1.0800 |
| No log | 5.4737 | 416 | 1.2591 | -0.2721 | 1.2591 | 1.1221 |
| No log | 5.5 | 418 | 1.2421 | -0.2721 | 1.2421 | 1.1145 |
| No log | 5.5263 | 420 | 1.1400 | -0.1818 | 1.1400 | 1.0677 |
| No log | 5.5526 | 422 | 1.1285 | 0.1295 | 1.1285 | 1.0623 |
| No log | 5.5789 | 424 | 1.1319 | 0.1295 | 1.1319 | 1.0639 |
| No log | 5.6053 | 426 | 1.1492 | 0.1538 | 1.1492 | 1.0720 |
| No log | 5.6316 | 428 | 1.2873 | -0.1000 | 1.2873 | 1.1346 |
| No log | 5.6579 | 430 | 1.3553 | -0.0809 | 1.3553 | 1.1642 |
| No log | 5.6842 | 432 | 1.2319 | -0.2721 | 1.2319 | 1.1099 |
| No log | 5.7105 | 434 | 1.1045 | -0.1818 | 1.1045 | 1.0510 |
| No log | 5.7368 | 436 | 1.0679 | 0.1295 | 1.0679 | 1.0334 |
| No log | 5.7632 | 438 | 1.0825 | 0.0833 | 1.0825 | 1.0404 |
| No log | 5.7895 | 440 | 1.0612 | 0.0833 | 1.0612 | 1.0301 |
| No log | 5.8158 | 442 | 1.0210 | -0.0342 | 1.0210 | 1.0104 |
| No log | 5.8421 | 444 | 1.0139 | -0.1846 | 1.0139 | 1.0069 |
| No log | 5.8684 | 446 | 1.0450 | -0.1818 | 1.0450 | 1.0223 |
| No log | 5.8947 | 448 | 1.1060 | -0.0927 | 1.1060 | 1.0517 |
| No log | 5.9211 | 450 | 1.0988 | -0.1493 | 1.0988 | 1.0482 |
| No log | 5.9474 | 452 | 1.0432 | -0.1846 | 1.0432 | 1.0214 |
| No log | 5.9737 | 454 | 1.0292 | -0.1846 | 1.0292 | 1.0145 |
| No log | 6.0 | 456 | 1.0162 | -0.1846 | 1.0162 | 1.0081 |
| No log | 6.0263 | 458 | 1.0140 | -0.1846 | 1.0140 | 1.0070 |
| No log | 6.0526 | 460 | 1.0293 | 0.1295 | 1.0293 | 1.0145 |
| No log | 6.0789 | 462 | 1.0399 | -0.1846 | 1.0399 | 1.0198 |
| No log | 6.1053 | 464 | 1.0628 | -0.1846 | 1.0628 | 1.0309 |
| No log | 6.1316 | 466 | 1.1330 | -0.1493 | 1.1330 | 1.0644 |
| No log | 6.1579 | 468 | 1.1165 | -0.1818 | 1.1165 | 1.0566 |
| No log | 6.1842 | 470 | 1.0744 | -0.1846 | 1.0744 | 1.0365 |
| No log | 6.2105 | 472 | 1.0416 | -0.1846 | 1.0416 | 1.0206 |
| No log | 6.2368 | 474 | 1.0260 | -0.1846 | 1.0260 | 1.0129 |
| No log | 6.2632 | 476 | 1.0196 | -0.1846 | 1.0196 | 1.0098 |
| No log | 6.2895 | 478 | 1.0325 | -0.1846 | 1.0325 | 1.0161 |
| No log | 6.3158 | 480 | 1.0495 | -0.1846 | 1.0495 | 1.0245 |
| No log | 6.3421 | 482 | 1.0404 | -0.1846 | 1.0404 | 1.0200 |
| No log | 6.3684 | 484 | 1.0387 | -0.1846 | 1.0387 | 1.0191 |
| No log | 6.3947 | 486 | 1.0537 | -0.1846 | 1.0537 | 1.0265 |
| No log | 6.4211 | 488 | 1.1082 | -0.1846 | 1.1082 | 1.0527 |
| No log | 6.4474 | 490 | 1.1394 | -0.1538 | 1.1394 | 1.0674 |
| No log | 6.4737 | 492 | 1.0938 | -0.1846 | 1.0938 | 1.0458 |
| No log | 6.5 | 494 | 1.0415 | -0.1846 | 1.0415 | 1.0205 |
| No log | 6.5263 | 496 | 1.0252 | -0.1846 | 1.0252 | 1.0125 |
| No log | 6.5526 | 498 | 1.0318 | -0.1846 | 1.0318 | 1.0158 |
| 0.4252 | 6.5789 | 500 | 1.0277 | -0.1846 | 1.0277 | 1.0138 |
| 0.4252 | 6.6053 | 502 | 1.0061 | -0.1846 | 1.0061 | 1.0030 |
| 0.4252 | 6.6316 | 504 | 0.9829 | -0.0342 | 0.9829 | 0.9914 |
| 0.4252 | 6.6579 | 506 | 0.9672 | -0.0342 | 0.9672 | 0.9835 |
| 0.4252 | 6.6842 | 508 | 0.9536 | -0.0342 | 0.9536 | 0.9765 |
| 0.4252 | 6.7105 | 510 | 0.9819 | -0.1818 | 0.9819 | 0.9909 |
| 0.4252 | 6.7368 | 512 | 1.0161 | -0.1159 | 1.0161 | 1.0080 |
| 0.4252 | 6.7632 | 514 | 0.9966 | 0.0320 | 0.9966 | 0.9983 |
| 0.4252 | 6.7895 | 516 | 0.9507 | -0.1818 | 0.9507 | 0.9751 |
| 0.4252 | 6.8158 | 518 | 0.9363 | -0.0342 | 0.9363 | 0.9676 |
| 0.4252 | 6.8421 | 520 | 0.9435 | -0.0342 | 0.9435 | 0.9713 |
| 0.4252 | 6.8684 | 522 | 0.9474 | -0.1846 | 0.9474 | 0.9733 |
| 0.4252 | 6.8947 | 524 | 0.9504 | -0.1846 | 0.9504 | 0.9749 |
| 0.4252 | 6.9211 | 526 | 0.9560 | -0.1846 | 0.9560 | 0.9777 |
| 0.4252 | 6.9474 | 528 | 0.9783 | -0.1846 | 0.9783 | 0.9891 |
| 0.4252 | 6.9737 | 530 | 1.0614 | -0.0927 | 1.0614 | 1.0302 |
| 0.4252 | 7.0 | 532 | 1.1580 | -0.0927 | 1.1580 | 1.0761 |
| 0.4252 | 7.0263 | 534 | 1.2210 | 0.0610 | 1.2210 | 1.1050 |
| 0.4252 | 7.0526 | 536 | 1.1785 | -0.0927 | 1.1785 | 1.0856 |
| 0.4252 | 7.0789 | 538 | 1.0606 | 0.0435 | 1.0606 | 1.0298 |
| 0.4252 | 7.1053 | 540 | 0.9821 | -0.1818 | 0.9821 | 0.9910 |
| 0.4252 | 7.1316 | 542 | 0.9698 | -0.0342 | 0.9698 | 0.9848 |
| 0.4252 | 7.1579 | 544 | 0.9690 | -0.0342 | 0.9690 | 0.9844 |
| 0.4252 | 7.1842 | 546 | 0.9676 | -0.0342 | 0.9676 | 0.9837 |
| 0.4252 | 7.2105 | 548 | 0.9711 | -0.1846 | 0.9711 | 0.9854 |
| 0.4252 | 7.2368 | 550 | 0.9897 | -0.1818 | 0.9897 | 0.9949 |
| 0.4252 | 7.2632 | 552 | 0.9964 | -0.1818 | 0.9964 | 0.9982 |
| 0.4252 | 7.2895 | 554 | 0.9942 | -0.1846 | 0.9942 | 0.9971 |
| 0.4252 | 7.3158 | 556 | 0.9984 | -0.1846 | 0.9984 | 0.9992 |
| 0.4252 | 7.3421 | 558 | 0.9968 | -0.0342 | 0.9968 | 0.9984 |
| 0.4252 | 7.3684 | 560 | 0.9871 | -0.0342 | 0.9871 | 0.9935 |
| 0.4252 | 7.3947 | 562 | 0.9746 | -0.0342 | 0.9746 | 0.9872 |
| 0.4252 | 7.4211 | 564 | 0.9675 | -0.0342 | 0.9675 | 0.9836 |
| 0.4252 | 7.4474 | 566 | 0.9706 | 0.3016 | 0.9706 | 0.9852 |
| 0.4252 | 7.4737 | 568 | 0.9720 | 0.3016 | 0.9720 | 0.9859 |
| 0.4252 | 7.5 | 570 | 0.9508 | -0.0342 | 0.9508 | 0.9751 |
| 0.4252 | 7.5263 | 572 | 0.9313 | -0.0342 | 0.9313 | 0.9650 |
| 0.4252 | 7.5526 | 574 | 0.9419 | -0.0185 | 0.9419 | 0.9705 |
| 0.4252 | 7.5789 | 576 | 0.9839 | 0.0435 | 0.9839 | 0.9919 |
| 0.4252 | 7.6053 | 578 | 0.9977 | -0.0927 | 0.9977 | 0.9989 |
| 0.4252 | 7.6316 | 580 | 0.9693 | -0.1786 | 0.9693 | 0.9845 |
| 0.4252 | 7.6579 | 582 | 0.9443 | -0.0185 | 0.9443 | 0.9717 |
| 0.4252 | 7.6842 | 584 | 0.9419 | -0.0342 | 0.9419 | 0.9705 |
| 0.4252 | 7.7105 | 586 | 0.9470 | -0.0342 | 0.9470 | 0.9732 |
| 0.4252 | 7.7368 | 588 | 0.9475 | -0.0342 | 0.9475 | 0.9734 |
| 0.4252 | 7.7632 | 590 | 0.9530 | -0.0342 | 0.9530 | 0.9762 |
| 0.4252 | 7.7895 | 592 | 0.9748 | -0.0185 | 0.9748 | 0.9873 |
| 0.4252 | 7.8158 | 594 | 0.9963 | -0.3200 | 0.9963 | 0.9982 |
| 0.4252 | 7.8421 | 596 | 1.0036 | -0.3200 | 1.0036 | 1.0018 |
| 0.4252 | 7.8684 | 598 | 0.9826 | -0.1818 | 0.9826 | 0.9913 |
| 0.4252 | 7.8947 | 600 | 0.9595 | -0.0185 | 0.9595 | 0.9795 |
| 0.4252 | 7.9211 | 602 | 0.9531 | -0.0342 | 0.9531 | 0.9763 |
| 0.4252 | 7.9474 | 604 | 0.9503 | -0.0342 | 0.9503 | 0.9748 |
| 0.4252 | 7.9737 | 606 | 0.9508 | -0.0342 | 0.9508 | 0.9751 |
| 0.4252 | 8.0 | 608 | 0.9586 | -0.0185 | 0.9586 | 0.9791 |
| 0.4252 | 8.0263 | 610 | 0.9629 | -0.0185 | 0.9629 | 0.9813 |
| 0.4252 | 8.0526 | 612 | 0.9597 | -0.0185 | 0.9597 | 0.9796 |
| 0.4252 | 8.0789 | 614 | 0.9550 | -0.0185 | 0.9550 | 0.9772 |
| 0.4252 | 8.1053 | 616 | 0.9537 | -0.1818 | 0.9537 | 0.9766 |
| 0.4252 | 8.1316 | 618 | 0.9546 | -0.1786 | 0.9546 | 0.9770 |
| 0.4252 | 8.1579 | 620 | 0.9499 | -0.1786 | 0.9499 | 0.9746 |
| 0.4252 | 8.1842 | 622 | 0.9414 | -0.0185 | 0.9414 | 0.9703 |
| 0.4252 | 8.2105 | 624 | 0.9328 | -0.0185 | 0.9328 | 0.9658 |
| 0.4252 | 8.2368 | 626 | 0.9308 | -0.0185 | 0.9308 | 0.9648 |
| 0.4252 | 8.2632 | 628 | 0.9321 | -0.0185 | 0.9321 | 0.9654 |
| 0.4252 | 8.2895 | 630 | 0.9363 | -0.0185 | 0.9363 | 0.9676 |
| 0.4252 | 8.3158 | 632 | 0.9414 | -0.0185 | 0.9414 | 0.9703 |
| 0.4252 | 8.3421 | 634 | 0.9545 | -0.1786 | 0.9545 | 0.9770 |
| 0.4252 | 8.3684 | 636 | 0.9591 | -0.1786 | 0.9591 | 0.9793 |
| 0.4252 | 8.3947 | 638 | 0.9560 | -0.1786 | 0.9560 | 0.9778 |
| 0.4252 | 8.4211 | 640 | 0.9572 | -0.1786 | 0.9572 | 0.9784 |
| 0.4252 | 8.4474 | 642 | 0.9498 | -0.1818 | 0.9498 | 0.9746 |
| 0.4252 | 8.4737 | 644 | 0.9504 | -0.1818 | 0.9504 | 0.9749 |
| 0.4252 | 8.5 | 646 | 0.9555 | -0.1818 | 0.9555 | 0.9775 |
| 0.4252 | 8.5263 | 648 | 0.9659 | -0.1818 | 0.9659 | 0.9828 |
| 0.4252 | 8.5526 | 650 | 0.9748 | -0.1818 | 0.9748 | 0.9873 |
| 0.4252 | 8.5789 | 652 | 0.9922 | -0.1786 | 0.9922 | 0.9961 |
| 0.4252 | 8.6053 | 654 | 1.0005 | -0.1786 | 1.0005 | 1.0002 |
| 0.4252 | 8.6316 | 656 | 0.9931 | -0.1818 | 0.9931 | 0.9965 |
| 0.4252 | 8.6579 | 658 | 0.9804 | -0.1818 | 0.9804 | 0.9902 |
| 0.4252 | 8.6842 | 660 | 0.9802 | -0.1818 | 0.9802 | 0.9900 |
| 0.4252 | 8.7105 | 662 | 0.9749 | -0.1818 | 0.9749 | 0.9874 |
| 0.4252 | 8.7368 | 664 | 0.9700 | -0.1818 | 0.9700 | 0.9849 |
| 0.4252 | 8.7632 | 666 | 0.9668 | -0.1818 | 0.9668 | 0.9832 |
| 0.4252 | 8.7895 | 668 | 0.9664 | -0.1818 | 0.9664 | 0.9831 |
| 0.4252 | 8.8158 | 670 | 0.9672 | -0.1818 | 0.9672 | 0.9835 |
| 0.4252 | 8.8421 | 672 | 0.9668 | -0.1818 | 0.9668 | 0.9832 |
| 0.4252 | 8.8684 | 674 | 0.9668 | -0.1818 | 0.9668 | 0.9833 |
| 0.4252 | 8.8947 | 676 | 0.9734 | -0.1818 | 0.9734 | 0.9866 |
| 0.4252 | 8.9211 | 678 | 0.9926 | -0.1786 | 0.9926 | 0.9963 |
| 0.4252 | 8.9474 | 680 | 1.0058 | -0.1440 | 1.0058 | 1.0029 |
| 0.4252 | 8.9737 | 682 | 1.0026 | -0.1786 | 1.0026 | 1.0013 |
| 0.4252 | 9.0 | 684 | 0.9929 | -0.1786 | 0.9929 | 0.9965 |
| 0.4252 | 9.0263 | 686 | 0.9790 | -0.1818 | 0.9790 | 0.9894 |
| 0.4252 | 9.0526 | 688 | 0.9703 | -0.1818 | 0.9703 | 0.9850 |
| 0.4252 | 9.0789 | 690 | 0.9676 | -0.1818 | 0.9676 | 0.9837 |
| 0.4252 | 9.1053 | 692 | 0.9684 | -0.1818 | 0.9684 | 0.9841 |
| 0.4252 | 9.1316 | 694 | 0.9705 | -0.1818 | 0.9705 | 0.9851 |
| 0.4252 | 9.1579 | 696 | 0.9747 | -0.1818 | 0.9747 | 0.9873 |
| 0.4252 | 9.1842 | 698 | 0.9785 | -0.1818 | 0.9785 | 0.9892 |
| 0.4252 | 9.2105 | 700 | 0.9856 | -0.1818 | 0.9856 | 0.9928 |
| 0.4252 | 9.2368 | 702 | 0.9981 | -0.1786 | 0.9981 | 0.9990 |
| 0.4252 | 9.2632 | 704 | 1.0141 | -0.1786 | 1.0141 | 1.0070 |
| 0.4252 | 9.2895 | 706 | 1.0197 | -0.1786 | 1.0197 | 1.0098 |
| 0.4252 | 9.3158 | 708 | 1.0181 | -0.1786 | 1.0181 | 1.0090 |
| 0.4252 | 9.3421 | 710 | 1.0085 | -0.1786 | 1.0085 | 1.0042 |
| 0.4252 | 9.3684 | 712 | 1.0036 | -0.1786 | 1.0036 | 1.0018 |
| 0.4252 | 9.3947 | 714 | 1.0018 | -0.1786 | 1.0018 | 1.0009 |
| 0.4252 | 9.4211 | 716 | 0.9953 | -0.1786 | 0.9953 | 0.9977 |
| 0.4252 | 9.4474 | 718 | 0.9887 | -0.1818 | 0.9887 | 0.9943 |
| 0.4252 | 9.4737 | 720 | 0.9833 | -0.1818 | 0.9833 | 0.9916 |
| 0.4252 | 9.5 | 722 | 0.9833 | -0.1818 | 0.9833 | 0.9916 |
| 0.4252 | 9.5263 | 724 | 0.9857 | -0.1818 | 0.9857 | 0.9928 |
| 0.4252 | 9.5526 | 726 | 0.9876 | -0.1818 | 0.9876 | 0.9938 |
| 0.4252 | 9.5789 | 728 | 0.9884 | -0.1786 | 0.9884 | 0.9942 |
| 0.4252 | 9.6053 | 730 | 0.9888 | -0.1786 | 0.9888 | 0.9944 |
| 0.4252 | 9.6316 | 732 | 0.9875 | -0.1786 | 0.9875 | 0.9937 |
| 0.4252 | 9.6579 | 734 | 0.9838 | -0.1818 | 0.9838 | 0.9919 |
| 0.4252 | 9.6842 | 736 | 0.9818 | -0.1818 | 0.9818 | 0.9909 |
| 0.4252 | 9.7105 | 738 | 0.9821 | -0.1818 | 0.9821 | 0.9910 |
| 0.4252 | 9.7368 | 740 | 0.9826 | -0.1818 | 0.9826 | 0.9913 |
| 0.4252 | 9.7632 | 742 | 0.9833 | -0.1818 | 0.9833 | 0.9916 |
| 0.4252 | 9.7895 | 744 | 0.9835 | -0.1818 | 0.9835 | 0.9917 |
| 0.4252 | 9.8158 | 746 | 0.9833 | -0.1818 | 0.9833 | 0.9916 |
| 0.4252 | 9.8421 | 748 | 0.9827 | -0.1818 | 0.9827 | 0.9913 |
| 0.4252 | 9.8684 | 750 | 0.9827 | -0.1818 | 0.9827 | 0.9913 |
| 0.4252 | 9.8947 | 752 | 0.9834 | -0.1818 | 0.9834 | 0.9917 |
| 0.4252 | 9.9211 | 754 | 0.9836 | -0.1818 | 0.9836 | 0.9918 |
| 0.4252 | 9.9474 | 756 | 0.9840 | -0.1818 | 0.9840 | 0.9919 |
| 0.4252 | 9.9737 | 758 | 0.9839 | -0.1818 | 0.9839 | 0.9919 |
| 0.4252 | 10.0 | 760 | 0.9839 | -0.1818 | 0.9839 | 0.9919 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
yuyijiong/llm_calculator_v0.1 | yuyijiong | 2024-11-25T10:42:46Z | 6 | 2 | null | [
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:yuyijiong/llm_calculator_data",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-11-25T08:56:28Z | ---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text-generation
datasets:
- yuyijiong/llm_calculator_data
---
# A language model with calculator-like functionality
* Supports calculations with numbers of up to 10 digits
* Nearly 100% accuracy
* It uses CoT (chain of thought) to calculate, so the calculation process may be lengthy
* v0.1 only supports addition, subtraction and multiplication.
* Addition supports adding multiple numbers, while subtraction and multiplication currently only support operations with two numbers
## Quickstart
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "yuyijiong/llm_calculator_v0.1"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Pick one of the example prompts below (each assignment replaces the previous one)
prompt = "1234+12345+123456=?"   # addition
# prompt = "1234-12345=?"        # subtraction
# prompt = "1234*12345=?"        # multiplication
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=4096,
do_sample=False,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Example
```
Q: 23*541=?
A:
calculate 23 * 541:
(1, 1) 3 * 1 -> 3 + carry -> 3 -> [3] & carry 0 -> [3]
(1, 2) 3 * 4 -> 12 + carry -> 12 -> [2] & carry 1 -> [20]
(1, 3) 3 * 5 -> 15 + carry -> 16 -> [6] & carry 1 -> [1600]
temp result: 1623
(2, 1) 2 * 1 -> 2 + carry -> 2 -> [2] & carry 0 -> [20]
(2, 2) 2 * 4 -> 8 + carry -> 8 -> [8] & carry 0 -> [800]
(2, 3) 2 * 5 -> 10 + carry -> 10 -> [0] & carry 1 -> [10000]
temp result: 10820
gather temp results: 1623 + 10820
calculate 1623 + 10820:
calculate 1623 + 10820:
(1) 3 + 0 + carry -> 3 -> [3] & carry 0
(2) 2 + 2 + carry -> 4 -> [4] & carry 0
(3) 6 + 8 + carry -> 14 -> [4] & carry 1
(4) 1 + 0 + carry -> 2 -> [2] & carry 0
(5) 0 + 1 + carry -> 1 -> [1] & carry 0
gather results: 12443
final answer: 12443
``` |
MexIvanov/MistRAG-7B-ruen-v1-gguf | MexIvanov | 2024-11-25T10:42:01Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"text-generation",
"peft",
"ru",
"en",
"dataset:MexIvanov/RAG-v1-ruen",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:quantized:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-16T06:55:08Z | ---
tags:
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: mistralai/Mistral-7B-v0.3
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: apache-2.0
language:
- ru
- en
datasets:
- MexIvanov/RAG-v1-ruen
---
# Model Card for MistRAG-7B-ruen-v1
## Model Details
### Model Description
- **Developed by:** C.B. Pronin
- **Model type:** GGUF Quantizations of the merged model (MexIvanov/MistRAG-7B-ruen-v1-merged)
- **Language(s) (NLP):** Russian, English
- **License:** Apache license 2.0
- **Finetuned from model:** mistralai/Mistral-7B-v0.3
## Provided files
A model for automating question-answering tasks in RAG pipelines, covering two languages (English and Russian), trained on a machine-translated version of the "glaiveai/RAG-v1" dataset.
GGUF quantizations for use with projects such as ollama and llama.cpp; a minimal loading sketch follows the table below.
| Name | Quant method | Bits | Use case |
| ---- | ---- | ---- | ----- |
| [MistRAG-7b-v1-f16.gguf](https://huggingface.co/MexIvanov/MistRAG-7B-ruen-v1-gguf/blob/main/MistRAG-7b-v1-f16.gguf) | f16 | 16 | Full F16 weights. |
| [MistRAG-7b-v1-q8_0.gguf](https://huggingface.co/MexIvanov/MistRAG-7B-ruen-v1-gguf/blob/main/MistRAG-7b-v1-q8_0.gguf) | Q8_0 | 8 | Extremely high quality, generally unneeded but max available quant. |
| [MistRAG-7b-v1-q6_K.gguf](https://huggingface.co/MexIvanov/MistRAG-7B-ruen-v1-gguf/blob/main/MistRAG-7b-v1-q6_K.gguf) | Q6_K | 6 | Very high quality, near perfect, *recommended*. |
| [MistRAG-7b-v1-q5_K_M.gguf](https://huggingface.co/MexIvanov/MistRAG-7B-ruen-v1-gguf/blob/main/MistRAG-7b-v1-q5_K_M.gguf) | Q5_K_M | 5 | High quality, *recommended*. |
| [MistRAG-7b-v1-q4_K_M.gguf](https://huggingface.co/MexIvanov/MistRAG-7B-ruen-v1-gguf/blob/main/MistRAG-7b-v1-q4_K_M.gguf) | Q4_K_M | 4 | Good quality, default size for most use cases, *recommended*. |
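The sketch below shows one way to load a quant from the table, assuming the llama-cpp-python bindings are installed; the chosen file, context size and sampling settings are illustrative, not prescriptions from the model author.

```python
# Hedged sketch: load one of the GGUF quants above with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="MistRAG-7b-v1-q4_K_M.gguf",  # any quant file from the table above
    n_ctx=8192,                              # context window, adjust to your hardware
    n_gpu_layers=-1,                         # offload all layers to GPU; use 0 for CPU-only
)

prompt = "..."  # assemble with the prompting templates described below
output = llm(prompt, max_tokens=512, temperature=0.0)
print(output["choices"][0]["text"])
```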
### In-code prompting templates
```
SYSTEM = "You are a conversational AI assistant that is provided a list of documents and a user query to answer based on information from the documents. The user also provides an answer mode which can be 'Grounded' or 'Mixed'. For answer mode Grounded only respond with exact facts from documents, for answer mode Mixed answer using facts from documents and your own knowledge. Cite all facts from the documents using <co: doc_id></co> tags."
SYSTEM_RU = "Вы — разговорный помощник ИИ, которому предоставляется список документов и запрос пользователя для ответа на основе информации из документов. Пользователь также предоставляет режим ответа, который может быть 'Обоснованный' или 'Смешанный'. Для режима ответа «Обоснованный» отвечайте только точными фактами из документов, для режима ответа «Смешанный» отвечайте, используя факты из документов и собственные знания. Ссылайтесь на все факты из документов, используя теги <co: doc_id></co>."
# English prompt template (answer_mode is either 'Grounded' or 'Mixed'):
prompt_en = SYSTEM + '\n\n' + documents + '\n\nAnswer Mode: ' + answer_mode + '\n\nQuestion: ' + question + '\n\nResponse: ' + answer
# Russian prompt template (answer_mode is either 'Обоснованный' or 'Смешанный'):
prompt_ru = SYSTEM_RU + '\n\n' + documents + '\n\nAnswer Mode: ' + answer_mode + '\n\nQuestion: ' + question_ru + '\n\nResponse: ' + answer_ru
```
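The `documents` string can be assembled programmatically; the helper below is a hedged sketch (not from the card) that reproduces the `Document:<id> Title: ... Text: ...` layout visible in the example prompts that follow.

```python
# Hypothetical helper: build the 'documents' block from (title, text) pairs.
def build_documents(docs):
    return " ".join(
        f"Document:{i} Title: {title} Text: {text}"
        for i, (title, text) in enumerate(docs)
    )

documents = build_documents([
    ("IT Project Failure Rates", "According to a study published by the Harvard Business Review in 2011, ..."),
    ("Effective Project Proposal Strategies", "An essential aspect of project management is the project proposal stage. ..."),
])
```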
### English example prompt:
```
You are a conversational AI assistant that is provided a list of documents and a user query to answer based on information from the documents. The user also provides an answer mode which can be 'Grounded' or 'Mixed'. For answer mode Grounded only respond with exact facts from documents, for answer mode Mixed answer using facts from documents and your own knowledge. Cite all facts from the documents using <co: doc_id></co> tags.
Document:0 Title: IT Project Failure Rates Text: According to a study published by the Harvard Business Review in 2011, 1 in 6 IT projects experience a cost overrun of 200% and a schedule overrun of nearly 70%. These statistics highlight the significant risk associated with managing IT projects. It is crucial to understand that these problems often begin even before the project is officially initiated. Many issues can be traced back to inadequate planning stages, where necessary precautions and thorough analyses are overlooked. The complexities of IT projects require careful handling of project proposals, client onboarding, and initial scope definitions. Failure to address these preliminary steps can lead to escalated costs and extended timelines, severely impacting the overall success of the project. Document:1 Title: Effective Project Proposal Strategies Text: An essential aspect of project management is the project proposal stage. This stage is critical because it sets the groundwork for all subsequent activities. A well-prepared project proposal can help in clearly defining the project scope, deliverables, and responsibilities, which are crucial for project success. According to project management experts, a project proposal should include a detailed analysis of the client's needs, a comprehensive plan of action, and a clear statement of work. Moreover, it's beneficial to create transparency in roles and responsibilities and to present a timeline that outlines when each project milestone will be achieved. This can significantly reduce misunderstandings and set clear expectations from the outset. Document:2 Title: Historical Overview of Project Management Text: Project management has evolved significantly over the decades. Initially, projects were managed informally using simple techniques. However, as industries grew and projects became more complex, the need for formal project management methodologies became evident. This led to the development of various frameworks and standards that guide today's project management practices. One of the key aspects that have been emphasized in modern project management is the importance of initial planning and client engagement. Early engagement with clients helps in understanding their vision and aligning the project objectives accordingly, which is essential for the successful delivery of the project. Document:3 Title: Client Onboarding and Project Kickoff Text: The process of client onboarding is a critical phase in project management. It involves preparing the client for the project journey ahead and ensuring they are fully informed about the process. Effective onboarding can mitigate many risks associated with project misunderstandings and misalignments. During this phase, project managers should focus on building a strong relationship with the client, clarifying project scopes, and discussing the budget and timeline in detail. This phase sets the tone for the project and can significantly influence its success or failure. Document:4 Title: Role of Technology in Project Management Text: In today's digital age, technology plays a crucial role in project management. From project tracking tools to communication platforms, technology helps in streamlining project processes and enhancing collaboration among team members. Effective use of technology can lead to better project outcomes by providing real-time updates and facilitating easier communication. 
This is particularly important in IT projects, where requirements can change rapidly, and staying updated is crucial for project success.
Answer Mode: Grounded
Question: How can the initial stages of IT project management, specifically the project proposal and client onboarding stages, be optimized to prevent the common issues of cost and schedule overruns?
Response:
```
### Russian example prompt:
```
Вы — разговорный помощник ИИ, которому предоставляется список документов и запрос пользователя для ответа на основе информации из документов. Пользователь также предоставляет режим ответа, который может быть 'Обоснованный' или 'Смешанный'. Для режима ответа «Обоснованный» отвечайте только точными фактами из документов, для режима ответа «Смешанный» отвечайте, используя факты из документов и собственные знания. Ссылайтесь на все факты из документов, используя теги <co: doc_id></co>.
Документ:0 Название: Показатели неудач ИТ-проектов Текст: Согласно исследованию, опубликованному Harvard Business Review в 2011 году, 1 из 6 ИТ-проектов перерасходует средства на 200% и выходит за рамки графика почти на 70%. Эти статистические данные подчеркивают значительный риск, связанный с управлением ИТ-проектами. Важно понимать, что эти проблемы часто начинаются еще до официального начала проекта. Многие проблемы можно отследить до неадекватных этапов планирования, когда необходимые меры предосторожности и тщательный анализ упускаются из виду. Сложность ИТ-проектов требует тщательной обработки проектных предложений, привлечения клиентов и начальных определений объема работ. Невыполнение этих предварительных шагов может привести к росту затрат и увеличению сроков, что серьезно повлияет на общий успех проекта. Документ:1 Название: Эффективные стратегии предложения проекта Текст: Важным аспектом управления проектами является этап предложения проекта. Этот этап имеет решающее значение, поскольку он устанавливает Основа для всех последующих действий. Хорошо подготовленное проектное предложение может помочь в четком определении объема проекта, результатов и обязанностей, которые имеют решающее значение для успеха проекта. По мнению экспертов по управлению проектами, проектное предложение должно включать подробный анализ потребностей клиента, всеобъемлющий план действий и четкое описание работы. Более того, полезно создать прозрачность в ролях и обязанностях и представить временную шкалу, которая описывает, когда будет достигнут каждый этап проекта. Это может значительно уменьшить недопонимание и установить четкие ожидания с самого начала. Документ:2 Название: Исторический обзор управления проектами Текст: Управление проектами значительно изменилось за десятилетия. Первоначально управление проектами осуществлялось неформально с использованием простых методов. Однако по мере роста отраслей и усложнения проектов стала очевидной потребность в формальных методологиях управления проектами. Это привело к разработке различные структуры и стандарты, которые определяют современные методы управления проектами. Одним из ключевых аспектов, которые подчеркиваются в современном управлении проектами, является важность первоначального планирования и взаимодействия с клиентом. Раннее взаимодействие с клиентами помогает понять их видение и соответствующим образом согласовать цели проекта, что имеет важное значение для успешной реализации проекта. Документ:3 Название: Привлечение клиентов и начало проекта Текст: Процесс привлечения клиентов является критически важным этапом в управлении проектами. Он включает подготовку клиента к предстоящему проектному пути и обеспечение его полной информированности о процессе. Эффективное привлечение может смягчить многие риски, связанные с недопониманием и несоответствиями проекта. На этом этапе менеджеры проектов должны сосредоточиться на построении прочных отношений с клиентом, прояснении объемов проекта и подробном обсуждении бюджета и сроков. Этот этап задает тон для проекта и может значительно влияют на его успех или неудачу. Документ:4 Название: Роль технологий в управлении проектами Текст: В сегодняшнюю цифровую эпоху технологии играют решающую роль в управлении проектами. От инструментов отслеживания проектов до коммуникационных платформ технологии помогают оптимизировать процессы проекта и улучшить сотрудничество между членами команды. 
Эффективное использование технологий может привести к лучшим результатам проекта за счет предоставления обновлений в режиме реального времени и упрощения коммуникации. Это особенно важно в ИТ-проектах, где требования могут быстро меняться, а поддержание актуальности имеет решающее значение для успеха проекта.
Answer Mode: Обоснованный
Question: Как можно оптимизировать начальные этапы управления ИТ-проектами, в частности этапы предложения проекта и привлечения клиентов, чтобы предотвратить распространенные проблемы, связанные с превышением затрат и графика?
Response:
```
### Generation examples:
### Example 1:
```
You are a conversational AI assistant that is provided a list of documents and a user query to answer based on information from the documents. The user also provides an answer mode which can be 'Grounded' or 'Mixed'. For answer mode Grounded only respond with exact facts from documents, for answer mode Mixed answer using facts from documents and your own knowledge. Cite all facts from the documents using <co: doc_id></co> tags.
Document:0 Title: IT Project Failure Rates Text: According to a study published by the Harvard Business Review in 2011, 1 in 6 IT projects experience a cost overrun of 200% and a schedule overrun of nearly 70%. These statistics highlight the significant risk associated with managing IT projects. It is crucial to understand that these problems often begin even before the project is officially initiated. Many issues can be traced back to inadequate planning stages, where necessary precautions and thorough analyses are overlooked. The complexities of IT projects require careful handling of project proposals, client onboarding, and initial scope definitions. Failure to address these preliminary steps can lead to escalated costs and extended timelines, severely impacting the overall success of the project. Document:1 Title: Effective Project Proposal Strategies Text: An essential aspect of project management is the project proposal stage. This stage is critical because it sets the groundwork for all subsequent activities. A well-prepared project proposal can help in clearly defining the project scope, deliverables, and responsibilities, which are crucial for project success. According to project management experts, a project proposal should include a detailed analysis of the client's needs, a comprehensive plan of action, and a clear statement of work. Moreover, it's beneficial to create transparency in roles and responsibilities and to present a timeline that outlines when each project milestone will be achieved. This can significantly reduce misunderstandings and set clear expectations from the outset. Document:2 Title: Historical Overview of Project Management Text: Project management has evolved significantly over the decades. Initially, projects were managed informally using simple techniques. However, as industries grew and projects became more complex, the need for formal project management methodologies became evident. This led to the development of various frameworks and standards that guide today's project management practices. One of the key aspects that have been emphasized in modern project management is the importance of initial planning and client engagement. Early engagement with clients helps in understanding their vision and aligning the project objectives accordingly, which is essential for the successful delivery of the project. Document:3 Title: Client Onboarding and Project Kickoff Text: The process of client onboarding is a critical phase in project management. It involves preparing the client for the project journey ahead and ensuring they are fully informed about the process. Effective onboarding can mitigate many risks associated with project misunderstandings and misalignments. During this phase, project managers should focus on building a strong relationship with the client, clarifying project scopes, and discussing the budget and timeline in detail. This phase sets the tone for the project and can significantly influence its success or failure. Document:4 Title: Role of Technology in Project Management Text: In today's digital age, technology plays a crucial role in project management. From project tracking tools to communication platforms, technology helps in streamlining project processes and enhancing collaboration among team members. Effective use of technology can lead to better project outcomes by providing real-time updates and facilitating easier communication. 
This is particularly important in IT projects, where requirements can change rapidly, and staying updated is crucial for project success.
Answer Mode: Grounded
Question: How can the initial stages of IT project management, specifically the project proposal and client onboarding stages, be optimized to prevent the common issues of cost and schedule overruns?
Response:
Cited Documents: 0, 1
Answer: <co:0>IT projects often suffer from high failure rates due to issues that begin even in the initial planning stages, where necessary precautions and analyses are overlooked.</co> <co:1>An effective project proposal should include a detailed analysis of the client's needs, a comprehensive plan of action, and a clear statement of work, which can help define the project scope and deliverables more clearly and reduce misunderstandings.</co> By focusing on these stages and ensuring a strong client engagement and understanding of project objectives, project managers can significantly reduce the risks associated with cost and schedule overruns.
```
### Example 2:
```
Вы — разговорный помощник ИИ, которому предоставляется список документов и запрос пользователя для ответа на основе информации из документов. Пользователь также предоставляет режим ответа, который может быть 'Обоснованный' или 'Смешанный'. Для режима ответа «Обоснованный» отвечайте только точными фактами из документов, для режима ответа «Смешанный» отвечайте, используя факты из документов и собственные знания. Ссылайтесь на все факты из документов, используя теги <co: doc_id></co>.
Документ:0 Название: Показатели неудач ИТ-проектов Текст: Согласно исследованию, опубликованному Harvard Business Review в 2011 году, 1 из 6 ИТ-проектов перерасходует средства на 200% и выходит за рамки графика почти на 70%. Эти статистические данные подчеркивают значительный риск, связанный с управлением ИТ-проектами. Важно понимать, что эти проблемы часто начинаются еще до официального начала проекта. Многие проблемы можно отследить до неадекватных этапов планирования, когда необходимые меры предосторожности и тщательный анализ упускаются из виду. Сложность ИТ-проектов требует тщательной обработки проектных предложений, привлечения клиентов и начальных определений объема работ. Невыполнение этих предварительных шагов может привести к росту затрат и увеличению сроков, что серьезно повлияет на общий успех проекта. Документ:1 Название: Эффективные стратегии предложения проекта Текст: Важным аспектом управления проектами является этап предложения проекта. Этот этап имеет решающее значение, поскольку он устанавливает Основа для всех последующих действий. Хорошо подготовленное проектное предложение может помочь в четком определении объема проекта, результатов и обязанностей, которые имеют решающее значение для успеха проекта. По мнению экспертов по управлению проектами, проектное предложение должно включать подробный анализ потребностей клиента, всеобъемлющий план действий и четкое описание работы. Более того, полезно создать прозрачность в ролях и обязанностях и представить временную шкалу, которая описывает, когда будет достигнут каждый этап проекта. Это может значительно уменьшить недопонимание и установить четкие ожидания с самого начала. Документ:2 Название: Исторический обзор управления проектами Текст: Управление проектами значительно изменилось за десятилетия. Первоначально управление проектами осуществлялось неформально с использованием простых методов. Однако по мере роста отраслей и усложнения проектов стала очевидной потребность в формальных методологиях управления проектами. Это привело к разработке различные структуры и стандарты, которые определяют современные методы управления проектами. Одним из ключевых аспектов, которые подчеркиваются в современном управлении проектами, является важность первоначального планирования и взаимодействия с клиентом. Раннее взаимодействие с клиентами помогает понять их видение и соответствующим образом согласовать цели проекта, что имеет важное значение для успешной реализации проекта. Документ:3 Название: Привлечение клиентов и начало проекта Текст: Процесс привлечения клиентов является критически важным этапом в управлении проектами. Он включает подготовку клиента к предстоящему проектному пути и обеспечение его полной информированности о процессе. Эффективное привлечение может смягчить многие риски, связанные с недопониманием и несоответствиями проекта. На этом этапе менеджеры проектов должны сосредоточиться на построении прочных отношений с клиентом, прояснении объемов проекта и подробном обсуждении бюджета и сроков. Этот этап задает тон для проекта и может значительно влияют на его успех или неудачу. Документ:4 Название: Роль технологий в управлении проектами Текст: В сегодняшнюю цифровую эпоху технологии играют решающую роль в управлении проектами. От инструментов отслеживания проектов до коммуникационных платформ технологии помогают оптимизировать процессы проекта и улучшить сотрудничество между членами команды. 
Эффективное использование технологий может привести к лучшим результатам проекта за счет предоставления обновлений в режиме реального времени и упрощения коммуникации. Это особенно важно в ИТ-проектах, где требования могут быстро меняться, а поддержание актуальности имеет решающее значение для успеха проекта.
Answer Mode: Grounded
Question: Как можно оптимизировать начальные этапы управления ИТ-проектами, в частности этапы предложения проекта и привлечения клиентов, чтобы предотвратить распространенные проблемы, связанные с превышением затрат и графика?
Response:
Ссылающиеся документы: 0, 1, 3
Ответ: Чтобы оптимизировать начальные этапы управления ИТ-проектами и предотвратить распространенные проблемы, связанные с превышением затрат и графика, необходимо уделить особое внимание этапам предложения проекта и привлечения клиентов. <co:0>Исследование Harvard Business Review показало, что 1 из 6 ИТ-проектов перерасходует средства на 200% и выйдет за рамки графика почти на 70%, что часто связано с неадекватными этапами планирования, необходимыми меры предосторожности и тщательный анализ.</co> Это подчеркивает важность тщательного анализа потребностей клиента и тщательного планирования в предложении проекта. <co:1>Хорошо подготовленное проектное предложение может помочь в четком определении объема проекта, результатов и обязанностей, что имеет решающее значение для успеха проекта.</co> Кроме того, <co:3>эффективное привлечение клиентов помогает понять их видение и согласовывать цели проекта, что может значительно смягчить недопонимание и установить четкие ожидания с самого начала.</co> Эти стратегические шаги в начале проекта могут значительно снизить риск перерасхода средств и увеличения сроков, повысив общий успех проекта.
```
### Bias, Risks, and Limitations
This model does not have any moderation mechanisms. Use at your own risk; the author(s) are not responsible for any usage or output of this model.
|
IIC/bert-base-spanish-wwm-cased-ehealth_kd | IIC | 2024-11-25T10:41:48Z | 125 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"biomedical",
"clinical",
"spanish",
"bert-base-spanish-wwm-cased",
"token-classification",
"es",
"dataset:ehealth_kd",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-21T14:43:31Z | ---
language: es
tags:
- biomedical
- clinical
- spanish
- bert-base-spanish-wwm-cased
license: cc-by-4.0
datasets:
- "ehealth_kd"
metrics:
- f1
model-index:
- name: IIC/bert-base-spanish-wwm-cased-ehealth_kd
results:
- task:
type: token-classification
dataset:
name: eHealth-KD
type: ehealth_kd
split: test
metrics:
- name: f1
type: f1
value: 0.843
pipeline_tag: token-classification
---
# bert-base-spanish-wwm-cased-ehealth_kd
This model is a fine-tuned version of bert-base-spanish-wwm-cased for the eHealth-KD dataset, used in the benchmark from the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has an F1 score of 0.843.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
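A minimal inference sketch (not part of the original card), assuming the standard transformers token-classification pipeline; the example sentence and aggregation strategy are illustrative.

```python
# Hedged usage sketch for the fine-tuned token-classification model.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="IIC/bert-base-spanish-wwm-cased-ehealth_kd",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

text = "El asma es una enfermedad crónica que afecta a las vías respiratorias."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```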
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 64 |
| learning rate | 4e-05 |
| classifier dropout | 0.2 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
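The table above maps onto the transformers Trainer API roughly as in the sketch below; this is a hedged reconstruction under stated assumptions, not the authors' original training script, and the output directory and evaluation strategy are placeholders.

```python
# Hedged sketch mapping the hyperparameter table onto TrainingArguments.
from transformers import TrainingArguments, EarlyStoppingCallback

training_args = TrainingArguments(
    output_dir="bert-base-spanish-wwm-cased-ehealth_kd",  # placeholder
    per_device_train_batch_size=64,
    learning_rate=4e-5,
    num_train_epochs=10,
    warmup_ratio=0.0,
    weight_decay=0.0,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,      # needed for early stopping
    metric_for_best_model="f1",
)
early_stopping = EarlyStoppingCallback(early_stopping_patience=3)
# The classifier dropout of 0.2 would be set on the model config, e.g.
# AutoConfig.from_pretrained(model_name, classifier_dropout=0.2)
```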
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
|
IIC/xlm-roberta-large-distemist | IIC | 2024-11-25T10:41:40Z | 108 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"biomedical",
"clinical",
"spanish",
"xlm-roberta-large",
"token-classification",
"es",
"dataset:bigbio/distemist",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-21T09:32:05Z | ---
language: es
tags:
- biomedical
- clinical
- spanish
- xlm-roberta-large
license: mit
datasets:
- "bigbio/distemist"
metrics:
- f1
model-index:
- name: IIC/xlm-roberta-large-distemist
results:
- task:
type: token-classification
dataset:
name: distemist
type: bigbio/distemist
split: test
metrics:
- name: f1
type: f1
value: 0.817
pipeline_tag: token-classification
---
# xlm-roberta-large-distemist
This model is a fine-tuned version of xlm-roberta-large for the distemist dataset, used in the benchmark from the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has an F1 score of 0.817.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 4e-05 |
| classifier dropout | 0 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
|
IIC/bert-base-spanish-wwm-cased-ctebmsp | IIC | 2024-11-25T10:41:34Z | 123 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"biomedical",
"clinical",
"spanish",
"bert-base-spanish-wwm-cased",
"token-classification",
"es",
"dataset:lcampillos/ctebmsp",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-21T06:46:59Z | ---
language: es
tags:
- biomedical
- clinical
- spanish
- bert-base-spanish-wwm-cased
license: cc-by-4.0
datasets:
- "lcampillos/ctebmsp"
metrics:
- f1
model-index:
- name: IIC/bert-base-spanish-wwm-cased-ctebmsp
results:
- task:
type: token-classification
dataset:
name: CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish)
type: lcampillos/ctebmsp
split: test
metrics:
- name: f1
type: f1
value: 0.88
pipeline_tag: token-classification
---
# bert-base-spanish-wwm-cased-ctebmsp
This model is a fine-tuned version of bert-base-spanish-wwm-cased for the CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish) dataset, used in the benchmark from the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has an F1 score of 0.88.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 4e-05 |
| classifier dropout | 0 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
|
IIC/XLM_R_Galen-pharmaconer | IIC | 2024-11-25T10:41:28Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"biomedical",
"clinical",
"spanish",
"XLM_R_Galen",
"token-classification",
"es",
"dataset:PlanTL-GOB-ES/pharmaconer",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-21T16:20:10Z | ---
language: es
tags:
- biomedical
- clinical
- spanish
- XLM_R_Galen
license: mit
datasets:
- "PlanTL-GOB-ES/pharmaconer"
metrics:
- f1
model-index:
- name: IIC/XLM_R_Galen-pharmaconer
results:
- task:
type: token-classification
dataset:
name: pharmaconer
type: PlanTL-GOB-ES/pharmaconer
split: test
metrics:
- name: f1
type: f1
value: 0.915
pipeline_tag: token-classification
widget:
- text: "Se realizó estudio analítico destacando incremento de niveles de PTH y vitamina D (103,7 pg/ml y 272 ng/ml, respectivamente), atribuidos al exceso de suplementación de vitamina D."
- text: " Por el hallazgo de múltiples fracturas por estrés, se procedió a estudio en nuestras consultas, realizándose análisis con función renal, calcio sérico y urinario, calcio iónico, magnesio y PTH, que fueron normales."
- text: "Se solicitó una analítica que incluía hemograma, bioquímica, anticuerpos antinucleares (ANA) y serologías, examen de orina, así como biopsia de la lesión. Los resultados fueron normales, con ANA, anti-Sm, anti-RNP, anti-SSA, anti-SSB, anti-Jo1 y anti-Scl70 negativos."
---
# XLM_R_Galen-pharmaconer
This model is a fine-tuned version of XLM_R_Galen for the pharmaconer dataset, used in the benchmark from the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has an F1 score of 0.915.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 3e-05 |
| classifier dropout | 0.1 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
|
IIC/bert-base-spanish-wwm-cased-pharmaconer | IIC | 2024-11-25T10:41:26Z | 123 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"biomedical",
"clinical",
"spanish",
"bert-base-spanish-wwm-cased",
"token-classification",
"es",
"dataset:PlanTL-GOB-ES/pharmaconer",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-21T16:08:54Z | ---
language: es
tags:
- biomedical
- clinical
- spanish
- bert-base-spanish-wwm-cased
license: cc-by-4.0
datasets:
- "PlanTL-GOB-ES/pharmaconer"
metrics:
- f1
model-index:
- name: IIC/bert-base-spanish-wwm-cased-pharmaconer
results:
- task:
type: token-classification
dataset:
name: pharmaconer
type: PlanTL-GOB-ES/pharmaconer
split: test
metrics:
- name: f1
type: f1
value: 0.908
pipeline_tag: token-classification
widget:
- text: "Se realizó estudio analítico destacando incremento de niveles de PTH y vitamina D (103,7 pg/ml y 272 ng/ml, respectivamente), atribuidos al exceso de suplementación de vitamina D."
- text: " Por el hallazgo de múltiples fracturas por estrés, se procedió a estudio en nuestras consultas, realizándose análisis con función renal, calcio sérico y urinario, calcio iónico, magnesio y PTH, que fueron normales."
- text: "Se solicitó una analítica que incluía hemograma, bioquímica, anticuerpos antinucleares (ANA) y serologías, examen de orina, así como biopsia de la lesión. Los resultados fueron normales, con ANA, anti-Sm, anti-RNP, anti-SSA, anti-SSB, anti-Jo1 y anti-Scl70 negativos."
---
# bert-base-spanish-wwm-cased-pharmaconer
This model is a fine-tuned version of bert-base-spanish-wwm-cased for the pharmaconer dataset, used in the benchmark from the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has an F1 score of 0.908.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 32 |
| learning rate | 3e-05 |
| classifier dropout | 0 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
|
IIC/xlm-roberta-large-pharmaconer | IIC | 2024-11-25T10:41:25Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"biomedical",
"clinical",
"spanish",
"xlm-roberta-large",
"token-classification",
"es",
"dataset:PlanTL-GOB-ES/pharmaconer",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-21T16:15:06Z | ---
language: es
tags:
- biomedical
- clinical
- spanish
- xlm-roberta-large
license: mit
datasets:
- "PlanTL-GOB-ES/pharmaconer"
metrics:
- f1
model-index:
- name: IIC/xlm-roberta-large-pharmaconer
results:
- task:
type: token-classification
dataset:
name: pharmaconer
type: PlanTL-GOB-ES/pharmaconer
split: test
metrics:
- name: f1
type: f1
value: 0.924
pipeline_tag: token-classification
widget:
- text: "Se realizó estudio analítico destacando incremento de niveles de PTH y vitamina D (103,7 pg/ml y 272 ng/ml, respectivamente), atribuidos al exceso de suplementación de vitamina D."
- text: " Por el hallazgo de múltiples fracturas por estrés, se procedió a estudio en nuestras consultas, realizándose análisis con función renal, calcio sérico y urinario, calcio iónico, magnesio y PTH, que fueron normales."
- text: "Se solicitó una analítica que incluía hemograma, bioquímica, anticuerpos antinucleares (ANA) y serologías, examen de orina, así como biopsia de la lesión. Los resultados fueron normales, con ANA, anti-Sm, anti-RNP, anti-SSA, anti-SSB, anti-Jo1 y anti-Scl70 negativos."
---
# xlm-roberta-large-pharmaconer
This model is a fine-tuned version of xlm-roberta-large for the pharmaconer dataset, used in the benchmark from the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has an F1 score of 0.924.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 64 |
| learning rate | 3e-05 |
| classifier dropout | 0 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
|
IIC/bsc-bio-ehr-es-pharmaconer | IIC | 2024-11-25T10:41:23Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"biomedical",
"clinical",
"spanish",
"bsc-bio-ehr-es",
"token-classification",
"es",
"dataset:PlanTL-GOB-ES/pharmaconer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-21T16:11:44Z | ---
language: es
tags:
- biomedical
- clinical
- spanish
- bsc-bio-ehr-es
license: apache-2.0
datasets:
- "PlanTL-GOB-ES/pharmaconer"
metrics:
- f1
model-index:
- name: IIC/bsc-bio-ehr-es-pharmaconer
results:
- task:
type: token-classification
dataset:
name: pharmaconer
type: PlanTL-GOB-ES/pharmaconer
split: test
metrics:
- name: f1
type: f1
value: 0.904
pipeline_tag: token-classification
widget:
- text: "Se realizó estudio analítico destacando incremento de niveles de PTH y vitamina D (103,7 pg/ml y 272 ng/ml, respectivamente), atribuidos al exceso de suplementación de vitamina D."
- text: " Por el hallazgo de múltiples fracturas por estrés, se procedió a estudio en nuestras consultas, realizándose análisis con función renal, calcio sérico y urinario, calcio iónico, magnesio y PTH, que fueron normales."
- text: "Se solicitó una analítica que incluía hemograma, bioquímica, anticuerpos antinucleares (ANA) y serologías, examen de orina, así como biopsia de la lesión. Los resultados fueron normales, con ANA, anti-Sm, anti-RNP, anti-SSA, anti-SSB, anti-Jo1 y anti-Scl70 negativos."
---
# bsc-bio-ehr-es-pharmaconer
This model is a fine-tuned version of bsc-bio-ehr-es for the pharmaconer dataset, used in the benchmark from the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has an F1 score of 0.904.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 4e-05 |
| classifier dropout | 0.1 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
|
IIC/bert-base-spanish-wwm-cased-caresC | IIC | 2024-11-25T10:41:19Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"biomedical",
"clinical",
"spanish",
"bert-base-spanish-wwm-cased",
"es",
"dataset:chizhikchi/CARES",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-20T15:45:56Z | ---
language: es
tags:
- biomedical
- clinical
- spanish
- bert-base-spanish-wwm-cased
license: cc-by-4.0
datasets:
- "chizhikchi/CARES"
metrics:
- f1
model-index:
- name: IIC/bert-base-spanish-wwm-cased-caresC
results:
- task:
type: multi-label-classification
dataset:
name: Cares Chapters
type: chizhikchi/CARES
split: test
metrics:
- name: f1
type: f1
value: 0.835
pipeline_tag: text-classification
---
# bert-base-spanish-wwm-cased-caresC
This model is a fine-tuned version of bert-base-spanish-wwm-cased for the Cares Chapters dataset, used in the benchmark from the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has an F1 score of 0.835.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 4e-05 |
| classifier dropout | 0 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
|
IIC/mdeberta-v3-base-caresC | IIC | 2024-11-25T10:41:15Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"biomedical",
"clinical",
"spanish",
"mdeberta-v3-base",
"es",
"dataset:chizhikchi/CARES",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-20T15:46:42Z | ---
language: es
tags:
- biomedical
- clinical
- spanish
- mdeberta-v3-base
license: mit
datasets:
- "chizhikchi/CARES"
metrics:
- f1
model-index:
- name: IIC/mdeberta-v3-base-caresC
results:
- task:
type: multi-label-classification
dataset:
name: Cares Chapters
type: chizhikchi/CARES
split: test
metrics:
- name: f1
type: f1
value: 0.756
pipeline_tag: text-classification
---
# mdeberta-v3-base-caresC
This model is a fine-tuned version of mdeberta-v3-base for the Cares Chapters dataset, used in the benchmark from the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has an F1 score of 0.756.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 3e-05 |
| classifier dropout | 0.2 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
|
IIC/bsc-bio-ehr-es-caresC | IIC | 2024-11-25T10:41:14Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"biomedical",
"clinical",
"spanish",
"bsc-bio-ehr-es",
"es",
"dataset:chizhikchi/CARES",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-20T15:48:50Z | ---
language: es
tags:
- biomedical
- clinical
- spanish
- bsc-bio-ehr-es
license: apache-2.0
datasets:
- "chizhikchi/CARES"
metrics:
- f1
model-index:
- name: IIC/bsc-bio-ehr-es-caresC
results:
- task:
type: multi-label-classification
dataset:
name: Cares Chapters
type: chizhikchi/CARES
split: test
metrics:
- name: f1
type: f1
value: 0.862
pipeline_tag: text-classification
---
# bsc-bio-ehr-es-caresC
This model is a fine-tuned version of bsc-bio-ehr-es for the Cares Chapters dataset, used in the benchmark from the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has an F1 score of 0.862.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
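A minimal inference sketch (not part of the original card); chapter prediction on CARES is multi-label, so all label scores are requested below and thresholded at an illustrative 0.5. The example report text is made up.

```python
# Hedged usage sketch for the multi-label chapter classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="IIC/bsc-bio-ehr-es-caresC",
    top_k=None,  # return a score for every chapter label
)

report = "TC abdominal: lesión hepática hipodensa compatible con hemangioma."
scores = classifier(report)
if scores and isinstance(scores[0], list):  # some transformers versions nest the batch dimension
    scores = scores[0]
predicted = [s["label"] for s in scores if s["score"] > 0.5]  # illustrative threshold
print(predicted)
```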
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 3e-05 |
| classifier dropout | 0.1 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
|
IIC/BETO_Galen-caresA | IIC | 2024-11-25T10:41:11Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"biomedical",
"clinical",
"spanish",
"BETO_Galen",
"es",
"dataset:chizhikchi/CARES",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-20T15:39:02Z | ---
language: es
tags:
- biomedical
- clinical
- spanish
- BETO_Galen
license: mit
datasets:
- "chizhikchi/CARES"
metrics:
- f1
model-index:
- name: IIC/BETO_Galen-caresA
results:
- task:
type: multi-label-classification
dataset:
name: Cares Area
type: chizhikchi/CARES
split: test
metrics:
- name: f1
type: f1
value: 0.977
pipeline_tag: text-classification
---
# BETO_Galen-caresA
This model is a fine-tuned version of BETO_Galen for the Cares Area dataset, used in the benchmark from the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has an F1 score of 0.977.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 3e-05 |
| classifier dropout | 0.1 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
|
IIC/bert-base-spanish-wwm-cased-caresA | IIC | 2024-11-25T10:41:06Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"biomedical",
"clinical",
"spanish",
"bert-base-spanish-wwm-cased",
"es",
"dataset:chizhikchi/CARES",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-20T15:27:00Z | ---
language: es
tags:
- biomedical
- clinical
- spanish
- bert-base-spanish-wwm-cased
license: cc-by-4.0
datasets:
- "chizhikchi/CARES"
metrics:
- f1
model-index:
- name: IIC/bert-base-spanish-wwm-cased-caresA
results:
- task:
type: multi-label-classification
dataset:
name: Cares Area
type: chizhikchi/CARES
split: test
metrics:
- name: f1
type: f1
value: 0.992
pipeline_tag: text-classification
---
# bert-base-spanish-wwm-cased-caresA
This model is a fine-tuned version of bert-base-spanish-wwm-cased for the Cares Area dataset, used in the benchmark from the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has an F1 score of 0.992.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
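For explicit control over the multi-label decision, a sketch using `AutoModelForSequenceClassification` with a per-label sigmoid follows; the sample sentence and the 0.5 cut-off are assumptions, not values from the paper.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "IIC/bert-base-spanish-wwm-cased-caresA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative clinical sentence, not drawn from the CARES corpus
texto = "TC abdominal con lesión hepática sugestiva de metástasis."
inputs = tokenizer(texto, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label: apply a sigmoid per label and keep those above an assumed 0.5 threshold
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```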
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 32 |
| learning rate | 4e-05 |
| classifier dropout | 0.2 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
|
IIC/BETO_Galen | IIC | 2024-11-25T10:41:05Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"beto",
"galen",
"es",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-06-19T11:10:49Z | ---
language: es
tags:
- beto
- galen
license: mit
---
# BETO Galén
This is a third-party re-upload of the original BETO Galén model, available on [GitHub](https://github.com/guilopgar/ClinicalCodingTransformerES).
Please refer to the original publication for more information.
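Since this re-upload is tagged for feature extraction, a minimal sketch for obtaining sentence embeddings is shown below; mean pooling over the last hidden state is one common choice, not something prescribed by the original authors, and the example sentences are illustrative.
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "IIC/BETO_Galen"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

frases = ["Paciente con fiebre y tos persistente.", "Se pauta tratamiento antibiótico."]
inputs = tokenizer(frases, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden)

# Mean pooling over non-padding tokens to obtain one vector per sentence
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(1) / mask.sum(1)
print(embeddings.shape)
```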
## BibTeX entry and citation info
```bibtex
@article{9430499,
author={López-García, Guillermo and Jerez, José M. and Ribelles, Nuria and Alba, Emilio and Veredas, Francisco J.},
journal={IEEE Access},
title={Transformers for Clinical Coding in Spanish},
year={2021},
volume={9},
number={},
pages={72387-72397},
doi={10.1109/ACCESS.2021.3080085}}
```
|
IIC/bsc-bio-ehr-es-cantemist | IIC | 2024-11-25T10:41:03Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"biomedical",
"clinical",
"eHR",
"spanish",
"bsc-bio-ehr-es",
"es",
"dataset:PlanTL-GOB-ES/cantemist-ner",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-19T15:23:23Z | ---
language: es
tags:
- biomedical
- clinical
- eHR
- spanish
- bsc-bio-ehr-es
license: apache-2.0
datasets:
- "PlanTL-GOB-ES/cantemist-ner"
metrics:
- f1
model-index:
- name: IIC/bsc-bio-ehr-es-cantemist
results:
- task:
type: token-classification
dataset:
name: cantemist-ner
type: PlanTL-GOB-ES/cantemist-ner
metrics:
- name: f1
type: f1
value: 0.864
widget:
- text: "El diagnóstico definitivo de nuestro paciente fue de un Adenocarcinoma de pulmón cT2a cN3 cM1a Estadio IV (por una única lesión pulmonar contralateral) PD-L1 90%, EGFR negativo, ALK negativo y ROS-1 negativo."
- text: "Durante el ingreso se realiza una TC, observándose un nódulo pulmonar en el LII y una masa renal derecha indeterminada. Se realiza punción biopsia del nódulo pulmonar, con hallazgos altamente sospechosos de carcinoma."
- text: "Trombosis paraneoplásica con sospecha de hepatocarcinoma por imagen, sobre hígado cirrótico, en paciente con índice Child-Pugh B."
---
# bsc-bio-ehr-es-cantemist
This model is a fine-tuned version of bsc-bio-ehr-es for the CANTEMIST dataset used in the benchmark presented in the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model achieves an F1 score of 0.864.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
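Assuming the checkpoint carries a token-classification head for CANTEMIST tumour-morphology mentions (the benchmark task is NER), a minimal pipeline sketch looks as follows; the example sentence is taken from the widget above, and `aggregation_strategy="simple"` is one reasonable way to merge sub-word pieces into full mentions.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="IIC/bsc-bio-ehr-es-cantemist",
    aggregation_strategy="simple",  # merge sub-word pieces into whole mentions
)

texto = ("Durante el ingreso se realiza una TC, observándose un nódulo pulmonar en el LII "
         "y una masa renal derecha indeterminada.")
for entidad in ner(texto):
    print(entidad["word"], entidad["entity_group"], round(entidad["score"], 3))
```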
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate           | 2e-05 |
| classifier dropout | 0.1 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
|
IIC/XLM_R_Galen-cantemist | IIC | 2024-11-25T10:41:02Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"biomedical",
"clinical",
"eHR",
"spanish",
"XLM_R_Galen",
"es",
"dataset:PlanTL-GOB-ES/cantemist-ner",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-19T15:32:03Z | ---
language: es
tags:
- biomedical
- clinical
- eHR
- spanish
- XLM_R_Galen
license: mit
datasets:
- "PlanTL-GOB-ES/cantemist-ner"
metrics:
- f1
model-index:
- name: IIC/XLM_R_Galen-cantemist
results:
- task:
type: token-classification
dataset:
name: cantemist-ner
type: PlanTL-GOB-ES/cantemist-ner
metrics:
- name: f1
type: f1
value: 0.898
widget:
- text: "El diagnóstico definitivo de nuestro paciente fue de un Adenocarcinoma de pulmón cT2a cN3 cM1a Estadio IV (por una única lesión pulmonar contralateral) PD-L1 90%, EGFR negativo, ALK negativo y ROS-1 negativo."
- text: "Durante el ingreso se realiza una TC, observándose un nódulo pulmonar en el LII y una masa renal derecha indeterminada. Se realiza punción biopsia del nódulo pulmonar, con hallazgos altamente sospechosos de carcinoma."
- text: "Trombosis paraneoplásica con sospecha de hepatocarcinoma por imagen, sobre hígado cirrótico, en paciente con índice Child-Pugh B."
---
# XLM_R_Galen-cantemist
This model is a fine-tuned version of XLM_R_Galen for the CANTEMIST dataset used in the benchmark presented in the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model achieves an F1 score of 0.898.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate           | 4e-05 |
| classifier dropout | 0 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
|
IIC/BETO_Galen-cantemist | IIC | 2024-11-25T10:40:57Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"biomedical",
"clinical",
"eHR",
"spanish",
"BETO_Galen",
"es",
"dataset:PlanTL-GOB-ES/cantemist-ner",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-19T15:31:09Z | ---
language: es
tags:
- biomedical
- clinical
- eHR
- spanish
- BETO_Galen
license: mit
datasets:
- "PlanTL-GOB-ES/cantemist-ner"
metrics:
- f1
model-index:
- name: IIC/BETO_Galen-cantemist
results:
- task:
type: token-classification
dataset:
name: cantemist-ner
type: PlanTL-GOB-ES/cantemist-ner
metrics:
- name: f1
type: f1
value: 0.802
widget:
- text: "El diagnóstico definitivo de nuestro paciente fue de un Adenocarcinoma de pulmón cT2a cN3 cM1a Estadio IV (por una única lesión pulmonar contralateral) PD-L1 90%, EGFR negativo, ALK negativo y ROS-1 negativo."
- text: "Durante el ingreso se realiza una TC, observándose un nódulo pulmonar en el LII y una masa renal derecha indeterminada. Se realiza punción biopsia del nódulo pulmonar, con hallazgos altamente sospechosos de carcinoma."
- text: "Trombosis paraneoplásica con sospecha de hepatocarcinoma por imagen, sobre hígado cirrótico, en paciente con índice Child-Pugh B."
---
# BETO_Galen-cantemist
This model is a fine-tuned version of BETO_Galen for the CANTEMIST dataset used in the benchmark presented in the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model achieves an F1 score of 0.802.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate           | 3e-05 |
| classifier dropout | 0.1 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
|
alpcansoydas/product-model-class-level-bert-total47label_ifhavemorethan100sampleperclass-25.11.2024-0.60acc | alpcansoydas | 2024-11-25T10:39:31Z | 209 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T10:39:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahid1/xlm-roberta-base-finetuned-panx-it | ahid1 | 2024-11-25T10:38:37Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-25T10:35:50Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2465
- F1: 0.8298
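The card does not name the training dataset, but the model name suggests the Italian split of PAN-X (WikiANN) NER; under that assumption, a minimal inference sketch is shown below, with an illustrative Italian sentence.
```python
from transformers import pipeline

# Assumption: the checkpoint is a PAN-X (WikiANN) Italian NER model, as the name suggests
ner = pipeline(
    "token-classification",
    model="ahid1/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",
)

print(ner("Giuseppe Verdi nacque a Busseto, in provincia di Parma."))
```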
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7606 | 1.0 | 70 | 0.3201 | 0.7487 |
| 0.2895 | 2.0 | 140 | 0.2722 | 0.7857 |
| 0.1834 | 3.0 | 210 | 0.2465 | 0.8298 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mini1013/master_cate_ac10 | mini1013 | 2024-11-25T10:33:39Z | 225 | 0 | setfit | [
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] | text-classification | 2024-11-25T10:33:19Z | ---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 벤시몽 RAIN BOOTS MID - 7color DOLPHIN GREY_40 260 오리상점
- text: 플레이볼 오리진 뮬 (PLAYBALL ORIGIN MULE) NY (Off White) 화이트_230 주식회사 에프앤에프
- text: XDMNBTX0037 빅 사이즈 봄여름 블로퍼 고양이 액체설 블랙_265 푸른바다
- text: 다이어트 슬리퍼 다리 부종 스트레칭 균형 실내화 핑크 33-37_33 글로벌다이렉트
- text: 케즈 챔피온 스트랩 캔버스5 M01778F001 Black/Black/Black_230 블루빌리
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.6511206701381028
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 10 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 9.0 | <ul><li>'로저비비에 로저 비비어 i 러브 비비어 슬링백 펌프스 RVW53834670PE5 여성 37 주식회사 페칭'</li><li>'크롬베즈 스티치 장식 통굽펌프스 KP55797MA 카멜/245 sellerhub'</li><li>'HOBOKEN PS1511 PH2208 (3컬러) 브라운 230 NC_백화점'</li></ul> |
| 2.0 | <ul><li>'어그클래식울트라미니 ugg 어그부츠 여성 방한화 여자 발편한 겨울 신발 1116109 Sage Blossom_US 6(230) 울바이울'</li><li>'해외문스타 810s ET027 마르케 모디 운동화 장화 레인부츠 일본 직구 300_코요테_모디ET027 뉴저지홀세일'</li><li>'무릎 위에 앉다 장화 롱부츠 굽이 거칠다 평평한 바닥 고통 라이더 부츠 블랙_225 ZHANG YOUHUA'</li></ul> |
| 0.0 | <ul><li>'단화 한복신발 여성 새 혼례 소프트 한복구두 전통 꽃신 자수 39_빅화이트백봉이는한사이즈크게찍으셨으면좋겠습 대복컴퍼니'</li><li>'한복구두 꽃신 양단 생활한복 키높이 단화 굽 빅사이즈 담그어 여름 터지는 구슬 화이트-3.5cm_41 대한민국 일등 상점'</li><li>'여자 키높이 신발 여성 꽃신 한복 구두 전통 계량한복 37_화이트12(지연) 유럽걸스'</li></ul> |
| 4.0 | <ul><li>'남여공용 청키 클로그 바운서 샌들 (3ASDCBC33) 블랙(50BKS)_240 '</li><li>'[포멜카멜레]쥬얼장식트위드샌들 3cm FJS1F1SS024 아이보리/255 에이케이에스앤디(주) AK플라자 평택점'</li><li>'[하프클럽/] 에끌라 투웨이 주얼 샌들 33.카멜/245mm 롯데아이몰'</li></ul> |
| 8.0 | <ul><li>'에스콰이아 여성 발편한 경량 세미 캐주얼 앵클 워커 부츠 3cm J278C 브라운_230 (주) 패션플러스'</li><li>'[제옥스](신세계강남점) 스페리카 EC7 여성 워커부츠-블랙 W1B6VDJ3W11 블랙_245(38) 주식회사 에스에스지닷컴'</li><li>'(신세계강남점)금강 랜드로바 경량 컴포트 여성 워커 부츠 LANBOC4107WK1 240 신세계백화점'</li></ul> |
| 6.0 | <ul><li>'10mm 2중바닥 실내 슬리퍼 병원 거실 호텔 실내화 슬리퍼-타올천_고급-C_검정 주식회사 하루이'</li><li>'소프달링 남녀공용 뽀글이 스마일 털슬리퍼 여성 겨울 털실내화 VJ/왕스마일/옐로우_255 소프달링'</li><li>'소프달링 남녀공용 뽀글이 스마일 털슬리퍼 여성 겨울 털실내화 VJ/왕스마일/옐로우_245 소프달링'</li></ul> |
| 3.0 | <ul><li>'지안비토로씨 여성 마고 미드 부티 GIA36T75BLU18A1A00 EU 38.5 봉쥬르유럽'</li><li>'모다아울렛 121507 여성 7cm 깔끔 스틸레토 부티 구두 블랙k040_250 ◈217326053◈ MODA아울렛'</li><li>'미들부츠 미들힐 봄신상 워커 롱부츠 봄 가을신상 힐 블랙 245 바이포비'</li></ul> |
| 5.0 | <ul><li>'[공식판매] 버켄스탁 지제 에바 EVA 블랙 화이트 07 비트루트퍼플 키즈_220 (34) 좁은발볼 (Narrow) '</li><li>'eva 털슬리퍼 방한 방수 따듯한 털신 통굽 실내 화 기모 크로스오버 블랙M 소보로샵'</li><li>'크록스호환내피 털 탈부착 퍼 겨울 슬리퍼 안감 크림화이트(주니어)_C10-165(155~165) 인터코리아'</li></ul> |
| 7.0 | <ul><li>'[밸롭] 구름 브리즈 베이지 구름 브리즈 베이지245 (주)지티에스글로벌'</li><li>'[스텝100] 무지외반증 허리디스크 평발 신발 무릎 관절 중년 여성 운동화 화이트핑크플라워_235 스텝100'</li><li>'물컹슈즈 2.0 기능성 운동화 발편한 쿠션 운동화 무지외반증신발 족저근막염 물컹 업그레이드2.0_네이비_46(280mm) 주식회사 나인투식스'</li></ul> |
| 1.0 | <ul><li>'베라왕 스타일온에어 23SS 청 플랫폼 로퍼 80111682 G 667381 틸블루_230 DM ENG'</li><li>'[MUJI] 발수 발이 편한 스니커 머스터드 235mm 4550182676303 무인양품(주)'</li><li>'[반스(슈즈)]반스 어센틱 체커보드 스니커즈 (VN000W4NDI0) 4.240 롯데아이몰'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.6511 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_ac10")
# Run inference
preds = model("XDMNBTX0037 빅 사이즈 봄여름 블로퍼 고양이 액체설 블랙_265 푸른바다")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 3 | 10.504 | 21 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
| 2.0 | 50 |
| 3.0 | 50 |
| 4.0 | 50 |
| 5.0 | 50 |
| 6.0 | 50 |
| 7.0 | 50 |
| 8.0 | 50 |
| 9.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0127 | 1 | 0.4172 | - |
| 0.6329 | 50 | 0.3266 | - |
| 1.2658 | 100 | 0.1718 | - |
| 1.8987 | 150 | 0.095 | - |
| 2.5316 | 200 | 0.0257 | - |
| 3.1646 | 250 | 0.0142 | - |
| 3.7975 | 300 | 0.0026 | - |
| 4.4304 | 350 | 0.0164 | - |
| 5.0633 | 400 | 0.01 | - |
| 5.6962 | 450 | 0.0004 | - |
| 6.3291 | 500 | 0.0003 | - |
| 6.9620 | 550 | 0.0002 | - |
| 7.5949 | 600 | 0.0002 | - |
| 8.2278 | 650 | 0.0001 | - |
| 8.8608 | 700 | 0.0001 | - |
| 9.4937 | 750 | 0.0001 | - |
| 10.1266 | 800 | 0.0001 | - |
| 10.7595 | 850 | 0.0001 | - |
| 11.3924 | 900 | 0.0001 | - |
| 12.0253 | 950 | 0.0001 | - |
| 12.6582 | 1000 | 0.0001 | - |
| 13.2911 | 1050 | 0.0001 | - |
| 13.9241 | 1100 | 0.0001 | - |
| 14.5570 | 1150 | 0.0001 | - |
| 15.1899 | 1200 | 0.0001 | - |
| 15.8228 | 1250 | 0.0001 | - |
| 16.4557 | 1300 | 0.0001 | - |
| 17.0886 | 1350 | 0.0001 | - |
| 17.7215 | 1400 | 0.0001 | - |
| 18.3544 | 1450 | 0.0001 | - |
| 18.9873 | 1500 | 0.0001 | - |
| 19.6203 | 1550 | 0.0001 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k10_task3_organization_fold0 | MayBashendy | 2024-11-25T10:32:25Z | 182 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T10:27:27Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k10_task3_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k10_task3_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9510
- Qwk: 0.0
- Mse: 0.9510
- Rmse: 0.9752
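The Qwk/MSE/RMSE metrics suggest the classifier head predicts a numeric organization score. Assuming a single-output regression head, a minimal scoring sketch is shown below; the Arabic sentence is illustrative only and the single-logit assumption should be checked against the checkpoint config.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "MayBashendy/Arabic_FineTuningAraBERT_AugV5_k10_task3_organization_fold0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative Arabic essay snippet (not from the training data)
text = "تتكون المقالة من مقدمة وعرض وخاتمة مترابطة."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# With a single-output regression head, the raw logit is the predicted score
print(float(logits.squeeze()))
```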
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0385 | 2 | 4.2632 | -0.0072 | 4.2632 | 2.0647 |
| No log | 0.0769 | 4 | 2.2092 | -0.0722 | 2.2092 | 1.4863 |
| No log | 0.1154 | 6 | 1.1120 | -0.0927 | 1.1120 | 1.0545 |
| No log | 0.1538 | 8 | 1.1528 | 0.0530 | 1.1528 | 1.0737 |
| No log | 0.1923 | 10 | 1.3029 | -0.0296 | 1.3029 | 1.1415 |
| No log | 0.2308 | 12 | 1.2346 | -0.0296 | 1.2346 | 1.1111 |
| No log | 0.2692 | 14 | 1.1317 | -0.0296 | 1.1317 | 1.0638 |
| No log | 0.3077 | 16 | 1.1026 | -0.0732 | 1.1026 | 1.0500 |
| No log | 0.3462 | 18 | 1.0850 | 0.2029 | 1.0850 | 1.0417 |
| No log | 0.3846 | 20 | 1.2114 | 0.0833 | 1.2114 | 1.1006 |
| No log | 0.4231 | 22 | 1.3676 | 0.0 | 1.3676 | 1.1694 |
| No log | 0.4615 | 24 | 1.2144 | -0.0087 | 1.2144 | 1.1020 |
| No log | 0.5 | 26 | 1.1744 | -0.0565 | 1.1744 | 1.0837 |
| No log | 0.5385 | 28 | 1.2264 | -0.0185 | 1.2264 | 1.1075 |
| No log | 0.5769 | 30 | 1.2330 | -0.0185 | 1.2330 | 1.1104 |
| No log | 0.6154 | 32 | 1.1787 | -0.1379 | 1.1787 | 1.0857 |
| No log | 0.6538 | 34 | 1.1674 | -0.1579 | 1.1674 | 1.0805 |
| No log | 0.6923 | 36 | 1.2079 | -0.0185 | 1.2079 | 1.0991 |
| No log | 0.7308 | 38 | 1.2877 | 0.0 | 1.2877 | 1.1348 |
| No log | 0.7692 | 40 | 1.3044 | 0.0 | 1.3044 | 1.1421 |
| No log | 0.8077 | 42 | 1.4101 | 0.0 | 1.4101 | 1.1875 |
| No log | 0.8462 | 44 | 1.5019 | 0.0 | 1.5019 | 1.2255 |
| No log | 0.8846 | 46 | 1.3287 | 0.0 | 1.3287 | 1.1527 |
| No log | 0.9231 | 48 | 1.4012 | 0.0 | 1.4012 | 1.1837 |
| No log | 0.9615 | 50 | 1.3947 | -0.1048 | 1.3947 | 1.1810 |
| No log | 1.0 | 52 | 1.5866 | 0.0 | 1.5866 | 1.2596 |
| No log | 1.0385 | 54 | 1.6531 | 0.0 | 1.6531 | 1.2857 |
| No log | 1.0769 | 56 | 1.6339 | 0.0 | 1.6339 | 1.2782 |
| No log | 1.1154 | 58 | 1.7011 | 0.0 | 1.7011 | 1.3043 |
| No log | 1.1538 | 60 | 1.8476 | 0.0 | 1.8476 | 1.3593 |
| No log | 1.1923 | 62 | 1.7546 | 0.0 | 1.7546 | 1.3246 |
| No log | 1.2308 | 64 | 1.5127 | 0.0 | 1.5127 | 1.2299 |
| No log | 1.2692 | 66 | 1.2359 | -0.4756 | 1.2359 | 1.1117 |
| No log | 1.3077 | 68 | 1.0332 | 0.0 | 1.0332 | 1.0164 |
| No log | 1.3462 | 70 | 0.9250 | 0.2143 | 0.9250 | 0.9617 |
| No log | 1.3846 | 72 | 0.9677 | 0.3444 | 0.9677 | 0.9837 |
| No log | 1.4231 | 74 | 0.9284 | 0.2143 | 0.9284 | 0.9635 |
| No log | 1.4615 | 76 | 0.8917 | 0.0 | 0.8917 | 0.9443 |
| No log | 1.5 | 78 | 0.8932 | 0.0 | 0.8932 | 0.9451 |
| No log | 1.5385 | 80 | 0.8944 | 0.0 | 0.8944 | 0.9457 |
| No log | 1.5769 | 82 | 0.8920 | 0.2143 | 0.8920 | 0.9445 |
| No log | 1.6154 | 84 | 0.8837 | 0.0 | 0.8837 | 0.9401 |
| No log | 1.6538 | 86 | 0.8896 | 0.0 | 0.8896 | 0.9432 |
| No log | 1.6923 | 88 | 0.9527 | -0.1786 | 0.9527 | 0.9761 |
| No log | 1.7308 | 90 | 0.9744 | -0.1786 | 0.9744 | 0.9871 |
| No log | 1.7692 | 92 | 0.9385 | 0.2143 | 0.9385 | 0.9688 |
| No log | 1.8077 | 94 | 0.9884 | 0.3444 | 0.9884 | 0.9942 |
| No log | 1.8462 | 96 | 0.9427 | 0.3444 | 0.9427 | 0.9709 |
| No log | 1.8846 | 98 | 0.8367 | 0.2143 | 0.8367 | 0.9147 |
| No log | 1.9231 | 100 | 0.9757 | -0.1786 | 0.9757 | 0.9878 |
| No log | 1.9615 | 102 | 0.9657 | -0.1786 | 0.9657 | 0.9827 |
| No log | 2.0 | 104 | 0.7923 | 0.0 | 0.7923 | 0.8901 |
| No log | 2.0385 | 106 | 0.8280 | 0.3623 | 0.8280 | 0.9099 |
| No log | 2.0769 | 108 | 0.7973 | 0.384 | 0.7973 | 0.8929 |
| No log | 2.1154 | 110 | 0.7511 | 0.0 | 0.7511 | 0.8666 |
| No log | 2.1538 | 112 | 0.8135 | 0.0 | 0.8135 | 0.9019 |
| No log | 2.1923 | 114 | 0.8248 | 0.0 | 0.8248 | 0.9082 |
| No log | 2.2308 | 116 | 0.7810 | 0.0 | 0.7810 | 0.8838 |
| No log | 2.2692 | 118 | 0.8996 | 0.0 | 0.8996 | 0.9485 |
| No log | 2.3077 | 120 | 1.0392 | -0.1786 | 1.0392 | 1.0194 |
| No log | 2.3462 | 122 | 1.0409 | -0.1786 | 1.0409 | 1.0203 |
| No log | 2.3846 | 124 | 0.8657 | 0.0 | 0.8657 | 0.9304 |
| No log | 2.4231 | 126 | 0.8348 | 0.1951 | 0.8348 | 0.9136 |
| No log | 2.4615 | 128 | 0.9880 | 0.3164 | 0.9880 | 0.9940 |
| No log | 2.5 | 130 | 0.6816 | 0.3623 | 0.6816 | 0.8256 |
| No log | 2.5385 | 132 | 0.8098 | 0.0 | 0.8098 | 0.8999 |
| No log | 2.5769 | 134 | 1.0336 | -0.1786 | 1.0336 | 1.0166 |
| No log | 2.6154 | 136 | 1.0465 | -0.1786 | 1.0465 | 1.0230 |
| No log | 2.6538 | 138 | 0.9823 | 0.0 | 0.9823 | 0.9911 |
| No log | 2.6923 | 140 | 0.7761 | 0.0 | 0.7761 | 0.8810 |
| No log | 2.7308 | 142 | 0.7305 | 0.0 | 0.7305 | 0.8547 |
| No log | 2.7692 | 144 | 0.8030 | 0.0 | 0.8030 | 0.8961 |
| No log | 2.8077 | 146 | 0.9460 | -0.1786 | 0.9460 | 0.9726 |
| No log | 2.8462 | 148 | 1.0552 | -0.1786 | 1.0552 | 1.0272 |
| No log | 2.8846 | 150 | 1.0626 | -0.1786 | 1.0626 | 1.0308 |
| No log | 2.9231 | 152 | 0.9744 | -0.1786 | 0.9744 | 0.9871 |
| No log | 2.9615 | 154 | 0.9081 | -0.1786 | 0.9081 | 0.9530 |
| No log | 3.0 | 156 | 0.8509 | 0.0 | 0.8509 | 0.9225 |
| No log | 3.0385 | 158 | 0.8263 | 0.0 | 0.8263 | 0.9090 |
| No log | 3.0769 | 160 | 0.9130 | 0.0 | 0.9130 | 0.9555 |
| No log | 3.1154 | 162 | 1.0264 | -0.1786 | 1.0264 | 1.0131 |
| No log | 3.1538 | 164 | 1.0862 | -0.1786 | 1.0862 | 1.0422 |
| No log | 3.1923 | 166 | 1.0547 | -0.1786 | 1.0547 | 1.0270 |
| No log | 3.2308 | 168 | 0.9452 | 0.0 | 0.9452 | 0.9722 |
| No log | 3.2692 | 170 | 0.8473 | 0.0 | 0.8473 | 0.9205 |
| No log | 3.3077 | 172 | 0.8250 | 0.0 | 0.8250 | 0.9083 |
| No log | 3.3462 | 174 | 0.7903 | 0.2143 | 0.7903 | 0.8890 |
| No log | 3.3846 | 176 | 0.7898 | 0.2143 | 0.7898 | 0.8887 |
| No log | 3.4231 | 178 | 0.8224 | 0.2143 | 0.8224 | 0.9069 |
| No log | 3.4615 | 180 | 0.8364 | 0.2143 | 0.8364 | 0.9146 |
| No log | 3.5 | 182 | 0.8060 | 0.2143 | 0.8060 | 0.8978 |
| No log | 3.5385 | 184 | 0.7865 | 0.2143 | 0.7865 | 0.8869 |
| No log | 3.5769 | 186 | 0.7786 | 0.384 | 0.7786 | 0.8824 |
| No log | 3.6154 | 188 | 0.7435 | 0.2143 | 0.7435 | 0.8623 |
| No log | 3.6538 | 190 | 0.7972 | 0.2143 | 0.7972 | 0.8928 |
| No log | 3.6923 | 192 | 0.8453 | 0.0 | 0.8453 | 0.9194 |
| No log | 3.7308 | 194 | 0.8009 | 0.0 | 0.8009 | 0.8949 |
| No log | 3.7692 | 196 | 0.6832 | 0.2143 | 0.6832 | 0.8266 |
| No log | 3.8077 | 198 | 0.6603 | 0.1818 | 0.6603 | 0.8126 |
| No log | 3.8462 | 200 | 0.7464 | -0.0342 | 0.7464 | 0.8640 |
| No log | 3.8846 | 202 | 0.8869 | 0.1270 | 0.8869 | 0.9417 |
| No log | 3.9231 | 204 | 0.8835 | 0.1270 | 0.8835 | 0.9400 |
| No log | 3.9615 | 206 | 0.7576 | -0.0342 | 0.7576 | 0.8704 |
| No log | 4.0 | 208 | 0.7115 | -0.0185 | 0.7115 | 0.8435 |
| No log | 4.0385 | 210 | 0.7184 | 0.0 | 0.7184 | 0.8476 |
| No log | 4.0769 | 212 | 0.6715 | 0.0 | 0.6715 | 0.8194 |
| No log | 4.1154 | 214 | 0.6525 | 0.2143 | 0.6525 | 0.8077 |
| No log | 4.1538 | 216 | 0.6737 | 0.384 | 0.6737 | 0.8208 |
| No log | 4.1923 | 218 | 0.6641 | 0.2143 | 0.6641 | 0.8149 |
| No log | 4.2308 | 220 | 0.7274 | 0.0 | 0.7274 | 0.8529 |
| No log | 4.2692 | 222 | 0.8297 | -0.0185 | 0.8297 | 0.9109 |
| No log | 4.3077 | 224 | 0.8282 | -0.0185 | 0.8282 | 0.9101 |
| No log | 4.3462 | 226 | 0.7279 | 0.0 | 0.7279 | 0.8532 |
| No log | 4.3846 | 228 | 0.7431 | 0.384 | 0.7431 | 0.8621 |
| No log | 4.4231 | 230 | 0.8343 | 0.3623 | 0.8343 | 0.9134 |
| No log | 4.4615 | 232 | 0.7906 | 0.3623 | 0.7906 | 0.8892 |
| No log | 4.5 | 234 | 0.7653 | 0.2143 | 0.7653 | 0.8748 |
| No log | 4.5385 | 236 | 0.9808 | 0.0 | 0.9808 | 0.9904 |
| No log | 4.5769 | 238 | 1.0983 | 0.0 | 1.0983 | 1.0480 |
| No log | 4.6154 | 240 | 1.0580 | 0.0 | 1.0580 | 1.0286 |
| No log | 4.6538 | 242 | 0.8939 | -0.1786 | 0.8939 | 0.9454 |
| No log | 4.6923 | 244 | 0.7915 | 0.3623 | 0.7915 | 0.8897 |
| No log | 4.7308 | 246 | 0.9940 | 0.3293 | 0.9940 | 0.9970 |
| No log | 4.7692 | 248 | 0.9338 | 0.3293 | 0.9338 | 0.9663 |
| No log | 4.8077 | 250 | 0.7559 | 0.3623 | 0.7559 | 0.8694 |
| No log | 4.8462 | 252 | 0.8323 | 0.0 | 0.8323 | 0.9123 |
| No log | 4.8846 | 254 | 0.9465 | 0.0 | 0.9465 | 0.9729 |
| No log | 4.9231 | 256 | 0.9212 | 0.1852 | 0.9212 | 0.9598 |
| No log | 4.9615 | 258 | 0.7880 | 0.0 | 0.7880 | 0.8877 |
| No log | 5.0 | 260 | 0.7151 | 0.384 | 0.7151 | 0.8457 |
| No log | 5.0385 | 262 | 0.7989 | 0.3444 | 0.7989 | 0.8938 |
| No log | 5.0769 | 264 | 0.8778 | 0.3444 | 0.8778 | 0.9369 |
| No log | 5.1154 | 266 | 0.8384 | 0.3444 | 0.8384 | 0.9157 |
| No log | 5.1538 | 268 | 0.8340 | 0.3444 | 0.8340 | 0.9132 |
| No log | 5.1923 | 270 | 0.8031 | 0.3623 | 0.8031 | 0.8962 |
| No log | 5.2308 | 272 | 0.7635 | 0.2143 | 0.7635 | 0.8738 |
| No log | 5.2692 | 274 | 0.7753 | 0.0 | 0.7753 | 0.8805 |
| No log | 5.3077 | 276 | 0.7704 | 0.0 | 0.7704 | 0.8777 |
| No log | 5.3462 | 278 | 0.7631 | 0.384 | 0.7631 | 0.8735 |
| No log | 5.3846 | 280 | 0.7689 | 0.0 | 0.7689 | 0.8769 |
| No log | 5.4231 | 282 | 0.7717 | 0.2143 | 0.7717 | 0.8785 |
| No log | 5.4615 | 284 | 0.7766 | 0.0 | 0.7766 | 0.8813 |
| No log | 5.5 | 286 | 0.8101 | 0.0 | 0.8101 | 0.9001 |
| No log | 5.5385 | 288 | 0.8183 | 0.0 | 0.8183 | 0.9046 |
| No log | 5.5769 | 290 | 0.8205 | 0.0 | 0.8205 | 0.9058 |
| No log | 5.6154 | 292 | 0.9011 | 0.0 | 0.9011 | 0.9492 |
| No log | 5.6538 | 294 | 0.9587 | -0.0342 | 0.9587 | 0.9791 |
| No log | 5.6923 | 296 | 0.9472 | -0.0342 | 0.9472 | 0.9732 |
| No log | 5.7308 | 298 | 0.9640 | -0.1846 | 0.9640 | 0.9818 |
| No log | 5.7692 | 300 | 0.9874 | -0.1846 | 0.9874 | 0.9937 |
| No log | 5.8077 | 302 | 0.9874 | -0.1846 | 0.9874 | 0.9937 |
| No log | 5.8462 | 304 | 0.9633 | -0.1846 | 0.9633 | 0.9815 |
| No log | 5.8846 | 306 | 0.9723 | -0.1846 | 0.9723 | 0.9861 |
| No log | 5.9231 | 308 | 1.0185 | -0.1846 | 1.0185 | 1.0092 |
| No log | 5.9615 | 310 | 1.0376 | -0.1846 | 1.0376 | 1.0186 |
| No log | 6.0 | 312 | 0.9653 | -0.1846 | 0.9653 | 0.9825 |
| No log | 6.0385 | 314 | 0.8858 | -0.0342 | 0.8858 | 0.9411 |
| No log | 6.0769 | 316 | 0.8549 | -0.0342 | 0.8549 | 0.9246 |
| No log | 6.1154 | 318 | 0.8425 | -0.0342 | 0.8425 | 0.9179 |
| No log | 6.1538 | 320 | 0.8756 | -0.0342 | 0.8756 | 0.9357 |
| No log | 6.1923 | 322 | 0.8960 | -0.0342 | 0.8960 | 0.9466 |
| No log | 6.2308 | 324 | 0.9624 | -0.0342 | 0.9624 | 0.9810 |
| No log | 6.2692 | 326 | 0.9741 | -0.0342 | 0.9741 | 0.9870 |
| No log | 6.3077 | 328 | 0.9203 | -0.0342 | 0.9203 | 0.9593 |
| No log | 6.3462 | 330 | 0.8578 | -0.0342 | 0.8578 | 0.9262 |
| No log | 6.3846 | 332 | 0.8466 | -0.0342 | 0.8466 | 0.9201 |
| No log | 6.4231 | 334 | 0.8667 | -0.0342 | 0.8667 | 0.9309 |
| No log | 6.4615 | 336 | 0.9138 | -0.0342 | 0.9138 | 0.9560 |
| No log | 6.5 | 338 | 0.9664 | -0.1846 | 0.9664 | 0.9831 |
| No log | 6.5385 | 340 | 0.9971 | -0.1846 | 0.9971 | 0.9985 |
| No log | 6.5769 | 342 | 0.9708 | -0.1818 | 0.9708 | 0.9853 |
| No log | 6.6154 | 344 | 0.9358 | -0.1818 | 0.9358 | 0.9674 |
| No log | 6.6538 | 346 | 0.9181 | -0.1818 | 0.9181 | 0.9582 |
| No log | 6.6923 | 348 | 0.9178 | -0.1846 | 0.9178 | 0.9580 |
| No log | 6.7308 | 350 | 0.9534 | -0.1846 | 0.9534 | 0.9764 |
| No log | 6.7692 | 352 | 0.9497 | -0.1846 | 0.9497 | 0.9745 |
| No log | 6.8077 | 354 | 0.9163 | -0.0342 | 0.9163 | 0.9572 |
| No log | 6.8462 | 356 | 0.8615 | 0.1538 | 0.8615 | 0.9282 |
| No log | 6.8846 | 358 | 0.8553 | 0.2949 | 0.8553 | 0.9248 |
| No log | 6.9231 | 360 | 0.8522 | 0.2949 | 0.8522 | 0.9231 |
| No log | 6.9615 | 362 | 0.8416 | 0.1538 | 0.8416 | 0.9174 |
| No log | 7.0 | 364 | 0.8769 | 0.1538 | 0.8769 | 0.9364 |
| No log | 7.0385 | 366 | 0.9577 | -0.1846 | 0.9577 | 0.9786 |
| No log | 7.0769 | 368 | 0.9954 | -0.1846 | 0.9954 | 0.9977 |
| No log | 7.1154 | 370 | 0.9891 | -0.1846 | 0.9891 | 0.9945 |
| No log | 7.1538 | 372 | 0.9758 | -0.1846 | 0.9758 | 0.9878 |
| No log | 7.1923 | 374 | 0.9328 | -0.1846 | 0.9328 | 0.9658 |
| No log | 7.2308 | 376 | 0.9048 | 0.1538 | 0.9048 | 0.9512 |
| No log | 7.2692 | 378 | 0.8962 | 0.1818 | 0.8962 | 0.9467 |
| No log | 7.3077 | 380 | 0.8959 | -0.0185 | 0.8959 | 0.9465 |
| No log | 7.3462 | 382 | 0.9070 | -0.0185 | 0.9070 | 0.9524 |
| No log | 7.3846 | 384 | 0.9060 | -0.0185 | 0.9060 | 0.9519 |
| No log | 7.4231 | 386 | 0.8985 | -0.0185 | 0.8985 | 0.9479 |
| No log | 7.4615 | 388 | 0.8964 | -0.0185 | 0.8964 | 0.9468 |
| No log | 7.5 | 390 | 0.8881 | -0.0185 | 0.8881 | 0.9424 |
| No log | 7.5385 | 392 | 0.8665 | -0.0185 | 0.8665 | 0.9309 |
| No log | 7.5769 | 394 | 0.8550 | 0.1818 | 0.8550 | 0.9246 |
| No log | 7.6154 | 396 | 0.8598 | 0.384 | 0.8598 | 0.9273 |
| No log | 7.6538 | 398 | 0.8851 | 0.3623 | 0.8851 | 0.9408 |
| No log | 7.6923 | 400 | 0.9008 | 0.3623 | 0.9008 | 0.9491 |
| No log | 7.7308 | 402 | 0.9172 | 0.2143 | 0.9172 | 0.9577 |
| No log | 7.7692 | 404 | 0.9438 | 0.0320 | 0.9438 | 0.9715 |
| No log | 7.8077 | 406 | 0.9747 | 0.0149 | 0.9747 | 0.9873 |
| No log | 7.8462 | 408 | 0.9837 | 0.0149 | 0.9837 | 0.9918 |
| No log | 7.8846 | 410 | 0.9938 | 0.0149 | 0.9938 | 0.9969 |
| No log | 7.9231 | 412 | 1.0009 | 0.0149 | 1.0009 | 1.0004 |
| No log | 7.9615 | 414 | 0.9968 | 0.0149 | 0.9968 | 0.9984 |
| No log | 8.0 | 416 | 0.9684 | 0.0149 | 0.9684 | 0.9841 |
| No log | 8.0385 | 418 | 0.9514 | 0.0149 | 0.9514 | 0.9754 |
| No log | 8.0769 | 420 | 0.9629 | 0.0149 | 0.9629 | 0.9813 |
| No log | 8.1154 | 422 | 0.9967 | -0.1846 | 0.9967 | 0.9983 |
| No log | 8.1538 | 424 | 1.0087 | -0.1846 | 1.0087 | 1.0044 |
| No log | 8.1923 | 426 | 0.9948 | -0.1846 | 0.9948 | 0.9974 |
| No log | 8.2308 | 428 | 0.9713 | 0.0 | 0.9713 | 0.9855 |
| No log | 8.2692 | 430 | 0.9401 | 0.0 | 0.9401 | 0.9696 |
| No log | 8.3077 | 432 | 0.9542 | 0.0 | 0.9542 | 0.9768 |
| No log | 8.3462 | 434 | 0.9802 | 0.0 | 0.9802 | 0.9900 |
| No log | 8.3846 | 436 | 1.0188 | 0.0 | 1.0188 | 1.0093 |
| No log | 8.4231 | 438 | 1.0381 | 0.0 | 1.0381 | 1.0189 |
| No log | 8.4615 | 440 | 1.0257 | 0.0 | 1.0257 | 1.0127 |
| No log | 8.5 | 442 | 1.0011 | 0.0 | 1.0011 | 1.0005 |
| No log | 8.5385 | 444 | 0.9711 | 0.0 | 0.9711 | 0.9855 |
| No log | 8.5769 | 446 | 0.9626 | 0.0149 | 0.9626 | 0.9811 |
| No log | 8.6154 | 448 | 0.9686 | 0.0272 | 0.9686 | 0.9842 |
| No log | 8.6538 | 450 | 0.9648 | 0.0149 | 0.9648 | 0.9822 |
| No log | 8.6923 | 452 | 0.9600 | 0.0149 | 0.9600 | 0.9798 |
| No log | 8.7308 | 454 | 0.9643 | 0.0149 | 0.9643 | 0.9820 |
| No log | 8.7692 | 456 | 0.9719 | 0.0 | 0.9719 | 0.9858 |
| No log | 8.8077 | 458 | 0.9839 | 0.0 | 0.9839 | 0.9919 |
| No log | 8.8462 | 460 | 0.9901 | 0.0 | 0.9901 | 0.9950 |
| No log | 8.8846 | 462 | 0.9938 | 0.0 | 0.9938 | 0.9969 |
| No log | 8.9231 | 464 | 0.9945 | 0.0 | 0.9945 | 0.9973 |
| No log | 8.9615 | 466 | 0.9873 | 0.0149 | 0.9873 | 0.9936 |
| No log | 9.0 | 468 | 0.9816 | 0.0149 | 0.9816 | 0.9908 |
| No log | 9.0385 | 470 | 0.9796 | 0.0149 | 0.9796 | 0.9897 |
| No log | 9.0769 | 472 | 0.9746 | 0.0149 | 0.9746 | 0.9872 |
| No log | 9.1154 | 474 | 0.9676 | 0.0149 | 0.9676 | 0.9837 |
| No log | 9.1538 | 476 | 0.9625 | 0.0149 | 0.9625 | 0.9811 |
| No log | 9.1923 | 478 | 0.9601 | 0.0149 | 0.9601 | 0.9799 |
| No log | 9.2308 | 480 | 0.9600 | 0.0149 | 0.9600 | 0.9798 |
| No log | 9.2692 | 482 | 0.9600 | 0.0149 | 0.9600 | 0.9798 |
| No log | 9.3077 | 484 | 0.9565 | 0.0149 | 0.9565 | 0.9780 |
| No log | 9.3462 | 486 | 0.9530 | 0.0149 | 0.9530 | 0.9762 |
| No log | 9.3846 | 488 | 0.9514 | 0.0149 | 0.9514 | 0.9754 |
| No log | 9.4231 | 490 | 0.9517 | 0.0149 | 0.9517 | 0.9755 |
| No log | 9.4615 | 492 | 0.9520 | 0.0149 | 0.9520 | 0.9757 |
| No log | 9.5 | 494 | 0.9472 | 0.0149 | 0.9472 | 0.9733 |
| No log | 9.5385 | 496 | 0.9430 | 0.0149 | 0.9430 | 0.9711 |
| No log | 9.5769 | 498 | 0.9399 | 0.0149 | 0.9399 | 0.9695 |
| 0.3675 | 9.6154 | 500 | 0.9369 | 0.0149 | 0.9369 | 0.9679 |
| 0.3675 | 9.6538 | 502 | 0.9347 | 0.0149 | 0.9347 | 0.9668 |
| 0.3675 | 9.6923 | 504 | 0.9353 | 0.0149 | 0.9353 | 0.9671 |
| 0.3675 | 9.7308 | 506 | 0.9375 | 0.0 | 0.9375 | 0.9682 |
| 0.3675 | 9.7692 | 508 | 0.9385 | 0.0149 | 0.9385 | 0.9688 |
| 0.3675 | 9.8077 | 510 | 0.9405 | 0.0 | 0.9405 | 0.9698 |
| 0.3675 | 9.8462 | 512 | 0.9427 | 0.0 | 0.9427 | 0.9709 |
| 0.3675 | 9.8846 | 514 | 0.9466 | 0.0 | 0.9466 | 0.9729 |
| 0.3675 | 9.9231 | 516 | 0.9493 | 0.0 | 0.9493 | 0.9743 |
| 0.3675 | 9.9615 | 518 | 0.9505 | 0.0 | 0.9505 | 0.9749 |
| 0.3675 | 10.0 | 520 | 0.9510 | 0.0 | 0.9510 | 0.9752 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Generativ/naxos | Generativ | 2024-11-25T10:31:54Z | 41 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-11-25T09:33:09Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: naxos
---
# Naxos
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `naxos` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Generativ/naxos', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
AlexeyRyzhikov/donut-V2.0 | AlexeyRyzhikov | 2024-11-25T10:30:36Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-11-25T10:30:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AmberYifan/Mistral-7B-v0.1-sft-spin-10k | AmberYifan | 2024-11-25T10:29:45Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/mistral-v0.1-7b-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/mistral-v0.1-7b-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T08:59:08Z | ---
base_model: AmberYifan/mistral-v0.1-7b-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Mistral-7B-v0.1-sft-spin-10k
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Mistral-7B-v0.1-sft-spin-10k
This model is a fine-tuned version of [AmberYifan/mistral-v0.1-7b-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/mistral-v0.1-7b-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Mistral-7B-v0.1-sft-spin-10k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/lskvhomm)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.1.2+cu121
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
LakhyajitDas/my-fine-tuned-model | LakhyajitDas | 2024-11-25T10:29:41Z | 199 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-25T08:50:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k5_task3_organization_fold1 | MayBashendy | 2024-11-25T10:26:38Z | 160 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T10:23:27Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k5_task3_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k5_task3_organization_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9938
- Qwk: -0.0820
- Mse: 0.9938
- Rmse: 0.9969
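The Qwk, Mse, and Rmse figures above can be reproduced from model outputs with standard scikit-learn metrics. A hedged sketch; the label and prediction arrays are placeholders, and rounding continuous predictions to integer score bins before computing the quadratic weighted kappa is an assumption:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Placeholder arrays standing in for gold scores and model predictions
y_true = np.array([2, 3, 1, 2, 4])
y_pred = np.array([2.2, 2.8, 1.4, 2.0, 3.6])

mse = mean_squared_error(y_true, y_pred)
rmse = float(np.sqrt(mse))

# Quadratic weighted kappa (Qwk) is defined over discrete labels,
# so continuous predictions are rounded to the nearest score bin first
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")

print(f"Qwk: {qwk:.4f}  Mse: {mse:.4f}  Rmse: {rmse:.4f}")
```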
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0769 | 2 | 6.4882 | 0.0120 | 6.4882 | 2.5472 |
| No log | 0.1538 | 4 | 2.8615 | 0.0763 | 2.8615 | 1.6916 |
| No log | 0.2308 | 6 | 1.2489 | 0.0 | 1.2489 | 1.1175 |
| No log | 0.3077 | 8 | 1.2975 | 0.0120 | 1.2975 | 1.1391 |
| No log | 0.3846 | 10 | 1.4121 | 0.0253 | 1.4121 | 1.1883 |
| No log | 0.4615 | 12 | 1.0061 | 0.0253 | 1.0061 | 1.0030 |
| No log | 0.5385 | 14 | 0.7697 | 0.1879 | 0.7697 | 0.8773 |
| No log | 0.6154 | 16 | 1.3628 | 0.0 | 1.3628 | 1.1674 |
| No log | 0.6923 | 18 | 1.3278 | 0.0 | 1.3278 | 1.1523 |
| No log | 0.7692 | 20 | 1.0906 | 0.0 | 1.0906 | 1.0443 |
| No log | 0.8462 | 22 | 0.9415 | 0.0 | 0.9415 | 0.9703 |
| No log | 0.9231 | 24 | 0.8609 | 0.0763 | 0.8609 | 0.9278 |
| No log | 1.0 | 26 | 1.0114 | 0.0120 | 1.0114 | 1.0057 |
| No log | 1.0769 | 28 | 0.8698 | 0.0120 | 0.8698 | 0.9326 |
| No log | 1.1538 | 30 | 1.0945 | -0.1139 | 1.0945 | 1.0462 |
| No log | 1.2308 | 32 | 1.5711 | 0.0 | 1.5711 | 1.2534 |
| No log | 1.3077 | 34 | 1.4726 | 0.0 | 1.4726 | 1.2135 |
| No log | 1.3846 | 36 | 1.3741 | 0.0120 | 1.3741 | 1.1722 |
| No log | 1.4615 | 38 | 1.3272 | -0.1139 | 1.3272 | 1.1521 |
| No log | 1.5385 | 40 | 0.8637 | -0.4602 | 0.8637 | 0.9293 |
| No log | 1.6154 | 42 | 0.7004 | 0.0 | 0.7004 | 0.8369 |
| No log | 1.6923 | 44 | 0.7366 | 0.0 | 0.7366 | 0.8583 |
| No log | 1.7692 | 46 | 0.9411 | -0.4275 | 0.9411 | 0.9701 |
| No log | 1.8462 | 48 | 1.3252 | 0.0120 | 1.3252 | 1.1512 |
| No log | 1.9231 | 50 | 1.5053 | 0.0 | 1.5053 | 1.2269 |
| No log | 2.0 | 52 | 1.5209 | 0.0 | 1.5209 | 1.2332 |
| No log | 2.0769 | 54 | 1.3356 | 0.0 | 1.3356 | 1.1557 |
| No log | 2.1538 | 56 | 0.9869 | 0.0 | 0.9869 | 0.9934 |
| No log | 2.2308 | 58 | 0.8408 | 0.1646 | 0.8408 | 0.9170 |
| No log | 2.3077 | 60 | 0.8580 | 0.0 | 0.8580 | 0.9263 |
| No log | 2.3846 | 62 | 0.9236 | 0.0 | 0.9236 | 0.9611 |
| No log | 2.4615 | 64 | 0.9641 | 0.0120 | 0.9641 | 0.9819 |
| No log | 2.5385 | 66 | 0.9209 | -0.1139 | 0.9209 | 0.9596 |
| No log | 2.6154 | 68 | 0.8825 | -0.4426 | 0.8825 | 0.9394 |
| No log | 2.6923 | 70 | 0.8844 | -0.2692 | 0.8844 | 0.9404 |
| No log | 2.7692 | 72 | 0.7791 | -0.2791 | 0.7791 | 0.8827 |
| No log | 2.8462 | 74 | 0.7874 | -0.2791 | 0.7874 | 0.8873 |
| No log | 2.9231 | 76 | 0.9282 | -0.2791 | 0.9282 | 0.9634 |
| No log | 3.0 | 78 | 1.2708 | -0.4275 | 1.2708 | 1.1273 |
| No log | 3.0769 | 80 | 1.2567 | -0.4426 | 1.2567 | 1.1210 |
| No log | 3.1538 | 82 | 0.8678 | -0.2791 | 0.8678 | 0.9316 |
| No log | 3.2308 | 84 | 0.5034 | 0.4211 | 0.5034 | 0.7095 |
| No log | 3.3077 | 86 | 0.4776 | 0.4590 | 0.4776 | 0.6911 |
| No log | 3.3846 | 88 | 0.5597 | 0.2443 | 0.5597 | 0.7481 |
| No log | 3.4615 | 90 | 0.9531 | 0.1239 | 0.9531 | 0.9763 |
| No log | 3.5385 | 92 | 1.4142 | -0.2222 | 1.4142 | 1.1892 |
| No log | 3.6154 | 94 | 1.4356 | -0.1058 | 1.4356 | 1.1982 |
| No log | 3.6923 | 96 | 1.1902 | -0.1074 | 1.1902 | 1.0909 |
| No log | 3.7692 | 98 | 0.8565 | 0.1239 | 0.8565 | 0.9255 |
| No log | 3.8462 | 100 | 0.5664 | 0.1239 | 0.5664 | 0.7526 |
| No log | 3.9231 | 102 | 0.4910 | 0.4211 | 0.4910 | 0.7007 |
| No log | 4.0 | 104 | 0.6272 | -0.2791 | 0.6272 | 0.7920 |
| No log | 4.0769 | 106 | 0.8063 | -0.2692 | 0.8063 | 0.8979 |
| No log | 4.1538 | 108 | 0.9651 | -0.2692 | 0.9651 | 0.9824 |
| No log | 4.2308 | 110 | 1.1817 | -0.2595 | 1.1817 | 1.0871 |
| No log | 4.3077 | 112 | 1.2166 | -0.2571 | 1.2166 | 1.1030 |
| No log | 4.3846 | 114 | 1.0163 | -0.4602 | 1.0163 | 1.0081 |
| No log | 4.4615 | 116 | 0.9236 | -0.2692 | 0.9236 | 0.9611 |
| No log | 4.5385 | 118 | 0.9074 | -0.2692 | 0.9074 | 0.9526 |
| No log | 4.6154 | 120 | 0.8761 | -0.2692 | 0.8761 | 0.9360 |
| No log | 4.6923 | 122 | 0.9739 | -0.2595 | 0.9739 | 0.9869 |
| No log | 4.7692 | 124 | 1.0063 | -0.2571 | 1.0063 | 1.0032 |
| No log | 4.8462 | 126 | 1.0867 | 0.0403 | 1.0867 | 1.0425 |
| No log | 4.9231 | 128 | 1.0136 | 0.0403 | 1.0136 | 1.0068 |
| No log | 5.0 | 130 | 0.9020 | 0.2443 | 0.9020 | 0.9497 |
| No log | 5.0769 | 132 | 0.8408 | 0.2443 | 0.8408 | 0.9170 |
| No log | 5.1538 | 134 | 1.0709 | 0.0403 | 1.0709 | 1.0348 |
| No log | 5.2308 | 136 | 1.1822 | 0.0403 | 1.1822 | 1.0873 |
| No log | 5.3077 | 138 | 0.9413 | 0.0571 | 0.9413 | 0.9702 |
| No log | 5.3846 | 140 | 0.7544 | 0.1270 | 0.7544 | 0.8686 |
| No log | 5.4615 | 142 | 0.7863 | 0.2443 | 0.7863 | 0.8867 |
| No log | 5.5385 | 144 | 1.1251 | 0.0403 | 1.1251 | 1.0607 |
| No log | 5.6154 | 146 | 1.2707 | 0.0403 | 1.2707 | 1.1273 |
| No log | 5.6923 | 148 | 1.1481 | 0.0403 | 1.1481 | 1.0715 |
| No log | 5.7692 | 150 | 1.1328 | 0.0403 | 1.1328 | 1.0644 |
| No log | 5.8462 | 152 | 1.2027 | 0.0403 | 1.2027 | 1.0967 |
| No log | 5.9231 | 154 | 1.2689 | 0.0403 | 1.2689 | 1.1265 |
| No log | 6.0 | 156 | 1.2094 | 0.0403 | 1.2094 | 1.0997 |
| No log | 6.0769 | 158 | 0.9554 | 0.0403 | 0.9554 | 0.9774 |
| No log | 6.1538 | 160 | 0.7977 | 0.2443 | 0.7977 | 0.8932 |
| No log | 6.2308 | 162 | 0.8738 | 0.0403 | 0.8738 | 0.9348 |
| No log | 6.3077 | 164 | 0.9806 | 0.0403 | 0.9806 | 0.9903 |
| No log | 6.3846 | 166 | 0.9343 | 0.0403 | 0.9343 | 0.9666 |
| No log | 6.4615 | 168 | 0.9511 | 0.0403 | 0.9511 | 0.9753 |
| No log | 6.5385 | 170 | 1.0951 | 0.0403 | 1.0951 | 1.0465 |
| No log | 6.6154 | 172 | 1.2844 | 0.0403 | 1.2844 | 1.1333 |
| No log | 6.6923 | 174 | 1.1801 | 0.0403 | 1.1801 | 1.0863 |
| No log | 6.7692 | 176 | 0.9290 | 0.2443 | 0.9290 | 0.9638 |
| No log | 6.8462 | 178 | 0.8739 | -0.0820 | 0.8739 | 0.9348 |
| No log | 6.9231 | 180 | 0.9441 | -0.0820 | 0.9441 | 0.9716 |
| No log | 7.0 | 182 | 1.1352 | 0.0571 | 1.1352 | 1.0655 |
| No log | 7.0769 | 184 | 1.2470 | 0.0403 | 1.2470 | 1.1167 |
| No log | 7.1538 | 186 | 1.1198 | -0.2595 | 1.1198 | 1.0582 |
| No log | 7.2308 | 188 | 0.9832 | -0.0820 | 0.9832 | 0.9916 |
| No log | 7.3077 | 190 | 0.8924 | -0.2692 | 0.8924 | 0.9447 |
| No log | 7.3846 | 192 | 0.9152 | -0.2692 | 0.9152 | 0.9566 |
| No log | 7.4615 | 194 | 1.0180 | -0.0708 | 1.0180 | 1.0090 |
| No log | 7.5385 | 196 | 1.1520 | -0.2595 | 1.1520 | 1.0733 |
| No log | 7.6154 | 198 | 1.1170 | -0.2595 | 1.1170 | 1.0569 |
| No log | 7.6923 | 200 | 1.0034 | -0.0708 | 1.0034 | 1.0017 |
| No log | 7.7692 | 202 | 0.9057 | -0.0708 | 0.9057 | 0.9517 |
| No log | 7.8462 | 204 | 0.9190 | -0.0708 | 0.9190 | 0.9587 |
| No log | 7.9231 | 206 | 0.9038 | -0.0708 | 0.9038 | 0.9507 |
| No log | 8.0 | 208 | 0.8591 | -0.0708 | 0.8591 | 0.9269 |
| No log | 8.0769 | 210 | 0.8657 | -0.0820 | 0.8657 | 0.9304 |
| No log | 8.1538 | 212 | 0.9487 | -0.0820 | 0.9487 | 0.9740 |
| No log | 8.2308 | 214 | 1.0921 | 0.0403 | 1.0921 | 1.0450 |
| No log | 8.3077 | 216 | 1.1481 | 0.0403 | 1.1481 | 1.0715 |
| No log | 8.3846 | 218 | 1.0795 | 0.0403 | 1.0795 | 1.0390 |
| No log | 8.4615 | 220 | 0.9810 | -0.0820 | 0.9810 | 0.9905 |
| No log | 8.5385 | 222 | 0.8751 | -0.0820 | 0.8751 | 0.9355 |
| No log | 8.6154 | 224 | 0.8409 | -0.0820 | 0.8409 | 0.9170 |
| No log | 8.6923 | 226 | 0.8786 | -0.0820 | 0.8786 | 0.9373 |
| No log | 8.7692 | 228 | 0.9480 | -0.0820 | 0.9480 | 0.9736 |
| No log | 8.8462 | 230 | 1.0155 | -0.0820 | 1.0155 | 1.0077 |
| No log | 8.9231 | 232 | 1.0158 | -0.0820 | 1.0158 | 1.0079 |
| No log | 9.0 | 234 | 1.0166 | -0.0820 | 1.0166 | 1.0083 |
| No log | 9.0769 | 236 | 1.0048 | -0.0820 | 1.0048 | 1.0024 |
| No log | 9.1538 | 238 | 1.0121 | -0.0820 | 1.0121 | 1.0060 |
| No log | 9.2308 | 240 | 1.0128 | -0.0820 | 1.0128 | 1.0064 |
| No log | 9.3077 | 242 | 1.0097 | -0.0820 | 1.0097 | 1.0048 |
| No log | 9.3846 | 244 | 1.0094 | -0.0820 | 1.0094 | 1.0047 |
| No log | 9.4615 | 246 | 1.0006 | -0.0820 | 1.0006 | 1.0003 |
| No log | 9.5385 | 248 | 1.0092 | -0.0820 | 1.0092 | 1.0046 |
| No log | 9.6154 | 250 | 1.0080 | -0.0820 | 1.0080 | 1.0040 |
| No log | 9.6923 | 252 | 1.0038 | -0.0820 | 1.0038 | 1.0019 |
| No log | 9.7692 | 254 | 0.9971 | -0.0820 | 0.9971 | 0.9985 |
| No log | 9.8462 | 256 | 0.9948 | -0.0820 | 0.9948 | 0.9974 |
| No log | 9.9231 | 258 | 0.9944 | -0.0820 | 0.9944 | 0.9972 |
| No log | 10.0 | 260 | 0.9938 | -0.0820 | 0.9938 | 0.9969 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
createveai/Qwen2-VL-2B-Instruct-abliterated-4bit | createveai | 2024-11-25T10:26:22Z | 95 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:huihui-ai/Qwen2-VL-2B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2-VL-2B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | 2024-11-25T10:23:53Z | ---
base_model: huihui-ai/Qwen2-VL-2B-Instruct-abliterated
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** createveai
- **License:** apache-2.0
- **Finetuned from model:** huihui-ai/Qwen2-VL-2B-Instruct-abliterated
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
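A minimal loading sketch, assuming the checkpoint is used directly with `transformers` and `bitsandbytes` (the 4-bit weights are already stored in the repository, so no extra quantization config should be needed; a CUDA GPU is required for the 4-bit kernels, and the chat prompt is illustrative):

```python
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "createveai/Qwen2-VL-2B-Instruct-abliterated-4bit"

# The 4-bit bitsandbytes quantization is picked up from the repo's config
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

# Illustrative text-only turn; image entries can be added to the content list
messages = [
    {"role": "user", "content": [{"type": "text", "text": "What can a vision-language model do?"}]}
]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0])
```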
|
July-Tokyo/xlm-roberta-base-finetuned-panx-de-fr | July-Tokyo | 2024-11-25T10:19:28Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-25T10:03:14Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1622
- F1: 0.8617
## Model description
More information needed
## Intended uses & limitations
More information needed
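A minimal inference sketch, assuming the checkpoint is used for German/French named-entity recognition (PAN-X style LOC/PER/ORG tags) via the token-classification pipeline; the example sentence is illustrative:

```python
from transformers import pipeline

model_id = "July-Tokyo/xlm-roberta-base-finetuned-panx-de-fr"

# Aggregate sub-word predictions into whole entity spans
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

# Illustrative German sentence
for entity in ner("Jeff Dean arbeitet bei Google in Zürich."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```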
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2816 | 1.0 | 715 | 0.1804 | 0.8239 |
| 0.1473 | 2.0 | 1430 | 0.1610 | 0.8491 |
| 0.0934 | 3.0 | 2145 | 0.1622 | 0.8617 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.1
|
mradermacher/Marco-01-slerp4-7B-GGUF | mradermacher | 2024-11-25T10:18:24Z | 31 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:allknowingroger/Marco-01-slerp4-7B",
"base_model:quantized:allknowingroger/Marco-01-slerp4-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-25T09:31:46Z | ---
base_model: allknowingroger/Marco-01-slerp4-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/allknowingroger/Marco-01-slerp4-7B
<!-- provided-files -->
Weighted/imatrix quants are not available from me at this time. If they do not show up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
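For a programmatic route, a hedged sketch using `llama-cpp-python` (any GGUF-capable runtime works just as well; the file name refers to the Q4_K_M quant listed below, and the context size and sampling settings are illustrative):

```python
from llama_cpp import Llama

# Load one of the GGUF quants from this repo
# (download the file first, e.g. with `huggingface-cli download`)
llm = Llama(
    model_path="Marco-01-slerp4-7B.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; set 0 for CPU only
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-sentence summary of merged language models."}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```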
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp4-7B-GGUF/resolve/main/Marco-01-slerp4-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp4-7B-GGUF/resolve/main/Marco-01-slerp4-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp4-7B-GGUF/resolve/main/Marco-01-slerp4-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp4-7B-GGUF/resolve/main/Marco-01-slerp4-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp4-7B-GGUF/resolve/main/Marco-01-slerp4-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp4-7B-GGUF/resolve/main/Marco-01-slerp4-7B.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp4-7B-GGUF/resolve/main/Marco-01-slerp4-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp4-7B-GGUF/resolve/main/Marco-01-slerp4-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp4-7B-GGUF/resolve/main/Marco-01-slerp4-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp4-7B-GGUF/resolve/main/Marco-01-slerp4-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp4-7B-GGUF/resolve/main/Marco-01-slerp4-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp4-7B-GGUF/resolve/main/Marco-01-slerp4-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Marco-01-slerp4-7B-GGUF/resolve/main/Marco-01-slerp4-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MU-NLPC/CzeGPT-2 | MU-NLPC | 2024-11-25T10:16:57Z | 48,106 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"cs",
"dataset:csTenTen17",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-28T11:50:42Z | ---
language: cs
license: cc-by-nc-sa-4.0
datasets:
- csTenTen17
---
# CzeGPT-2
CzeGPT-2 is a Czech version of OpenAI's GPT-2 language model with an LM head on top. The model has the same architectural dimensions as GPT-2 small (12 layers, 12 heads, 1024 tokens of input/output context, and 768-dimensional embedding vectors), resulting in 124M trainable parameters. It was trained on a 5 GB slice of the cleaned csTenTen17 dataset.
The model is a good building block for any downstream task requiring autoregressive text generation.
# Tokenizer
Alongside the model, we also provide the tokenizer (vocab and merges) used during the pre-training phase, with a vocabulary size of 50,257. It is the byte-level BPE tokenizer used in the original paper and was trained on the whole 5 GB training set.
# Training results
The model's perplexity on a 250 MB random slice of the csTenTen17 dataset is **42.12**. This value is unfortunately not directly comparable to any other model, since there are no competing Czech autoregressive models yet (and comparisons with models for other languages are not meaningful because of differences in tokenization and test data).
# Running the predictions
The repository includes a simple Jupyter Notebook that can help with the first steps when using the model.
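If you prefer a quick start outside the notebook, a minimal generation sketch with `transformers`; the Czech prompt and sampling settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MU-NLPC/CzeGPT-2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Continue an illustrative Czech prompt
prompt = "Brno je město, které"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```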
## How to cite
Hájek A. and Horák A. *CzeGPT-2 – Training New Model for Czech Generative Text Processing Evaluated with the Summarization Task*.
IEEE Access, vol. 12, pp. 34570–34581, IEEE, 2024. https://doi.org/10.1109/ACCESS.2024.3371689
```bibtex
@article{hajek_horak2024,
  author  = "Adam Hájek and Aleš Horák",
  title   = "CzeGPT-2 -- Training New Model for Czech Generative Text Processing Evaluated with the Summarization Task",
  journal = "IEEE Access",
  year    = "2024",
  volume  = "12",
  pages   = "34570--34581",
  doi     = "10.1109/ACCESS.2024.3371689",
}
``` |
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k4_task3_organization_fold0 | MayBashendy | 2024-11-25T10:16:32Z | 161 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-25T10:13:39Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k4_task3_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k4_task3_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1975
- Qwk: -0.2375
- Mse: 1.1975
- Rmse: 1.0943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
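The hyperparameters above correspond roughly to the following `TrainingArguments` configuration; a hedged sketch, since the actual training script is not included with the card and the output directory is a placeholder:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder
training_args = TrainingArguments(
    output_dir="arabert-task3-organization-fold0",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```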
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.08 | 2 | 4.6299 | -0.0034 | 4.6299 | 2.1517 |
| No log | 0.16 | 4 | 2.8190 | -0.0577 | 2.8190 | 1.6790 |
| No log | 0.24 | 6 | 1.2810 | 0.2870 | 1.2810 | 1.1318 |
| No log | 0.32 | 8 | 1.2064 | 0.1852 | 1.2064 | 1.0984 |
| No log | 0.4 | 10 | 1.3418 | 0.0873 | 1.3418 | 1.1584 |
| No log | 0.48 | 12 | 1.4324 | 0.0873 | 1.4324 | 1.1968 |
| No log | 0.56 | 14 | 1.4271 | 0.0873 | 1.4271 | 1.1946 |
| No log | 0.64 | 16 | 1.2683 | 0.0788 | 1.2683 | 1.1262 |
| No log | 0.72 | 18 | 1.2650 | 0.0788 | 1.2650 | 1.1247 |
| No log | 0.8 | 20 | 1.2037 | 0.0788 | 1.2037 | 1.0971 |
| No log | 0.88 | 22 | 1.2983 | 0.0833 | 1.2983 | 1.1394 |
| No log | 0.96 | 24 | 1.3221 | 0.0873 | 1.3221 | 1.1498 |
| No log | 1.04 | 26 | 1.1991 | 0.0833 | 1.1991 | 1.0951 |
| No log | 1.12 | 28 | 1.0703 | 0.0737 | 1.0703 | 1.0345 |
| No log | 1.2 | 30 | 0.9290 | 0.2029 | 0.9290 | 0.9638 |
| No log | 1.28 | 32 | 1.0283 | 0.1987 | 1.0283 | 1.0140 |
| No log | 1.3600 | 34 | 1.1382 | -0.0421 | 1.1382 | 1.0669 |
| No log | 1.44 | 36 | 1.0924 | -0.0732 | 1.0924 | 1.0452 |
| No log | 1.52 | 38 | 1.0098 | 0.2143 | 1.0098 | 1.0049 |
| No log | 1.6 | 40 | 1.0563 | 0.0530 | 1.0563 | 1.0278 |
| No log | 1.6800 | 42 | 1.1795 | 0.0737 | 1.1795 | 1.0861 |
| No log | 1.76 | 44 | 1.1959 | 0.0737 | 1.1959 | 1.0936 |
| No log | 1.8400 | 46 | 1.1819 | 0.0788 | 1.1819 | 1.0872 |
| No log | 1.92 | 48 | 1.1432 | -0.0421 | 1.1432 | 1.0692 |
| No log | 2.0 | 50 | 1.1489 | -0.0421 | 1.1489 | 1.0719 |
| No log | 2.08 | 52 | 1.2008 | -0.0565 | 1.2008 | 1.0958 |
| No log | 2.16 | 54 | 1.1576 | -0.0565 | 1.1576 | 1.0759 |
| No log | 2.24 | 56 | 1.0946 | -0.0732 | 1.0946 | 1.0462 |
| No log | 2.32 | 58 | 1.1192 | -0.0732 | 1.1192 | 1.0579 |
| No log | 2.4 | 60 | 1.1440 | -0.0732 | 1.1440 | 1.0696 |
| No log | 2.48 | 62 | 1.1855 | -0.0732 | 1.1855 | 1.0888 |
| No log | 2.56 | 64 | 1.0275 | 0.0320 | 1.0275 | 1.0137 |
| No log | 2.64 | 66 | 0.9542 | 0.0 | 0.9542 | 0.9769 |
| No log | 2.7200 | 68 | 0.9395 | 0.0 | 0.9395 | 0.9693 |
| No log | 2.8 | 70 | 0.9143 | 0.0 | 0.9143 | 0.9562 |
| No log | 2.88 | 72 | 0.8535 | 0.2143 | 0.8535 | 0.9239 |
| No log | 2.96 | 74 | 0.8774 | 0.384 | 0.8774 | 0.9367 |
| No log | 3.04 | 76 | 0.8699 | 0.2143 | 0.8699 | 0.9327 |
| No log | 3.12 | 78 | 0.8539 | 0.0 | 0.8539 | 0.9241 |
| No log | 3.2 | 80 | 0.8294 | 0.0 | 0.8294 | 0.9107 |
| No log | 3.2800 | 82 | 0.8159 | 0.0 | 0.8159 | 0.9033 |
| No log | 3.36 | 84 | 0.8802 | 0.2143 | 0.8802 | 0.9382 |
| No log | 3.44 | 86 | 1.1040 | -0.0732 | 1.1040 | 1.0507 |
| No log | 3.52 | 88 | 1.1471 | -0.0565 | 1.1471 | 1.0710 |
| No log | 3.6 | 90 | 0.9672 | 0.1987 | 0.9672 | 0.9835 |
| No log | 3.68 | 92 | 0.8477 | 0.2143 | 0.8477 | 0.9207 |
| No log | 3.76 | 94 | 0.8363 | 0.2143 | 0.8363 | 0.9145 |
| No log | 3.84 | 96 | 0.8445 | 0.2143 | 0.8445 | 0.9190 |
| No log | 3.92 | 98 | 0.8871 | 0.3636 | 0.8871 | 0.9419 |
| No log | 4.0 | 100 | 0.9312 | 0.3231 | 0.9312 | 0.9650 |
| No log | 4.08 | 102 | 0.9101 | 0.3231 | 0.9101 | 0.9540 |
| No log | 4.16 | 104 | 0.9171 | 0.1295 | 0.9171 | 0.9577 |
| No log | 4.24 | 106 | 0.9626 | 0.3231 | 0.9626 | 0.9811 |
| No log | 4.32 | 108 | 1.0551 | 0.1538 | 1.0551 | 1.0272 |
| No log | 4.4 | 110 | 1.2065 | -0.0809 | 1.2065 | 1.0984 |
| No log | 4.48 | 112 | 1.2999 | -0.0645 | 1.2999 | 1.1401 |
| No log | 4.5600 | 114 | 1.3993 | -0.0645 | 1.3993 | 1.1829 |
| No log | 4.64 | 116 | 1.3760 | -0.0645 | 1.3760 | 1.1730 |
| No log | 4.72 | 118 | 1.2322 | -0.0809 | 1.2322 | 1.1101 |
| No log | 4.8 | 120 | 1.1092 | 0.0272 | 1.1092 | 1.0532 |
| No log | 4.88 | 122 | 1.1002 | 0.0272 | 1.1002 | 1.0489 |
| No log | 4.96 | 124 | 1.1456 | 0.0272 | 1.1456 | 1.0703 |
| No log | 5.04 | 126 | 1.3237 | -0.0645 | 1.3237 | 1.1505 |
| No log | 5.12 | 128 | 1.2705 | -0.0645 | 1.2705 | 1.1272 |
| No log | 5.2 | 130 | 1.0929 | 0.0435 | 1.0929 | 1.0454 |
| No log | 5.28 | 132 | 1.0678 | 0.0320 | 1.0678 | 1.0333 |
| No log | 5.36 | 134 | 1.1337 | 0.0435 | 1.1337 | 1.0648 |
| No log | 5.44 | 136 | 1.0999 | 0.0320 | 1.0999 | 1.0488 |
| No log | 5.52 | 138 | 1.0237 | 0.0320 | 1.0237 | 1.0118 |
| No log | 5.6 | 140 | 1.0149 | 0.0320 | 1.0149 | 1.0074 |
| No log | 5.68 | 142 | 1.0536 | 0.0320 | 1.0536 | 1.0264 |
| No log | 5.76 | 144 | 1.1730 | 0.0435 | 1.1730 | 1.0831 |
| No log | 5.84 | 146 | 1.2612 | -0.0732 | 1.2612 | 1.1230 |
| No log | 5.92 | 148 | 1.3168 | -0.0732 | 1.3168 | 1.1475 |
| No log | 6.0 | 150 | 1.2701 | -0.2384 | 1.2701 | 1.1270 |
| No log | 6.08 | 152 | 1.3025 | -0.2375 | 1.3025 | 1.1413 |
| No log | 6.16 | 154 | 1.2760 | -0.1065 | 1.2760 | 1.1296 |
| No log | 6.24 | 156 | 1.2807 | -0.1065 | 1.2807 | 1.1317 |
| No log | 6.32 | 158 | 1.3331 | -0.1065 | 1.3331 | 1.1546 |
| No log | 6.4 | 160 | 1.4417 | 0.0410 | 1.4417 | 1.2007 |
| No log | 6.48 | 162 | 1.5828 | 0.0410 | 1.5828 | 1.2581 |
| No log | 6.5600 | 164 | 1.6016 | 0.0410 | 1.6016 | 1.2655 |
| No log | 6.64 | 166 | 1.6733 | 0.0410 | 1.6733 | 1.2936 |
| No log | 6.72 | 168 | 1.6667 | 0.0410 | 1.6667 | 1.2910 |
| No log | 6.8 | 170 | 1.5033 | 0.0410 | 1.5033 | 1.2261 |
| No log | 6.88 | 172 | 1.3823 | 0.0330 | 1.3823 | 1.1757 |
| No log | 6.96 | 174 | 1.2780 | 0.1538 | 1.2780 | 1.1305 |
| No log | 7.04 | 176 | 1.1893 | 0.1316 | 1.1893 | 1.0905 |
| No log | 7.12 | 178 | 1.1873 | 0.0149 | 1.1873 | 1.0896 |
| No log | 7.2 | 180 | 1.2640 | 0.0272 | 1.2640 | 1.1243 |
| No log | 7.28 | 182 | 1.3310 | -0.0809 | 1.3310 | 1.1537 |
| No log | 7.36 | 184 | 1.2932 | -0.0732 | 1.2932 | 1.1372 |
| No log | 7.44 | 186 | 1.2558 | -0.2384 | 1.2558 | 1.1206 |
| No log | 7.52 | 188 | 1.1725 | 0.0149 | 1.1725 | 1.0828 |
| No log | 7.6 | 190 | 1.1119 | 0.0149 | 1.1119 | 1.0545 |
| No log | 7.68 | 192 | 1.0927 | 0.0149 | 1.0927 | 1.0453 |
| No log | 7.76 | 194 | 1.1083 | 0.0149 | 1.1083 | 1.0528 |
| No log | 7.84 | 196 | 1.1735 | 0.0272 | 1.1735 | 1.0833 |
| No log | 7.92 | 198 | 1.2644 | -0.2384 | 1.2644 | 1.1244 |
| No log | 8.0 | 200 | 1.3790 | -0.0732 | 1.3790 | 1.1743 |
| No log | 8.08 | 202 | 1.4414 | -0.0565 | 1.4414 | 1.2006 |
| No log | 8.16 | 204 | 1.4324 | -0.0732 | 1.4324 | 1.1968 |
| No log | 8.24 | 206 | 1.4515 | -0.0565 | 1.4515 | 1.2048 |
| No log | 8.32 | 208 | 1.4075 | -0.0732 | 1.4075 | 1.1864 |
| No log | 8.4 | 210 | 1.3814 | -0.0732 | 1.3814 | 1.1753 |
| No log | 8.48 | 212 | 1.3075 | -0.2375 | 1.3075 | 1.1435 |
| No log | 8.56 | 214 | 1.2362 | -0.2375 | 1.2362 | 1.1119 |
| No log | 8.64 | 216 | 1.1800 | 0.0272 | 1.1800 | 1.0863 |
| No log | 8.72 | 218 | 1.1314 | 0.0149 | 1.1314 | 1.0637 |
| No log | 8.8 | 220 | 1.0773 | 0.0149 | 1.0773 | 1.0379 |
| No log | 8.88 | 222 | 1.0475 | 0.0149 | 1.0475 | 1.0235 |
| No log | 8.96 | 224 | 1.0364 | 0.0149 | 1.0364 | 1.0181 |
| No log | 9.04 | 226 | 1.0471 | 0.0149 | 1.0471 | 1.0233 |
| No log | 9.12 | 228 | 1.0754 | 0.0149 | 1.0754 | 1.0370 |
| No log | 9.2 | 230 | 1.1052 | 0.0149 | 1.1052 | 1.0513 |
| No log | 9.28 | 232 | 1.1253 | 0.0149 | 1.1253 | 1.0608 |
| No log | 9.36 | 234 | 1.1432 | 0.0149 | 1.1432 | 1.0692 |
| No log | 9.44 | 236 | 1.1660 | 0.0149 | 1.1660 | 1.0798 |
| No log | 9.52 | 238 | 1.1851 | -0.2375 | 1.1851 | 1.0886 |
| No log | 9.6 | 240 | 1.1974 | -0.2375 | 1.1974 | 1.0943 |
| No log | 9.68 | 242 | 1.1972 | -0.2375 | 1.1972 | 1.0942 |
| No log | 9.76 | 244 | 1.1961 | -0.2375 | 1.1961 | 1.0936 |
| No log | 9.84 | 246 | 1.1964 | -0.2375 | 1.1964 | 1.0938 |
| No log | 9.92 | 248 | 1.1980 | -0.2375 | 1.1980 | 1.0945 |
| No log | 10.0 | 250 | 1.1975 | -0.2375 | 1.1975 | 1.0943 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|