| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
lesso10/dbaf1184-5899-4d20-964a-33ecb54ce416
|
lesso10
| 2025-01-15T16:22:26Z
| 14
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:adapter:NovaSearch/stella_en_1.5B_v5",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T16:12:57Z
|
---
library_name: peft
license: mit
base_model: dunzhang/stella_en_1.5B_v5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dbaf1184-5899-4d20-964a-33ecb54ce416
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dunzhang/stella_en_1.5B_v5
bf16: true
chat_template: llama3
datasets:
- data_files:
- c17c59740c4fc07c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c17c59740c4fc07c_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso10/dbaf1184-5899-4d20-964a-33ecb54ce416
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/c17c59740c4fc07c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 21989a9f-7539-4956-94ad-ee9bd5631eb8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 21989a9f-7539-4956-94ad-ee9bd5631eb8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dbaf1184-5899-4d20-964a-33ecb54ce416
This model is a fine-tuned version of [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
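This repository contains only a PEFT LoRA adapter, so it has to be attached to the base model before use. Below is a minimal sketch; it assumes the base model id from the config above (dunzhang/stella_en_1.5B_v5), that the adapter loads with `PeftModel.from_pretrained`, and that `trust_remote_code=True` is acceptable, as in the training config.
```python
# Minimal sketch (assumptions noted above): attach this LoRA adapter to its base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "dunzhang/stella_en_1.5B_v5"
adapter_id = "lesso10/dbaf1184-5899-4d20-964a-33ecb54ce416"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # load the adapter weights

inputs = tokenizer("Hello", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Note that the reported evaluation loss is `nan`, so the adapter should be checked for usefulness before relying on it.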
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0003 | 1 | nan |
| 0.0 | 0.0017 | 5 | nan |
| 0.0 | 0.0034 | 10 | nan |
| 0.0 | 0.0051 | 15 | nan |
| 0.0 | 0.0068 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
JacksonBrune/9d3840b2-d488-4fbf-97e3-b030d9e3517c
|
JacksonBrune
| 2025-01-15T16:19:37Z
| 9
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"base_model:adapter:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"region:us"
] | null | 2025-01-15T15:50:19Z
|
---
library_name: peft
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d3840b2-d488-4fbf-97e3-b030d9e3517c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6e6190eb26c4eb66_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6e6190eb26c4eb66_train_data.json
type:
field_input: user_input
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/9d3840b2-d488-4fbf-97e3-b030d9e3517c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/6e6190eb26c4eb66_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22a7e7a8-f3c8-4b2e-bae8-382fa9d6d294
wandb_project: birthdya-sn56-18-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22a7e7a8-f3c8-4b2e-bae8-382fa9d6d294
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9d3840b2-d488-4fbf-97e3-b030d9e3517c
This model is a fine-tuned version of [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.145 | 0.0001 | 1 | 2.5498 |
| 2.7497 | 0.0002 | 3 | 2.5405 |
| 1.9838 | 0.0003 | 6 | 2.3407 |
| 2.057 | 0.0005 | 9 | 1.8708 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
antonvinny/whisper-tiny-gs
|
antonvinny
| 2025-01-15T16:18:37Z
| 32
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"onnx",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"base_model:openai/whisper-small",
"base_model:quantized:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-01-15T15:02:21Z
|
---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-gs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-gs
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Wer: 2.4950
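For a quick sanity check, a short inference sketch using the 🤗 `pipeline` API is shown below; the repo id comes from this card, and the audio file path is a placeholder.
```python
# Minimal sketch: transcribe a local audio file with this fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="antonvinny/whisper-tiny-gs",
)
result = asr("sample.wav")  # "sample.wav" is a placeholder path
print(result["text"])
```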
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 600
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:--------:|:----:|:---------------:|:------:|
| 0.0002 | 66.6667 | 200 | 0.0003 | 2.4950 |
| 0.0001 | 133.3333 | 400 | 0.0001 | 2.4950 |
| 0.0001 | 200.0 | 600 | 0.0001 | 2.4950 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
dimasik2987/77bfbadf-495d-408a-be33-a01f7dedab05
|
dimasik2987
| 2025-01-15T16:17:23Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:adapter:NovaSearch/stella_en_1.5B_v5",
"license:mit",
"region:us"
] | null | 2025-01-15T16:13:05Z
|
---
library_name: peft
license: mit
base_model: dunzhang/stella_en_1.5B_v5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 77bfbadf-495d-408a-be33-a01f7dedab05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dunzhang/stella_en_1.5B_v5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c17c59740c4fc07c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c17c59740c4fc07c_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: dimasik2987/77bfbadf-495d-408a-be33-a01f7dedab05
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/c17c59740c4fc07c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 21989a9f-7539-4956-94ad-ee9bd5631eb8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 21989a9f-7539-4956-94ad-ee9bd5631eb8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 77bfbadf-495d-408a-be33-a01f7dedab05
This model is a fine-tuned version of [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | nan |
| 0.0 | 0.0017 | 5 | nan |
| 0.0 | 0.0034 | 10 | nan |
| 0.0 | 0.0051 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Error410/JVCGPT-Mini-beta-GGUF
|
Error410
| 2025-01-15T16:17:14Z
| 34
| 0
| null |
[
"gguf",
"jvc",
"issou",
"aya",
"fr",
"dataset:Error410/sharegpt",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-01-12T02:14:23Z
|
---
datasets:
- Error410/sharegpt
language:
- fr
base_model:
- meta-llama/Llama-3.2-3B-Instruct
tags:
- jvc
- issou
- aya
---
# Error410/JVCGPT-Mini-beta

## Description
This model is a fine-tuned version of **Llama 3.2 3B** whose goal is to reproduce the writing styles and posts of users of the **jeuxvideo.com** forum. Trained on a fraction of the public **JVArchive** data, the model is designed to capture the tone, humor, and references specific to this online community.
## Model details
- **Base**: Llama 3.2 (3B parameters)
- **Dataset used**: 2% of JVArchive (public and freely accessible)
- **Training**: 3 hours for 2 epochs on a cluster of 8 NVIDIA L40S GPUs with a 4096-token context.
- **Goal**: Generate messages imitating the style of jeuxvideo.com users
- **Access**: Dataset and models freely available in our [Error410](https://huggingface.co/Error410/) repo.
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Réponds comme un membre actif du forum, en respectant le style, les références et le ton typiques du topic en cours.
Topic: <TOPIC><|eot_id|><|start_header_id|>user<|end_header_id|>
<|im_pseudo|>PSEUDO<|end_pseudo|>
<|im_date|>DATE<|end_date|>
<|begin_of_post|>POST<|end_of_post|><|eot_id|><|start_header_id|>assistant<|end_header_id|>
<|im_pseudo|>PSEUDO<|end_pseudo|>
<|im_date|>DATE<|end_date|>
<|begin_of_post|>POST<|end_of_post|><|eot_id|>
```
Template SillyTavern: https://huggingface.co/Error410/JVCGPT-Mini-beta/blob/main/SillyTavern%20Prompt%20Format.json
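As a convenience, here is a small illustrative sketch that fills the template's placeholders (TOPIC, PSEUDO, DATE, POST) into a prompt string; the field values are made up, and the exact tokens should be checked against the SillyTavern template linked above.
```python
# Illustrative sketch: assemble a prompt string following the template above.
def build_prompt(topic: str, pseudo: str, date: str, post: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        "Réponds comme un membre actif du forum, en respectant le style, "
        "les références et le ton typiques du topic en cours.\n"
        f"Topic: {topic}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
        f"<|im_pseudo|>{pseudo}<|end_pseudo|>\n"
        f"<|im_date|>{date}<|end_date|>\n"
        f"<|begin_of_post|>{post}<|end_of_post|><|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n"
    )

print(build_prompt("Exemple de topic", "Khey", "2025-01-15", "Premier message du topic"))
```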
## Performance
- **Style**: Effectively captures the references, expressions, and writing styles characteristic of the jeuxvideo.com forums.
- **Lightweight**: Suitable for any use thanks to its small 3B-parameter size.
- **Response time**: Optimized for fast, low-cost generation.
## Dataset
The model was trained on a selection of **2% of the JVArchive archives** (100,000 topics). This data was processed and filtered to ensure optimal quality and diversity.
## License
The model, the dataset, and all associated files are made freely available under the same (PUBLIC) license as JVArchive, in our repo.
## Acknowledgements
Many thanks to **JVArchive** for access to the public data and to the jeuxvideo.com community for its inspiration. This project is dedicated to fans of the forum's history and of internet culture.
## Authors
- [Greums](https://huggingface.co/Greums/): dataset pro, thanks a lot chief
- [Undi](https://huggingface.co/Undi95/)
|
Error410/JVCGPT-Mini-beta
|
Error410
| 2025-01-15T16:16:22Z
| 17
| 0
| null |
[
"safetensors",
"llama",
"jvc",
"issou",
"aya",
"fr",
"dataset:Error410/sharegpt",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] | null | 2025-01-12T01:45:03Z
|
---
datasets:
- Error410/sharegpt
language:
- fr
base_model:
- meta-llama/Llama-3.2-3B-Instruct
tags:
- jvc
- issou
- aya
---
# Error410/JVCGPT-Mini-beta

## Description
This model is a fine-tuned version of **Llama 3.2 3B** whose goal is to reproduce the writing styles and posts of users of the **jeuxvideo.com** forum. Trained on a fraction of the public **JVArchive** data, the model is designed to capture the tone, humor, and references specific to this online community.
## Model details
- **Base**: Llama 3.2 (3B parameters)
- **Dataset used**: 2% of JVArchive (public and freely accessible)
- **Training**: 3 hours for 2 epochs on a cluster of 8 NVIDIA L40S GPUs with a 4096-token context.
- **Goal**: Generate messages imitating the style of jeuxvideo.com users
- **Access**: Dataset and models freely available in our [Error410](https://huggingface.co/Error410/) repo.
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Réponds comme un membre actif du forum, en respectant le style, les références et le ton typiques du topic en cours.
Topic: <TOPIC><|eot_id|><|start_header_id|>user<|end_header_id|>
<|im_pseudo|>PSEUDO<|end_pseudo|>
<|im_date|>DATE<|end_date|>
<|begin_of_post|>POST<|end_of_post|><|eot_id|><|start_header_id|>assistant<|end_header_id|>
<|im_pseudo|>PSEUDO<|end_pseudo|>
<|im_date|>DATE<|end_date|>
<|begin_of_post|>POST<|end_of_post|><|eot_id|>
```
Template SillyTavern: https://huggingface.co/Error410/JVCGPT-Mini-beta/blob/main/SillyTavern%20Prompt%20Format.json
## Performance
- **Style**: Effectively captures the references, expressions, and writing styles characteristic of the jeuxvideo.com forums.
- **Lightweight**: Suitable for any use thanks to its small 3B-parameter size.
- **Response time**: Optimized for fast, low-cost generation.
## Dataset
The model was trained on a selection of **2% of the JVArchive archives** (100,000 topics). This data was processed and filtered to ensure optimal quality and diversity.
## License
The model, the dataset, and all associated files are made freely available under the same (PUBLIC) license as JVArchive, in our repo.
## Acknowledgements
Many thanks to **JVArchive** for access to the public data and to the jeuxvideo.com community for its inspiration. This project is dedicated to fans of the forum's history and of internet culture.
## Authors
- [Greums](https://huggingface.co/Greums/): dataset pro, thanks a lot chief
- [Undi](https://huggingface.co/Undi95/)
|
lesso13/6d237b86-7931-4795-b258-7cf41ed07b45
|
lesso13
| 2025-01-15T16:16:06Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-15T15:49:16Z
|
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6d237b86-7931-4795-b258-7cf41ed07b45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 756452db934ae1d1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/756452db934ae1d1_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso13/6d237b86-7931-4795-b258-7cf41ed07b45
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/756452db934ae1d1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 74979543-06fb-451f-b2f1-4786823d5776
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 74979543-06fb-451f-b2f1-4786823d5776
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6d237b86-7931-4795-b258-7cf41ed07b45
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (Hugging Face implementation) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.0872 |
| 1.1047 | 0.0008 | 8 | 1.0565 |
| 1.0449 | 0.0016 | 16 | 1.0030 |
| 1.04 | 0.0023 | 24 | 0.9857 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
bbytxt/99b7be35-ecb0-4251-93fd-0b9840307a9c
|
bbytxt
| 2025-01-15T16:16:03Z
| 7
| 0
|
peft
|
[
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-PhiForCausalLM",
"base_model:adapter:echarlaix/tiny-random-PhiForCausalLM",
"license:apache-2.0",
"region:us"
] | null | 2025-01-15T16:15:04Z
|
---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-PhiForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 99b7be35-ecb0-4251-93fd-0b9840307a9c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-PhiForCausalLM
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- a036efca81d0f95e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a036efca81d0f95e_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: bbytxt/99b7be35-ecb0-4251-93fd-0b9840307a9c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/a036efca81d0f95e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 21b208df-454c-48c3-9b23-520ad94bb3d4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 21b208df-454c-48c3-9b23-520ad94bb3d4
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 99b7be35-ecb0-4251-93fd-0b9840307a9c
This model is a fine-tuned version of [echarlaix/tiny-random-PhiForCausalLM](https://huggingface.co/echarlaix/tiny-random-PhiForCausalLM) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 6.7069
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.9344 | 0.0019 | 1 | 6.9199 |
| 6.6524 | 0.0970 | 50 | 6.7949 |
| 6.5216 | 0.1941 | 100 | 6.7281 |
| 6.4882 | 0.2911 | 150 | 6.7088 |
| 6.4793 | 0.3882 | 200 | 6.7069 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
roleplaiapp/Llama-3.3-70B-Instruct-Q4_K_M-GGUF
|
roleplaiapp
| 2025-01-15T16:15:35Z
| 49
| 0
|
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"Llama-3.3-70B-Instruct",
"Q4_K_M",
"meta-llama",
"code",
"math",
"chat",
"roleplay",
"text-generation",
"safetensors",
"nlp",
"en",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"de",
"base_model:meta-llama/Llama-3.1-70B",
"base_model:quantized:meta-llama/Llama-3.1-70B",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-01-15T13:28:50Z
|
---
library_name: transformers
language:
- en
- fr
- it
- pt
- hi
- es
- th
- de
base_model:
- meta-llama/Llama-3.1-70B
tags:
- llama-cpp
- Llama-3.3-70B-Instruct
- gguf
- Q4_K_M
- llama-cpp
- gguf
- meta-llama
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
- code
pipeline_tag: text-generation
---
# Llama-3.3-70B-Instruct-Q4_K_M-GGUF
**Repo:** `roleplaiapp/Llama-3.3-70B-Instruct-Q4_K_M-GGUF`
**Original Model:** `Llama-3.3-70B-Instruct`
**Organization:** `meta-llama`
**Quantized File:** `llama-3.3-70b-instruct-q3_k_m.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q4_K_M`
**Use Imatrix:** `False`
**Split Model:** `False`
## Overview
This is a GGUF Q4_K_M quantized version of [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).
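A hedged local-inference sketch with llama-cpp-python follows; the GGUF filename below is an assumption inferred from the quantization method, so verify it against the repository's file list before running.
```python
# Hedged sketch: download the GGUF file and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# NOTE: the filename is an assumption; check the repo's files for the exact name.
model_path = hf_hub_download(
    repo_id="roleplaiapp/Llama-3.3-70B-Instruct-Q4_K_M-GGUF",
    filename="llama-3.3-70b-instruct-q4_k_m.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096)
completion = llm("Write a short greeting.", max_tokens=64)
print(completion["choices"][0]["text"])
```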
## Quantization By
I often have idle A100 GPUs while building/testing and training the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/)
|
exala/db_aca2_7.2.1
|
exala
| 2025-01-15T16:14:07Z
| 17
| 0
|
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T16:13:53Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
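No author-provided snippet is available yet; the following is a minimal sketch assuming this checkpoint is a standard DistilBERT sequence-classification model, as the repository tags suggest (label names depend on the model config).
```python
# Minimal sketch: run the text-classification checkpoint via the pipeline API.
from transformers import pipeline

classifier = pipeline("text-classification", model="exala/db_aca2_7.2.1")
print(classifier("Example input sentence."))  # labels come from the model's config
```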
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Oscar2384/intime-sosten-copa-B-con-soft-algodon-elasticad-surtido
|
Oscar2384
| 2025-01-15T16:13:34Z
| 20
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T15:52:08Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: COPASOSELAS
---
# Intime Sosten Copa B Con Soft Algodon Elasticad Surtido
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `COPASOSELAS` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Oscar2384/intime-sosten-copa-B-con-soft-algodon-elasticad-surtido', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
paultimothymooney/Qwen2.5-Coder-0.5B-Instruct-Q8_0-GGUF
|
paultimothymooney
| 2025-01-15T16:13:33Z
| 40
| 0
|
transformers
|
[
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-01-15T16:13:27Z
|
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-0.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- llama-cpp
- gguf-my-repo
---
# paultimothymooney/Qwen2.5-Coder-0.5B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-0.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo paultimothymooney/Qwen2.5-Coder-0.5B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-0.5b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo paultimothymooney/Qwen2.5-Coder-0.5B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-0.5b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo paultimothymooney/Qwen2.5-Coder-0.5B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-0.5b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo paultimothymooney/Qwen2.5-Coder-0.5B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-0.5b-instruct-q8_0.gguf -c 2048
```
|
phonemetransformers/CHILDES-Indonesian-phoneme-tokenizer
|
phonemetransformers
| 2025-01-15T16:12:07Z
| 0
| 0
|
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T13:23:54Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
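No author-provided snippet is available yet; given the repository name, this appears to be a phoneme tokenizer, so a minimal sketch for loading it is shown below (it assumes a standard tokenizer layout loadable with `AutoTokenizer`).
```python
# Minimal sketch: load the tokenizer and inspect its output on a sample string.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "phonemetransformers/CHILDES-Indonesian-phoneme-tokenizer"
)
print(tok.tokenize("saya makan nasi"))  # sample Indonesian text; output depends on the vocab
```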
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
oldiday/27f6fcb8-4599-455c-bbaf-98396a85a925
|
oldiday
| 2025-01-15T16:10:02Z
| 13
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-135M",
"base_model:adapter:unsloth/SmolLM2-135M",
"license:apache-2.0",
"region:us"
] | null | 2025-01-15T16:00:20Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 27f6fcb8-4599-455c-bbaf-98396a85a925
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-135M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e5fba043d1f22bc2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e5fba043d1f22bc2_train_data.json
type:
field_instruction: prompt
field_output: reference_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: oldiday/27f6fcb8-4599-455c-bbaf-98396a85a925
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/e5fba043d1f22bc2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 15e5c164-e834-4f75-975b-0d306607a752
wandb_project: Gradients-On-Six
wandb_run: your_name
wandb_runid: 15e5c164-e834-4f75-975b-0d306607a752
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 27f6fcb8-4599-455c-bbaf-98396a85a925
This model is a fine-tuned version of [unsloth/SmolLM2-135M](https://huggingface.co/unsloth/SmolLM2-135M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3658
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 1.4431 |
| 1.4342 | 0.0059 | 9 | 1.4399 |
| 1.4666 | 0.0117 | 18 | 1.4246 |
| 1.4174 | 0.0176 | 27 | 1.4081 |
| 1.4487 | 0.0235 | 36 | 1.3942 |
| 1.3878 | 0.0293 | 45 | 1.3837 |
| 1.3904 | 0.0352 | 54 | 1.3762 |
| 1.2992 | 0.0411 | 63 | 1.3713 |
| 1.3923 | 0.0470 | 72 | 1.3683 |
| 1.3503 | 0.0528 | 81 | 1.3666 |
| 1.4249 | 0.0587 | 90 | 1.3660 |
| 1.3777 | 0.0646 | 99 | 1.3658 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
tarabukinivan/a0bf2150-f712-41d2-9b98-6973dcf1c843
|
tarabukinivan
| 2025-01-15T16:09:25Z
| 9
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B",
"base_model:adapter:Qwen/Qwen2.5-3B",
"license:other",
"region:us"
] | null | 2025-01-15T15:39:26Z
|
---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a0bf2150-f712-41d2-9b98-6973dcf1c843
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 21f641ae0efcd8d6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/21f641ae0efcd8d6_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tarabukinivan/a0bf2150-f712-41d2-9b98-6973dcf1c843
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/21f641ae0efcd8d6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9e2f882e-8f4c-4fd9-b3f6-e92c5704a050
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9e2f882e-8f4c-4fd9-b3f6-e92c5704a050
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a0bf2150-f712-41d2-9b98-6973dcf1c843
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0546
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.8047 |
| 1.2561 | 0.0007 | 13 | 1.1174 |
| 1.1405 | 0.0013 | 26 | 1.0613 |
| 1.116 | 0.0020 | 39 | 1.0546 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits8_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k11_task2_organization
|
MayBashendy
| 2025-01-15T16:09:17Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T14:57:24Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits8_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k11_task2_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits8_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k11_task2_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5617
- Qwk: 0.4874
- Mse: 0.5617
- Rmse: 0.7495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0339 | 2 | 4.2196 | -0.0288 | 4.2196 | 2.0542 |
| No log | 0.0678 | 4 | 2.2233 | 0.0640 | 2.2233 | 1.4911 |
| No log | 0.1017 | 6 | 1.2161 | 0.0258 | 1.2161 | 1.1028 |
| No log | 0.1356 | 8 | 0.9924 | 0.0330 | 0.9924 | 0.9962 |
| No log | 0.1695 | 10 | 0.8343 | 0.1465 | 0.8343 | 0.9134 |
| No log | 0.2034 | 12 | 0.8470 | 0.1648 | 0.8470 | 0.9203 |
| No log | 0.2373 | 14 | 0.8884 | 0.0983 | 0.8884 | 0.9425 |
| No log | 0.2712 | 16 | 1.0015 | -0.0066 | 1.0015 | 1.0008 |
| No log | 0.3051 | 18 | 0.9798 | 0.0851 | 0.9798 | 0.9898 |
| No log | 0.3390 | 20 | 1.0506 | -0.0377 | 1.0506 | 1.0250 |
| No log | 0.3729 | 22 | 1.3475 | 0.0262 | 1.3475 | 1.1608 |
| No log | 0.4068 | 24 | 1.4195 | 0.0262 | 1.4195 | 1.1914 |
| No log | 0.4407 | 26 | 1.1388 | 0.0363 | 1.1388 | 1.0671 |
| No log | 0.4746 | 28 | 0.9469 | 0.0456 | 0.9469 | 0.9731 |
| No log | 0.5085 | 30 | 0.7915 | 0.0932 | 0.7915 | 0.8897 |
| No log | 0.5424 | 32 | 0.7764 | 0.0797 | 0.7764 | 0.8812 |
| No log | 0.5763 | 34 | 0.7658 | 0.1505 | 0.7658 | 0.8751 |
| No log | 0.6102 | 36 | 0.7574 | 0.1281 | 0.7574 | 0.8703 |
| No log | 0.6441 | 38 | 0.7671 | 0.2713 | 0.7671 | 0.8759 |
| No log | 0.6780 | 40 | 0.7597 | 0.2883 | 0.7597 | 0.8716 |
| No log | 0.7119 | 42 | 0.7352 | 0.3097 | 0.7352 | 0.8575 |
| No log | 0.7458 | 44 | 0.7179 | 0.2723 | 0.7179 | 0.8473 |
| No log | 0.7797 | 46 | 0.8210 | 0.2453 | 0.8210 | 0.9061 |
| No log | 0.8136 | 48 | 0.8228 | 0.2687 | 0.8228 | 0.9071 |
| No log | 0.8475 | 50 | 0.7828 | 0.2687 | 0.7828 | 0.8848 |
| No log | 0.8814 | 52 | 0.7498 | 0.3427 | 0.7498 | 0.8659 |
| No log | 0.9153 | 54 | 0.7103 | 0.3392 | 0.7103 | 0.8428 |
| No log | 0.9492 | 56 | 0.7264 | 0.3468 | 0.7264 | 0.8523 |
| No log | 0.9831 | 58 | 0.7145 | 0.4157 | 0.7145 | 0.8453 |
| No log | 1.0169 | 60 | 0.6964 | 0.3647 | 0.6964 | 0.8345 |
| No log | 1.0508 | 62 | 0.6822 | 0.3708 | 0.6822 | 0.8259 |
| No log | 1.0847 | 64 | 0.6574 | 0.4099 | 0.6574 | 0.8108 |
| No log | 1.1186 | 66 | 0.6465 | 0.4529 | 0.6465 | 0.8040 |
| No log | 1.1525 | 68 | 0.6532 | 0.4700 | 0.6532 | 0.8082 |
| No log | 1.1864 | 70 | 0.6603 | 0.4298 | 0.6603 | 0.8126 |
| No log | 1.2203 | 72 | 0.7369 | 0.4176 | 0.7369 | 0.8584 |
| No log | 1.2542 | 74 | 0.6974 | 0.4337 | 0.6974 | 0.8351 |
| No log | 1.2881 | 76 | 0.6845 | 0.4749 | 0.6845 | 0.8274 |
| No log | 1.3220 | 78 | 0.6924 | 0.4749 | 0.6924 | 0.8321 |
| No log | 1.3559 | 80 | 0.6937 | 0.4781 | 0.6937 | 0.8329 |
| No log | 1.3898 | 82 | 0.7165 | 0.4589 | 0.7165 | 0.8465 |
| No log | 1.4237 | 84 | 0.7688 | 0.4315 | 0.7688 | 0.8768 |
| No log | 1.4576 | 86 | 0.9003 | 0.3983 | 0.9003 | 0.9488 |
| No log | 1.4915 | 88 | 0.8929 | 0.4161 | 0.8929 | 0.9449 |
| No log | 1.5254 | 90 | 0.7462 | 0.4357 | 0.7462 | 0.8638 |
| No log | 1.5593 | 92 | 0.7201 | 0.4550 | 0.7201 | 0.8486 |
| No log | 1.5932 | 94 | 0.7240 | 0.4899 | 0.7240 | 0.8509 |
| No log | 1.6271 | 96 | 0.8073 | 0.4730 | 0.8073 | 0.8985 |
| No log | 1.6610 | 98 | 0.8365 | 0.4393 | 0.8365 | 0.9146 |
| No log | 1.6949 | 100 | 0.8551 | 0.4769 | 0.8551 | 0.9247 |
| No log | 1.7288 | 102 | 0.7937 | 0.4260 | 0.7937 | 0.8909 |
| No log | 1.7627 | 104 | 0.8459 | 0.4352 | 0.8459 | 0.9197 |
| No log | 1.7966 | 106 | 0.7920 | 0.4484 | 0.7920 | 0.8899 |
| No log | 1.8305 | 108 | 0.8247 | 0.4463 | 0.8247 | 0.9081 |
| No log | 1.8644 | 110 | 1.1376 | 0.3577 | 1.1376 | 1.0666 |
| No log | 1.8983 | 112 | 1.0599 | 0.3574 | 1.0599 | 1.0295 |
| No log | 1.9322 | 114 | 0.7327 | 0.4837 | 0.7327 | 0.8560 |
| No log | 1.9661 | 116 | 0.8170 | 0.3834 | 0.8170 | 0.9039 |
| No log | 2.0 | 118 | 0.8166 | 0.3846 | 0.8166 | 0.9037 |
| No log | 2.0339 | 120 | 0.7262 | 0.5192 | 0.7262 | 0.8522 |
| No log | 2.0678 | 122 | 0.8005 | 0.4277 | 0.8005 | 0.8947 |
| No log | 2.1017 | 124 | 0.9027 | 0.2983 | 0.9027 | 0.9501 |
| No log | 2.1356 | 126 | 0.7849 | 0.4046 | 0.7849 | 0.8859 |
| No log | 2.1695 | 128 | 0.7066 | 0.3798 | 0.7066 | 0.8406 |
| No log | 2.2034 | 130 | 0.6268 | 0.4572 | 0.6268 | 0.7917 |
| No log | 2.2373 | 132 | 0.6235 | 0.4659 | 0.6235 | 0.7896 |
| No log | 2.2712 | 134 | 0.6538 | 0.4459 | 0.6538 | 0.8086 |
| No log | 2.3051 | 136 | 0.8376 | 0.4329 | 0.8376 | 0.9152 |
| No log | 2.3390 | 138 | 0.7967 | 0.4363 | 0.7967 | 0.8926 |
| No log | 2.3729 | 140 | 0.6376 | 0.4274 | 0.6376 | 0.7985 |
| No log | 2.4068 | 142 | 0.6585 | 0.4542 | 0.6585 | 0.8115 |
| No log | 2.4407 | 144 | 0.6457 | 0.4321 | 0.6457 | 0.8035 |
| No log | 2.4746 | 146 | 0.6707 | 0.4785 | 0.6707 | 0.8189 |
| No log | 2.5085 | 148 | 0.6931 | 0.4606 | 0.6931 | 0.8325 |
| No log | 2.5424 | 150 | 0.6635 | 0.4477 | 0.6635 | 0.8145 |
| No log | 2.5763 | 152 | 0.6945 | 0.4744 | 0.6945 | 0.8334 |
| No log | 2.6102 | 154 | 0.6745 | 0.4655 | 0.6745 | 0.8213 |
| No log | 2.6441 | 156 | 0.6559 | 0.4746 | 0.6559 | 0.8099 |
| No log | 2.6780 | 158 | 0.6424 | 0.4958 | 0.6424 | 0.8015 |
| No log | 2.7119 | 160 | 0.6542 | 0.5039 | 0.6542 | 0.8088 |
| No log | 2.7458 | 162 | 0.6594 | 0.5213 | 0.6594 | 0.8120 |
| No log | 2.7797 | 164 | 0.7149 | 0.5015 | 0.7149 | 0.8455 |
| No log | 2.8136 | 166 | 0.7718 | 0.5442 | 0.7718 | 0.8785 |
| No log | 2.8475 | 168 | 0.7848 | 0.5622 | 0.7848 | 0.8859 |
| No log | 2.8814 | 170 | 0.7685 | 0.5126 | 0.7685 | 0.8766 |
| No log | 2.9153 | 172 | 0.8633 | 0.4963 | 0.8633 | 0.9291 |
| No log | 2.9492 | 174 | 0.8201 | 0.5117 | 0.8201 | 0.9056 |
| No log | 2.9831 | 176 | 0.7117 | 0.5364 | 0.7117 | 0.8436 |
| No log | 3.0169 | 178 | 0.8802 | 0.4675 | 0.8802 | 0.9382 |
| No log | 3.0508 | 180 | 0.7990 | 0.4734 | 0.7990 | 0.8939 |
| No log | 3.0847 | 182 | 0.6686 | 0.4847 | 0.6686 | 0.8176 |
| No log | 3.1186 | 184 | 0.6372 | 0.4211 | 0.6372 | 0.7983 |
| No log | 3.1525 | 186 | 0.6420 | 0.3830 | 0.6420 | 0.8013 |
| No log | 3.1864 | 188 | 0.7457 | 0.4332 | 0.7457 | 0.8636 |
| No log | 3.2203 | 190 | 0.7242 | 0.4210 | 0.7242 | 0.8510 |
| No log | 3.2542 | 192 | 0.6703 | 0.4150 | 0.6703 | 0.8187 |
| No log | 3.2881 | 194 | 0.6770 | 0.3859 | 0.6770 | 0.8228 |
| No log | 3.3220 | 196 | 0.6892 | 0.4199 | 0.6892 | 0.8302 |
| No log | 3.3559 | 198 | 0.7007 | 0.4433 | 0.7007 | 0.8371 |
| No log | 3.3898 | 200 | 0.7016 | 0.4348 | 0.7016 | 0.8376 |
| No log | 3.4237 | 202 | 0.7239 | 0.4275 | 0.7239 | 0.8508 |
| No log | 3.4576 | 204 | 0.7045 | 0.4275 | 0.7045 | 0.8394 |
| No log | 3.4915 | 206 | 0.6639 | 0.4249 | 0.6639 | 0.8148 |
| No log | 3.5254 | 208 | 0.6430 | 0.3490 | 0.6430 | 0.8019 |
| No log | 3.5593 | 210 | 0.6838 | 0.3809 | 0.6838 | 0.8269 |
| No log | 3.5932 | 212 | 0.6554 | 0.3574 | 0.6554 | 0.8096 |
| No log | 3.6271 | 214 | 0.6941 | 0.3643 | 0.6941 | 0.8331 |
| No log | 3.6610 | 216 | 0.9193 | 0.3782 | 0.9193 | 0.9588 |
| No log | 3.6949 | 218 | 0.8658 | 0.3735 | 0.8658 | 0.9305 |
| No log | 3.7288 | 220 | 0.6616 | 0.3662 | 0.6616 | 0.8134 |
| No log | 3.7627 | 222 | 0.6164 | 0.3456 | 0.6164 | 0.7851 |
| No log | 3.7966 | 224 | 0.6183 | 0.4331 | 0.6183 | 0.7863 |
| No log | 3.8305 | 226 | 0.6271 | 0.3885 | 0.6271 | 0.7919 |
| No log | 3.8644 | 228 | 0.6795 | 0.3969 | 0.6795 | 0.8243 |
| No log | 3.8983 | 230 | 0.7252 | 0.4442 | 0.7252 | 0.8516 |
| No log | 3.9322 | 232 | 0.7638 | 0.4587 | 0.7638 | 0.8739 |
| No log | 3.9661 | 234 | 0.6737 | 0.4424 | 0.6737 | 0.8208 |
| No log | 4.0 | 236 | 0.6828 | 0.4186 | 0.6828 | 0.8263 |
| No log | 4.0339 | 238 | 0.6647 | 0.4051 | 0.6647 | 0.8153 |
| No log | 4.0678 | 240 | 0.6187 | 0.3669 | 0.6187 | 0.7866 |
| No log | 4.1017 | 242 | 0.6409 | 0.4165 | 0.6409 | 0.8006 |
| No log | 4.1356 | 244 | 0.7363 | 0.4767 | 0.7363 | 0.8581 |
| No log | 4.1695 | 246 | 1.0628 | 0.3801 | 1.0628 | 1.0309 |
| No log | 4.2034 | 248 | 1.1978 | 0.3578 | 1.1978 | 1.0944 |
| No log | 4.2373 | 250 | 0.9836 | 0.4115 | 0.9836 | 0.9918 |
| No log | 4.2712 | 252 | 0.7441 | 0.4461 | 0.7441 | 0.8626 |
| No log | 4.3051 | 254 | 0.6874 | 0.4652 | 0.6874 | 0.8291 |
| No log | 4.3390 | 256 | 0.6894 | 0.4741 | 0.6894 | 0.8303 |
| No log | 4.3729 | 258 | 0.7296 | 0.4072 | 0.7296 | 0.8542 |
| No log | 4.4068 | 260 | 0.7426 | 0.4467 | 0.7426 | 0.8618 |
| No log | 4.4407 | 262 | 0.8213 | 0.4579 | 0.8213 | 0.9062 |
| No log | 4.4746 | 264 | 0.7700 | 0.4579 | 0.7700 | 0.8775 |
| No log | 4.5085 | 266 | 0.7815 | 0.4523 | 0.7815 | 0.8840 |
| No log | 4.5424 | 268 | 0.6997 | 0.4531 | 0.6997 | 0.8365 |
| No log | 4.5763 | 270 | 0.6442 | 0.4227 | 0.6442 | 0.8026 |
| No log | 4.6102 | 272 | 0.5964 | 0.4278 | 0.5964 | 0.7723 |
| No log | 4.6441 | 274 | 0.6215 | 0.4015 | 0.6215 | 0.7884 |
| No log | 4.6780 | 276 | 0.6080 | 0.3765 | 0.6080 | 0.7798 |
| No log | 4.7119 | 278 | 0.5863 | 0.3998 | 0.5863 | 0.7657 |
| No log | 4.7458 | 280 | 0.5943 | 0.4679 | 0.5943 | 0.7709 |
| No log | 4.7797 | 282 | 0.6017 | 0.4383 | 0.6017 | 0.7757 |
| No log | 4.8136 | 284 | 0.6141 | 0.4176 | 0.6141 | 0.7837 |
| No log | 4.8475 | 286 | 0.6245 | 0.4499 | 0.6245 | 0.7902 |
| No log | 4.8814 | 288 | 0.6176 | 0.4807 | 0.6176 | 0.7859 |
| No log | 4.9153 | 290 | 0.6299 | 0.4458 | 0.6299 | 0.7936 |
| No log | 4.9492 | 292 | 0.7054 | 0.4557 | 0.7054 | 0.8399 |
| No log | 4.9831 | 294 | 0.7136 | 0.4664 | 0.7136 | 0.8448 |
| No log | 5.0169 | 296 | 0.6757 | 0.4381 | 0.6757 | 0.8220 |
| No log | 5.0508 | 298 | 0.6509 | 0.4846 | 0.6509 | 0.8068 |
| No log | 5.0847 | 300 | 0.6313 | 0.4673 | 0.6313 | 0.7945 |
| No log | 5.1186 | 302 | 0.6295 | 0.4693 | 0.6295 | 0.7934 |
| No log | 5.1525 | 304 | 0.6293 | 0.4578 | 0.6293 | 0.7933 |
| No log | 5.1864 | 306 | 0.6420 | 0.4416 | 0.6420 | 0.8012 |
| No log | 5.2203 | 308 | 0.7168 | 0.4810 | 0.7168 | 0.8466 |
| No log | 5.2542 | 310 | 0.7265 | 0.4579 | 0.7265 | 0.8524 |
| No log | 5.2881 | 312 | 0.6593 | 0.4516 | 0.6593 | 0.8120 |
| No log | 5.3220 | 314 | 0.6216 | 0.4036 | 0.6216 | 0.7884 |
| No log | 5.3559 | 316 | 0.6288 | 0.4029 | 0.6288 | 0.7930 |
| No log | 5.3898 | 318 | 0.6443 | 0.4682 | 0.6443 | 0.8027 |
| No log | 5.4237 | 320 | 0.6501 | 0.4578 | 0.6501 | 0.8063 |
| No log | 5.4576 | 322 | 0.6482 | 0.4740 | 0.6482 | 0.8051 |
| No log | 5.4915 | 324 | 0.6157 | 0.4134 | 0.6157 | 0.7847 |
| No log | 5.5254 | 326 | 0.6020 | 0.4150 | 0.6020 | 0.7759 |
| No log | 5.5593 | 328 | 0.5992 | 0.3806 | 0.5992 | 0.7741 |
| No log | 5.5932 | 330 | 0.5935 | 0.4808 | 0.5935 | 0.7704 |
| No log | 5.6271 | 332 | 0.5974 | 0.4925 | 0.5974 | 0.7729 |
| No log | 5.6610 | 334 | 0.6415 | 0.5098 | 0.6415 | 0.8010 |
| No log | 5.6949 | 336 | 0.6185 | 0.4956 | 0.6185 | 0.7865 |
| No log | 5.7288 | 338 | 0.5939 | 0.4842 | 0.5939 | 0.7706 |
| No log | 5.7627 | 340 | 0.5816 | 0.4157 | 0.5816 | 0.7626 |
| No log | 5.7966 | 342 | 0.6050 | 0.3829 | 0.6050 | 0.7778 |
| No log | 5.8305 | 344 | 0.5903 | 0.4521 | 0.5903 | 0.7683 |
| No log | 5.8644 | 346 | 0.6224 | 0.5026 | 0.6224 | 0.7889 |
| No log | 5.8983 | 348 | 0.6256 | 0.5039 | 0.6256 | 0.7910 |
| No log | 5.9322 | 350 | 0.6195 | 0.4572 | 0.6195 | 0.7871 |
| No log | 5.9661 | 352 | 0.6777 | 0.4649 | 0.6777 | 0.8232 |
| No log | 6.0 | 354 | 0.8010 | 0.4884 | 0.8010 | 0.8950 |
| No log | 6.0339 | 356 | 0.8134 | 0.4784 | 0.8134 | 0.9019 |
| No log | 6.0678 | 358 | 0.7095 | 0.4522 | 0.7095 | 0.8423 |
| No log | 6.1017 | 360 | 0.6367 | 0.4440 | 0.6367 | 0.7979 |
| No log | 6.1356 | 362 | 0.6759 | 0.5220 | 0.6759 | 0.8221 |
| No log | 6.1695 | 364 | 0.6883 | 0.5230 | 0.6883 | 0.8297 |
| No log | 6.2034 | 366 | 0.6466 | 0.4729 | 0.6466 | 0.8041 |
| No log | 6.2373 | 368 | 0.6219 | 0.4422 | 0.6219 | 0.7886 |
| No log | 6.2712 | 370 | 0.6174 | 0.4460 | 0.6174 | 0.7858 |
| No log | 6.3051 | 372 | 0.6144 | 0.4398 | 0.6144 | 0.7838 |
| No log | 6.3390 | 374 | 0.6120 | 0.4521 | 0.6120 | 0.7823 |
| No log | 6.3729 | 376 | 0.6093 | 0.4570 | 0.6093 | 0.7806 |
| No log | 6.4068 | 378 | 0.6226 | 0.4540 | 0.6226 | 0.7890 |
| No log | 6.4407 | 380 | 0.6624 | 0.4716 | 0.6624 | 0.8139 |
| No log | 6.4746 | 382 | 0.6675 | 0.4270 | 0.6675 | 0.8170 |
| No log | 6.5085 | 384 | 0.6181 | 0.3848 | 0.6181 | 0.7862 |
| No log | 6.5424 | 386 | 0.5847 | 0.3426 | 0.5847 | 0.7647 |
| No log | 6.5763 | 388 | 0.5801 | 0.4841 | 0.5801 | 0.7616 |
| No log | 6.6102 | 390 | 0.5851 | 0.4906 | 0.5851 | 0.7649 |
| No log | 6.6441 | 392 | 0.6153 | 0.4866 | 0.6153 | 0.7844 |
| No log | 6.6780 | 394 | 0.6139 | 0.5018 | 0.6139 | 0.7835 |
| No log | 6.7119 | 396 | 0.6068 | 0.5290 | 0.6068 | 0.7790 |
| No log | 6.7458 | 398 | 0.6110 | 0.4687 | 0.6110 | 0.7817 |
| No log | 6.7797 | 400 | 0.6028 | 0.4284 | 0.6028 | 0.7764 |
| No log | 6.8136 | 402 | 0.5944 | 0.4042 | 0.5944 | 0.7710 |
| No log | 6.8475 | 404 | 0.5969 | 0.4319 | 0.5969 | 0.7726 |
| No log | 6.8814 | 406 | 0.6161 | 0.4480 | 0.6161 | 0.7849 |
| No log | 6.9153 | 408 | 0.6922 | 0.4825 | 0.6922 | 0.8320 |
| No log | 6.9492 | 410 | 0.6787 | 0.4948 | 0.6787 | 0.8239 |
| No log | 6.9831 | 412 | 0.6222 | 0.4482 | 0.6222 | 0.7888 |
| No log | 7.0169 | 414 | 0.6365 | 0.4706 | 0.6365 | 0.7978 |
| No log | 7.0508 | 416 | 0.7240 | 0.4915 | 0.7240 | 0.8509 |
| No log | 7.0847 | 418 | 0.7256 | 0.5048 | 0.7256 | 0.8518 |
| No log | 7.1186 | 420 | 0.6445 | 0.4816 | 0.6445 | 0.8028 |
| No log | 7.1525 | 422 | 0.6172 | 0.4788 | 0.6172 | 0.7856 |
| No log | 7.1864 | 424 | 0.6290 | 0.4921 | 0.6290 | 0.7931 |
| No log | 7.2203 | 426 | 0.6963 | 0.5303 | 0.6963 | 0.8345 |
| No log | 7.2542 | 428 | 0.8245 | 0.3773 | 0.8245 | 0.9080 |
| No log | 7.2881 | 430 | 0.8196 | 0.3773 | 0.8196 | 0.9053 |
| No log | 7.3220 | 432 | 0.7598 | 0.4345 | 0.7598 | 0.8717 |
| No log | 7.3559 | 434 | 0.6501 | 0.5567 | 0.6501 | 0.8063 |
| No log | 7.3898 | 436 | 0.5816 | 0.5299 | 0.5816 | 0.7626 |
| No log | 7.4237 | 438 | 0.5829 | 0.5409 | 0.5829 | 0.7635 |
| No log | 7.4576 | 440 | 0.6012 | 0.5314 | 0.6012 | 0.7754 |
| No log | 7.4915 | 442 | 0.6652 | 0.5143 | 0.6652 | 0.8156 |
| No log | 7.5254 | 444 | 0.6704 | 0.5143 | 0.6704 | 0.8188 |
| No log | 7.5593 | 446 | 0.6131 | 0.5514 | 0.6131 | 0.7830 |
| No log | 7.5932 | 448 | 0.5898 | 0.4963 | 0.5898 | 0.7680 |
| No log | 7.6271 | 450 | 0.6421 | 0.4221 | 0.6421 | 0.8013 |
| No log | 7.6610 | 452 | 0.6541 | 0.4131 | 0.6541 | 0.8087 |
| No log | 7.6949 | 454 | 0.6077 | 0.4619 | 0.6077 | 0.7796 |
| No log | 7.7288 | 456 | 0.5903 | 0.4783 | 0.5903 | 0.7683 |
| No log | 7.7627 | 458 | 0.7189 | 0.4959 | 0.7189 | 0.8479 |
| No log | 7.7966 | 460 | 0.8165 | 0.4250 | 0.8165 | 0.9036 |
| No log | 7.8305 | 462 | 0.7612 | 0.4290 | 0.7612 | 0.8725 |
| No log | 7.8644 | 464 | 0.6345 | 0.5071 | 0.6345 | 0.7965 |
| No log | 7.8983 | 466 | 0.5708 | 0.5136 | 0.5708 | 0.7555 |
| No log | 7.9322 | 468 | 0.5614 | 0.4876 | 0.5614 | 0.7493 |
| No log | 7.9661 | 470 | 0.5710 | 0.5131 | 0.5710 | 0.7556 |
| No log | 8.0 | 472 | 0.6259 | 0.5158 | 0.6259 | 0.7911 |
| No log | 8.0339 | 474 | 0.7018 | 0.5192 | 0.7018 | 0.8377 |
| No log | 8.0678 | 476 | 0.6813 | 0.5302 | 0.6813 | 0.8254 |
| No log | 8.1017 | 478 | 0.6114 | 0.4985 | 0.6114 | 0.7819 |
| No log | 8.1356 | 480 | 0.5791 | 0.5447 | 0.5791 | 0.7610 |
| No log | 8.1695 | 482 | 0.5797 | 0.5357 | 0.5797 | 0.7614 |
| No log | 8.2034 | 484 | 0.6036 | 0.4828 | 0.6036 | 0.7769 |
| No log | 8.2373 | 486 | 0.6574 | 0.5173 | 0.6574 | 0.8108 |
| No log | 8.2712 | 488 | 0.6894 | 0.5158 | 0.6894 | 0.8303 |
| No log | 8.3051 | 490 | 0.7148 | 0.5177 | 0.7148 | 0.8454 |
| No log | 8.3390 | 492 | 0.6801 | 0.5158 | 0.6801 | 0.8247 |
| No log | 8.3729 | 494 | 0.6156 | 0.4833 | 0.6156 | 0.7846 |
| No log | 8.4068 | 496 | 0.5827 | 0.4948 | 0.5827 | 0.7633 |
| No log | 8.4407 | 498 | 0.5896 | 0.3781 | 0.5896 | 0.7679 |
| 0.3742 | 8.4746 | 500 | 0.5902 | 0.3905 | 0.5902 | 0.7683 |
| 0.3742 | 8.5085 | 502 | 0.5993 | 0.4837 | 0.5993 | 0.7742 |
| 0.3742 | 8.5424 | 504 | 0.6062 | 0.4998 | 0.6062 | 0.7786 |
| 0.3742 | 8.5763 | 506 | 0.6092 | 0.5123 | 0.6092 | 0.7805 |
| 0.3742 | 8.6102 | 508 | 0.6054 | 0.4489 | 0.6054 | 0.7781 |
| 0.3742 | 8.6441 | 510 | 0.6295 | 0.4877 | 0.6295 | 0.7934 |
| 0.3742 | 8.6780 | 512 | 0.6631 | 0.4187 | 0.6631 | 0.8143 |
| 0.3742 | 8.7119 | 514 | 0.6665 | 0.4214 | 0.6665 | 0.8164 |
| 0.3742 | 8.7458 | 516 | 0.6354 | 0.4539 | 0.6354 | 0.7971 |
| 0.3742 | 8.7797 | 518 | 0.6137 | 0.4711 | 0.6137 | 0.7834 |
| 0.3742 | 8.8136 | 520 | 0.5911 | 0.5396 | 0.5911 | 0.7688 |
| 0.3742 | 8.8475 | 522 | 0.6211 | 0.5035 | 0.6211 | 0.7881 |
| 0.3742 | 8.8814 | 524 | 0.7404 | 0.4629 | 0.7404 | 0.8605 |
| 0.3742 | 8.9153 | 526 | 0.7928 | 0.4108 | 0.7928 | 0.8904 |
| 0.3742 | 8.9492 | 528 | 0.7446 | 0.4538 | 0.7446 | 0.8629 |
| 0.3742 | 8.9831 | 530 | 0.6457 | 0.5147 | 0.6457 | 0.8036 |
| 0.3742 | 9.0169 | 532 | 0.5852 | 0.5248 | 0.5852 | 0.7650 |
| 0.3742 | 9.0508 | 534 | 0.5808 | 0.4884 | 0.5808 | 0.7621 |
| 0.3742 | 9.0847 | 536 | 0.5865 | 0.5155 | 0.5865 | 0.7658 |
| 0.3742 | 9.1186 | 538 | 0.6318 | 0.5237 | 0.6318 | 0.7949 |
| 0.3742 | 9.1525 | 540 | 0.6793 | 0.4758 | 0.6793 | 0.8242 |
| 0.3742 | 9.1864 | 542 | 0.7238 | 0.4568 | 0.7238 | 0.8508 |
| 0.3742 | 9.2203 | 544 | 0.6955 | 0.4444 | 0.6955 | 0.8340 |
| 0.3742 | 9.2542 | 546 | 0.6120 | 0.4930 | 0.6120 | 0.7823 |
| 0.3742 | 9.2881 | 548 | 0.5883 | 0.5198 | 0.5883 | 0.7670 |
| 0.3742 | 9.3220 | 550 | 0.5969 | 0.5334 | 0.5969 | 0.7726 |
| 0.3742 | 9.3559 | 552 | 0.6477 | 0.5118 | 0.6477 | 0.8048 |
| 0.3742 | 9.3898 | 554 | 0.6999 | 0.4560 | 0.6999 | 0.8366 |
| 0.3742 | 9.4237 | 556 | 0.6612 | 0.5329 | 0.6612 | 0.8131 |
| 0.3742 | 9.4576 | 558 | 0.5937 | 0.5487 | 0.5937 | 0.7705 |
| 0.3742 | 9.4915 | 560 | 0.5724 | 0.4827 | 0.5724 | 0.7566 |
| 0.3742 | 9.5254 | 562 | 0.5785 | 0.4910 | 0.5785 | 0.7606 |
| 0.3742 | 9.5593 | 564 | 0.5883 | 0.4611 | 0.5883 | 0.7670 |
| 0.3742 | 9.5932 | 566 | 0.5856 | 0.4747 | 0.5856 | 0.7652 |
| 0.3742 | 9.6271 | 568 | 0.6184 | 0.5233 | 0.6184 | 0.7864 |
| 0.3742 | 9.6610 | 570 | 0.6614 | 0.5189 | 0.6614 | 0.8133 |
| 0.3742 | 9.6949 | 572 | 0.6453 | 0.4999 | 0.6453 | 0.8033 |
| 0.3742 | 9.7288 | 574 | 0.6140 | 0.5434 | 0.6140 | 0.7836 |
| 0.3742 | 9.7627 | 576 | 0.6231 | 0.4926 | 0.6231 | 0.7893 |
| 0.3742 | 9.7966 | 578 | 0.6571 | 0.4916 | 0.6571 | 0.8106 |
| 0.3742 | 9.8305 | 580 | 0.6343 | 0.4722 | 0.6343 | 0.7964 |
| 0.3742 | 9.8644 | 582 | 0.6065 | 0.4923 | 0.6065 | 0.7788 |
| 0.3742 | 9.8983 | 584 | 0.6180 | 0.4742 | 0.6180 | 0.7861 |
| 0.3742 | 9.9322 | 586 | 0.6712 | 0.4695 | 0.6712 | 0.8193 |
| 0.3742 | 9.9661 | 588 | 0.7290 | 0.4787 | 0.7290 | 0.8538 |
| 0.3742 | 10.0 | 590 | 0.7491 | 0.4650 | 0.7491 | 0.8655 |
| 0.3742 | 10.0339 | 592 | 0.7263 | 0.4659 | 0.7263 | 0.8522 |
| 0.3742 | 10.0678 | 594 | 0.6702 | 0.5251 | 0.6702 | 0.8187 |
| 0.3742 | 10.1017 | 596 | 0.6595 | 0.5351 | 0.6595 | 0.8121 |
| 0.3742 | 10.1356 | 598 | 0.6585 | 0.4904 | 0.6585 | 0.8115 |
| 0.3742 | 10.1695 | 600 | 0.6765 | 0.4783 | 0.6765 | 0.8225 |
| 0.3742 | 10.2034 | 602 | 0.6808 | 0.4394 | 0.6808 | 0.8251 |
| 0.3742 | 10.2373 | 604 | 0.6815 | 0.4096 | 0.6815 | 0.8255 |
| 0.3742 | 10.2712 | 606 | 0.6668 | 0.4283 | 0.6668 | 0.8166 |
| 0.3742 | 10.3051 | 608 | 0.6261 | 0.4877 | 0.6261 | 0.7912 |
| 0.3742 | 10.3390 | 610 | 0.6318 | 0.4413 | 0.6318 | 0.7948 |
| 0.3742 | 10.3729 | 612 | 0.6476 | 0.4626 | 0.6476 | 0.8048 |
| 0.3742 | 10.4068 | 614 | 0.6388 | 0.4407 | 0.6388 | 0.7992 |
| 0.3742 | 10.4407 | 616 | 0.6333 | 0.4539 | 0.6333 | 0.7958 |
| 0.3742 | 10.4746 | 618 | 0.6294 | 0.4416 | 0.6294 | 0.7934 |
| 0.3742 | 10.5085 | 620 | 0.6251 | 0.4438 | 0.6251 | 0.7906 |
| 0.3742 | 10.5424 | 622 | 0.6119 | 0.3935 | 0.6119 | 0.7822 |
| 0.3742 | 10.5763 | 624 | 0.6131 | 0.4082 | 0.6131 | 0.7830 |
| 0.3742 | 10.6102 | 626 | 0.6237 | 0.3958 | 0.6237 | 0.7898 |
| 0.3742 | 10.6441 | 628 | 0.6435 | 0.4252 | 0.6435 | 0.8022 |
| 0.3742 | 10.6780 | 630 | 0.6649 | 0.4222 | 0.6649 | 0.8154 |
| 0.3742 | 10.7119 | 632 | 0.6542 | 0.4579 | 0.6542 | 0.8088 |
| 0.3742 | 10.7458 | 634 | 0.6224 | 0.4349 | 0.6224 | 0.7889 |
| 0.3742 | 10.7797 | 636 | 0.6125 | 0.4549 | 0.6125 | 0.7826 |
| 0.3742 | 10.8136 | 638 | 0.6063 | 0.4004 | 0.6063 | 0.7786 |
| 0.3742 | 10.8475 | 640 | 0.6026 | 0.4412 | 0.6026 | 0.7763 |
| 0.3742 | 10.8814 | 642 | 0.5971 | 0.4454 | 0.5971 | 0.7727 |
| 0.3742 | 10.9153 | 644 | 0.5979 | 0.4541 | 0.5979 | 0.7733 |
| 0.3742 | 10.9492 | 646 | 0.5891 | 0.4398 | 0.5891 | 0.7675 |
| 0.3742 | 10.9831 | 648 | 0.5801 | 0.4844 | 0.5801 | 0.7617 |
| 0.3742 | 11.0169 | 650 | 0.5838 | 0.5122 | 0.5838 | 0.7641 |
| 0.3742 | 11.0508 | 652 | 0.6248 | 0.5615 | 0.6248 | 0.7904 |
| 0.3742 | 11.0847 | 654 | 0.7645 | 0.4710 | 0.7645 | 0.8744 |
| 0.3742 | 11.1186 | 656 | 0.9152 | 0.3868 | 0.9152 | 0.9567 |
| 0.3742 | 11.1525 | 658 | 0.9535 | 0.3497 | 0.9535 | 0.9765 |
| 0.3742 | 11.1864 | 660 | 0.8803 | 0.3921 | 0.8803 | 0.9382 |
| 0.3742 | 11.2203 | 662 | 0.7806 | 0.4437 | 0.7806 | 0.8835 |
| 0.3742 | 11.2542 | 664 | 0.6500 | 0.5398 | 0.6500 | 0.8062 |
| 0.3742 | 11.2881 | 666 | 0.5876 | 0.5625 | 0.5876 | 0.7665 |
| 0.3742 | 11.3220 | 668 | 0.5857 | 0.5625 | 0.5857 | 0.7653 |
| 0.3742 | 11.3559 | 670 | 0.6102 | 0.5683 | 0.6102 | 0.7812 |
| 0.3742 | 11.3898 | 672 | 0.6804 | 0.5423 | 0.6804 | 0.8249 |
| 0.3742 | 11.4237 | 674 | 0.7725 | 0.4654 | 0.7725 | 0.8789 |
| 0.3742 | 11.4576 | 676 | 0.8085 | 0.4392 | 0.8085 | 0.8991 |
| 0.3742 | 11.4915 | 678 | 0.7631 | 0.4239 | 0.7631 | 0.8736 |
| 0.3742 | 11.5254 | 680 | 0.6930 | 0.4136 | 0.6930 | 0.8325 |
| 0.3742 | 11.5593 | 682 | 0.6677 | 0.4598 | 0.6677 | 0.8171 |
| 0.3742 | 11.5932 | 684 | 0.6682 | 0.4685 | 0.6682 | 0.8174 |
| 0.3742 | 11.6271 | 686 | 0.7061 | 0.4925 | 0.7061 | 0.8403 |
| 0.3742 | 11.6610 | 688 | 0.7086 | 0.4793 | 0.7086 | 0.8418 |
| 0.3742 | 11.6949 | 690 | 0.6720 | 0.5200 | 0.6720 | 0.8197 |
| 0.3742 | 11.7288 | 692 | 0.6325 | 0.5019 | 0.6325 | 0.7953 |
| 0.3742 | 11.7627 | 694 | 0.6226 | 0.4998 | 0.6226 | 0.7890 |
| 0.3742 | 11.7966 | 696 | 0.6334 | 0.5347 | 0.6334 | 0.7959 |
| 0.3742 | 11.8305 | 698 | 0.6461 | 0.5006 | 0.6461 | 0.8038 |
| 0.3742 | 11.8644 | 700 | 0.6405 | 0.5336 | 0.6405 | 0.8003 |
| 0.3742 | 11.8983 | 702 | 0.6450 | 0.4781 | 0.6450 | 0.8031 |
| 0.3742 | 11.9322 | 704 | 0.6486 | 0.4519 | 0.6486 | 0.8054 |
| 0.3742 | 11.9661 | 706 | 0.6481 | 0.4878 | 0.6481 | 0.8051 |
| 0.3742 | 12.0 | 708 | 0.6390 | 0.5206 | 0.6390 | 0.7994 |
| 0.3742 | 12.0339 | 710 | 0.6954 | 0.5230 | 0.6954 | 0.8339 |
| 0.3742 | 12.0678 | 712 | 0.7365 | 0.4913 | 0.7365 | 0.8582 |
| 0.3742 | 12.1017 | 714 | 0.7072 | 0.4726 | 0.7072 | 0.8409 |
| 0.3742 | 12.1356 | 716 | 0.6519 | 0.4459 | 0.6519 | 0.8074 |
| 0.3742 | 12.1695 | 718 | 0.6112 | 0.4714 | 0.6112 | 0.7818 |
| 0.3742 | 12.2034 | 720 | 0.6113 | 0.4781 | 0.6113 | 0.7819 |
| 0.3742 | 12.2373 | 722 | 0.6359 | 0.4375 | 0.6359 | 0.7974 |
| 0.3742 | 12.2712 | 724 | 0.6797 | 0.4674 | 0.6797 | 0.8244 |
| 0.3742 | 12.3051 | 726 | 0.7603 | 0.3992 | 0.7603 | 0.8719 |
| 0.3742 | 12.3390 | 728 | 0.7811 | 0.4862 | 0.7811 | 0.8838 |
| 0.3742 | 12.3729 | 730 | 0.7230 | 0.4618 | 0.7230 | 0.8503 |
| 0.3742 | 12.4068 | 732 | 0.6736 | 0.4575 | 0.6736 | 0.8207 |
| 0.3742 | 12.4407 | 734 | 0.6627 | 0.5083 | 0.6627 | 0.8140 |
| 0.3742 | 12.4746 | 736 | 0.6226 | 0.5014 | 0.6226 | 0.7891 |
| 0.3742 | 12.5085 | 738 | 0.6193 | 0.5362 | 0.6193 | 0.7870 |
| 0.3742 | 12.5424 | 740 | 0.6673 | 0.5083 | 0.6673 | 0.8169 |
| 0.3742 | 12.5763 | 742 | 0.7748 | 0.4346 | 0.7748 | 0.8802 |
| 0.3742 | 12.6102 | 744 | 0.8359 | 0.4813 | 0.8359 | 0.9143 |
| 0.3742 | 12.6441 | 746 | 0.7885 | 0.4116 | 0.7885 | 0.8880 |
| 0.3742 | 12.6780 | 748 | 0.7043 | 0.4074 | 0.7043 | 0.8392 |
| 0.3742 | 12.7119 | 750 | 0.6857 | 0.3907 | 0.6857 | 0.8280 |
| 0.3742 | 12.7458 | 752 | 0.6625 | 0.3733 | 0.6625 | 0.8140 |
| 0.3742 | 12.7797 | 754 | 0.6589 | 0.3733 | 0.6589 | 0.8118 |
| 0.3742 | 12.8136 | 756 | 0.6701 | 0.4074 | 0.6701 | 0.8186 |
| 0.3742 | 12.8475 | 758 | 0.6657 | 0.3907 | 0.6657 | 0.8159 |
| 0.3742 | 12.8814 | 760 | 0.6350 | 0.4112 | 0.6350 | 0.7969 |
| 0.3742 | 12.9153 | 762 | 0.6341 | 0.4101 | 0.6341 | 0.7963 |
| 0.3742 | 12.9492 | 764 | 0.6836 | 0.4522 | 0.6836 | 0.8268 |
| 0.3742 | 12.9831 | 766 | 0.6856 | 0.4935 | 0.6856 | 0.8280 |
| 0.3742 | 13.0169 | 768 | 0.6397 | 0.4842 | 0.6397 | 0.7998 |
| 0.3742 | 13.0508 | 770 | 0.5950 | 0.4874 | 0.5950 | 0.7713 |
| 0.3742 | 13.0847 | 772 | 0.5875 | 0.5503 | 0.5875 | 0.7665 |
| 0.3742 | 13.1186 | 774 | 0.5872 | 0.5484 | 0.5872 | 0.7663 |
| 0.3742 | 13.1525 | 776 | 0.5959 | 0.5323 | 0.5959 | 0.7719 |
| 0.3742 | 13.1864 | 778 | 0.6663 | 0.5007 | 0.6663 | 0.8163 |
| 0.3742 | 13.2203 | 780 | 0.7393 | 0.4969 | 0.7393 | 0.8598 |
| 0.3742 | 13.2542 | 782 | 0.7082 | 0.4969 | 0.7082 | 0.8415 |
| 0.3742 | 13.2881 | 784 | 0.6598 | 0.4589 | 0.6598 | 0.8123 |
| 0.3742 | 13.3220 | 786 | 0.6112 | 0.4261 | 0.6112 | 0.7818 |
| 0.3742 | 13.3559 | 788 | 0.5781 | 0.4533 | 0.5781 | 0.7603 |
| 0.3742 | 13.3898 | 790 | 0.5725 | 0.4668 | 0.5725 | 0.7566 |
| 0.3742 | 13.4237 | 792 | 0.5823 | 0.4712 | 0.5823 | 0.7631 |
| 0.3742 | 13.4576 | 794 | 0.5983 | 0.5122 | 0.5983 | 0.7735 |
| 0.3742 | 13.4915 | 796 | 0.5988 | 0.5122 | 0.5988 | 0.7738 |
| 0.3742 | 13.5254 | 798 | 0.6210 | 0.4783 | 0.6210 | 0.7881 |
| 0.3742 | 13.5593 | 800 | 0.6163 | 0.4794 | 0.6163 | 0.7850 |
| 0.3742 | 13.5932 | 802 | 0.5988 | 0.4531 | 0.5988 | 0.7738 |
| 0.3742 | 13.6271 | 804 | 0.5855 | 0.4531 | 0.5855 | 0.7652 |
| 0.3742 | 13.6610 | 806 | 0.5785 | 0.4842 | 0.5785 | 0.7606 |
| 0.3742 | 13.6949 | 808 | 0.5831 | 0.4442 | 0.5831 | 0.7636 |
| 0.3742 | 13.7288 | 810 | 0.5992 | 0.4839 | 0.5992 | 0.7741 |
| 0.3742 | 13.7627 | 812 | 0.6030 | 0.4839 | 0.6030 | 0.7766 |
| 0.3742 | 13.7966 | 814 | 0.5894 | 0.4490 | 0.5894 | 0.7677 |
| 0.3742 | 13.8305 | 816 | 0.5914 | 0.4490 | 0.5914 | 0.7690 |
| 0.3742 | 13.8644 | 818 | 0.5950 | 0.4490 | 0.5950 | 0.7714 |
| 0.3742 | 13.8983 | 820 | 0.6106 | 0.4746 | 0.6106 | 0.7814 |
| 0.3742 | 13.9322 | 822 | 0.6293 | 0.4754 | 0.6293 | 0.7933 |
| 0.3742 | 13.9661 | 824 | 0.6451 | 0.4611 | 0.6451 | 0.8032 |
| 0.3742 | 14.0 | 826 | 0.6494 | 0.4578 | 0.6494 | 0.8059 |
| 0.3742 | 14.0339 | 828 | 0.6249 | 0.4187 | 0.6249 | 0.7905 |
| 0.3742 | 14.0678 | 830 | 0.6007 | 0.4815 | 0.6007 | 0.7751 |
| 0.3742 | 14.1017 | 832 | 0.6217 | 0.4193 | 0.6217 | 0.7885 |
| 0.3742 | 14.1356 | 834 | 0.6424 | 0.4374 | 0.6424 | 0.8015 |
| 0.3742 | 14.1695 | 836 | 0.6151 | 0.4528 | 0.6151 | 0.7843 |
| 0.3742 | 14.2034 | 838 | 0.5923 | 0.5188 | 0.5923 | 0.7696 |
| 0.3742 | 14.2373 | 840 | 0.6380 | 0.4634 | 0.6380 | 0.7988 |
| 0.3742 | 14.2712 | 842 | 0.6802 | 0.4987 | 0.6802 | 0.8247 |
| 0.3742 | 14.3051 | 844 | 0.6805 | 0.5084 | 0.6805 | 0.8249 |
| 0.3742 | 14.3390 | 846 | 0.6955 | 0.4987 | 0.6955 | 0.8340 |
| 0.3742 | 14.3729 | 848 | 0.6434 | 0.4605 | 0.6434 | 0.8021 |
| 0.3742 | 14.4068 | 850 | 0.5940 | 0.4357 | 0.5940 | 0.7707 |
| 0.3742 | 14.4407 | 852 | 0.5861 | 0.4783 | 0.5861 | 0.7655 |
| 0.3742 | 14.4746 | 854 | 0.5856 | 0.4424 | 0.5856 | 0.7652 |
| 0.3742 | 14.5085 | 856 | 0.6035 | 0.4516 | 0.6035 | 0.7769 |
| 0.3742 | 14.5424 | 858 | 0.6451 | 0.4469 | 0.6451 | 0.8032 |
| 0.3742 | 14.5763 | 860 | 0.6481 | 0.4469 | 0.6481 | 0.8051 |
| 0.3742 | 14.6102 | 862 | 0.6175 | 0.4695 | 0.6175 | 0.7858 |
| 0.3742 | 14.6441 | 864 | 0.5913 | 0.4909 | 0.5913 | 0.7690 |
| 0.3742 | 14.6780 | 866 | 0.5823 | 0.4644 | 0.5823 | 0.7631 |
| 0.3742 | 14.7119 | 868 | 0.5905 | 0.5006 | 0.5905 | 0.7684 |
| 0.3742 | 14.7458 | 870 | 0.5986 | 0.4712 | 0.5986 | 0.7737 |
| 0.3742 | 14.7797 | 872 | 0.5790 | 0.5181 | 0.5790 | 0.7609 |
| 0.3742 | 14.8136 | 874 | 0.5746 | 0.5282 | 0.5746 | 0.7581 |
| 0.3742 | 14.8475 | 876 | 0.5925 | 0.5321 | 0.5925 | 0.7697 |
| 0.3742 | 14.8814 | 878 | 0.6330 | 0.4714 | 0.6330 | 0.7956 |
| 0.3742 | 14.9153 | 880 | 0.6533 | 0.4895 | 0.6533 | 0.8083 |
| 0.3742 | 14.9492 | 882 | 0.6274 | 0.4850 | 0.6274 | 0.7921 |
| 0.3742 | 14.9831 | 884 | 0.6142 | 0.5117 | 0.6142 | 0.7837 |
| 0.3742 | 15.0169 | 886 | 0.6155 | 0.5190 | 0.6155 | 0.7845 |
| 0.3742 | 15.0508 | 888 | 0.6017 | 0.5472 | 0.6017 | 0.7757 |
| 0.3742 | 15.0847 | 890 | 0.6124 | 0.5397 | 0.6124 | 0.7825 |
| 0.3742 | 15.1186 | 892 | 0.6208 | 0.5307 | 0.6208 | 0.7879 |
| 0.3742 | 15.1525 | 894 | 0.6130 | 0.5307 | 0.6130 | 0.7829 |
| 0.3742 | 15.1864 | 896 | 0.6133 | 0.5307 | 0.6133 | 0.7832 |
| 0.3742 | 15.2203 | 898 | 0.6103 | 0.5253 | 0.6103 | 0.7812 |
| 0.3742 | 15.2542 | 900 | 0.6169 | 0.5261 | 0.6169 | 0.7854 |
| 0.3742 | 15.2881 | 902 | 0.6092 | 0.5261 | 0.6092 | 0.7805 |
| 0.3742 | 15.3220 | 904 | 0.6049 | 0.5144 | 0.6049 | 0.7777 |
| 0.3742 | 15.3559 | 906 | 0.6190 | 0.5077 | 0.6190 | 0.7867 |
| 0.3742 | 15.3898 | 908 | 0.6285 | 0.5243 | 0.6285 | 0.7928 |
| 0.3742 | 15.4237 | 910 | 0.6431 | 0.5620 | 0.6431 | 0.8019 |
| 0.3742 | 15.4576 | 912 | 0.6844 | 0.5057 | 0.6844 | 0.8273 |
| 0.3742 | 15.4915 | 914 | 0.7018 | 0.5043 | 0.7018 | 0.8377 |
| 0.3742 | 15.5254 | 916 | 0.6662 | 0.5068 | 0.6662 | 0.8162 |
| 0.3742 | 15.5593 | 918 | 0.6597 | 0.5068 | 0.6597 | 0.8122 |
| 0.3742 | 15.5932 | 920 | 0.6547 | 0.4884 | 0.6547 | 0.8091 |
| 0.3742 | 15.6271 | 922 | 0.6340 | 0.4930 | 0.6340 | 0.7962 |
| 0.3742 | 15.6610 | 924 | 0.6111 | 0.4724 | 0.6111 | 0.7817 |
| 0.3742 | 15.6949 | 926 | 0.5918 | 0.4578 | 0.5918 | 0.7693 |
| 0.3742 | 15.7288 | 928 | 0.5899 | 0.4816 | 0.5899 | 0.7680 |
| 0.3742 | 15.7627 | 930 | 0.5993 | 0.5096 | 0.5993 | 0.7741 |
| 0.3742 | 15.7966 | 932 | 0.6248 | 0.5113 | 0.6248 | 0.7904 |
| 0.3742 | 15.8305 | 934 | 0.6389 | 0.5128 | 0.6389 | 0.7993 |
| 0.3742 | 15.8644 | 936 | 0.6344 | 0.4857 | 0.6344 | 0.7965 |
| 0.3742 | 15.8983 | 938 | 0.6210 | 0.5162 | 0.6210 | 0.7881 |
| 0.3742 | 15.9322 | 940 | 0.6144 | 0.4728 | 0.6144 | 0.7838 |
| 0.3742 | 15.9661 | 942 | 0.5873 | 0.3982 | 0.5873 | 0.7664 |
| 0.3742 | 16.0 | 944 | 0.5795 | 0.3878 | 0.5795 | 0.7612 |
| 0.3742 | 16.0339 | 946 | 0.5782 | 0.3916 | 0.5782 | 0.7604 |
| 0.3742 | 16.0678 | 948 | 0.5850 | 0.4273 | 0.5850 | 0.7648 |
| 0.3742 | 16.1017 | 950 | 0.5998 | 0.4724 | 0.5998 | 0.7745 |
| 0.3742 | 16.1356 | 952 | 0.6271 | 0.4907 | 0.6271 | 0.7919 |
| 0.3742 | 16.1695 | 954 | 0.6676 | 0.4799 | 0.6676 | 0.8171 |
| 0.3742 | 16.2034 | 956 | 0.6535 | 0.4912 | 0.6535 | 0.8084 |
| 0.3742 | 16.2373 | 958 | 0.6461 | 0.4924 | 0.6461 | 0.8038 |
| 0.3742 | 16.2712 | 960 | 0.6468 | 0.5076 | 0.6468 | 0.8042 |
| 0.3742 | 16.3051 | 962 | 0.6233 | 0.4960 | 0.6233 | 0.7895 |
| 0.3742 | 16.3390 | 964 | 0.6155 | 0.4390 | 0.6155 | 0.7845 |
| 0.3742 | 16.3729 | 966 | 0.6053 | 0.4183 | 0.6053 | 0.7780 |
| 0.3742 | 16.4068 | 968 | 0.5959 | 0.4289 | 0.5959 | 0.7720 |
| 0.3742 | 16.4407 | 970 | 0.5892 | 0.3992 | 0.5892 | 0.7676 |
| 0.3742 | 16.4746 | 972 | 0.5878 | 0.3750 | 0.5878 | 0.7667 |
| 0.3742 | 16.5085 | 974 | 0.5987 | 0.4088 | 0.5987 | 0.7737 |
| 0.3742 | 16.5424 | 976 | 0.6297 | 0.4046 | 0.6297 | 0.7936 |
| 0.3742 | 16.5763 | 978 | 0.7015 | 0.4426 | 0.7015 | 0.8376 |
| 0.3742 | 16.6102 | 980 | 0.7986 | 0.4345 | 0.7986 | 0.8936 |
| 0.3742 | 16.6441 | 982 | 0.8505 | 0.4530 | 0.8505 | 0.9223 |
| 0.3742 | 16.6780 | 984 | 0.8661 | 0.4522 | 0.8661 | 0.9307 |
| 0.3742 | 16.7119 | 986 | 0.8286 | 0.4633 | 0.8286 | 0.9103 |
| 0.3742 | 16.7458 | 988 | 0.7841 | 0.4441 | 0.7841 | 0.8855 |
| 0.3742 | 16.7797 | 990 | 0.7511 | 0.4337 | 0.7511 | 0.8666 |
| 0.3742 | 16.8136 | 992 | 0.6991 | 0.4381 | 0.6991 | 0.8361 |
| 0.3742 | 16.8475 | 994 | 0.6421 | 0.4324 | 0.6421 | 0.8013 |
| 0.3742 | 16.8814 | 996 | 0.6367 | 0.4515 | 0.6367 | 0.7980 |
| 0.3742 | 16.9153 | 998 | 0.6562 | 0.4381 | 0.6562 | 0.8101 |
| 0.0688 | 16.9492 | 1000 | 0.6999 | 0.4568 | 0.6999 | 0.8366 |
| 0.0688 | 16.9831 | 1002 | 0.7397 | 0.4731 | 0.7397 | 0.8600 |
| 0.0688 | 17.0169 | 1004 | 0.7376 | 0.4731 | 0.7376 | 0.8589 |
| 0.0688 | 17.0508 | 1006 | 0.6975 | 0.4690 | 0.6975 | 0.8352 |
| 0.0688 | 17.0847 | 1008 | 0.6399 | 0.4837 | 0.6399 | 0.7999 |
| 0.0688 | 17.1186 | 1010 | 0.6059 | 0.4539 | 0.6059 | 0.7784 |
| 0.0688 | 17.1525 | 1012 | 0.6169 | 0.4515 | 0.6169 | 0.7855 |
| 0.0688 | 17.1864 | 1014 | 0.6328 | 0.4548 | 0.6328 | 0.7955 |
| 0.0688 | 17.2203 | 1016 | 0.6471 | 0.4668 | 0.6471 | 0.8044 |
| 0.0688 | 17.2542 | 1018 | 0.6186 | 0.4455 | 0.6186 | 0.7865 |
| 0.0688 | 17.2881 | 1020 | 0.5951 | 0.4805 | 0.5951 | 0.7714 |
| 0.0688 | 17.3220 | 1022 | 0.6025 | 0.4704 | 0.6025 | 0.7762 |
| 0.0688 | 17.3559 | 1024 | 0.6019 | 0.4794 | 0.6019 | 0.7758 |
| 0.0688 | 17.3898 | 1026 | 0.5891 | 0.4857 | 0.5891 | 0.7676 |
| 0.0688 | 17.4237 | 1028 | 0.5924 | 0.4857 | 0.5924 | 0.7697 |
| 0.0688 | 17.4576 | 1030 | 0.6075 | 0.4921 | 0.6075 | 0.7794 |
| 0.0688 | 17.4915 | 1032 | 0.6063 | 0.4857 | 0.6063 | 0.7786 |
| 0.0688 | 17.5254 | 1034 | 0.6025 | 0.4950 | 0.6025 | 0.7762 |
| 0.0688 | 17.5593 | 1036 | 0.6076 | 0.5289 | 0.6076 | 0.7795 |
| 0.0688 | 17.5932 | 1038 | 0.6015 | 0.5006 | 0.6015 | 0.7755 |
| 0.0688 | 17.6271 | 1040 | 0.5847 | 0.4514 | 0.5847 | 0.7647 |
| 0.0688 | 17.6610 | 1042 | 0.5782 | 0.4614 | 0.5782 | 0.7604 |
| 0.0688 | 17.6949 | 1044 | 0.5890 | 0.4603 | 0.5890 | 0.7675 |
| 0.0688 | 17.7288 | 1046 | 0.6114 | 0.4637 | 0.6114 | 0.7820 |
| 0.0688 | 17.7627 | 1048 | 0.6075 | 0.4637 | 0.6075 | 0.7794 |
| 0.0688 | 17.7966 | 1050 | 0.5894 | 0.4521 | 0.5894 | 0.7677 |
| 0.0688 | 17.8305 | 1052 | 0.5982 | 0.5384 | 0.5982 | 0.7734 |
| 0.0688 | 17.8644 | 1054 | 0.6251 | 0.4824 | 0.6251 | 0.7907 |
| 0.0688 | 17.8983 | 1056 | 0.6211 | 0.5060 | 0.6211 | 0.7881 |
| 0.0688 | 17.9322 | 1058 | 0.6035 | 0.5103 | 0.6035 | 0.7769 |
| 0.0688 | 17.9661 | 1060 | 0.5846 | 0.5630 | 0.5846 | 0.7646 |
| 0.0688 | 18.0 | 1062 | 0.5784 | 0.5507 | 0.5784 | 0.7605 |
| 0.0688 | 18.0339 | 1064 | 0.5922 | 0.4958 | 0.5922 | 0.7695 |
| 0.0688 | 18.0678 | 1066 | 0.5961 | 0.4779 | 0.5961 | 0.7721 |
| 0.0688 | 18.1017 | 1068 | 0.5831 | 0.5146 | 0.5831 | 0.7636 |
| 0.0688 | 18.1356 | 1070 | 0.5780 | 0.5843 | 0.5780 | 0.7602 |
| 0.0688 | 18.1695 | 1072 | 0.5770 | 0.5842 | 0.5770 | 0.7596 |
| 0.0688 | 18.2034 | 1074 | 0.5695 | 0.5493 | 0.5695 | 0.7547 |
| 0.0688 | 18.2373 | 1076 | 0.5642 | 0.5012 | 0.5642 | 0.7511 |
| 0.0688 | 18.2712 | 1078 | 0.5862 | 0.4769 | 0.5862 | 0.7657 |
| 0.0688 | 18.3051 | 1080 | 0.6085 | 0.4507 | 0.6085 | 0.7800 |
| 0.0688 | 18.3390 | 1082 | 0.6239 | 0.4434 | 0.6239 | 0.7898 |
| 0.0688 | 18.3729 | 1084 | 0.6444 | 0.4427 | 0.6444 | 0.8028 |
| 0.0688 | 18.4068 | 1086 | 0.6201 | 0.4787 | 0.6200 | 0.7874 |
| 0.0688 | 18.4407 | 1088 | 0.5929 | 0.4999 | 0.5929 | 0.7700 |
| 0.0688 | 18.4746 | 1090 | 0.5672 | 0.5347 | 0.5672 | 0.7531 |
| 0.0688 | 18.5085 | 1092 | 0.5631 | 0.5718 | 0.5631 | 0.7504 |
| 0.0688 | 18.5424 | 1094 | 0.5755 | 0.5815 | 0.5755 | 0.7586 |
| 0.0688 | 18.5763 | 1096 | 0.5859 | 0.5676 | 0.5859 | 0.7654 |
| 0.0688 | 18.6102 | 1098 | 0.6115 | 0.5712 | 0.6115 | 0.7820 |
| 0.0688 | 18.6441 | 1100 | 0.6229 | 0.5640 | 0.6229 | 0.7892 |
| 0.0688 | 18.6780 | 1102 | 0.6227 | 0.5860 | 0.6227 | 0.7891 |
| 0.0688 | 18.7119 | 1104 | 0.6254 | 0.6240 | 0.6254 | 0.7908 |
| 0.0688 | 18.7458 | 1106 | 0.6341 | 0.5816 | 0.6341 | 0.7963 |
| 0.0688 | 18.7797 | 1108 | 0.6333 | 0.5816 | 0.6333 | 0.7958 |
| 0.0688 | 18.8136 | 1110 | 0.6140 | 0.5953 | 0.6140 | 0.7836 |
| 0.0688 | 18.8475 | 1112 | 0.6343 | 0.5281 | 0.6343 | 0.7964 |
| 0.0688 | 18.8814 | 1114 | 0.7013 | 0.5125 | 0.7013 | 0.8374 |
| 0.0688 | 18.9153 | 1116 | 0.7885 | 0.4590 | 0.7885 | 0.8880 |
| 0.0688 | 18.9492 | 1118 | 0.7920 | 0.4267 | 0.7920 | 0.8899 |
| 0.0688 | 18.9831 | 1120 | 0.7500 | 0.4846 | 0.7500 | 0.8660 |
| 0.0688 | 19.0169 | 1122 | 0.6853 | 0.4687 | 0.6853 | 0.8278 |
| 0.0688 | 19.0508 | 1124 | 0.6170 | 0.4815 | 0.6170 | 0.7855 |
| 0.0688 | 19.0847 | 1126 | 0.5854 | 0.5642 | 0.5854 | 0.7651 |
| 0.0688 | 19.1186 | 1128 | 0.5962 | 0.5682 | 0.5962 | 0.7722 |
| 0.0688 | 19.1525 | 1130 | 0.6283 | 0.5178 | 0.6283 | 0.7927 |
| 0.0688 | 19.1864 | 1132 | 0.7183 | 0.4982 | 0.7183 | 0.8475 |
| 0.0688 | 19.2203 | 1134 | 0.7741 | 0.5055 | 0.7741 | 0.8798 |
| 0.0688 | 19.2542 | 1136 | 0.7745 | 0.5055 | 0.7745 | 0.8801 |
| 0.0688 | 19.2881 | 1138 | 0.7280 | 0.5296 | 0.7280 | 0.8532 |
| 0.0688 | 19.3220 | 1140 | 0.6471 | 0.5198 | 0.6471 | 0.8044 |
| 0.0688 | 19.3559 | 1142 | 0.6004 | 0.4655 | 0.6004 | 0.7748 |
| 0.0688 | 19.3898 | 1144 | 0.5858 | 0.4641 | 0.5858 | 0.7654 |
| 0.0688 | 19.4237 | 1146 | 0.6048 | 0.4508 | 0.6048 | 0.7777 |
| 0.0688 | 19.4576 | 1148 | 0.6611 | 0.5034 | 0.6611 | 0.8131 |
| 0.0688 | 19.4915 | 1150 | 0.7651 | 0.4846 | 0.7651 | 0.8747 |
| 0.0688 | 19.5254 | 1152 | 0.8086 | 0.4871 | 0.8086 | 0.8992 |
| 0.0688 | 19.5593 | 1154 | 0.7747 | 0.4835 | 0.7747 | 0.8802 |
| 0.0688 | 19.5932 | 1156 | 0.7153 | 0.5034 | 0.7153 | 0.8458 |
| 0.0688 | 19.6271 | 1158 | 0.6538 | 0.4515 | 0.6538 | 0.8086 |
| 0.0688 | 19.6610 | 1160 | 0.6188 | 0.5033 | 0.6188 | 0.7866 |
| 0.0688 | 19.6949 | 1162 | 0.5943 | 0.5434 | 0.5943 | 0.7709 |
| 0.0688 | 19.7288 | 1164 | 0.5820 | 0.5768 | 0.5820 | 0.7629 |
| 0.0688 | 19.7627 | 1166 | 0.5688 | 0.5165 | 0.5688 | 0.7542 |
| 0.0688 | 19.7966 | 1168 | 0.5513 | 0.5440 | 0.5513 | 0.7425 |
| 0.0688 | 19.8305 | 1170 | 0.5548 | 0.5284 | 0.5548 | 0.7449 |
| 0.0688 | 19.8644 | 1172 | 0.5588 | 0.5148 | 0.5588 | 0.7475 |
| 0.0688 | 19.8983 | 1174 | 0.5786 | 0.4564 | 0.5786 | 0.7607 |
| 0.0688 | 19.9322 | 1176 | 0.5941 | 0.4434 | 0.5941 | 0.7708 |
| 0.0688 | 19.9661 | 1178 | 0.6011 | 0.4434 | 0.6011 | 0.7753 |
| 0.0688 | 20.0 | 1180 | 0.5806 | 0.4630 | 0.5806 | 0.7620 |
| 0.0688 | 20.0339 | 1182 | 0.5661 | 0.4409 | 0.5661 | 0.7524 |
| 0.0688 | 20.0678 | 1184 | 0.5504 | 0.4853 | 0.5504 | 0.7419 |
| 0.0688 | 20.1017 | 1186 | 0.5375 | 0.5291 | 0.5375 | 0.7332 |
| 0.0688 | 20.1356 | 1188 | 0.5358 | 0.5697 | 0.5358 | 0.7320 |
| 0.0688 | 20.1695 | 1190 | 0.5576 | 0.4783 | 0.5576 | 0.7467 |
| 0.0688 | 20.2034 | 1192 | 0.5614 | 0.4691 | 0.5614 | 0.7493 |
| 0.0688 | 20.2373 | 1194 | 0.5652 | 0.5397 | 0.5652 | 0.7518 |
| 0.0688 | 20.2712 | 1196 | 0.5930 | 0.4932 | 0.5930 | 0.7701 |
| 0.0688 | 20.3051 | 1198 | 0.6207 | 0.5158 | 0.6207 | 0.7879 |
| 0.0688 | 20.3390 | 1200 | 0.6107 | 0.4754 | 0.6107 | 0.7815 |
| 0.0688 | 20.3729 | 1202 | 0.5663 | 0.5121 | 0.5663 | 0.7526 |
| 0.0688 | 20.4068 | 1204 | 0.5480 | 0.4630 | 0.5480 | 0.7403 |
| 0.0688 | 20.4407 | 1206 | 0.5471 | 0.4723 | 0.5471 | 0.7397 |
| 0.0688 | 20.4746 | 1208 | 0.5466 | 0.4900 | 0.5466 | 0.7394 |
| 0.0688 | 20.5085 | 1210 | 0.5494 | 0.5188 | 0.5494 | 0.7412 |
| 0.0688 | 20.5424 | 1212 | 0.5928 | 0.4833 | 0.5928 | 0.7700 |
| 0.0688 | 20.5763 | 1214 | 0.7070 | 0.4776 | 0.7070 | 0.8408 |
| 0.0688 | 20.6102 | 1216 | 0.8177 | 0.4908 | 0.8177 | 0.9043 |
| 0.0688 | 20.6441 | 1218 | 0.8847 | 0.4728 | 0.8847 | 0.9406 |
| 0.0688 | 20.6780 | 1220 | 0.8832 | 0.4851 | 0.8832 | 0.9398 |
| 0.0688 | 20.7119 | 1222 | 0.8152 | 0.4701 | 0.8152 | 0.9029 |
| 0.0688 | 20.7458 | 1224 | 0.7643 | 0.4739 | 0.7643 | 0.8743 |
| 0.0688 | 20.7797 | 1226 | 0.7091 | 0.4809 | 0.7091 | 0.8421 |
| 0.0688 | 20.8136 | 1228 | 0.6890 | 0.5062 | 0.6890 | 0.8300 |
| 0.0688 | 20.8475 | 1230 | 0.6909 | 0.5046 | 0.6909 | 0.8312 |
| 0.0688 | 20.8814 | 1232 | 0.6553 | 0.5362 | 0.6553 | 0.8095 |
| 0.0688 | 20.9153 | 1234 | 0.6212 | 0.5441 | 0.6212 | 0.7882 |
| 0.0688 | 20.9492 | 1236 | 0.5811 | 0.5663 | 0.5811 | 0.7623 |
| 0.0688 | 20.9831 | 1238 | 0.5607 | 0.5851 | 0.5607 | 0.7488 |
| 0.0688 | 21.0169 | 1240 | 0.5425 | 0.5547 | 0.5425 | 0.7366 |
| 0.0688 | 21.0508 | 1242 | 0.5343 | 0.5506 | 0.5343 | 0.7309 |
| 0.0688 | 21.0847 | 1244 | 0.5425 | 0.5668 | 0.5425 | 0.7365 |
| 0.0688 | 21.1186 | 1246 | 0.5700 | 0.5725 | 0.5700 | 0.7550 |
| 0.0688 | 21.1525 | 1248 | 0.5868 | 0.5463 | 0.5868 | 0.7660 |
| 0.0688 | 21.1864 | 1250 | 0.6167 | 0.5310 | 0.6167 | 0.7853 |
| 0.0688 | 21.2203 | 1252 | 0.6533 | 0.5103 | 0.6533 | 0.8083 |
| 0.0688 | 21.2542 | 1254 | 0.6629 | 0.5020 | 0.6629 | 0.8142 |
| 0.0688 | 21.2881 | 1256 | 0.6226 | 0.5072 | 0.6226 | 0.7890 |
| 0.0688 | 21.3220 | 1258 | 0.5886 | 0.4794 | 0.5886 | 0.7672 |
| 0.0688 | 21.3559 | 1260 | 0.5911 | 0.4552 | 0.5911 | 0.7688 |
| 0.0688 | 21.3898 | 1262 | 0.6194 | 0.5087 | 0.6194 | 0.7870 |
| 0.0688 | 21.4237 | 1264 | 0.6461 | 0.4987 | 0.6461 | 0.8038 |
| 0.0688 | 21.4576 | 1266 | 0.6607 | 0.4901 | 0.6607 | 0.8128 |
| 0.0688 | 21.4915 | 1268 | 0.6342 | 0.5242 | 0.6342 | 0.7964 |
| 0.0688 | 21.5254 | 1270 | 0.5914 | 0.4973 | 0.5914 | 0.7690 |
| 0.0688 | 21.5593 | 1272 | 0.5663 | 0.5539 | 0.5663 | 0.7525 |
| 0.0688 | 21.5932 | 1274 | 0.5573 | 0.5610 | 0.5573 | 0.7465 |
| 0.0688 | 21.6271 | 1276 | 0.5555 | 0.5717 | 0.5555 | 0.7453 |
| 0.0688 | 21.6610 | 1278 | 0.5593 | 0.5378 | 0.5593 | 0.7479 |
| 0.0688 | 21.6949 | 1280 | 0.5734 | 0.4978 | 0.5734 | 0.7572 |
| 0.0688 | 21.7288 | 1282 | 0.5807 | 0.4721 | 0.5807 | 0.7620 |
| 0.0688 | 21.7627 | 1284 | 0.5672 | 0.4709 | 0.5672 | 0.7531 |
| 0.0688 | 21.7966 | 1286 | 0.5683 | 0.4320 | 0.5683 | 0.7539 |
| 0.0688 | 21.8305 | 1288 | 0.5905 | 0.4662 | 0.5905 | 0.7685 |
| 0.0688 | 21.8644 | 1290 | 0.6337 | 0.4665 | 0.6337 | 0.7960 |
| 0.0688 | 21.8983 | 1292 | 0.6521 | 0.4804 | 0.6521 | 0.8075 |
| 0.0688 | 21.9322 | 1294 | 0.6474 | 0.4804 | 0.6474 | 0.8046 |
| 0.0688 | 21.9661 | 1296 | 0.6005 | 0.4733 | 0.6005 | 0.7749 |
| 0.0688 | 22.0 | 1298 | 0.5691 | 0.5216 | 0.5691 | 0.7544 |
| 0.0688 | 22.0339 | 1300 | 0.5630 | 0.5403 | 0.5630 | 0.7503 |
| 0.0688 | 22.0678 | 1302 | 0.5609 | 0.5100 | 0.5609 | 0.7490 |
| 0.0688 | 22.1017 | 1304 | 0.5585 | 0.5293 | 0.5585 | 0.7473 |
| 0.0688 | 22.1356 | 1306 | 0.5707 | 0.5064 | 0.5707 | 0.7554 |
| 0.0688 | 22.1695 | 1308 | 0.5908 | 0.4787 | 0.5908 | 0.7686 |
| 0.0688 | 22.2034 | 1310 | 0.5947 | 0.4539 | 0.5947 | 0.7712 |
| 0.0688 | 22.2373 | 1312 | 0.6004 | 0.4684 | 0.6004 | 0.7748 |
| 0.0688 | 22.2712 | 1314 | 0.5847 | 0.4705 | 0.5847 | 0.7646 |
| 0.0688 | 22.3051 | 1316 | 0.5579 | 0.5486 | 0.5579 | 0.7470 |
| 0.0688 | 22.3390 | 1318 | 0.5393 | 0.5650 | 0.5393 | 0.7344 |
| 0.0688 | 22.3729 | 1320 | 0.5435 | 0.5733 | 0.5435 | 0.7373 |
| 0.0688 | 22.4068 | 1322 | 0.5754 | 0.5351 | 0.5754 | 0.7586 |
| 0.0688 | 22.4407 | 1324 | 0.6354 | 0.5072 | 0.6354 | 0.7971 |
| 0.0688 | 22.4746 | 1326 | 0.6615 | 0.5405 | 0.6615 | 0.8133 |
| 0.0688 | 22.5085 | 1328 | 0.6477 | 0.5132 | 0.6477 | 0.8048 |
| 0.0688 | 22.5424 | 1330 | 0.6418 | 0.5087 | 0.6418 | 0.8011 |
| 0.0688 | 22.5763 | 1332 | 0.6095 | 0.4804 | 0.6095 | 0.7807 |
| 0.0688 | 22.6102 | 1334 | 0.5686 | 0.5027 | 0.5686 | 0.7540 |
| 0.0688 | 22.6441 | 1336 | 0.5534 | 0.5537 | 0.5534 | 0.7439 |
| 0.0688 | 22.6780 | 1338 | 0.5521 | 0.5605 | 0.5521 | 0.7430 |
| 0.0688 | 22.7119 | 1340 | 0.5485 | 0.5288 | 0.5485 | 0.7406 |
| 0.0688 | 22.7458 | 1342 | 0.5557 | 0.5943 | 0.5557 | 0.7454 |
| 0.0688 | 22.7797 | 1344 | 0.5639 | 0.5943 | 0.5639 | 0.7509 |
| 0.0688 | 22.8136 | 1346 | 0.5736 | 0.5879 | 0.5736 | 0.7574 |
| 0.0688 | 22.8475 | 1348 | 0.5728 | 0.5881 | 0.5728 | 0.7568 |
| 0.0688 | 22.8814 | 1350 | 0.5684 | 0.5602 | 0.5684 | 0.7539 |
| 0.0688 | 22.9153 | 1352 | 0.5604 | 0.5692 | 0.5604 | 0.7486 |
| 0.0688 | 22.9492 | 1354 | 0.5632 | 0.5428 | 0.5632 | 0.7504 |
| 0.0688 | 22.9831 | 1356 | 0.5759 | 0.5333 | 0.5759 | 0.7589 |
| 0.0688 | 23.0169 | 1358 | 0.5763 | 0.5121 | 0.5763 | 0.7592 |
| 0.0688 | 23.0508 | 1360 | 0.5853 | 0.5048 | 0.5853 | 0.7650 |
| 0.0688 | 23.0847 | 1362 | 0.5941 | 0.5048 | 0.5941 | 0.7708 |
| 0.0688 | 23.1186 | 1364 | 0.6011 | 0.5048 | 0.6011 | 0.7753 |
| 0.0688 | 23.1525 | 1366 | 0.5912 | 0.4788 | 0.5912 | 0.7689 |
| 0.0688 | 23.1864 | 1368 | 0.5868 | 0.5056 | 0.5868 | 0.7660 |
| 0.0688 | 23.2203 | 1370 | 0.5868 | 0.5006 | 0.5868 | 0.7660 |
| 0.0688 | 23.2542 | 1372 | 0.5740 | 0.5130 | 0.5740 | 0.7576 |
| 0.0688 | 23.2881 | 1374 | 0.5735 | 0.5570 | 0.5735 | 0.7573 |
| 0.0688 | 23.3220 | 1376 | 0.5803 | 0.5866 | 0.5803 | 0.7618 |
| 0.0688 | 23.3559 | 1378 | 0.5817 | 0.5751 | 0.5817 | 0.7627 |
| 0.0688 | 23.3898 | 1380 | 0.5733 | 0.6093 | 0.5733 | 0.7572 |
| 0.0688 | 23.4237 | 1382 | 0.5714 | 0.5628 | 0.5714 | 0.7559 |
| 0.0688 | 23.4576 | 1384 | 0.5765 | 0.5693 | 0.5765 | 0.7593 |
| 0.0688 | 23.4915 | 1386 | 0.5940 | 0.5312 | 0.5940 | 0.7707 |
| 0.0688 | 23.5254 | 1388 | 0.5898 | 0.5391 | 0.5898 | 0.7680 |
| 0.0688 | 23.5593 | 1390 | 0.5604 | 0.5536 | 0.5604 | 0.7486 |
| 0.0688 | 23.5932 | 1392 | 0.5415 | 0.5122 | 0.5415 | 0.7359 |
| 0.0688 | 23.6271 | 1394 | 0.5371 | 0.5524 | 0.5371 | 0.7329 |
| 0.0688 | 23.6610 | 1396 | 0.5521 | 0.5409 | 0.5521 | 0.7430 |
| 0.0688 | 23.6949 | 1398 | 0.5866 | 0.4890 | 0.5866 | 0.7659 |
| 0.0688 | 23.7288 | 1400 | 0.5892 | 0.4928 | 0.5892 | 0.7676 |
| 0.0688 | 23.7627 | 1402 | 0.5736 | 0.5161 | 0.5736 | 0.7574 |
| 0.0688 | 23.7966 | 1404 | 0.5752 | 0.5025 | 0.5752 | 0.7584 |
| 0.0688 | 23.8305 | 1406 | 0.5832 | 0.4822 | 0.5832 | 0.7637 |
| 0.0688 | 23.8644 | 1408 | 0.5855 | 0.4984 | 0.5855 | 0.7652 |
| 0.0688 | 23.8983 | 1410 | 0.5871 | 0.5427 | 0.5871 | 0.7662 |
| 0.0688 | 23.9322 | 1412 | 0.5939 | 0.5655 | 0.5939 | 0.7706 |
| 0.0688 | 23.9661 | 1414 | 0.6054 | 0.5867 | 0.6054 | 0.7781 |
| 0.0688 | 24.0 | 1416 | 0.6241 | 0.5416 | 0.6241 | 0.7900 |
| 0.0688 | 24.0339 | 1418 | 0.6045 | 0.5512 | 0.6045 | 0.7775 |
| 0.0688 | 24.0678 | 1420 | 0.5856 | 0.5666 | 0.5856 | 0.7652 |
| 0.0688 | 24.1017 | 1422 | 0.5793 | 0.5323 | 0.5793 | 0.7611 |
| 0.0688 | 24.1356 | 1424 | 0.5948 | 0.4985 | 0.5948 | 0.7712 |
| 0.0688 | 24.1695 | 1426 | 0.6258 | 0.4787 | 0.6258 | 0.7911 |
| 0.0688 | 24.2034 | 1428 | 0.6753 | 0.4585 | 0.6753 | 0.8217 |
| 0.0688 | 24.2373 | 1430 | 0.7012 | 0.4585 | 0.7012 | 0.8374 |
| 0.0688 | 24.2712 | 1432 | 0.7137 | 0.4337 | 0.7137 | 0.8448 |
| 0.0688 | 24.3051 | 1434 | 0.7110 | 0.4337 | 0.7110 | 0.8432 |
| 0.0688 | 24.3390 | 1436 | 0.7173 | 0.4337 | 0.7173 | 0.8470 |
| 0.0688 | 24.3729 | 1438 | 0.7320 | 0.4780 | 0.7320 | 0.8556 |
| 0.0688 | 24.4068 | 1440 | 0.7503 | 0.4620 | 0.7503 | 0.8662 |
| 0.0688 | 24.4407 | 1442 | 0.7398 | 0.4824 | 0.7398 | 0.8601 |
| 0.0688 | 24.4746 | 1444 | 0.7525 | 0.4813 | 0.7525 | 0.8674 |
| 0.0688 | 24.5085 | 1446 | 0.7133 | 0.4668 | 0.7133 | 0.8446 |
| 0.0688 | 24.5424 | 1448 | 0.6904 | 0.4707 | 0.6904 | 0.8309 |
| 0.0688 | 24.5763 | 1450 | 0.6765 | 0.4798 | 0.6765 | 0.8225 |
| 0.0688 | 24.6102 | 1452 | 0.6908 | 0.4798 | 0.6908 | 0.8312 |
| 0.0688 | 24.6441 | 1454 | 0.7249 | 0.4898 | 0.7249 | 0.8514 |
| 0.0688 | 24.6780 | 1456 | 0.7405 | 0.4772 | 0.7405 | 0.8605 |
| 0.0688 | 24.7119 | 1458 | 0.7419 | 0.4581 | 0.7419 | 0.8613 |
| 0.0688 | 24.7458 | 1460 | 0.7254 | 0.4749 | 0.7254 | 0.8517 |
| 0.0688 | 24.7797 | 1462 | 0.6696 | 0.5260 | 0.6696 | 0.8183 |
| 0.0688 | 24.8136 | 1464 | 0.6295 | 0.5377 | 0.6295 | 0.7934 |
| 0.0688 | 24.8475 | 1466 | 0.6306 | 0.5589 | 0.6306 | 0.7941 |
| 0.0688 | 24.8814 | 1468 | 0.6366 | 0.5589 | 0.6366 | 0.7979 |
| 0.0688 | 24.9153 | 1470 | 0.6203 | 0.5798 | 0.6203 | 0.7876 |
| 0.0688 | 24.9492 | 1472 | 0.6111 | 0.5838 | 0.6111 | 0.7817 |
| 0.0688 | 24.9831 | 1474 | 0.5983 | 0.5991 | 0.5983 | 0.7735 |
| 0.0688 | 25.0169 | 1476 | 0.5922 | 0.5768 | 0.5922 | 0.7695 |
| 0.0688 | 25.0508 | 1478 | 0.6087 | 0.5650 | 0.6087 | 0.7802 |
| 0.0688 | 25.0847 | 1480 | 0.6378 | 0.5006 | 0.6378 | 0.7986 |
| 0.0688 | 25.1186 | 1482 | 0.6557 | 0.5090 | 0.6557 | 0.8097 |
| 0.0688 | 25.1525 | 1484 | 0.6557 | 0.5238 | 0.6557 | 0.8098 |
| 0.0688 | 25.1864 | 1486 | 0.6671 | 0.5388 | 0.6671 | 0.8167 |
| 0.0688 | 25.2203 | 1488 | 0.6794 | 0.5512 | 0.6794 | 0.8243 |
| 0.0688 | 25.2542 | 1490 | 0.6946 | 0.4657 | 0.6946 | 0.8334 |
| 0.0688 | 25.2881 | 1492 | 0.6945 | 0.4412 | 0.6945 | 0.8334 |
| 0.0688 | 25.3220 | 1494 | 0.6800 | 0.4257 | 0.6800 | 0.8246 |
| 0.0688 | 25.3559 | 1496 | 0.6585 | 0.4452 | 0.6585 | 0.8115 |
| 0.0688 | 25.3898 | 1498 | 0.6735 | 0.4751 | 0.6735 | 0.8207 |
| 0.051 | 25.4237 | 1500 | 0.7019 | 0.5156 | 0.7019 | 0.8378 |
| 0.051 | 25.4576 | 1502 | 0.7023 | 0.5465 | 0.7023 | 0.8380 |
| 0.051 | 25.4915 | 1504 | 0.6540 | 0.5178 | 0.6540 | 0.8087 |
| 0.051 | 25.5254 | 1506 | 0.5986 | 0.5472 | 0.5986 | 0.7737 |
| 0.051 | 25.5593 | 1508 | 0.5904 | 0.6104 | 0.5904 | 0.7684 |
| 0.051 | 25.5932 | 1510 | 0.5842 | 0.6043 | 0.5842 | 0.7644 |
| 0.051 | 25.6271 | 1512 | 0.5952 | 0.6099 | 0.5952 | 0.7715 |
| 0.051 | 25.6610 | 1514 | 0.6068 | 0.5820 | 0.6068 | 0.7790 |
| 0.051 | 25.6949 | 1516 | 0.6090 | 0.5472 | 0.6090 | 0.7804 |
| 0.051 | 25.7288 | 1518 | 0.5929 | 0.5490 | 0.5929 | 0.7700 |
| 0.051 | 25.7627 | 1520 | 0.5754 | 0.5605 | 0.5754 | 0.7585 |
| 0.051 | 25.7966 | 1522 | 0.5611 | 0.5487 | 0.5611 | 0.7491 |
| 0.051 | 25.8305 | 1524 | 0.5637 | 0.5397 | 0.5637 | 0.7508 |
| 0.051 | 25.8644 | 1526 | 0.5930 | 0.5563 | 0.5930 | 0.7701 |
| 0.051 | 25.8983 | 1528 | 0.6057 | 0.5521 | 0.6057 | 0.7783 |
| 0.051 | 25.9322 | 1530 | 0.6095 | 0.5440 | 0.6095 | 0.7807 |
| 0.051 | 25.9661 | 1532 | 0.5898 | 0.5536 | 0.5898 | 0.7680 |
| 0.051 | 26.0 | 1534 | 0.5781 | 0.5428 | 0.5781 | 0.7604 |
| 0.051 | 26.0339 | 1536 | 0.5740 | 0.4951 | 0.5740 | 0.7577 |
| 0.051 | 26.0678 | 1538 | 0.5802 | 0.5013 | 0.5802 | 0.7617 |
| 0.051 | 26.1017 | 1540 | 0.5976 | 0.5069 | 0.5976 | 0.7730 |
| 0.051 | 26.1356 | 1542 | 0.6146 | 0.5257 | 0.6146 | 0.7839 |
| 0.051 | 26.1695 | 1544 | 0.6260 | 0.5303 | 0.6260 | 0.7912 |
| 0.051 | 26.2034 | 1546 | 0.6064 | 0.5257 | 0.6064 | 0.7787 |
| 0.051 | 26.2373 | 1548 | 0.5797 | 0.5048 | 0.5797 | 0.7614 |
| 0.051 | 26.2712 | 1550 | 0.5579 | 0.4658 | 0.5579 | 0.7469 |
| 0.051 | 26.3051 | 1552 | 0.5481 | 0.4807 | 0.5481 | 0.7403 |
| 0.051 | 26.3390 | 1554 | 0.5522 | 0.4847 | 0.5522 | 0.7431 |
| 0.051 | 26.3729 | 1556 | 0.5617 | 0.4874 | 0.5617 | 0.7495 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
pepijn223/phrivi-finetuned
|
pepijn223
| 2025-01-15T16:08:49Z
| 53
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"paligemma",
"image-text-to-text",
"generated_from_trainer",
"base_model:google/paligemma-3b-pt-224",
"base_model:finetune:google/paligemma-3b-pt-224",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-01-09T16:54:17Z
|
---
library_name: transformers
license: gemma
base_model: google/paligemma-3b-pt-224
tags:
- generated_from_trainer
model-index:
- name: phrivi-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phrivi-finetuned
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
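Usage is not documented; below is a minimal, hedged inference sketch for this image-text-to-text fine-tune using the Transformers PaliGemma classes. The image URL and prompt are placeholders, and if the processor was not pushed with the fine-tune it can be loaded from the base `google/paligemma-3b-pt-224` instead.
```python
# Hedged inference sketch: prompt and image URL are placeholders; the prompt format
# ("caption en", "answer en ...") follows base PaliGemma conventions and may need adjusting.
import requests
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "pepijn223/phrivi-finetuned"
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open(requests.get("https://example.com/sample.jpg", stream=True).raw)
inputs = processor(text="caption en", images=image, return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```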
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_hf with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
nbninh/fbc7e776-e353-4000-a7aa-e8927900b5ed
|
nbninh
| 2025-01-15T16:08:41Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"base_model:adapter:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T15:09:23Z
|
---
library_name: peft
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fbc7e776-e353-4000-a7aa-e8927900b5ed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6e6190eb26c4eb66_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6e6190eb26c4eb66_train_data.json
type:
field_input: user_input
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/fbc7e776-e353-4000-a7aa-e8927900b5ed
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/6e6190eb26c4eb66_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22a7e7a8-f3c8-4b2e-bae8-382fa9d6d294
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22a7e7a8-f3c8-4b2e-bae8-382fa9d6d294
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fbc7e776-e353-4000-a7aa-e8927900b5ed
This model is a fine-tuned version of [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1869
## Model description
More information needed
## Intended uses & limitations
More information needed
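Usage is not documented; the following is a minimal, hedged sketch of attaching this LoRA adapter to the base model from the axolotl config above with PEFT. Quantization and the llama3 chat template are omitted for brevity.
```python
# Hedged loading sketch: attaches the adapter to the base model listed in the config above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B"
adapter_id = "nbninh/fbc7e776-e353-4000-a7aa-e8927900b5ed"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```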
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1759 | 0.0109 | 200 | 1.1869 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
akhooli/setfit_ar_hs_mb
|
akhooli
| 2025-01-15T16:07:49Z
| 9
| 0
|
setfit
|
[
"setfit",
"safetensors",
"modernbert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:akhooli/sbert-nli-500k-triplets-MB",
"base_model:finetune:akhooli/sbert-nli-500k-triplets-MB",
"model-index",
"region:us"
] |
text-classification
| 2025-01-15T16:07:32Z
|
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: يا حاقد ع الاسلام السياسي
- text: 'بلد مخيف، صار القتل بحجه الشرف متل قتل بعوضة، واللي بيخوف اكتر من اللي واقف
مكتف ايديه ومش مساعد. وين كنآ، ووين وصلنآ، لمتى حنضل عايشين وساكتين!
'
- text: "من خلال المتابعة ..يتضح أن أكثر اللاعبين الذين يتم تسويقهم هم لاعبي امريكا\
\ الجنوبية وأقلهم الافارقة. \nمن خلال الواقع ..أكثر اللاعبين تهاونا ولعب على\
\ الواقف في آخر ٦ شهور من عقودهم هم لاعبي امريكا الجنوبية ."
- text: ' علم الحزب يا فهمانه ما حطوا لانه عم يحكي وطنيا ومشان ماحدا متلك يعترض. اذا
حطوا بتعترضي واذا ما حطوا كمان بتعترضي.'
- text: "شيوعي \nعلماني \nمسيحي\nانصار سنه \nصوفي \nيمثلك التجمع \nلا يمثلك التجمع\
\ \nاهلا بكم جميعا فنحن نريد بناء وطن ❤"
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: akhooli/sbert-nli-500k-triplets-MB
model-index:
- name: SetFit with akhooli/sbert-nli-500k-triplets-MB
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.7956709956709956
name: Accuracy
---
# SetFit with akhooli/sbert-nli-500k-triplets-MB
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [akhooli/sbert-nli-500k-triplets-MB](https://huggingface.co/akhooli/sbert-nli-500k-triplets-MB) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique, sketched in code after this list, that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
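A minimal, hedged sketch of this two-step recipe with the `setfit` Trainer API is shown below. The tiny in-memory dataset is a placeholder, since the actual training data for this model is not published; the batch size and epoch count mirror the hyperparameters listed under Training Details.
```python
# Hedged training sketch of the two-step SetFit recipe; the dataset is a placeholder.
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Start from the same Sentence Transformer body used by this model.
model = SetFitModel.from_pretrained("akhooli/sbert-nli-500k-triplets-MB")

train_ds = Dataset.from_dict({
    "text": [
        "placeholder hateful sentence 1", "placeholder hateful sentence 2",
        "placeholder neutral sentence 1", "placeholder neutral sentence 2",
    ],
    "label": ["positive", "positive", "negative", "negative"],
})

args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)

# Step 1 (contrastive fine-tuning of the embedding body) and step 2 (fitting the
# LogisticRegression classification head) both happen inside trainer.train().
trainer.train()
```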
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [akhooli/sbert-nli-500k-triplets-MB](https://huggingface.co/akhooli/sbert-nli-500k-triplets-MB)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 8192 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| positive | <ul><li>' سبحان الله الفلسطينيين شعب خاين في كل مكان \nلاحول ولا قوة إلا بالله'</li><li>'يا بيك عّم تخبرنا عن شي ما فينا تعملو نحن ماًعندنا نواب ولا وزراء بمثلونا بالدولة الا اذا زهقان وعبالك ليك'</li><li>'جوز كذابين منافقين…'</li></ul> |
| negative | <ul><li>'ربي لا تجعلني أسيء الظن بأحد ولا تجعل في قلبي شيئا على أحد ، اللهم أسألك قلباً نقياً صافيا'</li><li>'هشام حداد عامل فيها جون ستيوارت'</li><li>' بحياة اختك من وين بتجيبي اخبارك؟؟ من صغري وانا عبالي كون… LINK'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7957 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("akhooli/setfit_ar_hs_mb")
# Run inference
preds = model("يا حاقد ع الاسلام السياسي")
```
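If class probabilities are needed instead of hard labels, `SetFitModel` also exposes `predict_proba`; a small sketch (class ordering follows `model.labels`, and the exact return type can vary between SetFit versions):
```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("akhooli/setfit_ar_hs_mb")
# One probability per class for each input text, ordered as in model.labels
probs = model.predict_proba(["يا حاقد ع الاسلام السياسي"])
print(model.labels, probs)
```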
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 18.8388 | 185 |
| Label | Training Sample Count |
|:---------|:----------------------|
| negative | 5200 |
| positive | 4943 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: 6000
- sampling_strategy: undersampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- run_name: setfit_hate_52k_mb_6k
- eval_max_steps: -1
- load_best_model_at_end: False
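The hyperparameters above correspond to SetFit's `TrainingArguments`. A minimal training sketch, assuming the SetFit ≥ 1.0 `Trainer` API and a placeholder dataset with `text`/`label` columns (the CSV paths are hypothetical):
```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical data files; any dataset with "text" and "label" columns works.
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

model = SetFitModel.from_pretrained("akhooli/sbert-nli-500k-triplets-MB")

args = TrainingArguments(
    batch_size=(16, 16),               # (embedding phase, head phase)
    num_epochs=(1, 1),
    max_steps=6000,
    sampling_strategy="undersampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    metric="accuracy",
)
trainer.train()
print(trainer.evaluate())
```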
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0003 | 1 | 0.3373 | - |
| 0.0333 | 100 | 0.2955 | - |
| 0.0667 | 200 | 0.2535 | - |
| 0.1 | 300 | 0.2373 | - |
| 0.1333 | 400 | 0.2228 | - |
| 0.1667 | 500 | 0.1956 | - |
| 0.2 | 600 | 0.1768 | - |
| 0.2333 | 700 | 0.1489 | - |
| 0.2667 | 800 | 0.122 | - |
| 0.3 | 900 | 0.1045 | - |
| 0.3333 | 1000 | 0.086 | - |
| 0.3667 | 1100 | 0.0681 | - |
| 0.4 | 1200 | 0.067 | - |
| 0.4333 | 1300 | 0.0477 | - |
| 0.4667 | 1400 | 0.043 | - |
| 0.5 | 1500 | 0.0316 | - |
| 0.5333 | 1600 | 0.0251 | - |
| 0.5667 | 1700 | 0.0236 | - |
| 0.6 | 1800 | 0.0163 | - |
| 0.6333 | 1900 | 0.0148 | - |
| 0.6667 | 2000 | 0.0105 | - |
| 0.7 | 2100 | 0.018 | - |
| 0.7333 | 2200 | 0.013 | - |
| 0.7667 | 2300 | 0.0103 | - |
| 0.8 | 2400 | 0.0107 | - |
| 0.8333 | 2500 | 0.0115 | - |
| 0.8667 | 2600 | 0.0069 | - |
| 0.9 | 2700 | 0.0062 | - |
| 0.9333 | 2800 | 0.0074 | - |
| 0.9667 | 2900 | 0.0063 | - |
| 1.0 | 3000 | 0.0068 | - |
| 1.0333 | 3100 | 0.0048 | - |
| 1.0667 | 3200 | 0.0055 | - |
| 1.1 | 3300 | 0.0047 | - |
| 1.1333 | 3400 | 0.0043 | - |
| 1.1667 | 3500 | 0.0029 | - |
| 1.2 | 3600 | 0.0036 | - |
| 1.2333 | 3700 | 0.0034 | - |
| 1.2667 | 3800 | 0.0024 | - |
| 1.3 | 3900 | 0.0033 | - |
| 1.3333 | 4000 | 0.0042 | - |
| 1.3667 | 4100 | 0.0039 | - |
| 1.4 | 4200 | 0.0019 | - |
| 1.4333 | 4300 | 0.0022 | - |
| 1.4667 | 4400 | 0.0031 | - |
| 1.5 | 4500 | 0.0019 | - |
| 1.5333 | 4600 | 0.0036 | - |
| 1.5667 | 4700 | 0.0017 | - |
| 1.6 | 4800 | 0.0007 | - |
| 1.6333 | 4900 | 0.0006 | - |
| 1.6667 | 5000 | 0.0019 | - |
| 1.7 | 5100 | 0.0022 | - |
| 1.7333 | 5200 | 0.0013 | - |
| 1.7667 | 5300 | 0.0025 | - |
| 1.8 | 5400 | 0.0024 | - |
| 1.8333 | 5500 | 0.0013 | - |
| 1.8667 | 5600 | 0.0022 | - |
| 1.9 | 5700 | 0.0022 | - |
| 1.9333 | 5800 | 0.0019 | - |
| 1.9667 | 5900 | 0.0019 | - |
| 2.0 | 6000 | 0.0031 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.2.0.dev0
- Sentence Transformers: 3.3.1
- Transformers: 4.48.0
- PyTorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
nhungphammmmm/f4dd682c-79ff-4237-a141-c43695838f85
|
nhungphammmmm
| 2025-01-15T16:07:31Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T15:51:51Z
|
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f4dd682c-79ff-4237-a141-c43695838f85
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 756452db934ae1d1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/756452db934ae1d1_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhungphammmmm/f4dd682c-79ff-4237-a141-c43695838f85
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/756452db934ae1d1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 74979543-06fb-451f-b2f1-4786823d5776
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 74979543-06fb-451f-b2f1-4786823d5776
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f4dd682c-79ff-4237-a141-c43695838f85
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9338
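The repository contains only the LoRA adapter, so it has to be loaded on top of the base model. A minimal sketch, assuming the standard 🤗 Transformers + PEFT loading path (the prompt is a placeholder):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-1.5B-Instruct"
adapter_id = "nhungphammmmm/f4dd682c-79ff-4237-a141-c43695838f85"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```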
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8583 | 0.0195 | 200 | 0.9338 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso11/91645d6e-af35-438f-a2bb-ad7694222576
|
lesso11
| 2025-01-15T16:06:53Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T14:36:02Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Base-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 91645d6e-af35-438f-a2bb-ad7694222576
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Base-2407
bf16: true
chat_template: llama3
datasets:
- data_files:
- a807bb88794a6c5a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a807bb88794a6c5a_train_data.json
type:
field_instruction: prompt
field_output: label
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso11/91645d6e-af35-438f-a2bb-ad7694222576
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/a807bb88794a6c5a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f19c28d2-bbf3-4c64-b448-7d3ff5c4b03c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f19c28d2-bbf3-4c64-b448-7d3ff5c4b03c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 91645d6e-af35-438f-a2bb-ad7694222576
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
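For reference, a config like the one in the collapsed section above is normally launched through the Axolotl CLI. A minimal sketch, assuming Axolotl 0.4.x with `accelerate` installed and the YAML saved locally (the filename is a placeholder):
```bash
# Save the YAML from the collapsed section as config.yaml, then:
accelerate launch -m axolotl.cli.train config.yaml
```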
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0003 | 5 | nan |
| 0.0 | 0.0005 | 10 | nan |
| 0.0 | 0.0008 | 15 | nan |
| 0.0 | 0.0010 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
phonemetransformers/CHILDES-Icelandic-phoneme-tokenizer
|
phonemetransformers
| 2025-01-15T16:05:28Z
| 0
| 0
|
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T13:23:52Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
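Since the card has not been filled in yet, the snippet below is only an assumption: the repository name comes from the model id, and it is assumed the tokenizer loads through the standard 🤗 Transformers API (the example string is a placeholder and may need to be pre-converted to phonemes first):
```python
from transformers import AutoTokenizer

# Hypothetical usage; the expected input format (orthographic vs. phonemic) is not documented yet.
tokenizer = AutoTokenizer.from_pretrained("phonemetransformers/CHILDES-Icelandic-phoneme-tokenizer")
print(tokenizer.tokenize("halló heimur"))
```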
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phonemetransformers/CHILDES-Farsi-phoneme-tokenizer
|
phonemetransformers
| 2025-01-15T16:05:27Z
| 0
| 0
|
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T13:23:51Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phonemetransformers/CHILDES-Turkish-phoneme-tokenizer
|
phonemetransformers
| 2025-01-15T16:05:26Z
| 0
| 0
|
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T13:23:50Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phonemetransformers/CHILDES-Basque-phoneme-tokenizer
|
phonemetransformers
| 2025-01-15T16:05:24Z
| 0
| 0
|
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T13:23:48Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phonemetransformers/CHILDES-Danish-phoneme-tokenizer
|
phonemetransformers
| 2025-01-15T16:05:23Z
| 0
| 0
|
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T13:23:47Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phonemetransformers/CHILDES-Japanese-phoneme-tokenizer
|
phonemetransformers
| 2025-01-15T16:05:18Z
| 0
| 0
|
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T13:23:43Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phonemetransformers/CHILDES-Spanish-phoneme-tokenizer
|
phonemetransformers
| 2025-01-15T16:05:10Z
| 0
| 0
|
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T13:23:38Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phonemetransformers/CHILDES-German-phoneme-tokenizer
|
phonemetransformers
| 2025-01-15T16:05:08Z
| 0
| 0
|
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T13:23:36Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phonemetransformers/CHILDES-French-phoneme-tokenizer
|
phonemetransformers
| 2025-01-15T16:05:03Z
| 0
| 0
|
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T13:23:33Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
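In the absence of an official snippet, here is a minimal, hypothetical sketch that assumes the repository hosts a standard 🤗 tokenizer (the repository name suggests a phoneme-level tokenizer trained on French CHILDES data):
```python
from transformers import AutoTokenizer

# Hypothetical sketch: assumes a standard 🤗 tokenizer is stored in this repository.
tokenizer = AutoTokenizer.from_pretrained("phonemetransformers/CHILDES-French-phoneme-tokenizer")

# The expected input format (phonemized child-directed speech) is an assumption.
example = "bɔ̃ʒuʁ lə mɔ̃d"
print(tokenizer.tokenize(example))
print(tokenizer(example).input_ids)
```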
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phonemetransformers/CHILDES-English-phoneme-tokenizer
|
phonemetransformers
| 2025-01-15T16:04:55Z
| 0
| 0
|
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T13:23:31Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
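As with the French sibling card above, no snippet is provided; here is a minimal, hypothetical sketch under the same assumption of a standard 🤗 tokenizer:
```python
from transformers import AutoTokenizer

# Hypothetical sketch: assumes a standard 🤗 tokenizer is stored in this repository.
tokenizer = AutoTokenizer.from_pretrained("phonemetransformers/CHILDES-English-phoneme-tokenizer")
print(tokenizer.tokenize("h ə l oʊ"))  # the phonemized input format is an assumption
```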
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k17_task1_organization
|
MayBashendy
| 2025-01-15T16:04:43Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T15:46:57Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k17_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k17_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8110
- Qwk: 0.2754
- Mse: 1.8110
- Rmse: 1.3457
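The card provides no usage example. The following is a minimal, hypothetical inference sketch, assuming the checkpoint loads as a standard 🤗 sequence-classification model (the Qwk/MSE metrics suggest the head predicts an ordinal organization score):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical sketch: the head configuration and score scale are assumptions.
repo = "MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k17_task1_organization"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("نص المقال هنا", return_tensors="pt", truncation=True)  # placeholder essay text
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # interpret as an organization score/class depending on the head
```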
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
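For reference, the settings above map roughly onto the following 🤗 `TrainingArguments` (a hypothetical reconstruction; the original training script is not part of this card, and `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="arabert-task1-organization",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```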
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0270 | 2 | 7.2082 | -0.0162 | 7.2082 | 2.6848 |
| No log | 0.0541 | 4 | 4.4178 | 0.0833 | 4.4178 | 2.1018 |
| No log | 0.0811 | 6 | 3.6012 | -0.0423 | 3.6012 | 1.8977 |
| No log | 0.1081 | 8 | 3.9013 | -0.0796 | 3.9013 | 1.9752 |
| No log | 0.1351 | 10 | 2.2566 | 0.1194 | 2.2566 | 1.5022 |
| No log | 0.1622 | 12 | 1.8780 | 0.1593 | 1.8780 | 1.3704 |
| No log | 0.1892 | 14 | 2.1948 | -0.0625 | 2.1948 | 1.4815 |
| No log | 0.2162 | 16 | 2.2320 | -0.0472 | 2.2320 | 1.4940 |
| No log | 0.2432 | 18 | 2.2639 | 0.0305 | 2.2639 | 1.5046 |
| No log | 0.2703 | 20 | 2.2500 | 0.0606 | 2.2500 | 1.5000 |
| No log | 0.2973 | 22 | 2.1048 | 0.1157 | 2.1048 | 1.4508 |
| No log | 0.3243 | 24 | 2.0103 | 0.2018 | 2.0103 | 1.4179 |
| No log | 0.3514 | 26 | 1.9807 | 0.1869 | 1.9807 | 1.4074 |
| No log | 0.3784 | 28 | 1.9049 | 0.1698 | 1.9049 | 1.3802 |
| No log | 0.4054 | 30 | 1.8210 | 0.1714 | 1.8210 | 1.3494 |
| No log | 0.4324 | 32 | 1.7544 | 0.1714 | 1.7544 | 1.3245 |
| No log | 0.4595 | 34 | 1.6969 | 0.1154 | 1.6969 | 1.3027 |
| No log | 0.4865 | 36 | 1.6768 | 0.2569 | 1.6768 | 1.2949 |
| No log | 0.5135 | 38 | 1.8115 | 0.2459 | 1.8115 | 1.3459 |
| No log | 0.5405 | 40 | 1.8102 | 0.2459 | 1.8102 | 1.3454 |
| No log | 0.5676 | 42 | 1.8407 | 0.3465 | 1.8407 | 1.3567 |
| No log | 0.5946 | 44 | 1.6912 | 0.2735 | 1.6912 | 1.3005 |
| No log | 0.6216 | 46 | 1.5411 | 0.1682 | 1.5411 | 1.2414 |
| No log | 0.6486 | 48 | 1.4974 | 0.1524 | 1.4974 | 1.2237 |
| No log | 0.6757 | 50 | 1.5038 | 0.2243 | 1.5038 | 1.2263 |
| No log | 0.7027 | 52 | 1.5548 | 0.2202 | 1.5548 | 1.2469 |
| No log | 0.7297 | 54 | 1.5638 | 0.2321 | 1.5638 | 1.2505 |
| No log | 0.7568 | 56 | 1.5255 | 0.2301 | 1.5255 | 1.2351 |
| No log | 0.7838 | 58 | 1.4802 | 0.2182 | 1.4802 | 1.2166 |
| No log | 0.8108 | 60 | 1.4783 | 0.2056 | 1.4783 | 1.2159 |
| No log | 0.8378 | 62 | 1.5381 | 0.2407 | 1.5381 | 1.2402 |
| No log | 0.8649 | 64 | 1.5067 | 0.2545 | 1.5067 | 1.2275 |
| No log | 0.8919 | 66 | 1.4023 | 0.3509 | 1.4023 | 1.1842 |
| No log | 0.9189 | 68 | 1.2808 | 0.3966 | 1.2808 | 1.1317 |
| No log | 0.9459 | 70 | 1.1520 | 0.5116 | 1.1520 | 1.0733 |
| No log | 0.9730 | 72 | 1.1067 | 0.5238 | 1.1067 | 1.0520 |
| No log | 1.0 | 74 | 1.1387 | 0.5167 | 1.1387 | 1.0671 |
| No log | 1.0270 | 76 | 1.2173 | 0.5366 | 1.2173 | 1.1033 |
| No log | 1.0541 | 78 | 1.3335 | 0.4500 | 1.3335 | 1.1548 |
| No log | 1.0811 | 80 | 1.3091 | 0.5082 | 1.3091 | 1.1442 |
| No log | 1.1081 | 82 | 1.2611 | 0.5124 | 1.2611 | 1.1230 |
| No log | 1.1351 | 84 | 1.3162 | 0.4370 | 1.3162 | 1.1473 |
| No log | 1.1622 | 86 | 1.2898 | 0.4667 | 1.2898 | 1.1357 |
| No log | 1.1892 | 88 | 1.2614 | 0.4715 | 1.2614 | 1.1231 |
| No log | 1.2162 | 90 | 1.5039 | 0.3636 | 1.5039 | 1.2263 |
| No log | 1.2432 | 92 | 1.7922 | 0.2581 | 1.7922 | 1.3387 |
| No log | 1.2703 | 94 | 1.6562 | 0.2812 | 1.6562 | 1.2869 |
| No log | 1.2973 | 96 | 1.6710 | 0.2879 | 1.6710 | 1.2927 |
| No log | 1.3243 | 98 | 1.6769 | 0.2899 | 1.6769 | 1.2950 |
| No log | 1.3514 | 100 | 1.7390 | 0.3022 | 1.7390 | 1.3187 |
| No log | 1.3784 | 102 | 1.6659 | 0.3022 | 1.6659 | 1.2907 |
| No log | 1.4054 | 104 | 1.6323 | 0.3212 | 1.6323 | 1.2776 |
| No log | 1.4324 | 106 | 1.5980 | 0.3636 | 1.5980 | 1.2641 |
| No log | 1.4595 | 108 | 1.4648 | 0.4427 | 1.4648 | 1.2103 |
| No log | 1.4865 | 110 | 1.4642 | 0.5203 | 1.4642 | 1.2100 |
| No log | 1.5135 | 112 | 1.5540 | 0.3016 | 1.5540 | 1.2466 |
| No log | 1.5405 | 114 | 1.7152 | 0.2344 | 1.7152 | 1.3096 |
| No log | 1.5676 | 116 | 1.6887 | 0.3030 | 1.6887 | 1.2995 |
| No log | 1.5946 | 118 | 1.6345 | 0.3478 | 1.6345 | 1.2785 |
| No log | 1.6216 | 120 | 1.5541 | 0.3889 | 1.5541 | 1.2466 |
| No log | 1.6486 | 122 | 1.7321 | 0.3429 | 1.7321 | 1.3161 |
| No log | 1.6757 | 124 | 1.7849 | 0.3429 | 1.7849 | 1.3360 |
| No log | 1.7027 | 126 | 1.7942 | 0.3212 | 1.7942 | 1.3395 |
| No log | 1.7297 | 128 | 1.6782 | 0.3478 | 1.6782 | 1.2954 |
| No log | 1.7568 | 130 | 1.6333 | 0.2791 | 1.6333 | 1.2780 |
| No log | 1.7838 | 132 | 1.6812 | 0.2764 | 1.6812 | 1.2966 |
| No log | 1.8108 | 134 | 1.6148 | 0.2759 | 1.6148 | 1.2707 |
| No log | 1.8378 | 136 | 1.5928 | 0.3306 | 1.5928 | 1.2621 |
| No log | 1.8649 | 138 | 1.5961 | 0.3252 | 1.5961 | 1.2634 |
| No log | 1.8919 | 140 | 1.6132 | 0.2951 | 1.6132 | 1.2701 |
| No log | 1.9189 | 142 | 1.6740 | 0.3876 | 1.6740 | 1.2938 |
| No log | 1.9459 | 144 | 1.5854 | 0.3281 | 1.5854 | 1.2591 |
| No log | 1.9730 | 146 | 1.5721 | 0.3492 | 1.5721 | 1.2538 |
| No log | 2.0 | 148 | 1.5770 | 0.3759 | 1.5770 | 1.2558 |
| No log | 2.0270 | 150 | 1.4834 | 0.3360 | 1.4834 | 1.2179 |
| No log | 2.0541 | 152 | 1.3675 | 0.3871 | 1.3675 | 1.1694 |
| No log | 2.0811 | 154 | 1.3421 | 0.3577 | 1.3421 | 1.1585 |
| No log | 2.1081 | 156 | 1.5974 | 0.3942 | 1.5974 | 1.2639 |
| No log | 2.1351 | 158 | 1.7204 | 0.3597 | 1.7204 | 1.3117 |
| No log | 2.1622 | 160 | 1.5333 | 0.3852 | 1.5333 | 1.2382 |
| No log | 2.1892 | 162 | 1.4891 | 0.3622 | 1.4891 | 1.2203 |
| No log | 2.2162 | 164 | 1.6118 | 0.3459 | 1.6118 | 1.2696 |
| No log | 2.2432 | 166 | 1.8269 | 0.3358 | 1.8269 | 1.3516 |
| No log | 2.2703 | 168 | 1.6298 | 0.3259 | 1.6298 | 1.2766 |
| No log | 2.2973 | 170 | 1.5699 | 0.3134 | 1.5699 | 1.2530 |
| No log | 2.3243 | 172 | 1.5849 | 0.3529 | 1.5849 | 1.2589 |
| No log | 2.3514 | 174 | 1.6959 | 0.3478 | 1.6959 | 1.3023 |
| No log | 2.3784 | 176 | 1.8324 | 0.2857 | 1.8324 | 1.3537 |
| No log | 2.4054 | 178 | 1.6801 | 0.3741 | 1.6801 | 1.2962 |
| No log | 2.4324 | 180 | 1.6340 | 0.3623 | 1.6340 | 1.2783 |
| No log | 2.4595 | 182 | 1.5266 | 0.4427 | 1.5266 | 1.2355 |
| No log | 2.4865 | 184 | 1.4815 | 0.3770 | 1.4815 | 1.2172 |
| No log | 2.5135 | 186 | 1.4268 | 0.3130 | 1.4268 | 1.1945 |
| No log | 2.5405 | 188 | 1.3539 | 0.3478 | 1.3539 | 1.1636 |
| No log | 2.5676 | 190 | 1.4250 | 0.4545 | 1.4250 | 1.1937 |
| No log | 2.5946 | 192 | 1.6666 | 0.3768 | 1.6666 | 1.2910 |
| No log | 2.6216 | 194 | 1.8661 | 0.2878 | 1.8661 | 1.3661 |
| No log | 2.6486 | 196 | 1.8096 | 0.3121 | 1.8096 | 1.3452 |
| No log | 2.6757 | 198 | 1.5584 | 0.3942 | 1.5584 | 1.2484 |
| No log | 2.7027 | 200 | 1.3693 | 0.5481 | 1.3693 | 1.1702 |
| No log | 2.7297 | 202 | 1.3817 | 0.5185 | 1.3817 | 1.1755 |
| No log | 2.7568 | 204 | 1.4770 | 0.4478 | 1.4770 | 1.2153 |
| No log | 2.7838 | 206 | 1.6677 | 0.3597 | 1.6677 | 1.2914 |
| No log | 2.8108 | 208 | 1.7582 | 0.3453 | 1.7582 | 1.3260 |
| No log | 2.8378 | 210 | 1.6186 | 0.4060 | 1.6186 | 1.2722 |
| No log | 2.8649 | 212 | 1.4199 | 0.4286 | 1.4199 | 1.1916 |
| No log | 2.8919 | 214 | 1.3802 | 0.4567 | 1.3802 | 1.1748 |
| No log | 2.9189 | 216 | 1.5417 | 0.4380 | 1.5417 | 1.2416 |
| No log | 2.9459 | 218 | 2.0088 | 0.2207 | 2.0088 | 1.4173 |
| No log | 2.9730 | 220 | 2.2277 | 0.1408 | 2.2277 | 1.4925 |
| No log | 3.0 | 222 | 2.1407 | 0.0597 | 2.1407 | 1.4631 |
| No log | 3.0270 | 224 | 1.8514 | 0.1951 | 1.8514 | 1.3607 |
| No log | 3.0541 | 226 | 1.6430 | 0.3740 | 1.6430 | 1.2818 |
| No log | 3.0811 | 228 | 1.5615 | 0.4127 | 1.5615 | 1.2496 |
| No log | 3.1081 | 230 | 1.5855 | 0.4627 | 1.5855 | 1.2592 |
| No log | 3.1351 | 232 | 1.6564 | 0.3429 | 1.6564 | 1.2870 |
| No log | 3.1622 | 234 | 1.5814 | 0.4203 | 1.5814 | 1.2576 |
| No log | 3.1892 | 236 | 1.4123 | 0.4923 | 1.4123 | 1.1884 |
| No log | 3.2162 | 238 | 1.4214 | 0.4961 | 1.4214 | 1.1922 |
| No log | 3.2432 | 240 | 1.5848 | 0.3971 | 1.5848 | 1.2589 |
| No log | 3.2703 | 242 | 1.7020 | 0.3885 | 1.7020 | 1.3046 |
| No log | 3.2973 | 244 | 1.8595 | 0.2483 | 1.8595 | 1.3636 |
| No log | 3.3243 | 246 | 2.0443 | 0.1806 | 2.0443 | 1.4298 |
| No log | 3.3514 | 248 | 2.0324 | 0.1806 | 2.0324 | 1.4256 |
| No log | 3.3784 | 250 | 1.8233 | 0.2254 | 1.8233 | 1.3503 |
| No log | 3.4054 | 252 | 1.6159 | 0.3910 | 1.6159 | 1.2712 |
| No log | 3.4324 | 254 | 1.5849 | 0.3759 | 1.5849 | 1.2589 |
| No log | 3.4595 | 256 | 1.6667 | 0.3876 | 1.6667 | 1.2910 |
| No log | 3.4865 | 258 | 1.8424 | 0.2590 | 1.8424 | 1.3574 |
| No log | 3.5135 | 260 | 1.9403 | 0.1958 | 1.9403 | 1.3929 |
| No log | 3.5405 | 262 | 2.0503 | 0.1655 | 2.0502 | 1.4319 |
| No log | 3.5676 | 264 | 1.8608 | 0.2817 | 1.8608 | 1.3641 |
| No log | 3.5946 | 266 | 1.7975 | 0.3077 | 1.7975 | 1.3407 |
| No log | 3.6216 | 268 | 1.8322 | 0.2254 | 1.8322 | 1.3536 |
| No log | 3.6486 | 270 | 1.9213 | 0.2254 | 1.9213 | 1.3861 |
| No log | 3.6757 | 272 | 1.9880 | 0.2254 | 1.9880 | 1.4100 |
| No log | 3.7027 | 274 | 1.9393 | 0.2254 | 1.9393 | 1.3926 |
| No log | 3.7297 | 276 | 1.8321 | 0.2270 | 1.8321 | 1.3536 |
| No log | 3.7568 | 278 | 1.8982 | 0.2254 | 1.8982 | 1.3778 |
| No log | 3.7838 | 280 | 1.9421 | 0.2254 | 1.9421 | 1.3936 |
| No log | 3.8108 | 282 | 2.0218 | 0.2254 | 2.0218 | 1.4219 |
| No log | 3.8378 | 284 | 1.9314 | 0.2553 | 1.9314 | 1.3898 |
| No log | 3.8649 | 286 | 1.8592 | 0.2571 | 1.8592 | 1.3635 |
| No log | 3.8919 | 288 | 1.6958 | 0.2963 | 1.6958 | 1.3022 |
| No log | 3.9189 | 290 | 1.6157 | 0.3840 | 1.6157 | 1.2711 |
| No log | 3.9459 | 292 | 1.6598 | 0.3101 | 1.6598 | 1.2883 |
| No log | 3.9730 | 294 | 1.5943 | 0.4394 | 1.5943 | 1.2626 |
| No log | 4.0 | 296 | 1.4468 | 0.4615 | 1.4468 | 1.2028 |
| No log | 4.0270 | 298 | 1.3742 | 0.4567 | 1.3742 | 1.1723 |
| No log | 4.0541 | 300 | 1.3576 | 0.4531 | 1.3576 | 1.1651 |
| No log | 4.0811 | 302 | 1.5542 | 0.3881 | 1.5542 | 1.2467 |
| No log | 4.1081 | 304 | 1.7148 | 0.2979 | 1.7148 | 1.3095 |
| No log | 4.1351 | 306 | 1.7422 | 0.3099 | 1.7422 | 1.3199 |
| No log | 4.1622 | 308 | 1.8653 | 0.2817 | 1.8653 | 1.3658 |
| No log | 4.1892 | 310 | 1.8779 | 0.2817 | 1.8779 | 1.3704 |
| No log | 4.2162 | 312 | 1.9154 | 0.2517 | 1.9154 | 1.3840 |
| No log | 4.2432 | 314 | 1.8347 | 0.2817 | 1.8347 | 1.3545 |
| No log | 4.2703 | 316 | 1.7254 | 0.2937 | 1.7254 | 1.3135 |
| No log | 4.2973 | 318 | 1.7849 | 0.2937 | 1.7849 | 1.3360 |
| No log | 4.3243 | 320 | 2.0441 | 0.2207 | 2.0441 | 1.4297 |
| No log | 4.3514 | 322 | 2.0704 | 0.2207 | 2.0704 | 1.4389 |
| No log | 4.3784 | 324 | 2.0644 | 0.2222 | 2.0644 | 1.4368 |
| No log | 4.4054 | 326 | 1.8950 | 0.2817 | 1.8950 | 1.3766 |
| No log | 4.4324 | 328 | 1.6892 | 0.3088 | 1.6892 | 1.2997 |
| No log | 4.4595 | 330 | 1.6061 | 0.3284 | 1.6061 | 1.2673 |
| No log | 4.4865 | 332 | 1.6050 | 0.3284 | 1.6050 | 1.2669 |
| No log | 4.5135 | 334 | 1.6380 | 0.2707 | 1.6380 | 1.2798 |
| No log | 4.5405 | 336 | 1.6928 | 0.2628 | 1.6928 | 1.3011 |
| No log | 4.5676 | 338 | 1.6034 | 0.3088 | 1.6034 | 1.2662 |
| No log | 4.5946 | 340 | 1.5406 | 0.3704 | 1.5406 | 1.2412 |
| No log | 4.6216 | 342 | 1.6825 | 0.2899 | 1.6825 | 1.2971 |
| No log | 4.6486 | 344 | 1.7879 | 0.2553 | 1.7879 | 1.3371 |
| No log | 4.6757 | 346 | 1.7188 | 0.3000 | 1.7188 | 1.3110 |
| No log | 4.7027 | 348 | 1.6851 | 0.2794 | 1.6851 | 1.2981 |
| No log | 4.7297 | 350 | 1.7611 | 0.2256 | 1.7611 | 1.3271 |
| No log | 4.7568 | 352 | 1.9253 | 0.2090 | 1.9253 | 1.3875 |
| No log | 4.7838 | 354 | 2.0759 | 0.1515 | 2.0759 | 1.4408 |
| No log | 4.8108 | 356 | 2.0925 | 0.1571 | 2.0925 | 1.4465 |
| No log | 4.8378 | 358 | 2.0788 | 0.1944 | 2.0788 | 1.4418 |
| No log | 4.8649 | 360 | 1.9254 | 0.2553 | 1.9254 | 1.3876 |
| No log | 4.8919 | 362 | 1.6860 | 0.3088 | 1.6860 | 1.2985 |
| No log | 4.9189 | 364 | 1.6780 | 0.3459 | 1.6780 | 1.2954 |
| No log | 4.9459 | 366 | 1.6791 | 0.3459 | 1.6791 | 1.2958 |
| No log | 4.9730 | 368 | 1.6670 | 0.4242 | 1.6670 | 1.2911 |
| No log | 5.0 | 370 | 1.5885 | 0.4122 | 1.5885 | 1.2604 |
| No log | 5.0270 | 372 | 1.5051 | 0.3906 | 1.5051 | 1.2268 |
| No log | 5.0541 | 374 | 1.4867 | 0.3810 | 1.4867 | 1.2193 |
| No log | 5.0811 | 376 | 1.5628 | 0.4 | 1.5628 | 1.2501 |
| No log | 5.1081 | 378 | 1.6585 | 0.3731 | 1.6585 | 1.2878 |
| No log | 5.1351 | 380 | 1.8380 | 0.2734 | 1.8380 | 1.3557 |
| No log | 5.1622 | 382 | 1.9378 | 0.2270 | 1.9378 | 1.3920 |
| No log | 5.1892 | 384 | 1.9140 | 0.2270 | 1.9140 | 1.3835 |
| No log | 5.2162 | 386 | 1.7157 | 0.3066 | 1.7157 | 1.3099 |
| No log | 5.2432 | 388 | 1.5721 | 0.3910 | 1.5721 | 1.2538 |
| No log | 5.2703 | 390 | 1.5407 | 0.4375 | 1.5407 | 1.2413 |
| No log | 5.2973 | 392 | 1.4930 | 0.4375 | 1.4930 | 1.2219 |
| No log | 5.3243 | 394 | 1.4746 | 0.4375 | 1.4746 | 1.2143 |
| No log | 5.3514 | 396 | 1.4671 | 0.4308 | 1.4671 | 1.2113 |
| No log | 5.3784 | 398 | 1.4099 | 0.4341 | 1.4099 | 1.1874 |
| No log | 5.4054 | 400 | 1.4354 | 0.4394 | 1.4354 | 1.1981 |
| No log | 5.4324 | 402 | 1.5180 | 0.3881 | 1.5180 | 1.2321 |
| No log | 5.4595 | 404 | 1.6601 | 0.3433 | 1.6601 | 1.2885 |
| No log | 5.4865 | 406 | 1.7180 | 0.3043 | 1.7180 | 1.3107 |
| No log | 5.5135 | 408 | 1.6135 | 0.3556 | 1.6135 | 1.2702 |
| No log | 5.5405 | 410 | 1.5487 | 0.4328 | 1.5487 | 1.2445 |
| No log | 5.5676 | 412 | 1.6277 | 0.3731 | 1.6277 | 1.2758 |
| No log | 5.5946 | 414 | 1.7122 | 0.3235 | 1.7122 | 1.3085 |
| No log | 5.6216 | 416 | 1.8522 | 0.2734 | 1.8522 | 1.3610 |
| No log | 5.6486 | 418 | 1.9405 | 0.2553 | 1.9405 | 1.3930 |
| No log | 5.6757 | 420 | 1.9563 | 0.2553 | 1.9563 | 1.3987 |
| No log | 5.7027 | 422 | 1.8105 | 0.2734 | 1.8105 | 1.3455 |
| No log | 5.7297 | 424 | 1.6032 | 0.4242 | 1.6032 | 1.2662 |
| No log | 5.7568 | 426 | 1.4874 | 0.3968 | 1.4874 | 1.2196 |
| No log | 5.7838 | 428 | 1.5161 | 0.368 | 1.5161 | 1.2313 |
| No log | 5.8108 | 430 | 1.7124 | 0.4091 | 1.7124 | 1.3086 |
| No log | 5.8378 | 432 | 1.8787 | 0.2734 | 1.8787 | 1.3706 |
| No log | 5.8649 | 434 | 1.9319 | 0.2553 | 1.9319 | 1.3899 |
| No log | 5.8919 | 436 | 1.8562 | 0.2734 | 1.8562 | 1.3624 |
| No log | 5.9189 | 438 | 1.7858 | 0.2899 | 1.7858 | 1.3363 |
| No log | 5.9459 | 440 | 1.9567 | 0.2254 | 1.9567 | 1.3988 |
| No log | 5.9730 | 442 | 2.0003 | 0.2254 | 2.0003 | 1.4143 |
| No log | 6.0 | 444 | 1.9405 | 0.2254 | 1.9405 | 1.3930 |
| No log | 6.0270 | 446 | 1.7613 | 0.2899 | 1.7613 | 1.3271 |
| No log | 6.0541 | 448 | 1.6111 | 0.4242 | 1.6111 | 1.2693 |
| No log | 6.0811 | 450 | 1.6096 | 0.3731 | 1.6096 | 1.2687 |
| No log | 6.1081 | 452 | 1.8084 | 0.2734 | 1.8084 | 1.3447 |
| No log | 6.1351 | 454 | 2.0332 | 0.2254 | 2.0332 | 1.4259 |
| No log | 6.1622 | 456 | 2.0020 | 0.2254 | 2.0020 | 1.4149 |
| No log | 6.1892 | 458 | 1.8074 | 0.2734 | 1.8074 | 1.3444 |
| No log | 6.2162 | 460 | 1.5295 | 0.4265 | 1.5295 | 1.2367 |
| No log | 6.2432 | 462 | 1.4173 | 0.5191 | 1.4173 | 1.1905 |
| No log | 6.2703 | 464 | 1.4113 | 0.4882 | 1.4113 | 1.1880 |
| No log | 6.2973 | 466 | 1.5022 | 0.4580 | 1.5022 | 1.2256 |
| No log | 6.3243 | 468 | 1.6681 | 0.3731 | 1.6681 | 1.2915 |
| No log | 6.3514 | 470 | 1.7982 | 0.3407 | 1.7982 | 1.3410 |
| No log | 6.3784 | 472 | 1.7536 | 0.3284 | 1.7536 | 1.3242 |
| No log | 6.4054 | 474 | 1.5888 | 0.4179 | 1.5888 | 1.2605 |
| No log | 6.4324 | 476 | 1.4139 | 0.5303 | 1.4139 | 1.1891 |
| No log | 6.4595 | 478 | 1.4145 | 0.4962 | 1.4145 | 1.1893 |
| No log | 6.4865 | 480 | 1.5579 | 0.4242 | 1.5579 | 1.2481 |
| No log | 6.5135 | 482 | 1.7630 | 0.2754 | 1.7630 | 1.3278 |
| No log | 6.5405 | 484 | 1.8474 | 0.2411 | 1.8474 | 1.3592 |
| No log | 6.5676 | 486 | 1.7922 | 0.2446 | 1.7922 | 1.3387 |
| No log | 6.5946 | 488 | 1.6658 | 0.4242 | 1.6658 | 1.2907 |
| No log | 6.6216 | 490 | 1.5592 | 0.4098 | 1.5592 | 1.2487 |
| No log | 6.6486 | 492 | 1.5511 | 0.3761 | 1.5511 | 1.2454 |
| No log | 6.6757 | 494 | 1.5523 | 0.4500 | 1.5523 | 1.2459 |
| No log | 6.7027 | 496 | 1.6037 | 0.4031 | 1.6037 | 1.2664 |
| No log | 6.7297 | 498 | 1.6611 | 0.3731 | 1.6611 | 1.2888 |
| 0.3936 | 6.7568 | 500 | 1.6958 | 0.3235 | 1.6958 | 1.3022 |
| 0.3936 | 6.7838 | 502 | 1.6762 | 0.3235 | 1.6762 | 1.2947 |
| 0.3936 | 6.8108 | 504 | 1.6236 | 0.3852 | 1.6236 | 1.2742 |
| 0.3936 | 6.8378 | 506 | 1.5538 | 0.3852 | 1.5538 | 1.2465 |
| 0.3936 | 6.8649 | 508 | 1.5990 | 0.3852 | 1.5990 | 1.2645 |
| 0.3936 | 6.8919 | 510 | 1.6781 | 0.3407 | 1.6781 | 1.2954 |
| 0.3936 | 6.9189 | 512 | 1.7600 | 0.2920 | 1.7600 | 1.3267 |
| 0.3936 | 6.9459 | 514 | 1.7322 | 0.2920 | 1.7322 | 1.3161 |
| 0.3936 | 6.9730 | 516 | 1.6869 | 0.3556 | 1.6869 | 1.2988 |
| 0.3936 | 7.0 | 518 | 1.6151 | 0.3609 | 1.6151 | 1.2709 |
| 0.3936 | 7.0270 | 520 | 1.5323 | 0.4478 | 1.5323 | 1.2378 |
| 0.3936 | 7.0541 | 522 | 1.4406 | 0.4603 | 1.4406 | 1.2003 |
| 0.3936 | 7.0811 | 524 | 1.4687 | 0.4662 | 1.4687 | 1.2119 |
| 0.3936 | 7.1081 | 526 | 1.5421 | 0.4179 | 1.5421 | 1.2418 |
| 0.3936 | 7.1351 | 528 | 1.6858 | 0.2920 | 1.6858 | 1.2984 |
| 0.3936 | 7.1622 | 530 | 1.8141 | 0.2754 | 1.8141 | 1.3469 |
| 0.3936 | 7.1892 | 532 | 2.0164 | 0.1690 | 2.0164 | 1.4200 |
| 0.3936 | 7.2162 | 534 | 2.0350 | 0.1690 | 2.0350 | 1.4265 |
| 0.3936 | 7.2432 | 536 | 1.8110 | 0.2754 | 1.8110 | 1.3457 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
sahandrez/rloo-unpaired-Qwen2.5-1.5B-ultrafeedback-binarized-20250114-142811
|
sahandrez
| 2025-01-15T16:03:40Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"rloo",
"conversational",
"arxiv:2402.14740",
"base_model:sahandrez/Qwen2.5-1.5B-sft-uf",
"base_model:finetune:sahandrez/Qwen2.5-1.5B-sft-uf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-14T19:28:22Z
|
---
base_model: sahandrez/Qwen2.5-1.5B-sft-uf
library_name: transformers
model_name: rloo-unpaired-Qwen2.5-1.5B-ultrafeedback-binarized-20250114-142811
tags:
- generated_from_trainer
- trl
- rloo
licence: license
---
# Model Card for rloo-unpaired-Qwen2.5-1.5B-ultrafeedback-binarized-20250114-142811
This model is a fine-tuned version of [sahandrez/Qwen2.5-1.5B-sft-uf](https://huggingface.co/sahandrez/Qwen2.5-1.5B-sft-uf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sahandrez/rloo-unpaired-Qwen2.5-1.5B-ultrafeedback-binarized-20250114-142811", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/unpaired_rlhf/sahand/runs/w2hc91ea)
This model was trained with RLOO, a method introduced in [Back to Basics: Revisiting REINFORCE-Style Optimization for Learning from Human Feedback in LLMs](https://huggingface.co/papers/2402.14740).
### Framework versions
- TRL: 0.13.0
- Transformers: 4.48.0
- Pytorch: 2.4.0
- Datasets: 3.1.0
- Tokenizers: 0.21.0
## Citations
Cite RLOO as:
```bibtex
@inproceedings{ahmadian2024back,
title = {{Back to Basics: Revisiting REINFORCE-Style Optimization for Learning from Human Feedback in LLMs}},
author = {Arash Ahmadian and Chris Cremer and Matthias Gall{\'{e}} and Marzieh Fadaee and Julia Kreutzer and Olivier Pietquin and Ahmet {\"{U}}st{\"{u}}n and Sara Hooker},
year = 2024,
booktitle = {Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), {ACL} 2024, Bangkok, Thailand, August 11-16, 2024},
publisher = {Association for Computational Linguistics},
pages = {12248--12267},
editor = {Lun{-}Wei Ku and Andre Martins and Vivek Srikumar},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
itlwas/Custom-KoLLM-13B-v1-Q4_K_M-GGUF
|
itlwas
| 2025-01-15T16:00:26Z
| 19
| 0
|
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"ko",
"dataset:kyujinpy/KOR-OpenOrca-Platypus-v3",
"base_model:PracticeLLM/Custom-KoLLM-13B-v1",
"base_model:quantized:PracticeLLM/Custom-KoLLM-13B-v1",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-15T15:59:53Z
|
---
language:
- ko
datasets:
- kyujinpy/KOR-OpenOrca-Platypus-v3
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
tags:
- llama-cpp
- gguf-my-repo
base_model: PracticeLLM/Custom-KoLLM-13B-v1
---
# itlwas/Custom-KoLLM-13B-v1-Q4_K_M-GGUF
This model was converted to GGUF format from [`PracticeLLM/Custom-KoLLM-13B-v1`](https://huggingface.co/PracticeLLM/Custom-KoLLM-13B-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/PracticeLLM/Custom-KoLLM-13B-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo itlwas/Custom-KoLLM-13B-v1-Q4_K_M-GGUF --hf-file custom-kollm-13b-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo itlwas/Custom-KoLLM-13B-v1-Q4_K_M-GGUF --hf-file custom-kollm-13b-v1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo itlwas/Custom-KoLLM-13B-v1-Q4_K_M-GGUF --hf-file custom-kollm-13b-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo itlwas/Custom-KoLLM-13B-v1-Q4_K_M-GGUF --hf-file custom-kollm-13b-v1-q4_k_m.gguf -c 2048
```
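The same GGUF file can also be used from Python via the community `llama-cpp-python` bindings (a hedged sketch; the package and its keyword arguments may differ across versions):
```python
from llama_cpp import Llama

# Hypothetical sketch using llama-cpp-python; the GGUF is fetched from the Hub on first use.
llm = Llama.from_pretrained(
    repo_id="itlwas/Custom-KoLLM-13B-v1-Q4_K_M-GGUF",
    filename="custom-kollm-13b-v1-q4_k_m.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```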
|
ivangrapher/f067d804-cf60-45c6-8409-593a1c2e69c0
|
ivangrapher
| 2025-01-15T16:00:17Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-15T15:56:57Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f067d804-cf60-45c6-8409-593a1c2e69c0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cb0c735aa9ecc20c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cb0c735aa9ecc20c_train_data.json
type:
field_input: init_response
field_instruction: critic_prompt
field_output: revision_response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: ivangrapher/f067d804-cf60-45c6-8409-593a1c2e69c0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/cb0c735aa9ecc20c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 53e19756-9979-4666-a32e-c3622fab4ea4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 53e19756-9979-4666-a32e-c3622fab4ea4
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f067d804-cf60-45c6-8409-593a1c2e69c0
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (ADAMW_TORCH) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | nan |
| 0.0 | 0.0060 | 8 | nan |
| 0.0 | 0.0121 | 16 | nan |
| 0.0 | 0.0181 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhoxinh/1d537965-820c-4214-8f76-78946dc4e7c2
|
nhoxinh
| 2025-01-15T15:57:48Z
| 9
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T15:42:56Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1d537965-820c-4214-8f76-78946dc4e7c2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0754f1f9a7257ab1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0754f1f9a7257ab1_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/1d537965-820c-4214-8f76-78946dc4e7c2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/0754f1f9a7257ab1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5b623591-63a4-4fae-839a-cad854463700
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5b623591-63a4-4fae-839a-cad854463700
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1d537965-820c-4214-8f76-78946dc4e7c2
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0409
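The repository contains a LoRA adapter rather than full model weights. A minimal, hypothetical sketch for attaching it to the base model with 🤗 PEFT (the adapter is assumed to follow the standard PEFT layout):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch: load the base model, then apply this LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-1.7B")
model = PeftModel.from_pretrained(base, "nhoxinh/1d537965-820c-4214-8f76-78946dc4e7c2")
model = model.merge_and_unload()  # optional: fold the adapter into the base weights
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-1.7B")
```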
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.081 | 0.0376 | 200 | 0.0409 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Patnev71/floorplanv3
|
Patnev71
| 2025-01-15T15:57:47Z
| 5
| 0
|
transformers
|
[
"transformers",
"safetensors",
"segformer",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-01-15T15:57:45Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
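No code is given; the following is a minimal, hypothetical sketch that assumes the checkpoint is a SegFormer semantic-segmentation model (as the `segformer` tag suggests) with an image processor saved alongside it:
```python
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

# Hypothetical sketch: model class, processor, and label set are assumptions.
processor = AutoImageProcessor.from_pretrained("Patnev71/floorplanv3")
model = SegformerForSemanticSegmentation.from_pretrained("Patnev71/floorplanv3")

image = Image.open("floorplan.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits  # (batch, num_labels, height/4, width/4)
```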
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bbytxt/d94c1346-5615-4f88-b572-e185e31c489a
|
bbytxt
| 2025-01-15T15:57:38Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"license:apache-2.0",
"region:us"
] | null | 2025-01-15T15:04:40Z
|
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d94c1346-5615-4f88-b572-e185e31c489a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 7708f8a044b2986d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7708f8a044b2986d_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: bbytxt/d94c1346-5615-4f88-b572-e185e31c489a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/7708f8a044b2986d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0a956f4b-2530-421e-9d67-9975bdd289dc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0a956f4b-2530-421e-9d67-9975bdd289dc
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d94c1346-5615-4f88-b572-e185e31c489a
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3177
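This repository holds a LoRA adapter for the 64k-context Yarn-Mistral base model. A minimal, hypothetical loading sketch with 🤗 PEFT (note that the base model requires `trust_remote_code=True`, as in the training config above):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch: attach this LoRA adapter to the Yarn-Mistral-7b-64k base model.
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Yarn-Mistral-7b-64k", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "bbytxt/d94c1346-5615-4f88-b572-e185e31c489a")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Mistral-7b-64k")

inputs = tokenizer("Hello, world", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```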
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (ADAMW_TORCH) with betas=(0.9, 0.999) and epsilon=1e-08, plus optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.4345 | 0.0008 | 1 | 2.2083 |
| 2.7382 | 0.0377 | 50 | 1.6734 |
| 1.6323 | 0.0754 | 100 | 1.5817 |
| 2.5445 | 0.1132 | 150 | 1.4006 |
| 1.7847 | 0.1509 | 200 | 1.3177 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso10/7f7eaa0f-17d1-424f-922a-7f1286dfe318
|
lesso10
| 2025-01-15T15:56:07Z
| 13
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T15:42:41Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7f7eaa0f-17d1-424f-922a-7f1286dfe318
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: true
chat_template: llama3
datasets:
- data_files:
- 0754f1f9a7257ab1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0754f1f9a7257ab1_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso10/7f7eaa0f-17d1-424f-922a-7f1286dfe318
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/0754f1f9a7257ab1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5b623591-63a4-4fae-839a-cad854463700
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5b623591-63a4-4fae-839a-cad854463700
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7f7eaa0f-17d1-424f-922a-7f1286dfe318
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0009 | 5 | nan |
| 0.0 | 0.0019 | 10 | nan |
| 0.0 | 0.0028 | 15 | nan |
| 0.0 | 0.0038 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nadejdatarabukina/fffd188d-8ff3-46b8-a154-566ea0e51a69
|
nadejdatarabukina
| 2025-01-15T15:50:51Z
| 16
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-15T15:42:45Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fffd188d-8ff3-46b8-a154-566ea0e51a69
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0754f1f9a7257ab1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0754f1f9a7257ab1_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 6
gradient_checkpointing: false
group_by_length: false
hub_model_id: nadejdatarabukina/fffd188d-8ff3-46b8-a154-566ea0e51a69
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/0754f1f9a7257ab1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5b623591-63a4-4fae-839a-cad854463700
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5b623591-63a4-4fae-839a-cad854463700
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fffd188d-8ff3-46b8-a154-566ea0e51a69
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
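Because this repository contains only LoRA adapter weights, they have to be applied on top of the base model at load time. Below is a minimal sketch with PEFT (assuming the adapter matches `unsloth/SmolLM2-1.7B`, as configured above; since the reported evaluation loss is `nan`, outputs should be sanity-checked):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model the adapter was trained against (see the axolotl config above)
base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-1.7B", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-1.7B")

# Attach the LoRA adapter from this repository
model = PeftModel.from_pretrained(base, "nadejdatarabukina/fffd188d-8ff3-46b8-a154-566ea0e51a69")
model.eval()

inputs = tokenizer("Hello, world", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```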
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 12
- optimizer: AdamW (PyTorch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | nan |
| 0.0 | 0.0023 | 8 | nan |
| 0.0 | 0.0045 | 16 | nan |
| 0.0 | 0.0068 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
VerbACxSS/sempl-it-gpt2-small-italian
|
VerbACxSS
| 2025-01-15T15:50:42Z
| 48
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"legal",
"it",
"base_model:GroNLP/gpt2-small-italian",
"base_model:finetune:GroNLP/gpt2-small-italian",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-27T08:58:19Z
|
---
license: mit
language:
- it
base_model:
- GroNLP/gpt2-small-italian
pipeline_tag: text-generation
tags:
- legal
library_name: transformers
---
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned simplification model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("VerbACxSS/sempl-it-gpt2-small-italian", model_max_length=1024)
model = AutoModelForCausalLM.from_pretrained("VerbACxSS/sempl-it-gpt2-small-italian")
model.eval()

text_to_simplify = 'Nella fattispecie, questo documento è di natura prescrittiva'
prompt = f'### [Input]:\n{text_to_simplify}\n\n###[Output]:\n'

# Encode the prompt, generate, and keep only the simplified text after the [Output] marker
x = tokenizer(prompt, max_length=1024, truncation=True, padding=True, return_tensors='pt').input_ids
y = model.generate(x, max_length=1024)[0]
y_dec = tokenizer.decode(y)
output = y_dec.split('###[Output]:\n')[1].split('<|endoftext|>')[0].strip()
print(output)
```
## Acknowledgements
This contribution is a result of the research conducted within the framework of the PRIN 2020 (Progetti di Rilevante Interesse Nazionale) "VerbACxSS: on analytic verbs, complexity, synthetic verbs, and simplification. For accessibility" (Prot. 2020BJKB9M), funded by the Italian Ministero dell'Università e della Ricerca.
|
dimasik2987/3c9903ab-f11a-4346-b33b-53331b56f77d
|
dimasik2987
| 2025-01-15T15:50:29Z
| 12
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-15T15:42:50Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3c9903ab-f11a-4346-b33b-53331b56f77d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0754f1f9a7257ab1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0754f1f9a7257ab1_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: dimasik2987/3c9903ab-f11a-4346-b33b-53331b56f77d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/0754f1f9a7257ab1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5b623591-63a4-4fae-839a-cad854463700
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5b623591-63a4-4fae-839a-cad854463700
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 3c9903ab-f11a-4346-b33b-53331b56f77d
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 0.0 | 0.0009 | 5 | nan |
| 0.0 | 0.0019 | 10 | nan |
| 0.0 | 0.0028 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
VerbACxSS/sempl-it-mt5-small
|
VerbACxSS
| 2025-01-15T15:50:21Z
| 68
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"legal",
"it",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-12-27T08:58:49Z
|
---
license: mit
language:
- it
base_model:
- google/mt5-small
pipeline_tag: text2text-generation
tags:
- legal
library_name: transformers
---
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned simplification model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("VerbACxSS/sempl-it-mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("VerbACxSS/sempl-it-mt5-small")
model.eval()

text_to_simplify = 'Nella fattispecie, questo documento è di natura prescrittiva'
prompt = f'semplifica: {text_to_simplify}'

# Encode the prompt, generate, and decode the simplified text
x = tokenizer(prompt, max_length=1024, truncation=True, padding=True, return_tensors='pt').input_ids
y = model.generate(x, max_length=1024)[0]
output = tokenizer.decode(y, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(output)
```
## Acknowledgements
This contribution is a result of the research conducted within the framework of the PRIN 2020 (Progetti di Rilevante Interesse Nazionale) "VerbACxSS: on analytic verbs, complexity, synthetic verbs, and simplification. For accessibility" (Prot. 2020BJKB9M), funded by the Italian Ministero dell'Università e della Ricerca.
|
laquythang/49668481-8ea3-4632-b312-cd2956370777
|
laquythang
| 2025-01-15T15:50:05Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T15:32:51Z
|
---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 49668481-8ea3-4632-b312-cd2956370777
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 59ee0b6235c26daa_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/59ee0b6235c26daa_train_data.json
type:
field_input: pronoun
field_instruction: sentence
field_output: definition
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: laquythang/49668481-8ea3-4632-b312-cd2956370777
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/59ee0b6235c26daa_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cac36870-8e18-426e-9717-ca6857936964
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cac36870-8e18-426e-9717-ca6857936964
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 49668481-8ea3-4632-b312-cd2956370777
This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0637
## Model description
More information needed
## Intended uses & limitations
More information needed
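This repository holds a LoRA adapter for Qwen/Qwen1.5-7B. A minimal PEFT sketch that loads it and optionally merges it into the base weights for standalone inference (assuming sufficient GPU memory; not part of the original training code):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-7B", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B")

# Attach the adapter from this repository
model = PeftModel.from_pretrained(base, "laquythang/49668481-8ea3-4632-b312-cd2956370777")

# Optionally fold the LoRA weights into the base model for standalone inference
model = model.merge_and_unload()
```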
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8976 | 0.7073 | 200 | 1.0637 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
aurelvu/bert-bibtex-classifier
|
aurelvu
| 2025-01-15T15:49:15Z
| 171
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-01-12T22:59:50Z
|
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
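The repository is tagged for token classification, so one plausible way to try it is the `pipeline` API. This is only a sketch under that assumption; the label set of this checkpoint is not documented here, and the example reference string is hypothetical:
```python
from transformers import pipeline

# The entity labels returned depend on this checkpoint's (undocumented) config
classifier = pipeline(
    "token-classification",
    model="aurelvu/bert-bibtex-classifier",
    aggregation_strategy="simple",
)
print(classifier("Vaswani, A. et al. (2017). Attention is all you need. NeurIPS."))
```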
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
nhung01/73decf8f-573c-4ede-a607-775c7da77ed4
|
nhung01
| 2025-01-15T15:48:40Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T15:32:43Z
|
---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 73decf8f-573c-4ede-a607-775c7da77ed4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 59ee0b6235c26daa_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/59ee0b6235c26daa_train_data.json
type:
field_input: pronoun
field_instruction: sentence
field_output: definition
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/73decf8f-573c-4ede-a607-775c7da77ed4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/59ee0b6235c26daa_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cac36870-8e18-426e-9717-ca6857936964
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cac36870-8e18-426e-9717-ca6857936964
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 73decf8f-573c-4ede-a607-775c7da77ed4
This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8979 | 0.7073 | 200 | 1.0612 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k16_task1_organization
|
MayBashendy
| 2025-01-15T15:46:25Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T15:28:55Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k16_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k16_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3764
- Qwk: 0.4593
- Mse: 1.3764
- Rmse: 1.1732
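The Qwk value above is the quadratic weighted kappa between predicted and reference scores. For reference, the reported metrics can be reproduced with scikit-learn; this is only an illustrative sketch with made-up labels, not the actual evaluation code:
```python
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Hypothetical integer scores, only to illustrate how the reported metrics are defined
y_true = [0, 1, 2, 3, 2, 1]
y_pred = [0, 2, 2, 3, 1, 1]

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = mse ** 0.5
print(qwk, mse, rmse)
```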
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0286 | 2 | 6.8337 | 0.0242 | 6.8337 | 2.6141 |
| No log | 0.0571 | 4 | 4.4891 | 0.0928 | 4.4891 | 2.1187 |
| No log | 0.0857 | 6 | 3.6095 | -0.0417 | 3.6095 | 1.8999 |
| No log | 0.1143 | 8 | 2.6066 | 0.0805 | 2.6066 | 1.6145 |
| No log | 0.1429 | 10 | 2.3240 | 0.0833 | 2.3240 | 1.5245 |
| No log | 0.1714 | 12 | 2.4556 | 0.0946 | 2.4556 | 1.5670 |
| No log | 0.2 | 14 | 2.0410 | 0.0960 | 2.0410 | 1.4286 |
| No log | 0.2286 | 16 | 1.8799 | 0.2931 | 1.8799 | 1.3711 |
| No log | 0.2571 | 18 | 1.7234 | 0.2957 | 1.7234 | 1.3128 |
| No log | 0.2857 | 20 | 1.5811 | 0.3036 | 1.5811 | 1.2574 |
| No log | 0.3143 | 22 | 1.4460 | 0.1869 | 1.4460 | 1.2025 |
| No log | 0.3429 | 24 | 1.5540 | 0.1321 | 1.5540 | 1.2466 |
| No log | 0.3714 | 26 | 1.4213 | 0.2222 | 1.4213 | 1.1922 |
| No log | 0.4 | 28 | 1.5136 | 0.3193 | 1.5136 | 1.2303 |
| No log | 0.4286 | 30 | 1.9478 | 0.2105 | 1.9478 | 1.3956 |
| No log | 0.4571 | 32 | 2.0764 | 0.2014 | 2.0764 | 1.4410 |
| No log | 0.4857 | 34 | 1.5700 | 0.3281 | 1.5700 | 1.2530 |
| No log | 0.5143 | 36 | 1.2381 | 0.3826 | 1.2381 | 1.1127 |
| No log | 0.5429 | 38 | 1.2643 | 0.3423 | 1.2643 | 1.1244 |
| No log | 0.5714 | 40 | 1.2574 | 0.4068 | 1.2574 | 1.1213 |
| No log | 0.6 | 42 | 1.2750 | 0.4298 | 1.2750 | 1.1291 |
| No log | 0.6286 | 44 | 1.6089 | 0.3220 | 1.6089 | 1.2684 |
| No log | 0.6571 | 46 | 1.6026 | 0.3636 | 1.6026 | 1.2659 |
| No log | 0.6857 | 48 | 1.0863 | 0.5156 | 1.0863 | 1.0423 |
| No log | 0.7143 | 50 | 0.9043 | 0.6809 | 0.9043 | 0.9509 |
| No log | 0.7429 | 52 | 1.0416 | 0.5538 | 1.0416 | 1.0206 |
| No log | 0.7714 | 54 | 1.5162 | 0.3200 | 1.5162 | 1.2313 |
| No log | 0.8 | 56 | 1.3478 | 0.4580 | 1.3478 | 1.1610 |
| No log | 0.8286 | 58 | 0.9924 | 0.6269 | 0.9924 | 0.9962 |
| No log | 0.8571 | 60 | 0.9594 | 0.6763 | 0.9594 | 0.9795 |
| No log | 0.8857 | 62 | 1.1393 | 0.5481 | 1.1393 | 1.0674 |
| No log | 0.9143 | 64 | 1.1885 | 0.5255 | 1.1885 | 1.0902 |
| No log | 0.9429 | 66 | 1.0726 | 0.6222 | 1.0726 | 1.0357 |
| No log | 0.9714 | 68 | 1.0284 | 0.5649 | 1.0284 | 1.0141 |
| No log | 1.0 | 70 | 1.0905 | 0.6324 | 1.0905 | 1.0443 |
| No log | 1.0286 | 72 | 1.0872 | 0.6131 | 1.0872 | 1.0427 |
| No log | 1.0571 | 74 | 1.1639 | 0.5493 | 1.1639 | 1.0788 |
| No log | 1.0857 | 76 | 1.4816 | 0.4965 | 1.4816 | 1.2172 |
| No log | 1.1143 | 78 | 1.4626 | 0.5143 | 1.4626 | 1.2094 |
| No log | 1.1429 | 80 | 1.4851 | 0.5180 | 1.4851 | 1.2186 |
| No log | 1.1714 | 82 | 1.3317 | 0.4889 | 1.3317 | 1.1540 |
| No log | 1.2 | 84 | 1.2976 | 0.4062 | 1.2976 | 1.1391 |
| No log | 1.2286 | 86 | 1.4304 | 0.4032 | 1.4304 | 1.1960 |
| No log | 1.2571 | 88 | 1.7350 | 0.3333 | 1.7350 | 1.3172 |
| No log | 1.2857 | 90 | 1.8839 | 0.2880 | 1.8839 | 1.3725 |
| No log | 1.3143 | 92 | 1.8076 | 0.2326 | 1.8076 | 1.3445 |
| No log | 1.3429 | 94 | 1.2636 | 0.5109 | 1.2636 | 1.1241 |
| No log | 1.3714 | 96 | 1.0193 | 0.5857 | 1.0193 | 1.0096 |
| No log | 1.4 | 98 | 1.2112 | 0.5674 | 1.2112 | 1.1005 |
| No log | 1.4286 | 100 | 1.5935 | 0.4113 | 1.5935 | 1.2623 |
| No log | 1.4571 | 102 | 1.3159 | 0.5217 | 1.3159 | 1.1471 |
| No log | 1.4857 | 104 | 0.8062 | 0.7050 | 0.8062 | 0.8979 |
| No log | 1.5143 | 106 | 0.7865 | 0.7391 | 0.7865 | 0.8869 |
| No log | 1.5429 | 108 | 0.9035 | 0.6418 | 0.9035 | 0.9505 |
| No log | 1.5714 | 110 | 1.2976 | 0.4964 | 1.2976 | 1.1391 |
| No log | 1.6 | 112 | 1.3322 | 0.4964 | 1.3322 | 1.1542 |
| No log | 1.6286 | 114 | 1.0269 | 0.6074 | 1.0269 | 1.0134 |
| No log | 1.6571 | 116 | 0.8800 | 0.6377 | 0.8800 | 0.9381 |
| No log | 1.6857 | 118 | 0.9260 | 0.6331 | 0.9260 | 0.9623 |
| No log | 1.7143 | 120 | 1.1650 | 0.5315 | 1.1650 | 1.0794 |
| No log | 1.7429 | 122 | 1.4832 | 0.4895 | 1.4832 | 1.2179 |
| No log | 1.7714 | 124 | 1.4908 | 0.4615 | 1.4908 | 1.2210 |
| No log | 1.8 | 126 | 1.4700 | 0.4476 | 1.4700 | 1.2124 |
| No log | 1.8286 | 128 | 1.2040 | 0.5468 | 1.2040 | 1.0972 |
| No log | 1.8571 | 130 | 1.0580 | 0.6615 | 1.0580 | 1.0286 |
| No log | 1.8857 | 132 | 1.0125 | 0.4959 | 1.0125 | 1.0062 |
| No log | 1.9143 | 134 | 1.0953 | 0.6299 | 1.0953 | 1.0466 |
| No log | 1.9429 | 136 | 1.3044 | 0.5075 | 1.3044 | 1.1421 |
| No log | 1.9714 | 138 | 1.4761 | 0.4265 | 1.4761 | 1.2150 |
| No log | 2.0 | 140 | 1.4946 | 0.4265 | 1.4946 | 1.2225 |
| No log | 2.0286 | 142 | 1.3233 | 0.5263 | 1.3233 | 1.1504 |
| No log | 2.0571 | 144 | 1.2769 | 0.5926 | 1.2769 | 1.1300 |
| No log | 2.0857 | 146 | 1.5574 | 0.3942 | 1.5574 | 1.2480 |
| No log | 2.1143 | 148 | 1.7650 | 0.2222 | 1.7650 | 1.3285 |
| No log | 2.1429 | 150 | 1.6343 | 0.3333 | 1.6343 | 1.2784 |
| No log | 2.1714 | 152 | 1.4845 | 0.3971 | 1.4845 | 1.2184 |
| No log | 2.2 | 154 | 1.2324 | 0.5152 | 1.2324 | 1.1101 |
| No log | 2.2286 | 156 | 1.0931 | 0.5938 | 1.0931 | 1.0455 |
| No log | 2.2571 | 158 | 1.0929 | 0.5645 | 1.0929 | 1.0454 |
| No log | 2.2857 | 160 | 1.2566 | 0.4812 | 1.2566 | 1.1210 |
| No log | 2.3143 | 162 | 1.2894 | 0.4812 | 1.2894 | 1.1355 |
| No log | 2.3429 | 164 | 1.2423 | 0.5075 | 1.2423 | 1.1146 |
| No log | 2.3714 | 166 | 1.1305 | 0.5758 | 1.1305 | 1.0633 |
| No log | 2.4 | 168 | 0.9736 | 0.6471 | 0.9736 | 0.9867 |
| No log | 2.4286 | 170 | 0.9289 | 0.6567 | 0.9289 | 0.9638 |
| No log | 2.4571 | 172 | 1.0946 | 0.6232 | 1.0946 | 1.0462 |
| No log | 2.4857 | 174 | 1.4287 | 0.4593 | 1.4287 | 1.1953 |
| No log | 2.5143 | 176 | 1.3580 | 0.4706 | 1.3580 | 1.1653 |
| No log | 2.5429 | 178 | 1.0219 | 0.6232 | 1.0219 | 1.0109 |
| No log | 2.5714 | 180 | 0.9143 | 0.6277 | 0.9143 | 0.9562 |
| No log | 2.6 | 182 | 0.9552 | 0.6232 | 0.9552 | 0.9773 |
| No log | 2.6286 | 184 | 1.0716 | 0.5857 | 1.0716 | 1.0352 |
| No log | 2.6571 | 186 | 0.9842 | 0.6232 | 0.9842 | 0.9921 |
| No log | 2.6857 | 188 | 0.8826 | 0.6471 | 0.8826 | 0.9395 |
| No log | 2.7143 | 190 | 1.0422 | 0.6176 | 1.0422 | 1.0209 |
| No log | 2.7429 | 192 | 1.2937 | 0.4889 | 1.2937 | 1.1374 |
| No log | 2.7714 | 194 | 1.3087 | 0.4889 | 1.3087 | 1.1440 |
| No log | 2.8 | 196 | 1.0811 | 0.6087 | 1.0811 | 1.0398 |
| No log | 2.8286 | 198 | 0.8948 | 0.6418 | 0.8948 | 0.9459 |
| No log | 2.8571 | 200 | 0.9906 | 0.6187 | 0.9906 | 0.9953 |
| No log | 2.8857 | 202 | 1.3906 | 0.4593 | 1.3906 | 1.1792 |
| No log | 2.9143 | 204 | 1.5787 | 0.3429 | 1.5787 | 1.2564 |
| No log | 2.9429 | 206 | 1.3830 | 0.4662 | 1.3830 | 1.1760 |
| No log | 2.9714 | 208 | 1.1465 | 0.5385 | 1.1465 | 1.0707 |
| No log | 3.0 | 210 | 1.0218 | 0.6129 | 1.0218 | 1.0108 |
| No log | 3.0286 | 212 | 1.0315 | 0.5246 | 1.0315 | 1.0156 |
| No log | 3.0571 | 214 | 1.1276 | 0.6032 | 1.1276 | 1.0619 |
| No log | 3.0857 | 216 | 1.4077 | 0.4320 | 1.4077 | 1.1865 |
| No log | 3.1143 | 218 | 1.5440 | 0.4531 | 1.5440 | 1.2426 |
| No log | 3.1429 | 220 | 1.5291 | 0.4531 | 1.5291 | 1.2365 |
| No log | 3.1714 | 222 | 1.3984 | 0.5039 | 1.3984 | 1.1825 |
| No log | 3.2 | 224 | 1.4500 | 0.4341 | 1.4500 | 1.2041 |
| No log | 3.2286 | 226 | 1.6216 | 0.2923 | 1.6216 | 1.2734 |
| No log | 3.2571 | 228 | 1.5246 | 0.3556 | 1.5246 | 1.2347 |
| No log | 3.2857 | 230 | 1.3671 | 0.4783 | 1.3671 | 1.1692 |
| No log | 3.3143 | 232 | 1.4404 | 0.4265 | 1.4404 | 1.2001 |
| No log | 3.3429 | 234 | 1.7448 | 0.2609 | 1.7448 | 1.3209 |
| No log | 3.3714 | 236 | 1.8269 | 0.2774 | 1.8269 | 1.3516 |
| No log | 3.4 | 238 | 1.5437 | 0.3582 | 1.5437 | 1.2425 |
| No log | 3.4286 | 240 | 1.2087 | 0.5191 | 1.2087 | 1.0994 |
| No log | 3.4571 | 242 | 1.0510 | 0.6212 | 1.0510 | 1.0252 |
| No log | 3.4857 | 244 | 1.0694 | 0.5954 | 1.0694 | 1.0341 |
| No log | 3.5143 | 246 | 1.2784 | 0.5263 | 1.2784 | 1.1307 |
| No log | 3.5429 | 248 | 1.4112 | 0.4925 | 1.4112 | 1.1879 |
| No log | 3.5714 | 250 | 1.5436 | 0.4058 | 1.5436 | 1.2424 |
| No log | 3.6 | 252 | 1.4830 | 0.4234 | 1.4830 | 1.2178 |
| No log | 3.6286 | 254 | 1.5739 | 0.3571 | 1.5739 | 1.2546 |
| No log | 3.6571 | 256 | 1.5217 | 0.4058 | 1.5217 | 1.2336 |
| No log | 3.6857 | 258 | 1.4990 | 0.4148 | 1.4990 | 1.2243 |
| No log | 3.7143 | 260 | 1.4782 | 0.4030 | 1.4782 | 1.2158 |
| No log | 3.7429 | 262 | 1.4390 | 0.4545 | 1.4390 | 1.1996 |
| No log | 3.7714 | 264 | 1.4553 | 0.4545 | 1.4553 | 1.2064 |
| No log | 3.8 | 266 | 1.3295 | 0.5191 | 1.3295 | 1.1531 |
| No log | 3.8286 | 268 | 1.2024 | 0.5564 | 1.2024 | 1.0965 |
| No log | 3.8571 | 270 | 1.1636 | 0.5735 | 1.1636 | 1.0787 |
| No log | 3.8857 | 272 | 1.2539 | 0.5 | 1.2539 | 1.1198 |
| No log | 3.9143 | 274 | 1.3901 | 0.4559 | 1.3901 | 1.1790 |
| No log | 3.9429 | 276 | 1.3526 | 0.4818 | 1.3526 | 1.1630 |
| No log | 3.9714 | 278 | 1.4592 | 0.4511 | 1.4592 | 1.2080 |
| No log | 4.0 | 280 | 1.8012 | 0.2774 | 1.8012 | 1.3421 |
| No log | 4.0286 | 282 | 2.0166 | 0.2014 | 2.0166 | 1.4201 |
| No log | 4.0571 | 284 | 1.7866 | 0.3066 | 1.7866 | 1.3366 |
| No log | 4.0857 | 286 | 1.3599 | 0.4627 | 1.3599 | 1.1662 |
| No log | 4.1143 | 288 | 1.0635 | 0.5865 | 1.0635 | 1.0313 |
| No log | 4.1429 | 290 | 1.0123 | 0.6119 | 1.0123 | 1.0061 |
| No log | 4.1714 | 292 | 1.1615 | 0.5693 | 1.1615 | 1.0777 |
| No log | 4.2 | 294 | 1.3948 | 0.4593 | 1.3948 | 1.1810 |
| No log | 4.2286 | 296 | 1.3104 | 0.4889 | 1.3104 | 1.1447 |
| No log | 4.2571 | 298 | 1.0651 | 0.5778 | 1.0651 | 1.0320 |
| No log | 4.2857 | 300 | 0.9699 | 0.6519 | 0.9699 | 0.9849 |
| No log | 4.3143 | 302 | 1.0309 | 0.6176 | 1.0309 | 1.0153 |
| No log | 4.3429 | 304 | 1.3110 | 0.4706 | 1.3110 | 1.1450 |
| No log | 4.3714 | 306 | 1.7069 | 0.3478 | 1.7069 | 1.3065 |
| No log | 4.4 | 308 | 1.7723 | 0.2899 | 1.7723 | 1.3313 |
| No log | 4.4286 | 310 | 1.5822 | 0.4088 | 1.5822 | 1.2578 |
| No log | 4.4571 | 312 | 1.3861 | 0.5 | 1.3861 | 1.1773 |
| No log | 4.4857 | 314 | 1.3641 | 0.5 | 1.3641 | 1.1679 |
| No log | 4.5143 | 316 | 1.4717 | 0.4203 | 1.4717 | 1.2131 |
| No log | 4.5429 | 318 | 1.6087 | 0.3768 | 1.6087 | 1.2683 |
| No log | 4.5714 | 320 | 1.6747 | 0.3453 | 1.6747 | 1.2941 |
| No log | 4.6 | 322 | 1.5752 | 0.3942 | 1.5752 | 1.2551 |
| No log | 4.6286 | 324 | 1.4327 | 0.4203 | 1.4327 | 1.1970 |
| No log | 4.6571 | 326 | 1.2610 | 0.5496 | 1.2610 | 1.1230 |
| No log | 4.6857 | 328 | 1.1840 | 0.5758 | 1.1840 | 1.0881 |
| No log | 4.7143 | 330 | 1.2157 | 0.5496 | 1.2157 | 1.1026 |
| No log | 4.7429 | 332 | 1.3424 | 0.5075 | 1.3424 | 1.1586 |
| No log | 4.7714 | 334 | 1.5371 | 0.3407 | 1.5371 | 1.2398 |
| No log | 4.8 | 336 | 1.6706 | 0.2878 | 1.6706 | 1.2925 |
| No log | 4.8286 | 338 | 1.6747 | 0.2714 | 1.6747 | 1.2941 |
| No log | 4.8571 | 340 | 1.5275 | 0.3731 | 1.5275 | 1.2359 |
| No log | 4.8857 | 342 | 1.3610 | 0.5414 | 1.3610 | 1.1666 |
| No log | 4.9143 | 344 | 1.3677 | 0.5075 | 1.3677 | 1.1695 |
| No log | 4.9429 | 346 | 1.4244 | 0.4889 | 1.4244 | 1.1935 |
| No log | 4.9714 | 348 | 1.4352 | 0.4380 | 1.4352 | 1.1980 |
| No log | 5.0 | 350 | 1.4374 | 0.4380 | 1.4374 | 1.1989 |
| No log | 5.0286 | 352 | 1.2030 | 0.5333 | 1.2030 | 1.0968 |
| No log | 5.0571 | 354 | 1.1369 | 0.5821 | 1.1369 | 1.0662 |
| No log | 5.0857 | 356 | 1.2578 | 0.5333 | 1.2578 | 1.1215 |
| No log | 5.1143 | 358 | 1.5682 | 0.3623 | 1.5682 | 1.2523 |
| No log | 5.1429 | 360 | 1.7093 | 0.3212 | 1.7093 | 1.3074 |
| No log | 5.1714 | 362 | 1.6142 | 0.3286 | 1.6142 | 1.2705 |
| No log | 5.2 | 364 | 1.4060 | 0.4706 | 1.4060 | 1.1858 |
| No log | 5.2286 | 366 | 1.2391 | 0.5414 | 1.2391 | 1.1131 |
| No log | 5.2571 | 368 | 1.1590 | 0.5426 | 1.1590 | 1.0766 |
| No log | 5.2857 | 370 | 1.1743 | 0.5426 | 1.1743 | 1.0837 |
| No log | 5.3143 | 372 | 1.1697 | 0.5692 | 1.1697 | 1.0815 |
| No log | 5.3429 | 374 | 1.2521 | 0.5714 | 1.2521 | 1.1190 |
| No log | 5.3714 | 376 | 1.4472 | 0.4706 | 1.4472 | 1.2030 |
| No log | 5.4 | 378 | 1.4903 | 0.4706 | 1.4903 | 1.2208 |
| No log | 5.4286 | 380 | 1.4069 | 0.4706 | 1.4069 | 1.1861 |
| No log | 5.4571 | 382 | 1.3369 | 0.5075 | 1.3369 | 1.1563 |
| No log | 5.4857 | 384 | 1.3297 | 0.5075 | 1.3297 | 1.1531 |
| No log | 5.5143 | 386 | 1.2427 | 0.5455 | 1.2427 | 1.1148 |
| No log | 5.5429 | 388 | 1.0827 | 0.5649 | 1.0827 | 1.0405 |
| No log | 5.5714 | 390 | 0.9922 | 0.5692 | 0.9922 | 0.9961 |
| No log | 5.6 | 392 | 0.9703 | 0.6015 | 0.9703 | 0.9851 |
| No log | 5.6286 | 394 | 1.0948 | 0.5821 | 1.0948 | 1.0463 |
| No log | 5.6571 | 396 | 1.3265 | 0.5147 | 1.3265 | 1.1517 |
| No log | 5.6857 | 398 | 1.3873 | 0.4889 | 1.3873 | 1.1778 |
| No log | 5.7143 | 400 | 1.2730 | 0.5373 | 1.2730 | 1.1283 |
| No log | 5.7429 | 402 | 1.2221 | 0.5481 | 1.2221 | 1.1055 |
| No log | 5.7714 | 404 | 1.1536 | 0.5649 | 1.1536 | 1.0741 |
| No log | 5.8 | 406 | 1.1435 | 0.6015 | 1.1435 | 1.0694 |
| No log | 5.8286 | 408 | 1.0895 | 0.6 | 1.0895 | 1.0438 |
| No log | 5.8571 | 410 | 1.1584 | 0.5891 | 1.1584 | 1.0763 |
| No log | 5.8857 | 412 | 1.2302 | 0.5846 | 1.2302 | 1.1091 |
| No log | 5.9143 | 414 | 1.2604 | 0.5606 | 1.2604 | 1.1227 |
| No log | 5.9429 | 416 | 1.2379 | 0.5714 | 1.2379 | 1.1126 |
| No log | 5.9714 | 418 | 1.0615 | 0.5649 | 1.0615 | 1.0303 |
| No log | 6.0 | 420 | 0.9040 | 0.6015 | 0.9040 | 0.9508 |
| No log | 6.0286 | 422 | 0.8775 | 0.5970 | 0.8775 | 0.9367 |
| No log | 6.0571 | 424 | 1.0238 | 0.5714 | 1.0238 | 1.0118 |
| No log | 6.0857 | 426 | 1.2761 | 0.4889 | 1.2761 | 1.1297 |
| No log | 6.1143 | 428 | 1.3554 | 0.5072 | 1.3554 | 1.1642 |
| No log | 6.1429 | 430 | 1.2540 | 0.4885 | 1.2540 | 1.1198 |
| No log | 6.1714 | 432 | 1.0339 | 0.5231 | 1.0339 | 1.0168 |
| No log | 6.2 | 434 | 0.9861 | 0.5736 | 0.9861 | 0.9930 |
| No log | 6.2286 | 436 | 1.1010 | 0.5385 | 1.1010 | 1.0493 |
| No log | 6.2571 | 438 | 1.2149 | 0.5455 | 1.2149 | 1.1022 |
| No log | 6.2857 | 440 | 1.3697 | 0.5077 | 1.3697 | 1.1703 |
| No log | 6.3143 | 442 | 1.3210 | 0.5970 | 1.3210 | 1.1494 |
| No log | 6.3429 | 444 | 1.2623 | 0.5985 | 1.2623 | 1.1235 |
| No log | 6.3714 | 446 | 1.2335 | 0.5821 | 1.2335 | 1.1106 |
| No log | 6.4 | 448 | 1.2572 | 0.5564 | 1.2572 | 1.1213 |
| No log | 6.4286 | 450 | 1.2073 | 0.5564 | 1.2073 | 1.0988 |
| No log | 6.4571 | 452 | 1.1963 | 0.5303 | 1.1963 | 1.0938 |
| No log | 6.4857 | 454 | 1.2310 | 0.5564 | 1.2310 | 1.1095 |
| No log | 6.5143 | 456 | 1.2620 | 0.5778 | 1.2620 | 1.1234 |
| No log | 6.5429 | 458 | 1.2205 | 0.5714 | 1.2205 | 1.1048 |
| No log | 6.5714 | 460 | 1.1813 | 0.5758 | 1.1813 | 1.0869 |
| No log | 6.6 | 462 | 1.1858 | 0.5758 | 1.1858 | 1.0890 |
| No log | 6.6286 | 464 | 1.2509 | 0.5758 | 1.2509 | 1.1184 |
| No log | 6.6571 | 466 | 1.3665 | 0.4593 | 1.3665 | 1.1690 |
| No log | 6.6857 | 468 | 1.3494 | 0.4706 | 1.3494 | 1.1616 |
| No log | 6.7143 | 470 | 1.3706 | 0.4818 | 1.3706 | 1.1707 |
| No log | 6.7429 | 472 | 1.2916 | 0.5072 | 1.2916 | 1.1365 |
| No log | 6.7714 | 474 | 1.1484 | 0.5985 | 1.1484 | 1.0716 |
| No log | 6.8 | 476 | 1.1952 | 0.5985 | 1.1952 | 1.0932 |
| No log | 6.8286 | 478 | 1.2637 | 0.5441 | 1.2637 | 1.1241 |
| No log | 6.8571 | 480 | 1.2950 | 0.5441 | 1.2950 | 1.1380 |
| No log | 6.8857 | 482 | 1.2568 | 0.5778 | 1.2568 | 1.1211 |
| No log | 6.9143 | 484 | 1.2612 | 0.5778 | 1.2612 | 1.1230 |
| No log | 6.9429 | 486 | 1.1672 | 0.5821 | 1.1672 | 1.0804 |
| No log | 6.9714 | 488 | 1.1548 | 0.5649 | 1.1548 | 1.0746 |
| No log | 7.0 | 490 | 1.2945 | 0.5522 | 1.2945 | 1.1378 |
| No log | 7.0286 | 492 | 1.4219 | 0.4812 | 1.4219 | 1.1924 |
| No log | 7.0571 | 494 | 1.3554 | 0.5152 | 1.3554 | 1.1642 |
| No log | 7.0857 | 496 | 1.2111 | 0.5197 | 1.2111 | 1.1005 |
| No log | 7.1143 | 498 | 1.2144 | 0.4793 | 1.2144 | 1.1020 |
| 0.3532 | 7.1429 | 500 | 1.2226 | 0.5323 | 1.2226 | 1.1057 |
| 0.3532 | 7.1714 | 502 | 1.2471 | 0.5385 | 1.2471 | 1.1167 |
| 0.3532 | 7.2 | 504 | 1.3811 | 0.4925 | 1.3811 | 1.1752 |
| 0.3532 | 7.2286 | 506 | 1.4908 | 0.3971 | 1.4908 | 1.2210 |
| 0.3532 | 7.2571 | 508 | 1.4098 | 0.5 | 1.4098 | 1.1874 |
| 0.3532 | 7.2857 | 510 | 1.3510 | 0.5 | 1.3510 | 1.1623 |
| 0.3532 | 7.3143 | 512 | 1.3337 | 0.5373 | 1.3337 | 1.1549 |
| 0.3532 | 7.3429 | 514 | 1.2646 | 0.5649 | 1.2646 | 1.1245 |
| 0.3532 | 7.3714 | 516 | 1.2132 | 0.5938 | 1.2132 | 1.1015 |
| 0.3532 | 7.4 | 518 | 1.2061 | 0.5938 | 1.2061 | 1.0982 |
| 0.3532 | 7.4286 | 520 | 1.2082 | 0.5649 | 1.2082 | 1.0992 |
| 0.3532 | 7.4571 | 522 | 1.2063 | 0.5649 | 1.2063 | 1.0983 |
| 0.3532 | 7.4857 | 524 | 1.2944 | 0.5373 | 1.2944 | 1.1377 |
| 0.3532 | 7.5143 | 526 | 1.3583 | 0.4672 | 1.3583 | 1.1654 |
| 0.3532 | 7.5429 | 528 | 1.3764 | 0.4783 | 1.3764 | 1.1732 |
| 0.3532 | 7.5714 | 530 | 1.4234 | 0.4853 | 1.4234 | 1.1931 |
| 0.3532 | 7.6 | 532 | 1.5196 | 0.4234 | 1.5196 | 1.2327 |
| 0.3532 | 7.6286 | 534 | 1.3764 | 0.4593 | 1.3764 | 1.1732 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
ClarenceDan/fb4700b3-f04a-48ac-b7f7-8b018fb175ac
|
ClarenceDan
| 2025-01-15T15:45:10Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"region:us"
] | null | 2025-01-15T15:20:36Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Base-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fb4700b3-f04a-48ac-b7f7-8b018fb175ac
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Base-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a807bb88794a6c5a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a807bb88794a6c5a_train_data.json
type:
field_instruction: prompt
field_output: label
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/fb4700b3-f04a-48ac-b7f7-8b018fb175ac
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/a807bb88794a6c5a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f19c28d2-bbf3-4c64-b448-7d3ff5c4b03c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f19c28d2-bbf3-4c64-b448-7d3ff5c4b03c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fb4700b3-f04a-48ac-b7f7-8b018fb175ac
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0002 | 3 | nan |
| 0.0 | 0.0003 | 6 | nan |
| 0.0 | 0.0005 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Keltezaa/Katie_Thomas_Actress
|
Keltezaa
| 2025-01-15T15:44:56Z
| 39
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:cc-by-nc-nd-4.0",
"region:us"
] |
text-to-image
| 2025-01-15T15:40:47Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
35mm photograph of a young woman, (wearing a jacket:1.5),having a coffee in
a vintage cafe, smile, professional, atmospheric haze, excellent dynamic
range, masterpiece, excellent quality, ultra detailed, subtle lighting, soft
focus, detailed shadows
output:
url: images/example_9nxcl3qpv.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: cc-by-nc-nd-4.0
---
# Katie Thomas Actress
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/Katie_Thomas_Actress/tree/main) them in the Files & versions tab.
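Since these are LoRA weights for FLUX.1-dev, they can also be loaded directly with diffusers. A minimal sketch, assuming access to the gated base model and enough GPU memory:
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base pipeline and attach the LoRA weights from this repository
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("Keltezaa/Katie_Thomas_Actress")
pipe.to("cuda")

prompt = "35mm photograph of a young woman having a coffee in a vintage cafe"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("example.png")
```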
|
Patnev71/floorplanv2
|
Patnev71
| 2025-01-15T15:44:45Z
| 5
| 0
|
transformers
|
[
"transformers",
"safetensors",
"segformer",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-01-15T15:44:43Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
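Given the `segformer` tag on this repository, one plausible starting point is the standard SegFormer semantic-segmentation API. This is only a sketch under that assumption; the actual task, label set, and expected input are not documented, and the input image below is hypothetical:
```python
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation
from PIL import Image

processor = AutoImageProcessor.from_pretrained("Patnev71/floorplanv2")
model = SegformerForSemanticSegmentation.from_pretrained("Patnev71/floorplanv2")

image = Image.open("floorplan.png").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
pred = outputs.logits.argmax(dim=1)  # per-pixel class indices (at reduced resolution)
```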
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dzanbek/cc00dfb0-c695-416a-8916-6686f2159be0
|
dzanbek
| 2025-01-15T15:44:05Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"license:llama3",
"region:us"
] | null | 2025-01-15T14:15:45Z
|
---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cc00dfb0-c695-416a-8916-6686f2159be0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0431498a24ad26e2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0431498a24ad26e2_train_data.json
type:
field_input: text
field_instruction: prompt
field_output: responseA
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: dzanbek/cc00dfb0-c695-416a-8916-6686f2159be0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/0431498a24ad26e2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 21027872-1320-4917-9c1c-3ca7864e1be9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 21027872-1320-4917-9c1c-3ca7864e1be9
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# cc00dfb0-c695-416a-8916-6686f2159be0
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0005 | 10 | nan |
| 0.0 | 0.0007 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kk-aivio/fe65bf08-5d51-4a1f-a7a4-29850ba83120
|
kk-aivio
| 2025-01-15T15:42:14Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"base_model:adapter:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"region:us"
] | null | 2025-01-15T15:12:58Z
|
---
library_name: peft
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fe65bf08-5d51-4a1f-a7a4-29850ba83120
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6e6190eb26c4eb66_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6e6190eb26c4eb66_train_data.json
type:
field_input: user_input
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/fe65bf08-5d51-4a1f-a7a4-29850ba83120
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/6e6190eb26c4eb66_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22a7e7a8-f3c8-4b2e-bae8-382fa9d6d294
wandb_project: birthday-sn56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22a7e7a8-f3c8-4b2e-bae8-382fa9d6d294
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fe65bf08-5d51-4a1f-a7a4-29850ba83120
This model is a fine-tuned version of [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.145 | 0.0001 | 1 | 2.5498 |
| 2.7495 | 0.0002 | 3 | 2.5397 |
| 1.9831 | 0.0003 | 6 | 2.3370 |
| 2.0553 | 0.0005 | 9 | 1.8674 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
itlwas/Llama-3.2-Taiwan-3B-Q4_K_M-GGUF
|
itlwas
| 2025-01-15T15:42:05Z
| 25
| 0
|
transformers
|
[
"transformers",
"gguf",
"ROC",
"Taiwan",
"zh-tw",
"llama-factory",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"dataset:lianghsun/tw-novel-1.1B",
"dataset:lianghsun/tw-finance-159M",
"dataset:lianghsun/tw-legal-news-24M",
"dataset:lianghsun/tw-gov-news-90M",
"dataset:lianghsun/tw-gov-556k",
"dataset:lianghsun/tw-news-551M",
"dataset:lianghsun/tw-health-43M",
"dataset:lianghsun/tw-science-24M",
"dataset:lianghsun/tw-book-43M",
"dataset:lianghsun/tw-society-88M",
"dataset:lianghsun/tw-law-article-evolution",
"dataset:lianghsun/tw-processed-judgments",
"dataset:lianghsun/tw-legal-methodology",
"dataset:lianghsun/tw-legal-qa",
"dataset:lianghsun/tw-judgment-gist",
"dataset:lianghsun/reasoning-base-20k",
"dataset:lianghsun/wikipedia-zh-filtered",
"dataset:AWeirdDev/zh-tw-pts-articles-sm",
"dataset:bhxiang/c4_calibrate_mini",
"dataset:benchang1110/pretrainedtw",
"dataset:benchang1110/sciencetw",
"dataset:intfloat/multilingual_cc_news",
"base_model:lianghsun/Llama-3.2-Taiwan-3B",
"base_model:quantized:lianghsun/Llama-3.2-Taiwan-3B",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-01-15T15:41:53Z
|
---
base_model: lianghsun/Llama-3.2-Taiwan-3B
library_name: transformers
datasets:
- lianghsun/tw-novel-1.1B
- lianghsun/tw-finance-159M
- lianghsun/tw-legal-news-24M
- lianghsun/tw-gov-news-90M
- lianghsun/tw-gov-556k
- lianghsun/tw-news-551M
- lianghsun/tw-health-43M
- lianghsun/tw-science-24M
- lianghsun/tw-book-43M
- lianghsun/tw-society-88M
- lianghsun/tw-law-article-evolution
- lianghsun/tw-processed-judgments
- lianghsun/tw-legal-methodology
- lianghsun/tw-legal-qa
- lianghsun/tw-judgment-gist
- lianghsun/reasoning-base-20k
- lianghsun/wikipedia-zh-filtered
- AWeirdDev/zh-tw-pts-articles-sm
- bhxiang/c4_calibrate_mini
- benchang1110/pretrainedtw
- benchang1110/sciencetw
- intfloat/multilingual_cc_news
language:
- zh
- en
license: llama3.2
tags:
- ROC
- Taiwan
- zh-tw
- llama-factory
- llama-cpp
- gguf-my-repo
new_version: lianghsun/Llama-3.2-Taiwan-3B-Instruct
pipeline_tag: text-generation
widget:
- text: 中華民國憲法第一條
---
# itlwas/Llama-3.2-Taiwan-3B-Q4_K_M-GGUF
This model was converted to GGUF format from [`lianghsun/Llama-3.2-Taiwan-3B`](https://huggingface.co/lianghsun/Llama-3.2-Taiwan-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lianghsun/Llama-3.2-Taiwan-3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo itlwas/Llama-3.2-Taiwan-3B-Q4_K_M-GGUF --hf-file llama-3.2-taiwan-3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo itlwas/Llama-3.2-Taiwan-3B-Q4_K_M-GGUF --hf-file llama-3.2-taiwan-3b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo itlwas/Llama-3.2-Taiwan-3B-Q4_K_M-GGUF --hf-file llama-3.2-taiwan-3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo itlwas/Llama-3.2-Taiwan-3B-Q4_K_M-GGUF --hf-file llama-3.2-taiwan-3b-q4_k_m.gguf -c 2048
```
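For programmatic use, the same quantized file can also be loaded through the `llama-cpp-python` bindings. The snippet below is a minimal sketch rather than part of the original card; the prompt and generation settings are illustrative only.
```python
# Minimal sketch: load this GGUF checkpoint with llama-cpp-python instead of the llama.cpp CLI.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="itlwas/Llama-3.2-Taiwan-3B-Q4_K_M-GGUF",
    filename="llama-3.2-taiwan-3b-q4_k_m.gguf",
    n_ctx=2048,  # mirrors the -c 2048 used in the server example above
)

# Illustrative prompt (the widget example from the original card)
out = llm("中華民國憲法第一條", max_tokens=64)
print(out["choices"][0]["text"])
```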
|
thaffggg/a02adaab-00b1-4220-a9a9-154dea3495b5
|
thaffggg
| 2025-01-15T15:39:09Z
| 6
| 0
|
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T15:20:01Z
|
---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a02adaab-00b1-4220-a9a9-154dea3495b5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 80c99709830fd48a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/80c99709830fd48a_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thaffggg/a02adaab-00b1-4220-a9a9-154dea3495b5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/80c99709830fd48a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e24b6a86-83f1-40ca-ac06-bdb6e674fa7c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e24b6a86-83f1-40ca-ac06-bdb6e674fa7c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a02adaab-00b1-4220-a9a9-154dea3495b5
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4311
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7407 | 0.1206 | 200 | 0.4311 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
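This repository contains a LoRA adapter rather than full model weights, so inference requires attaching it to the base model. The following is a minimal sketch, not taken from the card; the prompt is illustrative and quantization is omitted for simplicity.
```python
# Hedged sketch: attach the LoRA adapter to the Phi-3 base model listed above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    trust_remote_code=True,  # mirrors trust_remote_code: true in the config above
)
model = PeftModel.from_pretrained(base, "thaffggg/a02adaab-00b1-4220-a9a9-154dea3495b5")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

inputs = tokenizer("Hello", return_tensors="pt")  # illustrative prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```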
|
asafd60/HebQwen-json-2025-meta
|
asafd60
| 2025-01-15T15:38:37Z
| 84
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-01-15T15:32:36Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
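The card leaves this section blank. As a hedged sketch only, the repository tags (`qwen2_vl`, `image-text-to-text`) suggest the standard Qwen2-VL loading path; the image path and prompt below are purely illustrative assumptions.
```python
# Hedged sketch, not taken from this card: generic Qwen2-VL image-text-to-text usage.
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model = Qwen2VLForConditionalGeneration.from_pretrained("asafd60/HebQwen-json-2025-meta")
processor = AutoProcessor.from_pretrained("asafd60/HebQwen-json-2025-meta")

image = Image.open("document.png")  # hypothetical input image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Extract the fields from this document as JSON."},  # illustrative
]}]
text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```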
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cunghoctienganh/a4f8f6b7-473d-44a0-ad19-9c32ae368864
|
cunghoctienganh
| 2025-01-15T15:38:30Z
| 6
| 0
|
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T15:19:21Z
|
---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4f8f6b7-473d-44a0-ad19-9c32ae368864
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 80c99709830fd48a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/80c99709830fd48a_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/a4f8f6b7-473d-44a0-ad19-9c32ae368864
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/80c99709830fd48a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e24b6a86-83f1-40ca-ac06-bdb6e674fa7c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e24b6a86-83f1-40ca-ac06-bdb6e674fa7c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a4f8f6b7-473d-44a0-ad19-9c32ae368864
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7471 | 0.1206 | 200 | 0.4312 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
thangla01/cc2b4a23-f3b3-4c69-aaa5-14746b129c9c
|
thangla01
| 2025-01-15T15:38:26Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T14:35:58Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Base-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cc2b4a23-f3b3-4c69-aaa5-14746b129c9c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Base-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a807bb88794a6c5a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a807bb88794a6c5a_train_data.json
type:
field_instruction: prompt
field_output: label
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thangla01/cc2b4a23-f3b3-4c69-aaa5-14746b129c9c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a807bb88794a6c5a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f19c28d2-bbf3-4c64-b448-7d3ff5c4b03c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f19c28d2-bbf3-4c64-b448-7d3ff5c4b03c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# cc2b4a23-f3b3-4c69-aaa5-14746b129c9c
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.8421 | 0.0105 | 200 | 1.6362 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
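Because the card's tags indicate an 8-bit bitsandbytes setup, one plausible way to reuse this adapter is to load the base model quantized and attach the LoRA weights on top. This is a hedged sketch, not an official usage example; the device and dtype choices are assumptions.
```python
# Hedged sketch: load the Mistral-Nemo base in 8-bit, then attach this LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Mistral-Nemo-Base-2407",
    quantization_config=bnb,
    device_map="auto",        # assumption: at least one CUDA device is available
    torch_dtype=torch.bfloat16,
)
model = PeftModel.from_pretrained(base, "thangla01/cc2b4a23-f3b3-4c69-aaa5-14746b129c9c")
```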
|
MayBashendy/ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k10_task2_organization
|
MayBashendy
| 2025-01-15T15:38:20Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T14:51:41Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k10_task2_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k10_task2_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6404
- Qwk: 0.5029
- Mse: 0.6404
- Rmse: 0.8003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0377 | 2 | 4.2473 | -0.0228 | 4.2473 | 2.0609 |
| No log | 0.0755 | 4 | 2.3310 | 0.0238 | 2.3310 | 1.5268 |
| No log | 0.1132 | 6 | 1.3560 | 0.0205 | 1.3560 | 1.1645 |
| No log | 0.1509 | 8 | 0.9130 | 0.0383 | 0.9130 | 0.9555 |
| No log | 0.1887 | 10 | 0.8565 | 0.1596 | 0.8565 | 0.9255 |
| No log | 0.2264 | 12 | 1.0357 | 0.0713 | 1.0357 | 1.0177 |
| No log | 0.2642 | 14 | 1.0549 | 0.1234 | 1.0549 | 1.0271 |
| No log | 0.3019 | 16 | 0.8566 | 0.1477 | 0.8566 | 0.9255 |
| No log | 0.3396 | 18 | 1.1207 | 0.1474 | 1.1207 | 1.0587 |
| No log | 0.3774 | 20 | 1.6261 | 0.0643 | 1.6261 | 1.2752 |
| No log | 0.4151 | 22 | 1.5250 | 0.0550 | 1.5250 | 1.2349 |
| No log | 0.4528 | 24 | 1.2243 | 0.0677 | 1.2243 | 1.1065 |
| No log | 0.4906 | 26 | 0.9723 | 0.1509 | 0.9723 | 0.9860 |
| No log | 0.5283 | 28 | 0.9085 | 0.1681 | 0.9085 | 0.9532 |
| No log | 0.5660 | 30 | 0.8029 | 0.2499 | 0.8029 | 0.8961 |
| No log | 0.6038 | 32 | 0.8286 | 0.1573 | 0.8286 | 0.9103 |
| No log | 0.6415 | 34 | 0.8689 | 0.1731 | 0.8689 | 0.9322 |
| No log | 0.6792 | 36 | 1.1002 | 0.0847 | 1.1002 | 1.0489 |
| No log | 0.7170 | 38 | 1.4252 | 0.0868 | 1.4252 | 1.1938 |
| No log | 0.7547 | 40 | 1.8930 | 0.1372 | 1.8930 | 1.3759 |
| No log | 0.7925 | 42 | 1.7148 | 0.1316 | 1.7148 | 1.3095 |
| No log | 0.8302 | 44 | 1.2532 | 0.2452 | 1.2532 | 1.1195 |
| No log | 0.8679 | 46 | 0.9984 | 0.2395 | 0.9984 | 0.9992 |
| No log | 0.9057 | 48 | 1.0085 | 0.2069 | 1.0085 | 1.0042 |
| No log | 0.9434 | 50 | 0.8329 | 0.3641 | 0.8329 | 0.9126 |
| No log | 0.9811 | 52 | 0.6157 | 0.3959 | 0.6157 | 0.7846 |
| No log | 1.0189 | 54 | 0.6081 | 0.4061 | 0.6081 | 0.7798 |
| No log | 1.0566 | 56 | 0.6818 | 0.3701 | 0.6818 | 0.8257 |
| No log | 1.0943 | 58 | 0.9277 | 0.2847 | 0.9277 | 0.9632 |
| No log | 1.1321 | 60 | 1.0415 | 0.3412 | 1.0415 | 1.0205 |
| No log | 1.1698 | 62 | 0.9216 | 0.3506 | 0.9216 | 0.9600 |
| No log | 1.2075 | 64 | 0.6502 | 0.4398 | 0.6502 | 0.8063 |
| No log | 1.2453 | 66 | 0.7747 | 0.2631 | 0.7747 | 0.8802 |
| No log | 1.2830 | 68 | 0.9121 | 0.1402 | 0.9121 | 0.9550 |
| No log | 1.3208 | 70 | 0.7329 | 0.2601 | 0.7329 | 0.8561 |
| No log | 1.3585 | 72 | 0.6610 | 0.5366 | 0.6610 | 0.8130 |
| No log | 1.3962 | 74 | 0.9865 | 0.4573 | 0.9865 | 0.9932 |
| No log | 1.4340 | 76 | 1.1531 | 0.3424 | 1.1531 | 1.0738 |
| No log | 1.4717 | 78 | 0.9473 | 0.4736 | 0.9473 | 0.9733 |
| No log | 1.5094 | 80 | 0.9756 | 0.4506 | 0.9756 | 0.9877 |
| No log | 1.5472 | 82 | 0.9997 | 0.4460 | 0.9997 | 0.9999 |
| No log | 1.5849 | 84 | 1.2514 | 0.2979 | 1.2514 | 1.1187 |
| No log | 1.6226 | 86 | 1.2607 | 0.3124 | 1.2607 | 1.1228 |
| No log | 1.6604 | 88 | 1.3783 | 0.2598 | 1.3783 | 1.1740 |
| No log | 1.6981 | 90 | 1.3063 | 0.2830 | 1.3063 | 1.1429 |
| No log | 1.7358 | 92 | 0.9235 | 0.4575 | 0.9235 | 0.9610 |
| No log | 1.7736 | 94 | 0.5829 | 0.4552 | 0.5829 | 0.7635 |
| No log | 1.8113 | 96 | 0.6093 | 0.4342 | 0.6093 | 0.7805 |
| No log | 1.8491 | 98 | 0.5742 | 0.3628 | 0.5742 | 0.7578 |
| No log | 1.8868 | 100 | 0.6228 | 0.4603 | 0.6228 | 0.7892 |
| No log | 1.9245 | 102 | 0.6994 | 0.5053 | 0.6994 | 0.8363 |
| No log | 1.9623 | 104 | 0.5974 | 0.4997 | 0.5974 | 0.7729 |
| No log | 2.0 | 106 | 0.5891 | 0.5375 | 0.5891 | 0.7675 |
| No log | 2.0377 | 108 | 0.6116 | 0.5281 | 0.6116 | 0.7821 |
| No log | 2.0755 | 110 | 0.6222 | 0.6008 | 0.6222 | 0.7888 |
| No log | 2.1132 | 112 | 0.7781 | 0.5635 | 0.7781 | 0.8821 |
| No log | 2.1509 | 114 | 1.1884 | 0.3373 | 1.1884 | 1.0902 |
| No log | 2.1887 | 116 | 1.5178 | 0.2919 | 1.5178 | 1.2320 |
| No log | 2.2264 | 118 | 1.2534 | 0.3059 | 1.2534 | 1.1195 |
| No log | 2.2642 | 120 | 0.7709 | 0.5254 | 0.7709 | 0.8780 |
| No log | 2.3019 | 122 | 0.5978 | 0.5234 | 0.5978 | 0.7732 |
| No log | 2.3396 | 124 | 0.6255 | 0.4921 | 0.6255 | 0.7909 |
| No log | 2.3774 | 126 | 0.5841 | 0.5536 | 0.5841 | 0.7643 |
| No log | 2.4151 | 128 | 0.7852 | 0.5560 | 0.7852 | 0.8861 |
| No log | 2.4528 | 130 | 0.9215 | 0.3990 | 0.9215 | 0.9599 |
| No log | 2.4906 | 132 | 0.7816 | 0.5569 | 0.7816 | 0.8841 |
| No log | 2.5283 | 134 | 0.5990 | 0.5452 | 0.5990 | 0.7739 |
| No log | 2.5660 | 136 | 0.5412 | 0.4919 | 0.5412 | 0.7357 |
| No log | 2.6038 | 138 | 0.5472 | 0.4689 | 0.5472 | 0.7397 |
| No log | 2.6415 | 140 | 0.7115 | 0.5535 | 0.7115 | 0.8435 |
| No log | 2.6792 | 142 | 1.3119 | 0.3421 | 1.3119 | 1.1454 |
| No log | 2.7170 | 144 | 1.5224 | 0.2856 | 1.5224 | 1.2338 |
| No log | 2.7547 | 146 | 1.1841 | 0.4041 | 1.1841 | 1.0882 |
| No log | 2.7925 | 148 | 0.9617 | 0.4544 | 0.9617 | 0.9807 |
| No log | 2.8302 | 150 | 0.8756 | 0.4804 | 0.8756 | 0.9358 |
| No log | 2.8679 | 152 | 0.9670 | 0.4648 | 0.9670 | 0.9834 |
| No log | 2.9057 | 154 | 1.1254 | 0.3595 | 1.1254 | 1.0609 |
| No log | 2.9434 | 156 | 0.9562 | 0.4757 | 0.9562 | 0.9779 |
| No log | 2.9811 | 158 | 0.7437 | 0.5500 | 0.7437 | 0.8624 |
| No log | 3.0189 | 160 | 0.6462 | 0.5414 | 0.6462 | 0.8039 |
| No log | 3.0566 | 162 | 0.6470 | 0.5277 | 0.6470 | 0.8044 |
| No log | 3.0943 | 164 | 0.7120 | 0.5137 | 0.7120 | 0.8438 |
| No log | 3.1321 | 166 | 0.8192 | 0.5071 | 0.8192 | 0.9051 |
| No log | 3.1698 | 168 | 0.9236 | 0.4534 | 0.9236 | 0.9610 |
| No log | 3.2075 | 170 | 0.8508 | 0.5040 | 0.8508 | 0.9224 |
| No log | 3.2453 | 172 | 0.7356 | 0.5419 | 0.7356 | 0.8577 |
| No log | 3.2830 | 174 | 0.6803 | 0.5183 | 0.6803 | 0.8248 |
| No log | 3.3208 | 176 | 0.6747 | 0.5787 | 0.6747 | 0.8214 |
| No log | 3.3585 | 178 | 0.6850 | 0.5714 | 0.6850 | 0.8277 |
| No log | 3.3962 | 180 | 0.7099 | 0.5388 | 0.7099 | 0.8426 |
| No log | 3.4340 | 182 | 0.7962 | 0.5214 | 0.7962 | 0.8923 |
| No log | 3.4717 | 184 | 0.7702 | 0.5532 | 0.7702 | 0.8776 |
| No log | 3.5094 | 186 | 0.6431 | 0.5696 | 0.6431 | 0.8019 |
| No log | 3.5472 | 188 | 0.6170 | 0.5693 | 0.6170 | 0.7855 |
| No log | 3.5849 | 190 | 0.7147 | 0.5322 | 0.7147 | 0.8454 |
| No log | 3.6226 | 192 | 0.9754 | 0.4538 | 0.9754 | 0.9876 |
| No log | 3.6604 | 194 | 1.0223 | 0.4284 | 1.0223 | 1.0111 |
| No log | 3.6981 | 196 | 0.8039 | 0.5049 | 0.8039 | 0.8966 |
| No log | 3.7358 | 198 | 0.6262 | 0.5378 | 0.6262 | 0.7913 |
| No log | 3.7736 | 200 | 0.5897 | 0.5440 | 0.5897 | 0.7679 |
| No log | 3.8113 | 202 | 0.5879 | 0.4848 | 0.5879 | 0.7667 |
| No log | 3.8491 | 204 | 0.6732 | 0.5162 | 0.6732 | 0.8205 |
| No log | 3.8868 | 206 | 0.7012 | 0.5104 | 0.7012 | 0.8374 |
| No log | 3.9245 | 208 | 0.7607 | 0.5141 | 0.7607 | 0.8722 |
| No log | 3.9623 | 210 | 0.7177 | 0.4838 | 0.7177 | 0.8472 |
| No log | 4.0 | 212 | 0.7243 | 0.4587 | 0.7243 | 0.8511 |
| No log | 4.0377 | 214 | 0.7986 | 0.4752 | 0.7986 | 0.8937 |
| No log | 4.0755 | 216 | 0.8368 | 0.4465 | 0.8368 | 0.9147 |
| No log | 4.1132 | 218 | 0.7661 | 0.4456 | 0.7661 | 0.8753 |
| No log | 4.1509 | 220 | 0.6834 | 0.4762 | 0.6834 | 0.8267 |
| No log | 4.1887 | 222 | 0.6034 | 0.5200 | 0.6034 | 0.7768 |
| No log | 4.2264 | 224 | 0.5829 | 0.5026 | 0.5829 | 0.7635 |
| No log | 4.2642 | 226 | 0.5522 | 0.5046 | 0.5522 | 0.7431 |
| No log | 4.3019 | 228 | 0.6732 | 0.5143 | 0.6732 | 0.8205 |
| No log | 4.3396 | 230 | 0.8137 | 0.5024 | 0.8137 | 0.9020 |
| No log | 4.3774 | 232 | 0.7688 | 0.5289 | 0.7688 | 0.8768 |
| No log | 4.4151 | 234 | 0.6635 | 0.5474 | 0.6635 | 0.8146 |
| No log | 4.4528 | 236 | 0.5890 | 0.5847 | 0.5890 | 0.7675 |
| No log | 4.4906 | 238 | 0.5520 | 0.5296 | 0.5520 | 0.7430 |
| No log | 4.5283 | 240 | 0.5479 | 0.4513 | 0.5479 | 0.7402 |
| No log | 4.5660 | 242 | 0.5546 | 0.4287 | 0.5546 | 0.7447 |
| No log | 4.6038 | 244 | 0.6018 | 0.4705 | 0.6018 | 0.7758 |
| No log | 4.6415 | 246 | 0.8159 | 0.4863 | 0.8159 | 0.9033 |
| No log | 4.6792 | 248 | 0.9919 | 0.3930 | 0.9919 | 0.9959 |
| No log | 4.7170 | 250 | 0.8988 | 0.4560 | 0.8988 | 0.9480 |
| No log | 4.7547 | 252 | 0.7981 | 0.5224 | 0.7981 | 0.8934 |
| No log | 4.7925 | 254 | 0.7082 | 0.5429 | 0.7082 | 0.8416 |
| No log | 4.8302 | 256 | 0.6740 | 0.5810 | 0.6740 | 0.8210 |
| No log | 4.8679 | 258 | 0.6624 | 0.5910 | 0.6624 | 0.8139 |
| No log | 4.9057 | 260 | 0.6916 | 0.5185 | 0.6916 | 0.8316 |
| No log | 4.9434 | 262 | 0.8107 | 0.4617 | 0.8107 | 0.9004 |
| No log | 4.9811 | 264 | 0.9351 | 0.4061 | 0.9351 | 0.9670 |
| No log | 5.0189 | 266 | 0.9234 | 0.4039 | 0.9234 | 0.9609 |
| No log | 5.0566 | 268 | 0.7650 | 0.5064 | 0.7650 | 0.8746 |
| No log | 5.0943 | 270 | 0.6317 | 0.5487 | 0.6317 | 0.7948 |
| No log | 5.1321 | 272 | 0.5789 | 0.4779 | 0.5789 | 0.7609 |
| No log | 5.1698 | 274 | 0.6494 | 0.4821 | 0.6494 | 0.8058 |
| No log | 5.2075 | 276 | 0.6298 | 0.4959 | 0.6298 | 0.7936 |
| No log | 5.2453 | 278 | 0.5982 | 0.5173 | 0.5981 | 0.7734 |
| No log | 5.2830 | 280 | 0.8517 | 0.4940 | 0.8517 | 0.9229 |
| No log | 5.3208 | 282 | 1.2323 | 0.2818 | 1.2323 | 1.1101 |
| No log | 5.3585 | 284 | 1.2627 | 0.2595 | 1.2627 | 1.1237 |
| No log | 5.3962 | 286 | 0.9694 | 0.4360 | 0.9694 | 0.9846 |
| No log | 5.4340 | 288 | 0.6580 | 0.5133 | 0.6580 | 0.8112 |
| No log | 5.4717 | 290 | 0.5646 | 0.5791 | 0.5646 | 0.7514 |
| No log | 5.5094 | 292 | 0.5872 | 0.4476 | 0.5872 | 0.7663 |
| No log | 5.5472 | 294 | 0.5897 | 0.4657 | 0.5897 | 0.7679 |
| No log | 5.5849 | 296 | 0.5925 | 0.5642 | 0.5925 | 0.7697 |
| No log | 5.6226 | 298 | 0.6847 | 0.5382 | 0.6847 | 0.8275 |
| No log | 5.6604 | 300 | 0.9729 | 0.4245 | 0.9729 | 0.9864 |
| No log | 5.6981 | 302 | 1.3748 | 0.3148 | 1.3748 | 1.1725 |
| No log | 5.7358 | 304 | 1.6679 | 0.2424 | 1.6679 | 1.2915 |
| No log | 5.7736 | 306 | 1.5423 | 0.2462 | 1.5423 | 1.2419 |
| No log | 5.8113 | 308 | 1.1594 | 0.3335 | 1.1594 | 1.0767 |
| No log | 5.8491 | 310 | 0.8002 | 0.4709 | 0.8002 | 0.8945 |
| No log | 5.8868 | 312 | 0.5973 | 0.5889 | 0.5973 | 0.7729 |
| No log | 5.9245 | 314 | 0.5677 | 0.6079 | 0.5677 | 0.7535 |
| No log | 5.9623 | 316 | 0.5561 | 0.6258 | 0.5561 | 0.7457 |
| No log | 6.0 | 318 | 0.5507 | 0.6429 | 0.5507 | 0.7421 |
| No log | 6.0377 | 320 | 0.5508 | 0.6175 | 0.5508 | 0.7422 |
| No log | 6.0755 | 322 | 0.5591 | 0.6309 | 0.5591 | 0.7478 |
| No log | 6.1132 | 324 | 0.5875 | 0.6441 | 0.5875 | 0.7665 |
| No log | 6.1509 | 326 | 0.6572 | 0.5946 | 0.6572 | 0.8107 |
| No log | 6.1887 | 328 | 0.7346 | 0.5047 | 0.7346 | 0.8571 |
| No log | 6.2264 | 330 | 0.7883 | 0.4531 | 0.7883 | 0.8879 |
| No log | 6.2642 | 332 | 0.7583 | 0.4830 | 0.7583 | 0.8708 |
| No log | 6.3019 | 334 | 0.6975 | 0.5044 | 0.6975 | 0.8352 |
| No log | 6.3396 | 336 | 0.6328 | 0.5577 | 0.6328 | 0.7955 |
| No log | 6.3774 | 338 | 0.6077 | 0.6084 | 0.6077 | 0.7796 |
| No log | 6.4151 | 340 | 0.6454 | 0.6105 | 0.6454 | 0.8034 |
| No log | 6.4528 | 342 | 0.6772 | 0.6293 | 0.6772 | 0.8229 |
| No log | 6.4906 | 344 | 0.7261 | 0.5732 | 0.7261 | 0.8521 |
| No log | 6.5283 | 346 | 0.8307 | 0.4926 | 0.8307 | 0.9115 |
| No log | 6.5660 | 348 | 0.8205 | 0.4750 | 0.8205 | 0.9058 |
| No log | 6.6038 | 350 | 0.7000 | 0.5564 | 0.7000 | 0.8367 |
| No log | 6.6415 | 352 | 0.5884 | 0.5860 | 0.5884 | 0.7670 |
| No log | 6.6792 | 354 | 0.5588 | 0.6040 | 0.5588 | 0.7475 |
| No log | 6.7170 | 356 | 0.5540 | 0.5842 | 0.5540 | 0.7443 |
| No log | 6.7547 | 358 | 0.5684 | 0.5746 | 0.5684 | 0.7540 |
| No log | 6.7925 | 360 | 0.5849 | 0.5558 | 0.5849 | 0.7648 |
| No log | 6.8302 | 362 | 0.5776 | 0.5804 | 0.5776 | 0.7600 |
| No log | 6.8679 | 364 | 0.5644 | 0.5859 | 0.5644 | 0.7512 |
| No log | 6.9057 | 366 | 0.5538 | 0.5618 | 0.5538 | 0.7442 |
| No log | 6.9434 | 368 | 0.5417 | 0.5377 | 0.5417 | 0.7360 |
| No log | 6.9811 | 370 | 0.5460 | 0.4654 | 0.5460 | 0.7389 |
| No log | 7.0189 | 372 | 0.5722 | 0.4889 | 0.5722 | 0.7565 |
| No log | 7.0566 | 374 | 0.5693 | 0.4889 | 0.5693 | 0.7545 |
| No log | 7.0943 | 376 | 0.6087 | 0.5328 | 0.6087 | 0.7802 |
| No log | 7.1321 | 378 | 0.5971 | 0.5242 | 0.5971 | 0.7727 |
| No log | 7.1698 | 380 | 0.5952 | 0.5262 | 0.5952 | 0.7715 |
| No log | 7.2075 | 382 | 0.5716 | 0.5724 | 0.5716 | 0.7560 |
| No log | 7.2453 | 384 | 0.5663 | 0.5643 | 0.5663 | 0.7525 |
| No log | 7.2830 | 386 | 0.5560 | 0.5641 | 0.5560 | 0.7456 |
| No log | 7.3208 | 388 | 0.5432 | 0.4990 | 0.5432 | 0.7370 |
| No log | 7.3585 | 390 | 0.5355 | 0.5048 | 0.5355 | 0.7318 |
| No log | 7.3962 | 392 | 0.5335 | 0.4804 | 0.5335 | 0.7304 |
| No log | 7.4340 | 394 | 0.5408 | 0.5046 | 0.5408 | 0.7354 |
| No log | 7.4717 | 396 | 0.5895 | 0.5395 | 0.5895 | 0.7678 |
| No log | 7.5094 | 398 | 0.6544 | 0.5187 | 0.6544 | 0.8089 |
| No log | 7.5472 | 400 | 0.6710 | 0.5369 | 0.6710 | 0.8192 |
| No log | 7.5849 | 402 | 0.6784 | 0.5301 | 0.6784 | 0.8236 |
| No log | 7.6226 | 404 | 0.6731 | 0.5271 | 0.6731 | 0.8204 |
| No log | 7.6604 | 406 | 0.7178 | 0.5552 | 0.7178 | 0.8472 |
| No log | 7.6981 | 408 | 0.7901 | 0.5554 | 0.7901 | 0.8889 |
| No log | 7.7358 | 410 | 0.9238 | 0.4669 | 0.9238 | 0.9611 |
| No log | 7.7736 | 412 | 0.9980 | 0.4031 | 0.9980 | 0.9990 |
| No log | 7.8113 | 414 | 0.9183 | 0.4453 | 0.9183 | 0.9583 |
| No log | 7.8491 | 416 | 0.7103 | 0.5337 | 0.7103 | 0.8428 |
| No log | 7.8868 | 418 | 0.6005 | 0.5762 | 0.6005 | 0.7749 |
| No log | 7.9245 | 420 | 0.5906 | 0.4772 | 0.5906 | 0.7685 |
| No log | 7.9623 | 422 | 0.5679 | 0.4024 | 0.5679 | 0.7536 |
| No log | 8.0 | 424 | 0.5324 | 0.3755 | 0.5324 | 0.7296 |
| No log | 8.0377 | 426 | 0.5280 | 0.4100 | 0.5280 | 0.7266 |
| No log | 8.0755 | 428 | 0.5301 | 0.4996 | 0.5301 | 0.7281 |
| No log | 8.1132 | 430 | 0.5226 | 0.4621 | 0.5226 | 0.7229 |
| No log | 8.1509 | 432 | 0.5423 | 0.5516 | 0.5423 | 0.7364 |
| No log | 8.1887 | 434 | 0.5498 | 0.6100 | 0.5498 | 0.7415 |
| No log | 8.2264 | 436 | 0.5586 | 0.6482 | 0.5586 | 0.7474 |
| No log | 8.2642 | 438 | 0.6275 | 0.5434 | 0.6275 | 0.7922 |
| No log | 8.3019 | 440 | 0.6392 | 0.5291 | 0.6392 | 0.7995 |
| No log | 8.3396 | 442 | 0.5715 | 0.5607 | 0.5715 | 0.7560 |
| No log | 8.3774 | 444 | 0.5203 | 0.5805 | 0.5203 | 0.7213 |
| No log | 8.4151 | 446 | 0.5551 | 0.5067 | 0.5551 | 0.7451 |
| No log | 8.4528 | 448 | 0.5769 | 0.4789 | 0.5769 | 0.7596 |
| No log | 8.4906 | 450 | 0.5339 | 0.5006 | 0.5339 | 0.7307 |
| No log | 8.5283 | 452 | 0.5035 | 0.4897 | 0.5035 | 0.7096 |
| No log | 8.5660 | 454 | 0.5923 | 0.5489 | 0.5923 | 0.7696 |
| No log | 8.6038 | 456 | 0.6918 | 0.5313 | 0.6918 | 0.8318 |
| No log | 8.6415 | 458 | 0.7063 | 0.5126 | 0.7063 | 0.8404 |
| No log | 8.6792 | 460 | 0.6414 | 0.5401 | 0.6414 | 0.8009 |
| No log | 8.7170 | 462 | 0.5628 | 0.6358 | 0.5628 | 0.7502 |
| No log | 8.7547 | 464 | 0.5601 | 0.6314 | 0.5601 | 0.7484 |
| No log | 8.7925 | 466 | 0.5738 | 0.6288 | 0.5738 | 0.7575 |
| No log | 8.8302 | 468 | 0.6672 | 0.5434 | 0.6672 | 0.8169 |
| No log | 8.8679 | 470 | 0.8416 | 0.4655 | 0.8416 | 0.9174 |
| No log | 8.9057 | 472 | 0.8374 | 0.4467 | 0.8374 | 0.9151 |
| No log | 8.9434 | 474 | 0.6751 | 0.4961 | 0.6751 | 0.8216 |
| No log | 8.9811 | 476 | 0.5743 | 0.5342 | 0.5743 | 0.7578 |
| No log | 9.0189 | 478 | 0.5423 | 0.5137 | 0.5423 | 0.7364 |
| No log | 9.0566 | 480 | 0.5473 | 0.5446 | 0.5473 | 0.7398 |
| No log | 9.0943 | 482 | 0.5612 | 0.5665 | 0.5612 | 0.7491 |
| No log | 9.1321 | 484 | 0.5746 | 0.5318 | 0.5747 | 0.7581 |
| No log | 9.1698 | 486 | 0.6079 | 0.5389 | 0.6079 | 0.7797 |
| No log | 9.2075 | 488 | 0.6587 | 0.5355 | 0.6587 | 0.8116 |
| No log | 9.2453 | 490 | 0.6466 | 0.5158 | 0.6466 | 0.8041 |
| No log | 9.2830 | 492 | 0.5998 | 0.5187 | 0.5998 | 0.7744 |
| No log | 9.3208 | 494 | 0.5841 | 0.4931 | 0.5841 | 0.7643 |
| No log | 9.3585 | 496 | 0.5826 | 0.4931 | 0.5826 | 0.7633 |
| No log | 9.3962 | 498 | 0.6094 | 0.5269 | 0.6094 | 0.7806 |
| 0.4026 | 9.4340 | 500 | 0.6498 | 0.4771 | 0.6498 | 0.8061 |
| 0.4026 | 9.4717 | 502 | 0.7081 | 0.4931 | 0.7081 | 0.8415 |
| 0.4026 | 9.5094 | 504 | 0.6831 | 0.4858 | 0.6831 | 0.8265 |
| 0.4026 | 9.5472 | 506 | 0.6339 | 0.5921 | 0.6339 | 0.7962 |
| 0.4026 | 9.5849 | 508 | 0.6050 | 0.5660 | 0.6050 | 0.7778 |
| 0.4026 | 9.6226 | 510 | 0.6029 | 0.5348 | 0.6029 | 0.7765 |
| 0.4026 | 9.6604 | 512 | 0.5805 | 0.5531 | 0.5805 | 0.7619 |
| 0.4026 | 9.6981 | 514 | 0.5793 | 0.4448 | 0.5793 | 0.7611 |
| 0.4026 | 9.7358 | 516 | 0.6128 | 0.4777 | 0.6128 | 0.7828 |
| 0.4026 | 9.7736 | 518 | 0.6073 | 0.4777 | 0.6073 | 0.7793 |
| 0.4026 | 9.8113 | 520 | 0.5890 | 0.4941 | 0.5890 | 0.7674 |
| 0.4026 | 9.8491 | 522 | 0.6008 | 0.5494 | 0.6008 | 0.7751 |
| 0.4026 | 9.8868 | 524 | 0.6307 | 0.6296 | 0.6307 | 0.7942 |
| 0.4026 | 9.9245 | 526 | 0.6690 | 0.5738 | 0.6690 | 0.8179 |
| 0.4026 | 9.9623 | 528 | 0.7104 | 0.4782 | 0.7104 | 0.8429 |
| 0.4026 | 10.0 | 530 | 0.7745 | 0.4514 | 0.7745 | 0.8801 |
| 0.4026 | 10.0377 | 532 | 0.7810 | 0.4642 | 0.7810 | 0.8837 |
| 0.4026 | 10.0755 | 534 | 0.6887 | 0.4742 | 0.6887 | 0.8299 |
| 0.4026 | 10.1132 | 536 | 0.6139 | 0.5396 | 0.6139 | 0.7835 |
| 0.4026 | 10.1509 | 538 | 0.5841 | 0.5476 | 0.5841 | 0.7643 |
| 0.4026 | 10.1887 | 540 | 0.5686 | 0.5402 | 0.5686 | 0.7541 |
| 0.4026 | 10.2264 | 542 | 0.5925 | 0.5269 | 0.5925 | 0.7698 |
| 0.4026 | 10.2642 | 544 | 0.5995 | 0.4816 | 0.5995 | 0.7743 |
| 0.4026 | 10.3019 | 546 | 0.5616 | 0.4984 | 0.5616 | 0.7494 |
| 0.4026 | 10.3396 | 548 | 0.5623 | 0.5271 | 0.5623 | 0.7499 |
| 0.4026 | 10.3774 | 550 | 0.6186 | 0.5229 | 0.6186 | 0.7865 |
| 0.4026 | 10.4151 | 552 | 0.6444 | 0.5453 | 0.6444 | 0.8027 |
| 0.4026 | 10.4528 | 554 | 0.6285 | 0.5237 | 0.6285 | 0.7928 |
| 0.4026 | 10.4906 | 556 | 0.6011 | 0.5447 | 0.6011 | 0.7753 |
| 0.4026 | 10.5283 | 558 | 0.5841 | 0.5861 | 0.5841 | 0.7643 |
| 0.4026 | 10.5660 | 560 | 0.5949 | 0.5238 | 0.5949 | 0.7713 |
| 0.4026 | 10.6038 | 562 | 0.5829 | 0.5649 | 0.5829 | 0.7635 |
| 0.4026 | 10.6415 | 564 | 0.5605 | 0.5341 | 0.5605 | 0.7486 |
| 0.4026 | 10.6792 | 566 | 0.5942 | 0.5437 | 0.5942 | 0.7708 |
| 0.4026 | 10.7170 | 568 | 0.6717 | 0.4899 | 0.6717 | 0.8196 |
| 0.4026 | 10.7547 | 570 | 0.6902 | 0.4899 | 0.6902 | 0.8308 |
| 0.4026 | 10.7925 | 572 | 0.6270 | 0.5578 | 0.6270 | 0.7918 |
| 0.4026 | 10.8302 | 574 | 0.5758 | 0.5919 | 0.5758 | 0.7588 |
| 0.4026 | 10.8679 | 576 | 0.5609 | 0.5746 | 0.5609 | 0.7489 |
| 0.4026 | 10.9057 | 578 | 0.5661 | 0.5514 | 0.5661 | 0.7524 |
| 0.4026 | 10.9434 | 580 | 0.5552 | 0.5593 | 0.5552 | 0.7451 |
| 0.4026 | 10.9811 | 582 | 0.5368 | 0.5970 | 0.5368 | 0.7326 |
| 0.4026 | 11.0189 | 584 | 0.5309 | 0.5560 | 0.5309 | 0.7286 |
| 0.4026 | 11.0566 | 586 | 0.5418 | 0.5589 | 0.5418 | 0.7361 |
| 0.4026 | 11.0943 | 588 | 0.5484 | 0.5777 | 0.5484 | 0.7406 |
| 0.4026 | 11.1321 | 590 | 0.5748 | 0.5442 | 0.5748 | 0.7582 |
| 0.4026 | 11.1698 | 592 | 0.5988 | 0.5629 | 0.5988 | 0.7738 |
| 0.4026 | 11.2075 | 594 | 0.5828 | 0.5570 | 0.5828 | 0.7634 |
| 0.4026 | 11.2453 | 596 | 0.5513 | 0.6214 | 0.5513 | 0.7425 |
| 0.4026 | 11.2830 | 598 | 0.5608 | 0.5872 | 0.5608 | 0.7489 |
| 0.4026 | 11.3208 | 600 | 0.5678 | 0.6336 | 0.5678 | 0.7535 |
| 0.4026 | 11.3585 | 602 | 0.5578 | 0.6292 | 0.5578 | 0.7469 |
| 0.4026 | 11.3962 | 604 | 0.5570 | 0.6111 | 0.5570 | 0.7463 |
| 0.4026 | 11.4340 | 606 | 0.6239 | 0.5537 | 0.6239 | 0.7899 |
| 0.4026 | 11.4717 | 608 | 0.6865 | 0.5001 | 0.6865 | 0.8285 |
| 0.4026 | 11.5094 | 610 | 0.6768 | 0.5178 | 0.6768 | 0.8227 |
| 0.4026 | 11.5472 | 612 | 0.5969 | 0.5299 | 0.5969 | 0.7726 |
| 0.4026 | 11.5849 | 614 | 0.5379 | 0.6202 | 0.5379 | 0.7334 |
| 0.4026 | 11.6226 | 616 | 0.5269 | 0.5943 | 0.5269 | 0.7259 |
| 0.4026 | 11.6604 | 618 | 0.5273 | 0.5522 | 0.5273 | 0.7262 |
| 0.4026 | 11.6981 | 620 | 0.5315 | 0.4704 | 0.5315 | 0.7290 |
| 0.4026 | 11.7358 | 622 | 0.5357 | 0.4693 | 0.5357 | 0.7319 |
| 0.4026 | 11.7736 | 624 | 0.5408 | 0.4763 | 0.5408 | 0.7354 |
| 0.4026 | 11.8113 | 626 | 0.5448 | 0.5294 | 0.5448 | 0.7381 |
| 0.4026 | 11.8491 | 628 | 0.5527 | 0.5948 | 0.5527 | 0.7434 |
| 0.4026 | 11.8868 | 630 | 0.5598 | 0.6508 | 0.5598 | 0.7482 |
| 0.4026 | 11.9245 | 632 | 0.5630 | 0.6100 | 0.5630 | 0.7504 |
| 0.4026 | 11.9623 | 634 | 0.5638 | 0.5968 | 0.5638 | 0.7509 |
| 0.4026 | 12.0 | 636 | 0.5580 | 0.6151 | 0.5580 | 0.7470 |
| 0.4026 | 12.0377 | 638 | 0.5523 | 0.6151 | 0.5523 | 0.7432 |
| 0.4026 | 12.0755 | 640 | 0.5518 | 0.5886 | 0.5518 | 0.7428 |
| 0.4026 | 12.1132 | 642 | 0.5606 | 0.5969 | 0.5606 | 0.7487 |
| 0.4026 | 12.1509 | 644 | 0.5594 | 0.5722 | 0.5594 | 0.7479 |
| 0.4026 | 12.1887 | 646 | 0.5685 | 0.5300 | 0.5685 | 0.7540 |
| 0.4026 | 12.2264 | 648 | 0.5943 | 0.5674 | 0.5943 | 0.7709 |
| 0.4026 | 12.2642 | 650 | 0.6236 | 0.6040 | 0.6236 | 0.7897 |
| 0.4026 | 12.3019 | 652 | 0.6388 | 0.6125 | 0.6388 | 0.7992 |
| 0.4026 | 12.3396 | 654 | 0.6041 | 0.6178 | 0.6041 | 0.7772 |
| 0.4026 | 12.3774 | 656 | 0.5825 | 0.6264 | 0.5825 | 0.7632 |
| 0.4026 | 12.4151 | 658 | 0.5679 | 0.5762 | 0.5679 | 0.7536 |
| 0.4026 | 12.4528 | 660 | 0.5633 | 0.5067 | 0.5633 | 0.7505 |
| 0.4026 | 12.4906 | 662 | 0.5971 | 0.4847 | 0.5971 | 0.7727 |
| 0.4026 | 12.5283 | 664 | 0.6482 | 0.4899 | 0.6482 | 0.8051 |
| 0.4026 | 12.5660 | 666 | 0.6779 | 0.5240 | 0.6779 | 0.8234 |
| 0.4026 | 12.6038 | 668 | 0.6705 | 0.5282 | 0.6705 | 0.8189 |
| 0.4026 | 12.6415 | 670 | 0.6883 | 0.5501 | 0.6883 | 0.8296 |
| 0.4026 | 12.6792 | 672 | 0.7226 | 0.5104 | 0.7226 | 0.8501 |
| 0.4026 | 12.7170 | 674 | 0.6862 | 0.5238 | 0.6862 | 0.8284 |
| 0.4026 | 12.7547 | 676 | 0.6551 | 0.5396 | 0.6551 | 0.8094 |
| 0.4026 | 12.7925 | 678 | 0.6419 | 0.5340 | 0.6419 | 0.8012 |
| 0.4026 | 12.8302 | 680 | 0.5884 | 0.5355 | 0.5884 | 0.7671 |
| 0.4026 | 12.8679 | 682 | 0.5400 | 0.5896 | 0.5400 | 0.7349 |
| 0.4026 | 12.9057 | 684 | 0.5287 | 0.5767 | 0.5287 | 0.7271 |
| 0.4026 | 12.9434 | 686 | 0.5255 | 0.4666 | 0.5255 | 0.7249 |
| 0.4026 | 12.9811 | 688 | 0.5244 | 0.4816 | 0.5244 | 0.7241 |
| 0.4026 | 13.0189 | 690 | 0.5278 | 0.5004 | 0.5278 | 0.7265 |
| 0.4026 | 13.0566 | 692 | 0.5362 | 0.5856 | 0.5362 | 0.7322 |
| 0.4026 | 13.0943 | 694 | 0.5394 | 0.5211 | 0.5394 | 0.7345 |
| 0.4026 | 13.1321 | 696 | 0.5516 | 0.5197 | 0.5516 | 0.7427 |
| 0.4026 | 13.1698 | 698 | 0.5498 | 0.5197 | 0.5498 | 0.7415 |
| 0.4026 | 13.2075 | 700 | 0.5545 | 0.5197 | 0.5545 | 0.7447 |
| 0.4026 | 13.2453 | 702 | 0.5731 | 0.5223 | 0.5731 | 0.7570 |
| 0.4026 | 13.2830 | 704 | 0.5552 | 0.5197 | 0.5552 | 0.7451 |
| 0.4026 | 13.3208 | 706 | 0.5549 | 0.5197 | 0.5549 | 0.7449 |
| 0.4026 | 13.3585 | 708 | 0.5819 | 0.5010 | 0.5819 | 0.7628 |
| 0.4026 | 13.3962 | 710 | 0.5824 | 0.4920 | 0.5824 | 0.7632 |
| 0.4026 | 13.4340 | 712 | 0.5841 | 0.5010 | 0.5841 | 0.7643 |
| 0.4026 | 13.4717 | 714 | 0.5661 | 0.4740 | 0.5661 | 0.7524 |
| 0.4026 | 13.5094 | 716 | 0.5719 | 0.5130 | 0.5719 | 0.7562 |
| 0.4026 | 13.5472 | 718 | 0.5917 | 0.5262 | 0.5917 | 0.7692 |
| 0.4026 | 13.5849 | 720 | 0.6151 | 0.5205 | 0.6151 | 0.7843 |
| 0.4026 | 13.6226 | 722 | 0.6434 | 0.5072 | 0.6434 | 0.8021 |
| 0.4026 | 13.6604 | 724 | 0.6276 | 0.4949 | 0.6276 | 0.7922 |
| 0.4026 | 13.6981 | 726 | 0.5934 | 0.4935 | 0.5934 | 0.7703 |
| 0.4026 | 13.7358 | 728 | 0.5870 | 0.4908 | 0.5870 | 0.7661 |
| 0.4026 | 13.7736 | 730 | 0.5700 | 0.5524 | 0.5700 | 0.7550 |
| 0.4026 | 13.8113 | 732 | 0.5656 | 0.6182 | 0.5656 | 0.7520 |
| 0.4026 | 13.8491 | 734 | 0.5804 | 0.6054 | 0.5804 | 0.7619 |
| 0.4026 | 13.8868 | 736 | 0.6192 | 0.5462 | 0.6192 | 0.7869 |
| 0.4026 | 13.9245 | 738 | 0.6436 | 0.5109 | 0.6436 | 0.8022 |
| 0.4026 | 13.9623 | 740 | 0.6516 | 0.5066 | 0.6516 | 0.8072 |
| 0.4026 | 14.0 | 742 | 0.6361 | 0.5176 | 0.6361 | 0.7976 |
| 0.4026 | 14.0377 | 744 | 0.6154 | 0.5101 | 0.6154 | 0.7845 |
| 0.4026 | 14.0755 | 746 | 0.5645 | 0.5878 | 0.5645 | 0.7513 |
| 0.4026 | 14.1132 | 748 | 0.5596 | 0.6190 | 0.5596 | 0.7480 |
| 0.4026 | 14.1509 | 750 | 0.5807 | 0.5571 | 0.5807 | 0.7621 |
| 0.4026 | 14.1887 | 752 | 0.5913 | 0.5840 | 0.5913 | 0.7689 |
| 0.4026 | 14.2264 | 754 | 0.5706 | 0.6046 | 0.5706 | 0.7554 |
| 0.4026 | 14.2642 | 756 | 0.5529 | 0.6461 | 0.5529 | 0.7436 |
| 0.4026 | 14.3019 | 758 | 0.5776 | 0.5482 | 0.5776 | 0.7600 |
| 0.4026 | 14.3396 | 760 | 0.6419 | 0.5207 | 0.6419 | 0.8012 |
| 0.4026 | 14.3774 | 762 | 0.6703 | 0.4790 | 0.6703 | 0.8187 |
| 0.4026 | 14.4151 | 764 | 0.6680 | 0.4916 | 0.6680 | 0.8173 |
| 0.4026 | 14.4528 | 766 | 0.6454 | 0.5284 | 0.6454 | 0.8034 |
| 0.4026 | 14.4906 | 768 | 0.5906 | 0.5660 | 0.5906 | 0.7685 |
| 0.4026 | 14.5283 | 770 | 0.5657 | 0.6221 | 0.5657 | 0.7521 |
| 0.4026 | 14.5660 | 772 | 0.5493 | 0.6707 | 0.5493 | 0.7411 |
| 0.4026 | 14.6038 | 774 | 0.5489 | 0.6429 | 0.5489 | 0.7409 |
| 0.4026 | 14.6415 | 776 | 0.5699 | 0.5510 | 0.5699 | 0.7549 |
| 0.4026 | 14.6792 | 778 | 0.5806 | 0.5425 | 0.5806 | 0.7619 |
| 0.4026 | 14.7170 | 780 | 0.5693 | 0.5514 | 0.5693 | 0.7545 |
| 0.4026 | 14.7547 | 782 | 0.5407 | 0.6059 | 0.5407 | 0.7354 |
| 0.4026 | 14.7925 | 784 | 0.5329 | 0.6322 | 0.5329 | 0.7300 |
| 0.4026 | 14.8302 | 786 | 0.5516 | 0.6479 | 0.5516 | 0.7427 |
| 0.4026 | 14.8679 | 788 | 0.6065 | 0.5923 | 0.6065 | 0.7788 |
| 0.4026 | 14.9057 | 790 | 0.6305 | 0.5952 | 0.6305 | 0.7941 |
| 0.4026 | 14.9434 | 792 | 0.6533 | 0.5898 | 0.6533 | 0.8082 |
| 0.4026 | 14.9811 | 794 | 0.7234 | 0.4894 | 0.7234 | 0.8505 |
| 0.4026 | 15.0189 | 796 | 0.7686 | 0.4736 | 0.7686 | 0.8767 |
| 0.4026 | 15.0566 | 798 | 0.7126 | 0.4996 | 0.7126 | 0.8441 |
| 0.4026 | 15.0943 | 800 | 0.6281 | 0.5070 | 0.6281 | 0.7925 |
| 0.4026 | 15.1321 | 802 | 0.5525 | 0.6103 | 0.5525 | 0.7433 |
| 0.4026 | 15.1698 | 804 | 0.5437 | 0.6582 | 0.5437 | 0.7373 |
| 0.4026 | 15.2075 | 806 | 0.5752 | 0.6160 | 0.5752 | 0.7584 |
| 0.4026 | 15.2453 | 808 | 0.5765 | 0.6462 | 0.5765 | 0.7593 |
| 0.4026 | 15.2830 | 810 | 0.5650 | 0.6936 | 0.5650 | 0.7517 |
| 0.4026 | 15.3208 | 812 | 0.5896 | 0.6098 | 0.5896 | 0.7679 |
| 0.4026 | 15.3585 | 814 | 0.6334 | 0.5324 | 0.6334 | 0.7958 |
| 0.4026 | 15.3962 | 816 | 0.6662 | 0.5117 | 0.6662 | 0.8162 |
| 0.4026 | 15.4340 | 818 | 0.6409 | 0.4959 | 0.6409 | 0.8006 |
| 0.4026 | 15.4717 | 820 | 0.5866 | 0.5220 | 0.5866 | 0.7659 |
| 0.4026 | 15.5094 | 822 | 0.5443 | 0.5510 | 0.5443 | 0.7378 |
| 0.4026 | 15.5472 | 824 | 0.5345 | 0.5723 | 0.5345 | 0.7311 |
| 0.4026 | 15.5849 | 826 | 0.5417 | 0.5702 | 0.5417 | 0.7360 |
| 0.4026 | 15.6226 | 828 | 0.5403 | 0.6262 | 0.5403 | 0.7350 |
| 0.4026 | 15.6604 | 830 | 0.5824 | 0.5425 | 0.5824 | 0.7631 |
| 0.4026 | 15.6981 | 832 | 0.6294 | 0.5107 | 0.6294 | 0.7933 |
| 0.4026 | 15.7358 | 834 | 0.6359 | 0.5107 | 0.6359 | 0.7974 |
| 0.4026 | 15.7736 | 836 | 0.5840 | 0.5226 | 0.5840 | 0.7642 |
| 0.4026 | 15.8113 | 838 | 0.5516 | 0.5326 | 0.5516 | 0.7427 |
| 0.4026 | 15.8491 | 840 | 0.5465 | 0.4654 | 0.5465 | 0.7392 |
| 0.4026 | 15.8868 | 842 | 0.5452 | 0.5185 | 0.5452 | 0.7384 |
| 0.4026 | 15.9245 | 844 | 0.5479 | 0.5540 | 0.5479 | 0.7402 |
| 0.4026 | 15.9623 | 846 | 0.5709 | 0.5478 | 0.5709 | 0.7556 |
| 0.4026 | 16.0 | 848 | 0.5965 | 0.5244 | 0.5965 | 0.7723 |
| 0.4026 | 16.0377 | 850 | 0.5855 | 0.5244 | 0.5855 | 0.7652 |
| 0.4026 | 16.0755 | 852 | 0.5563 | 0.5236 | 0.5563 | 0.7458 |
| 0.4026 | 16.1132 | 854 | 0.5512 | 0.5236 | 0.5512 | 0.7425 |
| 0.4026 | 16.1509 | 856 | 0.5425 | 0.5236 | 0.5425 | 0.7366 |
| 0.4026 | 16.1887 | 858 | 0.5488 | 0.5236 | 0.5488 | 0.7408 |
| 0.4026 | 16.2264 | 860 | 0.5559 | 0.4886 | 0.5559 | 0.7456 |
| 0.4026 | 16.2642 | 862 | 0.5384 | 0.5155 | 0.5384 | 0.7338 |
| 0.4026 | 16.3019 | 864 | 0.5216 | 0.4828 | 0.5216 | 0.7222 |
| 0.4026 | 16.3396 | 866 | 0.5268 | 0.5323 | 0.5268 | 0.7258 |
| 0.4026 | 16.3774 | 868 | 0.5287 | 0.5185 | 0.5287 | 0.7271 |
| 0.4026 | 16.4151 | 870 | 0.5528 | 0.5339 | 0.5528 | 0.7435 |
| 0.4026 | 16.4528 | 872 | 0.6222 | 0.5080 | 0.6222 | 0.7888 |
| 0.4026 | 16.4906 | 874 | 0.7171 | 0.5038 | 0.7171 | 0.8468 |
| 0.4026 | 16.5283 | 876 | 0.7497 | 0.4812 | 0.7497 | 0.8658 |
| 0.4026 | 16.5660 | 878 | 0.6911 | 0.4845 | 0.6911 | 0.8313 |
| 0.4026 | 16.6038 | 880 | 0.6496 | 0.5364 | 0.6496 | 0.8060 |
| 0.4026 | 16.6415 | 882 | 0.6414 | 0.5208 | 0.6414 | 0.8009 |
| 0.4026 | 16.6792 | 884 | 0.5922 | 0.5420 | 0.5922 | 0.7695 |
| 0.4026 | 16.7170 | 886 | 0.5651 | 0.6104 | 0.5651 | 0.7517 |
| 0.4026 | 16.7547 | 888 | 0.5588 | 0.5669 | 0.5588 | 0.7475 |
| 0.4026 | 16.7925 | 890 | 0.5557 | 0.5481 | 0.5557 | 0.7455 |
| 0.4026 | 16.8302 | 892 | 0.5771 | 0.4740 | 0.5771 | 0.7597 |
| 0.4026 | 16.8679 | 894 | 0.5899 | 0.4550 | 0.5899 | 0.7680 |
| 0.4026 | 16.9057 | 896 | 0.5709 | 0.4102 | 0.5709 | 0.7556 |
| 0.4026 | 16.9434 | 898 | 0.5444 | 0.4642 | 0.5444 | 0.7378 |
| 0.4026 | 16.9811 | 900 | 0.5414 | 0.5285 | 0.5414 | 0.7358 |
| 0.4026 | 17.0189 | 902 | 0.5508 | 0.5601 | 0.5508 | 0.7422 |
| 0.4026 | 17.0566 | 904 | 0.5689 | 0.5713 | 0.5689 | 0.7543 |
| 0.4026 | 17.0943 | 906 | 0.6032 | 0.5619 | 0.6032 | 0.7766 |
| 0.4026 | 17.1321 | 908 | 0.6153 | 0.5504 | 0.6153 | 0.7844 |
| 0.4026 | 17.1698 | 910 | 0.6342 | 0.5334 | 0.6342 | 0.7964 |
| 0.4026 | 17.2075 | 912 | 0.6339 | 0.5214 | 0.6339 | 0.7962 |
| 0.4026 | 17.2453 | 914 | 0.6112 | 0.5437 | 0.6112 | 0.7818 |
| 0.4026 | 17.2830 | 916 | 0.5990 | 0.5355 | 0.5990 | 0.7739 |
| 0.4026 | 17.3208 | 918 | 0.6068 | 0.5536 | 0.6068 | 0.7789 |
| 0.4026 | 17.3585 | 920 | 0.6082 | 0.6025 | 0.6082 | 0.7799 |
| 0.4026 | 17.3962 | 922 | 0.6076 | 0.5941 | 0.6076 | 0.7795 |
| 0.4026 | 17.4340 | 924 | 0.5976 | 0.5650 | 0.5976 | 0.7731 |
| 0.4026 | 17.4717 | 926 | 0.5777 | 0.5848 | 0.5777 | 0.7601 |
| 0.4026 | 17.5094 | 928 | 0.5699 | 0.5968 | 0.5699 | 0.7549 |
| 0.4026 | 17.5472 | 930 | 0.5633 | 0.5972 | 0.5633 | 0.7505 |
| 0.4026 | 17.5849 | 932 | 0.5612 | 0.5489 | 0.5612 | 0.7491 |
| 0.4026 | 17.6226 | 934 | 0.5579 | 0.5320 | 0.5579 | 0.7469 |
| 0.4026 | 17.6604 | 936 | 0.5655 | 0.5476 | 0.5655 | 0.7520 |
| 0.4026 | 17.6981 | 938 | 0.5781 | 0.5565 | 0.5781 | 0.7603 |
| 0.4026 | 17.7358 | 940 | 0.5921 | 0.5649 | 0.5921 | 0.7695 |
| 0.4026 | 17.7736 | 942 | 0.5825 | 0.5811 | 0.5825 | 0.7632 |
| 0.4026 | 17.8113 | 944 | 0.5678 | 0.5728 | 0.5678 | 0.7535 |
| 0.4026 | 17.8491 | 946 | 0.5673 | 0.5684 | 0.5673 | 0.7532 |
| 0.4026 | 17.8868 | 948 | 0.5647 | 0.5402 | 0.5647 | 0.7514 |
| 0.4026 | 17.9245 | 950 | 0.5597 | 0.5323 | 0.5597 | 0.7481 |
| 0.4026 | 17.9623 | 952 | 0.5595 | 0.4975 | 0.5595 | 0.7480 |
| 0.4026 | 18.0 | 954 | 0.5616 | 0.5131 | 0.5616 | 0.7494 |
| 0.4026 | 18.0377 | 956 | 0.5634 | 0.5323 | 0.5634 | 0.7506 |
| 0.4026 | 18.0755 | 958 | 0.5776 | 0.5176 | 0.5776 | 0.7600 |
| 0.4026 | 18.1132 | 960 | 0.5828 | 0.5176 | 0.5828 | 0.7634 |
| 0.4026 | 18.1509 | 962 | 0.5774 | 0.4913 | 0.5774 | 0.7599 |
| 0.4026 | 18.1887 | 964 | 0.5772 | 0.4913 | 0.5772 | 0.7598 |
| 0.4026 | 18.2264 | 966 | 0.5806 | 0.4913 | 0.5806 | 0.7620 |
| 0.4026 | 18.2642 | 968 | 0.5831 | 0.4928 | 0.5831 | 0.7636 |
| 0.4026 | 18.3019 | 970 | 0.5795 | 0.5019 | 0.5795 | 0.7612 |
| 0.4026 | 18.3396 | 972 | 0.5723 | 0.5312 | 0.5723 | 0.7565 |
| 0.4026 | 18.3774 | 974 | 0.5806 | 0.5500 | 0.5806 | 0.7620 |
| 0.4026 | 18.4151 | 976 | 0.5773 | 0.5500 | 0.5773 | 0.7598 |
| 0.4026 | 18.4528 | 978 | 0.5773 | 0.5500 | 0.5773 | 0.7598 |
| 0.4026 | 18.4906 | 980 | 0.5722 | 0.5164 | 0.5722 | 0.7565 |
| 0.4026 | 18.5283 | 982 | 0.5644 | 0.4888 | 0.5644 | 0.7513 |
| 0.4026 | 18.5660 | 984 | 0.5568 | 0.4864 | 0.5568 | 0.7462 |
| 0.4026 | 18.6038 | 986 | 0.5575 | 0.5078 | 0.5575 | 0.7467 |
| 0.4026 | 18.6415 | 988 | 0.5570 | 0.5078 | 0.5570 | 0.7463 |
| 0.4026 | 18.6792 | 990 | 0.5582 | 0.5004 | 0.5582 | 0.7471 |
| 0.4026 | 18.7170 | 992 | 0.5635 | 0.4835 | 0.5635 | 0.7506 |
| 0.4026 | 18.7547 | 994 | 0.5736 | 0.4812 | 0.5736 | 0.7574 |
| 0.4026 | 18.7925 | 996 | 0.5634 | 0.4904 | 0.5634 | 0.7506 |
| 0.4026 | 18.8302 | 998 | 0.5571 | 0.5147 | 0.5571 | 0.7464 |
| 0.0707 | 18.8679 | 1000 | 0.5565 | 0.5421 | 0.5565 | 0.7460 |
| 0.0707 | 18.9057 | 1002 | 0.5585 | 0.4857 | 0.5585 | 0.7473 |
| 0.0707 | 18.9434 | 1004 | 0.5899 | 0.5059 | 0.5899 | 0.7681 |
| 0.0707 | 18.9811 | 1006 | 0.6331 | 0.4695 | 0.6331 | 0.7957 |
| 0.0707 | 19.0189 | 1008 | 0.6374 | 0.5029 | 0.6374 | 0.7983 |
| 0.0707 | 19.0566 | 1010 | 0.6404 | 0.5029 | 0.6404 | 0.8003 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
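The card gives no usage example. As a hedged sketch, the model can presumably be called through the standard text-classification pipeline; the Arabic example sentence and the interpretation of the predicted label (an ordinal organization score, suggested by the QWK/MSE/RMSE metrics above) are assumptions, not taken from the card.
```python
# Hedged sketch: score a text with the fine-tuned AraBERT classifier.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="MayBashendy/ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k10_task2_organization",
)
print(clf("هذا مقال تجريبي لتقييم تنظيم الفقرات."))  # illustrative input
```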
|
deepnet111/sn9-3b-018
|
deepnet111
| 2025-01-15T15:37:50Z
| 178
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-15T15:35:47Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
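The card leaves this section blank. As a hedged sketch only, the `llama` and `text-generation` tags suggest a standard causal-LM pipeline; the prompt and generation length are illustrative.
```python
# Hedged sketch, not provided by the card: generic text-generation pipeline usage.
from transformers import pipeline

generator = pipeline("text-generation", model="deepnet111/sn9-3b-018")
print(generator("Once upon a time", max_new_tokens=32)[0]["generated_text"])
```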
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cabustillo13/roberta-base-bne-platzi-project-nlp
|
cabustillo13
| 2025-01-15T15:37:14Z
| 14
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:BSC-LT/roberta-base-bne",
"base_model:finetune:BSC-LT/roberta-base-bne",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T04:08:44Z
|
---
library_name: transformers
license: apache-2.0
base_model: BSC-TeMU/roberta-base-bne
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-platzi-project-nlp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-platzi-project-nlp
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4563
- Accuracy: 0.8538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
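As a rough guide to reproducing this setup, the sketch below shows one way the hyperparameters above map onto a 🤗 `Trainer` configuration; the dataset, number of labels, and metric function are placeholders, since the card does not document them.
```python
# Illustrative mapping of the hyperparameters above onto TrainingArguments.
# Dataset, label count, and metrics are placeholders (not documented on this card).
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "BSC-TeMU/roberta-base-bne"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)  # label count assumed

args = TrainingArguments(
    output_dir="roberta-base-bne-platzi-project-nlp",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # AdamW with betas=(0.9, 0.999), epsilon=1e-08 (defaults)
    lr_scheduler_type="linear",
    num_train_epochs=2,
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds,
#                   compute_metrics=compute_accuracy)
# trainer.train()
```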
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3594 | 1.0 | 2500 | 0.3971 | 0.8436 |
| 0.2582 | 2.0 | 5000 | 0.4563 | 0.8538 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Tokenizers 0.21.0
|
mradermacher/WiNGPT-Babel-GGUF
|
mradermacher
| 2025-01-15T15:36:52Z
| 262
| 0
|
transformers
|
[
"transformers",
"gguf",
"translation",
"multilingual",
"en",
"zh",
"base_model:winninghealth/WiNGPT-Babel",
"base_model:quantized:winninghealth/WiNGPT-Babel",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
translation
| 2025-01-15T15:23:35Z
|
---
base_model: winninghealth/WiNGPT-Babel
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- translation
- multilingual
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/winninghealth/WiNGPT-Babel
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/WiNGPT-Babel-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
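As a concrete starting point, the sketch below downloads one of the quants and runs it through the llama-cpp-python bindings; the chosen file (Q4_K_M, marked recommended in the table below) and the generation settings are only illustrative defaults.
```python
# Illustrative only: fetch a single-file quant and load it with llama-cpp-python.
# Requires `pip install huggingface_hub llama-cpp-python`; Q4_K_M is an example choice.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/WiNGPT-Babel-GGUF",
    filename="WiNGPT-Babel.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate to English: 你好,世界"}]
)
print(out["choices"][0]["message"]["content"])
```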
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WiNGPT-Babel-GGUF/resolve/main/WiNGPT-Babel.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/WiNGPT-Babel-GGUF/resolve/main/WiNGPT-Babel.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/WiNGPT-Babel-GGUF/resolve/main/WiNGPT-Babel.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WiNGPT-Babel-GGUF/resolve/main/WiNGPT-Babel.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/WiNGPT-Babel-GGUF/resolve/main/WiNGPT-Babel.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/WiNGPT-Babel-GGUF/resolve/main/WiNGPT-Babel.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WiNGPT-Babel-GGUF/resolve/main/WiNGPT-Babel.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WiNGPT-Babel-GGUF/resolve/main/WiNGPT-Babel.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/WiNGPT-Babel-GGUF/resolve/main/WiNGPT-Babel.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/WiNGPT-Babel-GGUF/resolve/main/WiNGPT-Babel.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/WiNGPT-Babel-GGUF/resolve/main/WiNGPT-Babel.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/WiNGPT-Babel-GGUF/resolve/main/WiNGPT-Babel.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to questions you might have and/or to request quantization of another model.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lesso06/933a3df0-e8dc-4b5f-808c-5edc7e3ea567
|
lesso06
| 2025-01-15T15:35:57Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T15:32:36Z
|
---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 933a3df0-e8dc-4b5f-808c-5edc7e3ea567
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-7B
bf16: true
chat_template: llama3
datasets:
- data_files:
- 59ee0b6235c26daa_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/59ee0b6235c26daa_train_data.json
type:
field_input: pronoun
field_instruction: sentence
field_output: definition
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso06/933a3df0-e8dc-4b5f-808c-5edc7e3ea567
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/59ee0b6235c26daa_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cac36870-8e18-426e-9717-ca6857936964
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cac36870-8e18-426e-9717-ca6857936964
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 933a3df0-e8dc-4b5f-808c-5edc7e3ea567
This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.5612 | 0.0035 | 1 | 3.7545 |
| 3.549 | 0.0177 | 5 | 3.5143 |
| 2.8654 | 0.0354 | 10 | 2.2111 |
| 1.9689 | 0.0531 | 15 | 1.8258 |
| 1.7188 | 0.0707 | 20 | 1.6879 |
| 1.7133 | 0.0884 | 25 | 1.6562 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
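A minimal inference sketch, assuming you want to attach this LoRA adapter to the Qwen/Qwen1.5-7B base model with 🤗 PEFT; the 8-bit loading mirrors the training config above but is optional, and the prompt is only an example.
```python
# Minimal sketch: load the base model and attach this LoRA adapter with PEFT.
# 8-bit loading mirrors the training config above; full-precision loading also works.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "Qwen/Qwen1.5-7B"
adapter_id = "lesso06/933a3df0-e8dc-4b5f-808c-5edc7e3ea567"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "She handed the book to Maria. Who does 'she' refer to?"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```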
|
August4293/Llama3.1-8B-PRM-Deepseek-Data-4bit
|
August4293
| 2025-01-15T15:35:53Z
| 16
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-01-15T15:33:26Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
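Since this section is still a stub, here is a minimal, illustrative sketch for loading the checkpoint with 🤗 Transformers: the repository ships 4-bit (bitsandbytes) weights, so the saved quantization config should be picked up automatically, and the prompt is only an example (the card does not yet document the intended input format).
```python
# Minimal sketch: load the 4-bit (bitsandbytes) checkpoint and run a short generation.
# The quantization config stored in the repo is applied automatically by from_pretrained.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "August4293/Llama3.1-8B-PRM-Deepseek-Data-4bit"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Is the step '2 + 2 = 5' in this solution correct?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```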
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MayBashendy/ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k13_task1_organization
|
MayBashendy
| 2025-01-15T15:35:39Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T15:17:34Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k13_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k13_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8239
- Qwk: 0.6567
- Mse: 0.8239
- Rmse: 0.9077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0206 | 2 | 6.6386 | 0.0308 | 6.6386 | 2.5766 |
| No log | 0.0412 | 4 | 4.3409 | 0.0894 | 4.3409 | 2.0835 |
| No log | 0.0619 | 6 | 2.9340 | 0.1149 | 2.9340 | 1.7129 |
| No log | 0.0825 | 8 | 2.5329 | 0.0970 | 2.5329 | 1.5915 |
| No log | 0.1031 | 10 | 2.0854 | 0.1260 | 2.0854 | 1.4441 |
| No log | 0.1237 | 12 | 1.7133 | 0.2655 | 1.7133 | 1.3089 |
| No log | 0.1443 | 14 | 1.5473 | 0.1852 | 1.5473 | 1.2439 |
| No log | 0.1649 | 16 | 1.5422 | 0.2832 | 1.5422 | 1.2419 |
| No log | 0.1856 | 18 | 1.8395 | 0.2333 | 1.8395 | 1.3563 |
| No log | 0.2062 | 20 | 2.4245 | 0.0541 | 2.4245 | 1.5571 |
| No log | 0.2268 | 22 | 2.2319 | 0.0699 | 2.2319 | 1.4940 |
| No log | 0.2474 | 24 | 1.6828 | 0.2951 | 1.6828 | 1.2972 |
| No log | 0.2680 | 26 | 1.1992 | 0.4874 | 1.1992 | 1.0951 |
| No log | 0.2887 | 28 | 1.5225 | 0.2000 | 1.5225 | 1.2339 |
| No log | 0.3093 | 30 | 1.3550 | 0.3966 | 1.3550 | 1.1641 |
| No log | 0.3299 | 32 | 1.0281 | 0.6015 | 1.0281 | 1.0139 |
| No log | 0.3505 | 34 | 1.1542 | 0.5037 | 1.1542 | 1.0743 |
| No log | 0.3711 | 36 | 1.2467 | 0.5211 | 1.2467 | 1.1166 |
| No log | 0.3918 | 38 | 1.0146 | 0.6197 | 1.0146 | 1.0073 |
| No log | 0.4124 | 40 | 0.9003 | 0.64 | 0.9003 | 0.9489 |
| No log | 0.4330 | 42 | 0.8324 | 0.6918 | 0.8324 | 0.9123 |
| No log | 0.4536 | 44 | 0.8800 | 0.7329 | 0.8800 | 0.9381 |
| No log | 0.4742 | 46 | 1.3931 | 0.5829 | 1.3931 | 1.1803 |
| No log | 0.4948 | 48 | 1.2730 | 0.6036 | 1.2730 | 1.1283 |
| No log | 0.5155 | 50 | 0.9855 | 0.6667 | 0.9855 | 0.9927 |
| No log | 0.5361 | 52 | 1.0368 | 0.5946 | 1.0368 | 1.0182 |
| No log | 0.5567 | 54 | 1.1082 | 0.5517 | 1.1082 | 1.0527 |
| No log | 0.5773 | 56 | 1.0923 | 0.5882 | 1.0923 | 1.0451 |
| No log | 0.5979 | 58 | 1.0331 | 0.6 | 1.0331 | 1.0164 |
| No log | 0.6186 | 60 | 0.9837 | 0.6225 | 0.9837 | 0.9918 |
| No log | 0.6392 | 62 | 0.9091 | 0.6323 | 0.9091 | 0.9535 |
| No log | 0.6598 | 64 | 0.8918 | 0.6582 | 0.8918 | 0.9444 |
| No log | 0.6804 | 66 | 1.0270 | 0.6471 | 1.0270 | 1.0134 |
| No log | 0.7010 | 68 | 0.9795 | 0.6282 | 0.9795 | 0.9897 |
| No log | 0.7216 | 70 | 0.8182 | 0.7059 | 0.8182 | 0.9046 |
| No log | 0.7423 | 72 | 0.8577 | 0.7105 | 0.8577 | 0.9261 |
| No log | 0.7629 | 74 | 1.1304 | 0.5769 | 1.1304 | 1.0632 |
| No log | 0.7835 | 76 | 1.3418 | 0.5093 | 1.3418 | 1.1584 |
| No log | 0.8041 | 78 | 1.1096 | 0.6125 | 1.1096 | 1.0534 |
| No log | 0.8247 | 80 | 0.9768 | 0.6835 | 0.9768 | 0.9883 |
| No log | 0.8454 | 82 | 0.9448 | 0.6800 | 0.9448 | 0.9720 |
| No log | 0.8660 | 84 | 1.1318 | 0.5844 | 1.1318 | 1.0639 |
| No log | 0.8866 | 86 | 1.1446 | 0.5503 | 1.1446 | 1.0699 |
| No log | 0.9072 | 88 | 1.0225 | 0.6081 | 1.0225 | 1.0112 |
| No log | 0.9278 | 90 | 0.9466 | 0.6494 | 0.9466 | 0.9729 |
| No log | 0.9485 | 92 | 0.9471 | 0.6410 | 0.9471 | 0.9732 |
| No log | 0.9691 | 94 | 0.9464 | 0.6369 | 0.9464 | 0.9728 |
| No log | 0.9897 | 96 | 0.9880 | 0.6832 | 0.9880 | 0.9940 |
| No log | 1.0103 | 98 | 1.0383 | 0.6220 | 1.0383 | 1.0190 |
| No log | 1.0309 | 100 | 0.9291 | 0.6708 | 0.9291 | 0.9639 |
| No log | 1.0515 | 102 | 0.8811 | 0.7051 | 0.8811 | 0.9387 |
| No log | 1.0722 | 104 | 0.8661 | 0.6933 | 0.8661 | 0.9306 |
| No log | 1.0928 | 106 | 0.8505 | 0.7320 | 0.8505 | 0.9222 |
| No log | 1.1134 | 108 | 0.9108 | 0.6835 | 0.9108 | 0.9544 |
| No log | 1.1340 | 110 | 0.9671 | 0.6545 | 0.9671 | 0.9834 |
| No log | 1.1546 | 112 | 0.8803 | 0.7143 | 0.8803 | 0.9382 |
| No log | 1.1753 | 114 | 0.7077 | 0.7927 | 0.7077 | 0.8412 |
| No log | 1.1959 | 116 | 0.6499 | 0.7907 | 0.6499 | 0.8062 |
| No log | 1.2165 | 118 | 0.6344 | 0.7977 | 0.6344 | 0.7965 |
| No log | 1.2371 | 120 | 0.7094 | 0.7602 | 0.7094 | 0.8422 |
| No log | 1.2577 | 122 | 0.8116 | 0.7425 | 0.8116 | 0.9009 |
| No log | 1.2784 | 124 | 0.8744 | 0.6962 | 0.8744 | 0.9351 |
| No log | 1.2990 | 126 | 0.7462 | 0.7407 | 0.7462 | 0.8638 |
| No log | 1.3196 | 128 | 0.5628 | 0.8070 | 0.5628 | 0.7502 |
| No log | 1.3402 | 130 | 0.6567 | 0.7831 | 0.6567 | 0.8103 |
| No log | 1.3608 | 132 | 0.9735 | 0.6748 | 0.9735 | 0.9867 |
| No log | 1.3814 | 134 | 0.6423 | 0.8024 | 0.6423 | 0.8014 |
| No log | 1.4021 | 136 | 0.6233 | 0.8128 | 0.6233 | 0.7895 |
| No log | 1.4227 | 138 | 0.9567 | 0.7437 | 0.9567 | 0.9781 |
| No log | 1.4433 | 140 | 0.9548 | 0.7111 | 0.9548 | 0.9771 |
| No log | 1.4639 | 142 | 1.0082 | 0.6144 | 1.0082 | 1.0041 |
| No log | 1.4845 | 144 | 0.9656 | 0.6234 | 0.9656 | 0.9827 |
| No log | 1.5052 | 146 | 0.9097 | 0.6941 | 0.9097 | 0.9538 |
| No log | 1.5258 | 148 | 0.8018 | 0.6918 | 0.8018 | 0.8954 |
| No log | 1.5464 | 150 | 0.7871 | 0.6918 | 0.7871 | 0.8872 |
| No log | 1.5670 | 152 | 0.8130 | 0.6923 | 0.8130 | 0.9017 |
| No log | 1.5876 | 154 | 0.8739 | 0.6883 | 0.8739 | 0.9348 |
| No log | 1.6082 | 156 | 0.9073 | 0.6883 | 0.9073 | 0.9525 |
| No log | 1.6289 | 158 | 0.8979 | 0.6797 | 0.8979 | 0.9476 |
| No log | 1.6495 | 160 | 0.9202 | 0.7020 | 0.9202 | 0.9593 |
| No log | 1.6701 | 162 | 0.9533 | 0.6933 | 0.9533 | 0.9764 |
| No log | 1.6907 | 164 | 1.0495 | 0.6323 | 1.0495 | 1.0244 |
| No log | 1.7113 | 166 | 0.9693 | 0.6667 | 0.9693 | 0.9845 |
| No log | 1.7320 | 168 | 0.9011 | 0.7059 | 0.9011 | 0.9493 |
| No log | 1.7526 | 170 | 0.8788 | 0.6914 | 0.8788 | 0.9375 |
| No log | 1.7732 | 172 | 0.8192 | 0.7152 | 0.8192 | 0.9051 |
| No log | 1.7938 | 174 | 0.7623 | 0.7738 | 0.7623 | 0.8731 |
| No log | 1.8144 | 176 | 0.6998 | 0.7436 | 0.6998 | 0.8365 |
| No log | 1.8351 | 178 | 0.6744 | 0.775 | 0.6744 | 0.8212 |
| No log | 1.8557 | 180 | 0.6923 | 0.7882 | 0.6923 | 0.8321 |
| No log | 1.8763 | 182 | 1.0034 | 0.6889 | 1.0034 | 1.0017 |
| No log | 1.8969 | 184 | 1.1008 | 0.6243 | 1.1008 | 1.0492 |
| No log | 1.9175 | 186 | 0.8802 | 0.6962 | 0.8802 | 0.9382 |
| No log | 1.9381 | 188 | 0.6508 | 0.7432 | 0.6508 | 0.8067 |
| No log | 1.9588 | 190 | 0.6284 | 0.7821 | 0.6284 | 0.7927 |
| No log | 1.9794 | 192 | 0.6294 | 0.7547 | 0.6294 | 0.7933 |
| No log | 2.0 | 194 | 0.5511 | 0.8187 | 0.5511 | 0.7424 |
| No log | 2.0206 | 196 | 0.6162 | 0.8105 | 0.6162 | 0.7850 |
| No log | 2.0412 | 198 | 0.9315 | 0.7263 | 0.9315 | 0.9651 |
| No log | 2.0619 | 200 | 1.2573 | 0.6597 | 1.2573 | 1.1213 |
| No log | 2.0825 | 202 | 1.2732 | 0.6136 | 1.2732 | 1.1283 |
| No log | 2.1031 | 204 | 1.0180 | 0.6581 | 1.0180 | 1.0090 |
| No log | 2.1237 | 206 | 0.7410 | 0.6803 | 0.7410 | 0.8608 |
| No log | 2.1443 | 208 | 0.5425 | 0.7975 | 0.5425 | 0.7366 |
| No log | 2.1649 | 210 | 0.4957 | 0.7879 | 0.4957 | 0.7041 |
| No log | 2.1856 | 212 | 0.5009 | 0.8235 | 0.5009 | 0.7077 |
| No log | 2.2062 | 214 | 0.6764 | 0.7727 | 0.6764 | 0.8225 |
| No log | 2.2268 | 216 | 0.8855 | 0.7368 | 0.8855 | 0.9410 |
| No log | 2.2474 | 218 | 0.8688 | 0.7018 | 0.8688 | 0.9321 |
| No log | 2.2680 | 220 | 0.7828 | 0.7389 | 0.7828 | 0.8847 |
| No log | 2.2887 | 222 | 0.7133 | 0.7285 | 0.7133 | 0.8446 |
| No log | 2.3093 | 224 | 0.7029 | 0.7403 | 0.7029 | 0.8384 |
| No log | 2.3299 | 226 | 0.7274 | 0.7468 | 0.7274 | 0.8529 |
| No log | 2.3505 | 228 | 0.7905 | 0.7024 | 0.7905 | 0.8891 |
| No log | 2.3711 | 230 | 0.8163 | 0.7024 | 0.8163 | 0.9035 |
| No log | 2.3918 | 232 | 0.6817 | 0.7407 | 0.6817 | 0.8257 |
| No log | 2.4124 | 234 | 0.6279 | 0.7898 | 0.6279 | 0.7924 |
| No log | 2.4330 | 236 | 0.6566 | 0.7285 | 0.6566 | 0.8103 |
| No log | 2.4536 | 238 | 0.6737 | 0.7432 | 0.6737 | 0.8208 |
| No log | 2.4742 | 240 | 0.6652 | 0.7586 | 0.6652 | 0.8156 |
| No log | 2.4948 | 242 | 0.6573 | 0.7133 | 0.6573 | 0.8108 |
| No log | 2.5155 | 244 | 0.6947 | 0.7376 | 0.6947 | 0.8335 |
| No log | 2.5361 | 246 | 0.6919 | 0.7532 | 0.6919 | 0.8318 |
| No log | 2.5567 | 248 | 0.7783 | 0.7634 | 0.7783 | 0.8822 |
| No log | 2.5773 | 250 | 0.9080 | 0.7725 | 0.9080 | 0.9529 |
| No log | 2.5979 | 252 | 0.7760 | 0.7789 | 0.7760 | 0.8809 |
| No log | 2.6186 | 254 | 0.5427 | 0.8046 | 0.5427 | 0.7367 |
| No log | 2.6392 | 256 | 0.5157 | 0.8395 | 0.5157 | 0.7181 |
| No log | 2.6598 | 258 | 0.5074 | 0.8199 | 0.5074 | 0.7123 |
| No log | 2.6804 | 260 | 0.5683 | 0.7879 | 0.5683 | 0.7539 |
| No log | 2.7010 | 262 | 0.7725 | 0.6994 | 0.7725 | 0.8789 |
| No log | 2.7216 | 264 | 0.9238 | 0.6790 | 0.9238 | 0.9612 |
| No log | 2.7423 | 266 | 0.8716 | 0.7152 | 0.8716 | 0.9336 |
| No log | 2.7629 | 268 | 0.7049 | 0.7123 | 0.7049 | 0.8396 |
| No log | 2.7835 | 270 | 0.5827 | 0.8280 | 0.5827 | 0.7634 |
| No log | 2.8041 | 272 | 0.5649 | 0.8101 | 0.5649 | 0.7516 |
| No log | 2.8247 | 274 | 0.5398 | 0.8323 | 0.5398 | 0.7347 |
| No log | 2.8454 | 276 | 0.5942 | 0.8098 | 0.5942 | 0.7708 |
| No log | 2.8660 | 278 | 0.8162 | 0.7 | 0.8162 | 0.9035 |
| No log | 2.8866 | 280 | 1.0026 | 0.6341 | 1.0026 | 1.0013 |
| No log | 2.9072 | 282 | 1.0403 | 0.6174 | 1.0403 | 1.0199 |
| No log | 2.9278 | 284 | 0.9378 | 0.6968 | 0.9378 | 0.9684 |
| No log | 2.9485 | 286 | 0.7710 | 0.7273 | 0.7710 | 0.8781 |
| No log | 2.9691 | 288 | 0.6648 | 0.7758 | 0.6648 | 0.8153 |
| No log | 2.9897 | 290 | 0.6411 | 0.7952 | 0.6411 | 0.8007 |
| No log | 3.0103 | 292 | 0.6512 | 0.7719 | 0.6512 | 0.8070 |
| No log | 3.0309 | 294 | 0.6983 | 0.7273 | 0.6983 | 0.8356 |
| No log | 3.0515 | 296 | 0.7335 | 0.7051 | 0.7335 | 0.8565 |
| No log | 3.0722 | 298 | 0.7403 | 0.7179 | 0.7403 | 0.8604 |
| No log | 3.0928 | 300 | 0.6836 | 0.7179 | 0.6836 | 0.8268 |
| No log | 3.1134 | 302 | 0.6452 | 0.7296 | 0.6452 | 0.8033 |
| No log | 3.1340 | 304 | 0.6395 | 0.7297 | 0.6395 | 0.7997 |
| No log | 3.1546 | 306 | 0.6961 | 0.7 | 0.6961 | 0.8343 |
| No log | 3.1753 | 308 | 0.7678 | 0.6567 | 0.7678 | 0.8762 |
| No log | 3.1959 | 310 | 0.7398 | 0.6912 | 0.7398 | 0.8601 |
| No log | 3.2165 | 312 | 0.7046 | 0.7123 | 0.7046 | 0.8394 |
| No log | 3.2371 | 314 | 0.8046 | 0.7037 | 0.8046 | 0.8970 |
| No log | 3.2577 | 316 | 0.9462 | 0.7273 | 0.9462 | 0.9727 |
| No log | 3.2784 | 318 | 0.8458 | 0.7135 | 0.8458 | 0.9197 |
| No log | 3.2990 | 320 | 0.6890 | 0.7160 | 0.6890 | 0.8300 |
| No log | 3.3196 | 322 | 0.6354 | 0.7654 | 0.6354 | 0.7971 |
| No log | 3.3402 | 324 | 0.6535 | 0.7403 | 0.6535 | 0.8084 |
| No log | 3.3608 | 326 | 0.6979 | 0.7237 | 0.6979 | 0.8354 |
| No log | 3.3814 | 328 | 0.8189 | 0.72 | 0.8189 | 0.9049 |
| No log | 3.4021 | 330 | 0.8727 | 0.6714 | 0.8727 | 0.9342 |
| No log | 3.4227 | 332 | 0.8652 | 0.7059 | 0.8652 | 0.9302 |
| No log | 3.4433 | 334 | 0.7842 | 0.7286 | 0.7842 | 0.8856 |
| No log | 3.4639 | 336 | 0.7216 | 0.7517 | 0.7216 | 0.8495 |
| No log | 3.4845 | 338 | 0.7338 | 0.7215 | 0.7338 | 0.8566 |
| No log | 3.5052 | 340 | 0.7771 | 0.6957 | 0.7771 | 0.8815 |
| No log | 3.5258 | 342 | 0.7209 | 0.7305 | 0.7209 | 0.8490 |
| No log | 3.5464 | 344 | 0.6624 | 0.7262 | 0.6624 | 0.8139 |
| No log | 3.5670 | 346 | 0.6127 | 0.7879 | 0.6127 | 0.7828 |
| No log | 3.5876 | 348 | 0.6189 | 0.7665 | 0.6189 | 0.7867 |
| No log | 3.6082 | 350 | 0.6325 | 0.7456 | 0.6325 | 0.7953 |
| No log | 3.6289 | 352 | 0.6053 | 0.7952 | 0.6053 | 0.7780 |
| No log | 3.6495 | 354 | 0.5672 | 0.8323 | 0.5672 | 0.7531 |
| No log | 3.6701 | 356 | 0.5962 | 0.7719 | 0.5962 | 0.7721 |
| No log | 3.6907 | 358 | 0.6806 | 0.7650 | 0.6806 | 0.8250 |
| No log | 3.7113 | 360 | 0.7369 | 0.7701 | 0.7369 | 0.8585 |
| No log | 3.7320 | 362 | 0.7234 | 0.7514 | 0.7234 | 0.8505 |
| No log | 3.7526 | 364 | 0.6946 | 0.7262 | 0.6946 | 0.8334 |
| No log | 3.7732 | 366 | 0.7153 | 0.7375 | 0.7153 | 0.8457 |
| No log | 3.7938 | 368 | 0.8381 | 0.6795 | 0.8381 | 0.9155 |
| No log | 3.8144 | 370 | 0.9907 | 0.6467 | 0.9907 | 0.9953 |
| No log | 3.8351 | 372 | 0.9842 | 0.6588 | 0.9842 | 0.9921 |
| No log | 3.8557 | 374 | 0.8263 | 0.7030 | 0.8263 | 0.9090 |
| No log | 3.8763 | 376 | 0.6364 | 0.7826 | 0.6364 | 0.7978 |
| No log | 3.8969 | 378 | 0.5979 | 0.8129 | 0.5979 | 0.7733 |
| No log | 3.9175 | 380 | 0.6032 | 0.8129 | 0.6032 | 0.7766 |
| No log | 3.9381 | 382 | 0.6340 | 0.7821 | 0.6340 | 0.7963 |
| No log | 3.9588 | 384 | 0.6742 | 0.7515 | 0.6742 | 0.8211 |
| No log | 3.9794 | 386 | 0.7577 | 0.7066 | 0.7577 | 0.8705 |
| No log | 4.0 | 388 | 0.7956 | 0.6918 | 0.7956 | 0.8920 |
| No log | 4.0206 | 390 | 0.8222 | 0.6623 | 0.8222 | 0.9068 |
| No log | 4.0412 | 392 | 0.7831 | 0.6835 | 0.7831 | 0.8849 |
| No log | 4.0619 | 394 | 0.7014 | 0.7453 | 0.7014 | 0.8375 |
| No log | 4.0825 | 396 | 0.6360 | 0.7595 | 0.6360 | 0.7975 |
| No log | 4.1031 | 398 | 0.6268 | 0.7848 | 0.6268 | 0.7917 |
| No log | 4.1237 | 400 | 0.6348 | 0.7853 | 0.6348 | 0.7968 |
| No log | 4.1443 | 402 | 0.7172 | 0.7337 | 0.7172 | 0.8469 |
| No log | 4.1649 | 404 | 0.8698 | 0.7262 | 0.8698 | 0.9327 |
| No log | 4.1856 | 406 | 0.9461 | 0.6909 | 0.9461 | 0.9727 |
| No log | 4.2062 | 408 | 0.8941 | 0.6795 | 0.8941 | 0.9456 |
| No log | 4.2268 | 410 | 0.8287 | 0.7123 | 0.8287 | 0.9103 |
| No log | 4.2474 | 412 | 0.8115 | 0.7123 | 0.8115 | 0.9008 |
| No log | 4.2680 | 414 | 0.8251 | 0.6994 | 0.8251 | 0.9084 |
| No log | 4.2887 | 416 | 0.9469 | 0.7396 | 0.9469 | 0.9731 |
| No log | 4.3093 | 418 | 0.9229 | 0.7437 | 0.9229 | 0.9607 |
| No log | 4.3299 | 420 | 0.7387 | 0.7668 | 0.7387 | 0.8595 |
| No log | 4.3505 | 422 | 0.6246 | 0.7294 | 0.6246 | 0.7903 |
| No log | 4.3711 | 424 | 0.6276 | 0.7355 | 0.6276 | 0.7922 |
| No log | 4.3918 | 426 | 0.6380 | 0.7368 | 0.6380 | 0.7987 |
| No log | 4.4124 | 428 | 0.6312 | 0.7651 | 0.6312 | 0.7945 |
| No log | 4.4330 | 430 | 0.6137 | 0.7651 | 0.6137 | 0.7834 |
| No log | 4.4536 | 432 | 0.5521 | 0.8182 | 0.5521 | 0.7430 |
| No log | 4.4742 | 434 | 0.5049 | 0.8354 | 0.5049 | 0.7106 |
| No log | 4.4948 | 436 | 0.4706 | 0.8415 | 0.4706 | 0.6860 |
| No log | 4.5155 | 438 | 0.4786 | 0.8276 | 0.4786 | 0.6918 |
| No log | 4.5361 | 440 | 0.5208 | 0.8161 | 0.5208 | 0.7217 |
| No log | 4.5567 | 442 | 0.5843 | 0.7758 | 0.5843 | 0.7644 |
| No log | 4.5773 | 444 | 0.6199 | 0.7451 | 0.6199 | 0.7874 |
| No log | 4.5979 | 446 | 0.6672 | 0.7397 | 0.6672 | 0.8168 |
| No log | 4.6186 | 448 | 0.6637 | 0.7465 | 0.6637 | 0.8147 |
| No log | 4.6392 | 450 | 0.6270 | 0.7361 | 0.6270 | 0.7918 |
| No log | 4.6598 | 452 | 0.6211 | 0.7467 | 0.6211 | 0.7881 |
| No log | 4.6804 | 454 | 0.6129 | 0.7595 | 0.6129 | 0.7829 |
| No log | 4.7010 | 456 | 0.6952 | 0.7375 | 0.6952 | 0.8338 |
| No log | 4.7216 | 458 | 0.8275 | 0.7487 | 0.8275 | 0.9097 |
| No log | 4.7423 | 460 | 0.8799 | 0.7391 | 0.8799 | 0.9381 |
| No log | 4.7629 | 462 | 0.7715 | 0.7143 | 0.7715 | 0.8784 |
| No log | 4.7835 | 464 | 0.6494 | 0.7451 | 0.6494 | 0.8059 |
| No log | 4.8041 | 466 | 0.6108 | 0.7792 | 0.6108 | 0.7815 |
| No log | 4.8247 | 468 | 0.5989 | 0.8 | 0.5989 | 0.7739 |
| No log | 4.8454 | 470 | 0.5888 | 0.8101 | 0.5888 | 0.7673 |
| No log | 4.8660 | 472 | 0.6099 | 0.7643 | 0.6099 | 0.7809 |
| No log | 4.8866 | 474 | 0.6723 | 0.7485 | 0.6723 | 0.8199 |
| No log | 4.9072 | 476 | 0.6485 | 0.7826 | 0.6485 | 0.8053 |
| No log | 4.9278 | 478 | 0.5903 | 0.8 | 0.5903 | 0.7683 |
| No log | 4.9485 | 480 | 0.5514 | 0.8068 | 0.5514 | 0.7426 |
| No log | 4.9691 | 482 | 0.5580 | 0.8298 | 0.5580 | 0.7470 |
| No log | 4.9897 | 484 | 0.5587 | 0.8046 | 0.5587 | 0.7475 |
| No log | 5.0103 | 486 | 0.5812 | 0.7976 | 0.5812 | 0.7624 |
| No log | 5.0309 | 488 | 0.6220 | 0.8182 | 0.6220 | 0.7887 |
| No log | 5.0515 | 490 | 0.6713 | 0.7465 | 0.6713 | 0.8194 |
| No log | 5.0722 | 492 | 0.7329 | 0.6883 | 0.7329 | 0.8561 |
| No log | 5.0928 | 494 | 0.8174 | 0.7195 | 0.8174 | 0.9041 |
| No log | 5.1134 | 496 | 0.8007 | 0.7253 | 0.8007 | 0.8948 |
| No log | 5.1340 | 498 | 0.7276 | 0.7429 | 0.7276 | 0.8530 |
| 0.3973 | 5.1546 | 500 | 0.6351 | 0.7717 | 0.6351 | 0.7969 |
| 0.3973 | 5.1753 | 502 | 0.6046 | 0.8222 | 0.6046 | 0.7775 |
| 0.3973 | 5.1959 | 504 | 0.5989 | 0.8161 | 0.5989 | 0.7739 |
| 0.3973 | 5.2165 | 506 | 0.5907 | 0.8 | 0.5907 | 0.7686 |
| 0.3973 | 5.2371 | 508 | 0.6053 | 0.7746 | 0.6053 | 0.7780 |
| 0.3973 | 5.2577 | 510 | 0.5994 | 0.7929 | 0.5994 | 0.7742 |
| 0.3973 | 5.2784 | 512 | 0.6174 | 0.7927 | 0.6174 | 0.7858 |
| 0.3973 | 5.2990 | 514 | 0.5979 | 0.8121 | 0.5979 | 0.7732 |
| 0.3973 | 5.3196 | 516 | 0.5881 | 0.8049 | 0.5881 | 0.7669 |
| 0.3973 | 5.3402 | 518 | 0.5806 | 0.8199 | 0.5806 | 0.7619 |
| 0.3973 | 5.3608 | 520 | 0.5868 | 0.8228 | 0.5868 | 0.7660 |
| 0.3973 | 5.3814 | 522 | 0.5853 | 0.8302 | 0.5853 | 0.7651 |
| 0.3973 | 5.4021 | 524 | 0.6008 | 0.7826 | 0.6008 | 0.7751 |
| 0.3973 | 5.4227 | 526 | 0.6094 | 0.7826 | 0.6094 | 0.7807 |
| 0.3973 | 5.4433 | 528 | 0.5773 | 0.7853 | 0.5773 | 0.7598 |
| 0.3973 | 5.4639 | 530 | 0.5429 | 0.8050 | 0.5429 | 0.7368 |
| 0.3973 | 5.4845 | 532 | 0.5545 | 0.8050 | 0.5545 | 0.7446 |
| 0.3973 | 5.5052 | 534 | 0.6134 | 0.7925 | 0.6134 | 0.7832 |
| 0.3973 | 5.5258 | 536 | 0.7517 | 0.7067 | 0.7517 | 0.8670 |
| 0.3973 | 5.5464 | 538 | 0.8842 | 0.6621 | 0.8842 | 0.9403 |
| 0.3973 | 5.5670 | 540 | 0.9498 | 0.6377 | 0.9498 | 0.9746 |
| 0.3973 | 5.5876 | 542 | 0.9129 | 0.6569 | 0.9129 | 0.9555 |
| 0.3973 | 5.6082 | 544 | 0.8239 | 0.6567 | 0.8239 | 0.9077 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
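A minimal usage sketch, assuming you want to score an Arabic essay with this checkpoint through the 🤗 `pipeline` API; the label ids and their mapping to the organization score scale are not documented on this card, so treat the returned labels as placeholders.
```python
# Minimal sketch: run the fine-tuned AraBERT classifier via the pipeline API.
# Label meanings (organization score scale) are not documented on this card.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="MayBashendy/ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k13_task1_organization",
)
print(clf("ضع هنا نص المقال العربي المراد تقييم تنظيمه."))  # placeholder Arabic essay text
```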
|
MayBashendy/ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k19_task1_organization
|
MayBashendy
| 2025-01-15T15:35:36Z
| 184
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-12-31T16:45:16Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k19_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k19_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8235
- Qwk: 0.6980
- Mse: 0.8235
- Rmse: 0.9075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0225 | 2 | 6.7499 | 0.0239 | 6.7499 | 2.5981 |
| No log | 0.0449 | 4 | 3.9448 | 0.0779 | 3.9448 | 1.9862 |
| No log | 0.0674 | 6 | 3.2132 | 0.0119 | 3.2132 | 1.7925 |
| No log | 0.0899 | 8 | 2.8634 | 0.0759 | 2.8634 | 1.6921 |
| No log | 0.1124 | 10 | 2.4238 | 0.1399 | 2.4238 | 1.5569 |
| No log | 0.1348 | 12 | 2.4654 | 0.1678 | 2.4654 | 1.5702 |
| No log | 0.1573 | 14 | 2.3670 | 0.1644 | 2.3670 | 1.5385 |
| No log | 0.1798 | 16 | 2.0761 | 0.2121 | 2.0761 | 1.4409 |
| No log | 0.2022 | 18 | 3.0611 | 0.0256 | 3.0611 | 1.7496 |
| No log | 0.2247 | 20 | 3.0347 | 0.0261 | 3.0347 | 1.7420 |
| No log | 0.2472 | 22 | 2.9350 | 0.0261 | 2.9350 | 1.7132 |
| No log | 0.2697 | 24 | 2.3165 | 0.1127 | 2.3165 | 1.5220 |
| No log | 0.2921 | 26 | 2.0376 | 0.2105 | 2.0376 | 1.4275 |
| No log | 0.3146 | 28 | 1.7324 | 0.2087 | 1.7324 | 1.3162 |
| No log | 0.3371 | 30 | 1.6030 | 0.1835 | 1.6030 | 1.2661 |
| No log | 0.3596 | 32 | 1.6510 | 0.1897 | 1.6510 | 1.2849 |
| No log | 0.3820 | 34 | 2.1315 | 0.2302 | 2.1315 | 1.4600 |
| No log | 0.4045 | 36 | 2.9023 | 0.0914 | 2.9023 | 1.7036 |
| No log | 0.4270 | 38 | 3.1944 | 0.0978 | 3.1944 | 1.7873 |
| No log | 0.4494 | 40 | 2.9683 | 0.1163 | 2.9683 | 1.7229 |
| No log | 0.4719 | 42 | 2.3236 | 0.2125 | 2.3236 | 1.5243 |
| No log | 0.4944 | 44 | 1.7125 | 0.3704 | 1.7125 | 1.3086 |
| No log | 0.5169 | 46 | 1.4584 | 0.4394 | 1.4584 | 1.2076 |
| No log | 0.5393 | 48 | 1.5439 | 0.4317 | 1.5439 | 1.2425 |
| No log | 0.5618 | 50 | 2.0450 | 0.2420 | 2.0450 | 1.4300 |
| No log | 0.5843 | 52 | 2.1974 | 0.2375 | 2.1974 | 1.4824 |
| No log | 0.6067 | 54 | 2.1522 | 0.2420 | 2.1522 | 1.4670 |
| No log | 0.6292 | 56 | 2.2223 | 0.2420 | 2.2223 | 1.4908 |
| No log | 0.6517 | 58 | 2.0063 | 0.3289 | 2.0063 | 1.4164 |
| No log | 0.6742 | 60 | 1.9971 | 0.3333 | 1.9971 | 1.4132 |
| No log | 0.6966 | 62 | 1.7158 | 0.3889 | 1.7158 | 1.3099 |
| No log | 0.7191 | 64 | 1.3191 | 0.5248 | 1.3191 | 1.1485 |
| No log | 0.7416 | 66 | 1.2559 | 0.5362 | 1.2559 | 1.1207 |
| No log | 0.7640 | 68 | 1.3554 | 0.5616 | 1.3554 | 1.1642 |
| No log | 0.7865 | 70 | 1.6083 | 0.4940 | 1.6083 | 1.2682 |
| No log | 0.8090 | 72 | 1.7086 | 0.5111 | 1.7086 | 1.3071 |
| No log | 0.8315 | 74 | 1.4948 | 0.5444 | 1.4948 | 1.2226 |
| No log | 0.8539 | 76 | 1.3163 | 0.5479 | 1.3163 | 1.1473 |
| No log | 0.8764 | 78 | 1.3710 | 0.5235 | 1.3710 | 1.1709 |
| No log | 0.8989 | 80 | 1.2650 | 0.5405 | 1.2650 | 1.1247 |
| No log | 0.9213 | 82 | 1.1409 | 0.6234 | 1.1409 | 1.0681 |
| No log | 0.9438 | 84 | 0.9831 | 0.6351 | 0.9831 | 0.9915 |
| No log | 0.9663 | 86 | 0.9026 | 0.6809 | 0.9026 | 0.9501 |
| No log | 0.9888 | 88 | 0.8614 | 0.6853 | 0.8614 | 0.9281 |
| No log | 1.0112 | 90 | 0.9918 | 0.6099 | 0.9918 | 0.9959 |
| No log | 1.0337 | 92 | 1.1130 | 0.5634 | 1.1130 | 1.0550 |
| No log | 1.0562 | 94 | 0.9977 | 0.6056 | 0.9977 | 0.9988 |
| No log | 1.0787 | 96 | 1.0509 | 0.6065 | 1.0509 | 1.0251 |
| No log | 1.1011 | 98 | 1.3156 | 0.6145 | 1.3156 | 1.1470 |
| No log | 1.1236 | 100 | 1.4242 | 0.5780 | 1.4242 | 1.1934 |
| No log | 1.1461 | 102 | 1.2445 | 0.6380 | 1.2445 | 1.1156 |
| No log | 1.1685 | 104 | 1.1225 | 0.5772 | 1.1225 | 1.0595 |
| No log | 1.1910 | 106 | 1.0577 | 0.5571 | 1.0577 | 1.0284 |
| No log | 1.2135 | 108 | 1.0190 | 0.6029 | 1.0190 | 1.0095 |
| No log | 1.2360 | 110 | 1.0296 | 0.5693 | 1.0296 | 1.0147 |
| No log | 1.2584 | 112 | 1.1317 | 0.5833 | 1.1317 | 1.0638 |
| No log | 1.2809 | 114 | 1.1935 | 0.5850 | 1.1935 | 1.0925 |
| No log | 1.3034 | 116 | 1.0900 | 0.6143 | 1.0900 | 1.0440 |
| No log | 1.3258 | 118 | 1.0439 | 0.6143 | 1.0439 | 1.0217 |
| No log | 1.3483 | 120 | 1.1155 | 0.5906 | 1.1155 | 1.0562 |
| No log | 1.3708 | 122 | 1.0796 | 0.6081 | 1.0796 | 1.0391 |
| No log | 1.3933 | 124 | 0.9929 | 0.6267 | 0.9929 | 0.9965 |
| No log | 1.4157 | 126 | 0.8937 | 0.6759 | 0.8937 | 0.9454 |
| No log | 1.4382 | 128 | 0.8898 | 0.6528 | 0.8898 | 0.9433 |
| No log | 1.4607 | 130 | 1.0211 | 0.6410 | 1.0211 | 1.0105 |
| No log | 1.4831 | 132 | 1.4143 | 0.5294 | 1.4143 | 1.1893 |
| No log | 1.5056 | 134 | 1.5741 | 0.5172 | 1.5741 | 1.2546 |
| No log | 1.5281 | 136 | 1.2332 | 0.6203 | 1.2332 | 1.1105 |
| No log | 1.5506 | 138 | 0.8877 | 0.6056 | 0.8877 | 0.9422 |
| No log | 1.5730 | 140 | 0.8260 | 0.6853 | 0.8260 | 0.9089 |
| No log | 1.5955 | 142 | 0.8399 | 0.7083 | 0.8399 | 0.9164 |
| No log | 1.6180 | 144 | 0.9279 | 0.6494 | 0.9279 | 0.9633 |
| No log | 1.6404 | 146 | 1.1181 | 0.6047 | 1.1181 | 1.0574 |
| No log | 1.6629 | 148 | 1.0097 | 0.6584 | 1.0097 | 1.0048 |
| No log | 1.6854 | 150 | 0.9732 | 0.6456 | 0.9732 | 0.9865 |
| No log | 1.7079 | 152 | 1.1485 | 0.5698 | 1.1485 | 1.0717 |
| No log | 1.7303 | 154 | 1.2391 | 0.5663 | 1.2391 | 1.1131 |
| No log | 1.7528 | 156 | 1.4465 | 0.5909 | 1.4465 | 1.2027 |
| No log | 1.7753 | 158 | 1.1367 | 0.6013 | 1.1367 | 1.0661 |
| No log | 1.7978 | 160 | 0.8003 | 0.6712 | 0.8003 | 0.8946 |
| No log | 1.8202 | 162 | 0.7855 | 0.7123 | 0.7855 | 0.8863 |
| No log | 1.8427 | 164 | 0.8127 | 0.7248 | 0.8127 | 0.9015 |
| No log | 1.8652 | 166 | 0.9137 | 0.6490 | 0.9137 | 0.9559 |
| No log | 1.8876 | 168 | 1.2149 | 0.5697 | 1.2149 | 1.1022 |
| No log | 1.9101 | 170 | 1.3223 | 0.5731 | 1.3223 | 1.1499 |
| No log | 1.9326 | 172 | 1.2370 | 0.6057 | 1.2370 | 1.1122 |
| No log | 1.9551 | 174 | 1.0690 | 0.6235 | 1.0690 | 1.0339 |
| No log | 1.9775 | 176 | 0.8758 | 0.6800 | 0.8758 | 0.9359 |
| No log | 2.0 | 178 | 0.8399 | 0.6974 | 0.8399 | 0.9165 |
| No log | 2.0225 | 180 | 0.7908 | 0.7273 | 0.7908 | 0.8893 |
| No log | 2.0449 | 182 | 0.7553 | 0.7368 | 0.7553 | 0.8691 |
| No log | 2.0674 | 184 | 0.9469 | 0.6875 | 0.9469 | 0.9731 |
| No log | 2.0899 | 186 | 1.0288 | 0.6582 | 1.0288 | 1.0143 |
| No log | 2.1124 | 188 | 0.9775 | 0.6582 | 0.9775 | 0.9887 |
| No log | 2.1348 | 190 | 0.9110 | 0.6625 | 0.9110 | 0.9545 |
| No log | 2.1573 | 192 | 0.9113 | 0.6905 | 0.9113 | 0.9546 |
| No log | 2.1798 | 194 | 0.8910 | 0.7093 | 0.8910 | 0.9439 |
| No log | 2.2022 | 196 | 0.9157 | 0.6901 | 0.9157 | 0.9569 |
| No log | 2.2247 | 198 | 0.9856 | 0.7079 | 0.9856 | 0.9928 |
| No log | 2.2472 | 200 | 0.9730 | 0.6936 | 0.9730 | 0.9864 |
| No log | 2.2697 | 202 | 0.8251 | 0.7237 | 0.8251 | 0.9084 |
| No log | 2.2921 | 204 | 0.8114 | 0.6939 | 0.8114 | 0.9008 |
| No log | 2.3146 | 206 | 0.8341 | 0.7190 | 0.8341 | 0.9133 |
| No log | 2.3371 | 208 | 0.8705 | 0.6131 | 0.8705 | 0.9330 |
| No log | 2.3596 | 210 | 0.9179 | 0.6331 | 0.9179 | 0.9581 |
| No log | 2.3820 | 212 | 0.9431 | 0.6429 | 0.9431 | 0.9711 |
| No log | 2.4045 | 214 | 1.0174 | 0.6122 | 1.0174 | 1.0087 |
| No log | 2.4270 | 216 | 1.2815 | 0.5732 | 1.2815 | 1.1320 |
| No log | 2.4494 | 218 | 1.4477 | 0.5650 | 1.4477 | 1.2032 |
| No log | 2.4719 | 220 | 1.1777 | 0.6433 | 1.1777 | 1.0852 |
| No log | 2.4944 | 222 | 0.8770 | 0.7229 | 0.8770 | 0.9365 |
| No log | 2.5169 | 224 | 0.7887 | 0.7317 | 0.7887 | 0.8881 |
| No log | 2.5393 | 226 | 0.8828 | 0.7011 | 0.8828 | 0.9396 |
| No log | 2.5618 | 228 | 0.9417 | 0.6821 | 0.9417 | 0.9704 |
| No log | 2.5843 | 230 | 0.8212 | 0.7093 | 0.8212 | 0.9062 |
| No log | 2.6067 | 232 | 0.7767 | 0.7436 | 0.7767 | 0.8813 |
| No log | 2.6292 | 234 | 0.8518 | 0.7162 | 0.8518 | 0.9229 |
| No log | 2.6517 | 236 | 0.8481 | 0.7162 | 0.8481 | 0.9209 |
| No log | 2.6742 | 238 | 0.8264 | 0.7261 | 0.8264 | 0.9091 |
| No log | 2.6966 | 240 | 0.9441 | 0.6893 | 0.9441 | 0.9716 |
| No log | 2.7191 | 242 | 1.0546 | 0.6704 | 1.0546 | 1.0269 |
| No log | 2.7416 | 244 | 0.9842 | 0.6816 | 0.9842 | 0.9921 |
| No log | 2.7640 | 246 | 0.8624 | 0.7453 | 0.8624 | 0.9286 |
| No log | 2.7865 | 248 | 0.8618 | 0.6803 | 0.8618 | 0.9283 |
| No log | 2.8090 | 250 | 0.8531 | 0.6667 | 0.8531 | 0.9236 |
| No log | 2.8315 | 252 | 0.8015 | 0.7368 | 0.8015 | 0.8953 |
| No log | 2.8539 | 254 | 0.8789 | 0.6962 | 0.8789 | 0.9375 |
| No log | 2.8764 | 256 | 0.9868 | 0.6824 | 0.9868 | 0.9934 |
| No log | 2.8989 | 258 | 0.9168 | 0.6667 | 0.9168 | 0.9575 |
| No log | 2.9213 | 260 | 0.7860 | 0.6968 | 0.7860 | 0.8866 |
| No log | 2.9438 | 262 | 0.7684 | 0.7320 | 0.7684 | 0.8766 |
| No log | 2.9663 | 264 | 0.8350 | 0.7273 | 0.8350 | 0.9138 |
| No log | 2.9888 | 266 | 0.9936 | 0.6497 | 0.9936 | 0.9968 |
| No log | 3.0112 | 268 | 1.1765 | 0.6420 | 1.1765 | 1.0847 |
| No log | 3.0337 | 270 | 1.3074 | 0.6127 | 1.3074 | 1.1434 |
| No log | 3.0562 | 272 | 1.1744 | 0.6627 | 1.1744 | 1.0837 |
| No log | 3.0787 | 274 | 0.8994 | 0.6711 | 0.8994 | 0.9484 |
| No log | 3.1011 | 276 | 0.7643 | 0.6980 | 0.7643 | 0.8743 |
| No log | 3.1236 | 278 | 0.7656 | 0.6901 | 0.7656 | 0.8750 |
| No log | 3.1461 | 280 | 0.7889 | 0.6621 | 0.7889 | 0.8882 |
| No log | 3.1685 | 282 | 0.9606 | 0.6928 | 0.9606 | 0.9801 |
| No log | 3.1910 | 284 | 1.1225 | 0.6667 | 1.1225 | 1.0595 |
| No log | 3.2135 | 286 | 1.2142 | 0.6463 | 1.2142 | 1.1019 |
| No log | 3.2360 | 288 | 1.0268 | 0.6923 | 1.0268 | 1.0133 |
| No log | 3.2584 | 290 | 0.7790 | 0.6928 | 0.7790 | 0.8826 |
| No log | 3.2809 | 292 | 0.7135 | 0.7368 | 0.7135 | 0.8447 |
| No log | 3.3034 | 294 | 0.7415 | 0.7484 | 0.7415 | 0.8611 |
| No log | 3.3258 | 296 | 0.8153 | 0.7317 | 0.8153 | 0.9030 |
| No log | 3.3483 | 298 | 0.9019 | 0.7337 | 0.9019 | 0.9497 |
| No log | 3.3708 | 300 | 0.8978 | 0.7305 | 0.8978 | 0.9475 |
| No log | 3.3933 | 302 | 0.8980 | 0.7229 | 0.8980 | 0.9476 |
| No log | 3.4157 | 304 | 0.9298 | 0.7030 | 0.9298 | 0.9643 |
| No log | 3.4382 | 306 | 0.8938 | 0.7117 | 0.8938 | 0.9454 |
| No log | 3.4607 | 308 | 0.7981 | 0.7097 | 0.7981 | 0.8933 |
| No log | 3.4831 | 310 | 0.7560 | 0.6853 | 0.7560 | 0.8695 |
| No log | 3.5056 | 312 | 0.7667 | 0.7361 | 0.7667 | 0.8756 |
| No log | 3.5281 | 314 | 0.7564 | 0.7133 | 0.7564 | 0.8697 |
| No log | 3.5506 | 316 | 0.7166 | 0.7333 | 0.7166 | 0.8465 |
| No log | 3.5730 | 318 | 0.8723 | 0.7195 | 0.8723 | 0.9340 |
| No log | 3.5955 | 320 | 1.1398 | 0.6556 | 1.1398 | 1.0676 |
| No log | 3.6180 | 322 | 1.1866 | 0.6145 | 1.1866 | 1.0893 |
| No log | 3.6404 | 324 | 1.0065 | 0.6420 | 1.0065 | 1.0032 |
| No log | 3.6629 | 326 | 0.7958 | 0.7059 | 0.7958 | 0.8921 |
| No log | 3.6854 | 328 | 0.7029 | 0.7586 | 0.7029 | 0.8384 |
| No log | 3.7079 | 330 | 0.6871 | 0.7361 | 0.6871 | 0.8289 |
| No log | 3.7303 | 332 | 0.6626 | 0.7671 | 0.6626 | 0.8140 |
| No log | 3.7528 | 334 | 0.6622 | 0.7871 | 0.6622 | 0.8137 |
| No log | 3.7753 | 336 | 0.7631 | 0.7515 | 0.7631 | 0.8736 |
| No log | 3.7978 | 338 | 0.8749 | 0.7052 | 0.8749 | 0.9354 |
| No log | 3.8202 | 340 | 0.8515 | 0.7126 | 0.8515 | 0.9228 |
| No log | 3.8427 | 342 | 0.7951 | 0.7647 | 0.7951 | 0.8917 |
| No log | 3.8652 | 344 | 0.8031 | 0.7389 | 0.8031 | 0.8962 |
| No log | 3.8876 | 346 | 0.8594 | 0.6901 | 0.8594 | 0.9270 |
| No log | 3.9101 | 348 | 0.8294 | 0.6809 | 0.8294 | 0.9107 |
| No log | 3.9326 | 350 | 0.7983 | 0.7162 | 0.7983 | 0.8935 |
| No log | 3.9551 | 352 | 0.8242 | 0.7362 | 0.8242 | 0.9078 |
| No log | 3.9775 | 354 | 0.8498 | 0.7305 | 0.8498 | 0.9218 |
| No log | 4.0 | 356 | 0.9308 | 0.7030 | 0.9308 | 0.9648 |
| No log | 4.0225 | 358 | 0.9551 | 0.7030 | 0.9551 | 0.9773 |
| No log | 4.0449 | 360 | 0.8899 | 0.6914 | 0.8899 | 0.9433 |
| No log | 4.0674 | 362 | 0.8509 | 0.6951 | 0.8509 | 0.9224 |
| No log | 4.0899 | 364 | 0.7797 | 0.7229 | 0.7797 | 0.8830 |
| No log | 4.1124 | 366 | 0.7379 | 0.75 | 0.7379 | 0.8590 |
| No log | 4.1348 | 368 | 0.7696 | 0.75 | 0.7696 | 0.8773 |
| No log | 4.1573 | 370 | 0.8279 | 0.6909 | 0.8279 | 0.9099 |
| No log | 4.1798 | 372 | 0.8621 | 0.7317 | 0.8621 | 0.9285 |
| No log | 4.2022 | 374 | 0.9652 | 0.6747 | 0.9652 | 0.9825 |
| No log | 4.2247 | 376 | 0.9987 | 0.6296 | 0.9987 | 0.9994 |
| No log | 4.2472 | 378 | 1.0098 | 0.6883 | 1.0098 | 1.0049 |
| No log | 4.2697 | 380 | 1.0363 | 0.625 | 1.0363 | 1.0180 |
| No log | 4.2921 | 382 | 0.9867 | 0.5899 | 0.9867 | 0.9933 |
| No log | 4.3146 | 384 | 0.9271 | 0.6846 | 0.9271 | 0.9629 |
| No log | 4.3371 | 386 | 0.9243 | 0.6452 | 0.9243 | 0.9614 |
| No log | 4.3596 | 388 | 1.0277 | 0.6258 | 1.0277 | 1.0138 |
| No log | 4.3820 | 390 | 1.1102 | 0.6310 | 1.1102 | 1.0537 |
| No log | 4.4045 | 392 | 0.8984 | 0.6420 | 0.8984 | 0.9478 |
| No log | 4.4270 | 394 | 0.6753 | 0.8125 | 0.6753 | 0.8218 |
| No log | 4.4494 | 396 | 0.6416 | 0.7785 | 0.6416 | 0.8010 |
| No log | 4.4719 | 398 | 0.6401 | 0.7785 | 0.6401 | 0.8001 |
| No log | 4.4944 | 400 | 0.6687 | 0.7785 | 0.6687 | 0.8178 |
| No log | 4.5169 | 402 | 0.7273 | 0.7075 | 0.7273 | 0.8528 |
| No log | 4.5393 | 404 | 0.7722 | 0.7020 | 0.7722 | 0.8787 |
| No log | 4.5618 | 406 | 0.7629 | 0.7105 | 0.7629 | 0.8734 |
| No log | 4.5843 | 408 | 0.7780 | 0.6974 | 0.7780 | 0.8820 |
| No log | 4.6067 | 410 | 0.7689 | 0.6974 | 0.7689 | 0.8769 |
| No log | 4.6292 | 412 | 0.7241 | 0.75 | 0.7241 | 0.8510 |
| No log | 4.6517 | 414 | 0.6443 | 0.7785 | 0.6443 | 0.8027 |
| No log | 4.6742 | 416 | 0.6238 | 0.7785 | 0.6238 | 0.7898 |
| No log | 4.6966 | 418 | 0.6294 | 0.7867 | 0.6294 | 0.7934 |
| No log | 4.7191 | 420 | 0.6906 | 0.7925 | 0.6906 | 0.8310 |
| No log | 4.7416 | 422 | 0.8316 | 0.7284 | 0.8316 | 0.9119 |
| No log | 4.7640 | 424 | 0.7839 | 0.7425 | 0.7839 | 0.8854 |
| No log | 4.7865 | 426 | 0.6938 | 0.7904 | 0.6938 | 0.8330 |
| No log | 4.8090 | 428 | 0.6855 | 0.7904 | 0.6855 | 0.8280 |
| No log | 4.8315 | 430 | 0.6866 | 0.7952 | 0.6866 | 0.8286 |
| No log | 4.8539 | 432 | 0.7095 | 0.8125 | 0.7095 | 0.8423 |
| No log | 4.8764 | 434 | 0.7935 | 0.7654 | 0.7935 | 0.8908 |
| No log | 4.8989 | 436 | 0.9087 | 0.6829 | 0.9087 | 0.9533 |
| No log | 4.9213 | 438 | 0.9127 | 0.6667 | 0.9127 | 0.9553 |
| No log | 4.9438 | 440 | 0.8895 | 0.6627 | 0.8895 | 0.9431 |
| No log | 4.9663 | 442 | 0.7709 | 0.7619 | 0.7709 | 0.8780 |
| No log | 4.9888 | 444 | 0.6746 | 0.7904 | 0.6746 | 0.8213 |
| No log | 5.0112 | 446 | 0.6282 | 0.7662 | 0.6282 | 0.7926 |
| No log | 5.0337 | 448 | 0.6500 | 0.75 | 0.6500 | 0.8062 |
| No log | 5.0562 | 450 | 0.6861 | 0.7550 | 0.6861 | 0.8283 |
| No log | 5.0787 | 452 | 0.7197 | 0.7417 | 0.7197 | 0.8483 |
| No log | 5.1011 | 454 | 0.7811 | 0.7123 | 0.7811 | 0.8838 |
| No log | 5.1236 | 456 | 0.8052 | 0.6713 | 0.8052 | 0.8973 |
| No log | 5.1461 | 458 | 0.7841 | 0.6806 | 0.7841 | 0.8855 |
| No log | 5.1685 | 460 | 0.7582 | 0.6986 | 0.7582 | 0.8708 |
| No log | 5.1910 | 462 | 0.7210 | 0.7383 | 0.7210 | 0.8491 |
| No log | 5.2135 | 464 | 0.6827 | 0.7432 | 0.6827 | 0.8262 |
| No log | 5.2360 | 466 | 0.6956 | 0.7383 | 0.6956 | 0.8340 |
| No log | 5.2584 | 468 | 0.7954 | 0.72 | 0.7954 | 0.8919 |
| No log | 5.2809 | 470 | 0.8401 | 0.7355 | 0.8401 | 0.9166 |
| No log | 5.3034 | 472 | 0.7855 | 0.7564 | 0.7855 | 0.8863 |
| No log | 5.3258 | 474 | 0.7808 | 0.7643 | 0.7808 | 0.8836 |
| No log | 5.3483 | 476 | 0.7830 | 0.7485 | 0.7830 | 0.8849 |
| No log | 5.3708 | 478 | 0.6540 | 0.7636 | 0.6540 | 0.8087 |
| No log | 5.3933 | 480 | 0.6426 | 0.8171 | 0.6426 | 0.8016 |
| No log | 5.4157 | 482 | 0.7162 | 0.7561 | 0.7162 | 0.8463 |
| No log | 5.4382 | 484 | 0.8179 | 0.7470 | 0.8179 | 0.9044 |
| No log | 5.4607 | 486 | 0.8560 | 0.6871 | 0.8560 | 0.9252 |
| No log | 5.4831 | 488 | 0.8279 | 0.7044 | 0.8279 | 0.9099 |
| No log | 5.5056 | 490 | 0.7908 | 0.7692 | 0.7908 | 0.8892 |
| No log | 5.5281 | 492 | 0.8189 | 0.72 | 0.8189 | 0.9049 |
| No log | 5.5506 | 494 | 0.8573 | 0.6712 | 0.8573 | 0.9259 |
| No log | 5.5730 | 496 | 0.9023 | 0.6622 | 0.9023 | 0.9499 |
| No log | 5.5955 | 498 | 0.9562 | 0.6395 | 0.9562 | 0.9779 |
| 0.4108 | 5.6180 | 500 | 0.9236 | 0.6154 | 0.9236 | 0.9610 |
| 0.4108 | 5.6404 | 502 | 0.8723 | 0.6429 | 0.8723 | 0.9340 |
| 0.4108 | 5.6629 | 504 | 0.8019 | 0.7211 | 0.8019 | 0.8955 |
| 0.4108 | 5.6854 | 506 | 0.7450 | 0.7662 | 0.7450 | 0.8631 |
| 0.4108 | 5.7079 | 508 | 0.7097 | 0.7848 | 0.7097 | 0.8424 |
| 0.4108 | 5.7303 | 510 | 0.7363 | 0.7831 | 0.7363 | 0.8581 |
| 0.4108 | 5.7528 | 512 | 0.7654 | 0.75 | 0.7654 | 0.8749 |
| 0.4108 | 5.7753 | 514 | 0.7439 | 0.75 | 0.7439 | 0.8625 |
| 0.4108 | 5.7978 | 516 | 0.6668 | 0.7853 | 0.6668 | 0.8166 |
| 0.4108 | 5.8202 | 518 | 0.6113 | 0.8125 | 0.6113 | 0.7819 |
| 0.4108 | 5.8427 | 520 | 0.6067 | 0.8050 | 0.6067 | 0.7789 |
| 0.4108 | 5.8652 | 522 | 0.6229 | 0.8125 | 0.6229 | 0.7893 |
| 0.4108 | 5.8876 | 524 | 0.6233 | 0.7848 | 0.6233 | 0.7895 |
| 0.4108 | 5.9101 | 526 | 0.6430 | 0.7925 | 0.6430 | 0.8019 |
| 0.4108 | 5.9326 | 528 | 0.6314 | 0.7925 | 0.6314 | 0.7946 |
| 0.4108 | 5.9551 | 530 | 0.6136 | 0.8 | 0.6136 | 0.7833 |
| 0.4108 | 5.9775 | 532 | 0.6032 | 0.8125 | 0.6032 | 0.7766 |
| 0.4108 | 6.0 | 534 | 0.6027 | 0.8025 | 0.6027 | 0.7764 |
| 0.4108 | 6.0225 | 536 | 0.6238 | 0.8025 | 0.6238 | 0.7898 |
| 0.4108 | 6.0449 | 538 | 0.6663 | 0.8121 | 0.6663 | 0.8163 |
| 0.4108 | 6.0674 | 540 | 0.6986 | 0.8047 | 0.6986 | 0.8358 |
| 0.4108 | 6.0899 | 542 | 0.7147 | 0.7886 | 0.7147 | 0.8454 |
| 0.4108 | 6.1124 | 544 | 0.7203 | 0.7953 | 0.7203 | 0.8487 |
| 0.4108 | 6.1348 | 546 | 0.7287 | 0.7904 | 0.7287 | 0.8536 |
| 0.4108 | 6.1573 | 548 | 0.7130 | 0.7925 | 0.7130 | 0.8444 |
| 0.4108 | 6.1798 | 550 | 0.6790 | 0.7949 | 0.6790 | 0.8240 |
| 0.4108 | 6.2022 | 552 | 0.6462 | 0.7949 | 0.6462 | 0.8039 |
| 0.4108 | 6.2247 | 554 | 0.6340 | 0.7949 | 0.6340 | 0.7963 |
| 0.4108 | 6.2472 | 556 | 0.6552 | 0.7901 | 0.6552 | 0.8094 |
| 0.4108 | 6.2697 | 558 | 0.6958 | 0.7879 | 0.6958 | 0.8342 |
| 0.4108 | 6.2921 | 560 | 0.7199 | 0.7784 | 0.7199 | 0.8485 |
| 0.4108 | 6.3146 | 562 | 0.7557 | 0.7590 | 0.7557 | 0.8693 |
| 0.4108 | 6.3371 | 564 | 0.7634 | 0.7848 | 0.7634 | 0.8738 |
| 0.4108 | 6.3596 | 566 | 0.7853 | 0.7564 | 0.7853 | 0.8862 |
| 0.4108 | 6.3820 | 568 | 0.8368 | 0.6788 | 0.8368 | 0.9148 |
| 0.4108 | 6.4045 | 570 | 0.9209 | 0.6941 | 0.9209 | 0.9596 |
| 0.4108 | 6.4270 | 572 | 1.0415 | 0.6630 | 1.0415 | 1.0205 |
| 0.4108 | 6.4494 | 574 | 0.9611 | 0.6952 | 0.9611 | 0.9804 |
| 0.4108 | 6.4719 | 576 | 0.8231 | 0.7667 | 0.8231 | 0.9073 |
| 0.4108 | 6.4944 | 578 | 0.7428 | 0.7701 | 0.7428 | 0.8619 |
| 0.4108 | 6.5169 | 580 | 0.6969 | 0.7952 | 0.6969 | 0.8348 |
| 0.4108 | 6.5393 | 582 | 0.7073 | 0.8050 | 0.7073 | 0.8410 |
| 0.4108 | 6.5618 | 584 | 0.7410 | 0.7564 | 0.7410 | 0.8608 |
| 0.4108 | 6.5843 | 586 | 0.7670 | 0.6993 | 0.7670 | 0.8758 |
| 0.4108 | 6.6067 | 588 | 0.7922 | 0.6763 | 0.7922 | 0.8900 |
| 0.4108 | 6.6292 | 590 | 0.8097 | 0.6763 | 0.8097 | 0.8998 |
| 0.4108 | 6.6517 | 592 | 0.8084 | 0.6667 | 0.8084 | 0.8991 |
| 0.4108 | 6.6742 | 594 | 0.8051 | 0.6423 | 0.8051 | 0.8973 |
| 0.4108 | 6.6966 | 596 | 0.7751 | 0.6761 | 0.7751 | 0.8804 |
| 0.4108 | 6.7191 | 598 | 0.7358 | 0.7248 | 0.7358 | 0.8578 |
| 0.4108 | 6.7416 | 600 | 0.7432 | 0.7484 | 0.7432 | 0.8621 |
| 0.4108 | 6.7640 | 602 | 0.7627 | 0.7329 | 0.7627 | 0.8733 |
| 0.4108 | 6.7865 | 604 | 0.8505 | 0.7101 | 0.8505 | 0.9222 |
| 0.4108 | 6.8090 | 606 | 0.9321 | 0.7052 | 0.9321 | 0.9655 |
| 0.4108 | 6.8315 | 608 | 0.9064 | 0.6977 | 0.9064 | 0.9521 |
| 0.4108 | 6.8539 | 610 | 0.8078 | 0.7368 | 0.8078 | 0.8988 |
| 0.4108 | 6.8764 | 612 | 0.7928 | 0.7561 | 0.7928 | 0.8904 |
| 0.4108 | 6.8989 | 614 | 0.8223 | 0.7485 | 0.8223 | 0.9068 |
| 0.4108 | 6.9213 | 616 | 0.8761 | 0.6988 | 0.8761 | 0.9360 |
| 0.4108 | 6.9438 | 618 | 0.9445 | 0.6258 | 0.9445 | 0.9719 |
| 0.4108 | 6.9663 | 620 | 0.9800 | 0.6258 | 0.9800 | 0.9900 |
| 0.4108 | 6.9888 | 622 | 0.9661 | 0.6258 | 0.9661 | 0.9829 |
| 0.4108 | 7.0112 | 624 | 0.8560 | 0.6709 | 0.8560 | 0.9252 |
| 0.4108 | 7.0337 | 626 | 0.7641 | 0.7871 | 0.7641 | 0.8741 |
| 0.4108 | 7.0562 | 628 | 0.7324 | 0.7949 | 0.7324 | 0.8558 |
| 0.4108 | 7.0787 | 630 | 0.7149 | 0.7949 | 0.7149 | 0.8455 |
| 0.4108 | 7.1011 | 632 | 0.7211 | 0.8025 | 0.7211 | 0.8492 |
| 0.4108 | 7.1236 | 634 | 0.8127 | 0.7105 | 0.8127 | 0.9015 |
| 0.4108 | 7.1461 | 636 | 0.9362 | 0.6538 | 0.9362 | 0.9675 |
| 0.4108 | 7.1685 | 638 | 1.0110 | 0.6364 | 1.0110 | 1.0055 |
| 0.4108 | 7.1910 | 640 | 0.9426 | 0.6216 | 0.9426 | 0.9709 |
| 0.4108 | 7.2135 | 642 | 0.8219 | 0.6667 | 0.8219 | 0.9066 |
| 0.4108 | 7.2360 | 644 | 0.7717 | 0.7368 | 0.7717 | 0.8784 |
| 0.4108 | 7.2584 | 646 | 0.7557 | 0.7949 | 0.7557 | 0.8693 |
| 0.4108 | 7.2809 | 648 | 0.7469 | 0.7662 | 0.7469 | 0.8642 |
| 0.4108 | 7.3034 | 650 | 0.7613 | 0.7484 | 0.7613 | 0.8725 |
| 0.4108 | 7.3258 | 652 | 0.7972 | 0.6842 | 0.7972 | 0.8929 |
| 0.4108 | 7.3483 | 654 | 0.8835 | 0.6667 | 0.8835 | 0.9399 |
| 0.4108 | 7.3708 | 656 | 0.9643 | 0.6627 | 0.9643 | 0.9820 |
| 0.4108 | 7.3933 | 658 | 0.9340 | 0.7006 | 0.9340 | 0.9664 |
| 0.4108 | 7.4157 | 660 | 0.8235 | 0.6797 | 0.8235 | 0.9075 |
| 0.4108 | 7.4382 | 662 | 0.7339 | 0.7742 | 0.7339 | 0.8567 |
| 0.4108 | 7.4607 | 664 | 0.7120 | 0.7871 | 0.7120 | 0.8438 |
| 0.4108 | 7.4831 | 666 | 0.7130 | 0.76 | 0.7130 | 0.8444 |
| 0.4108 | 7.5056 | 668 | 0.7107 | 0.7871 | 0.7107 | 0.8430 |
| 0.4108 | 7.5281 | 670 | 0.7281 | 0.7949 | 0.7281 | 0.8533 |
| 0.4108 | 7.5506 | 672 | 0.7606 | 0.7742 | 0.7606 | 0.8721 |
| 0.4108 | 7.5730 | 674 | 0.8003 | 0.7179 | 0.8003 | 0.8946 |
| 0.4108 | 7.5955 | 676 | 0.8399 | 0.6709 | 0.8399 | 0.9165 |
| 0.4108 | 7.6180 | 678 | 0.8613 | 0.6463 | 0.8613 | 0.9281 |
| 0.4108 | 7.6404 | 680 | 0.8270 | 0.6871 | 0.8270 | 0.9094 |
| 0.4108 | 7.6629 | 682 | 0.7780 | 0.7215 | 0.7780 | 0.8820 |
| 0.4108 | 7.6854 | 684 | 0.7576 | 0.7742 | 0.7576 | 0.8704 |
| 0.4108 | 7.7079 | 686 | 0.7672 | 0.7742 | 0.7672 | 0.8759 |
| 0.4108 | 7.7303 | 688 | 0.7841 | 0.7451 | 0.7841 | 0.8855 |
| 0.4108 | 7.7528 | 690 | 0.7796 | 0.7421 | 0.7796 | 0.8829 |
| 0.4108 | 7.7753 | 692 | 0.8324 | 0.6709 | 0.8324 | 0.9124 |
| 0.4108 | 7.7978 | 694 | 0.8602 | 0.6871 | 0.8602 | 0.9275 |
| 0.4108 | 7.8202 | 696 | 0.8447 | 0.7073 | 0.8447 | 0.9191 |
| 0.4108 | 7.8427 | 698 | 0.7853 | 0.7273 | 0.7853 | 0.8862 |
| 0.4108 | 7.8652 | 700 | 0.7673 | 0.75 | 0.7673 | 0.8760 |
| 0.4108 | 7.8876 | 702 | 0.7676 | 0.7574 | 0.7676 | 0.8762 |
| 0.4108 | 7.9101 | 704 | 0.7984 | 0.7152 | 0.7984 | 0.8935 |
| 0.4108 | 7.9326 | 706 | 0.8097 | 0.6790 | 0.8097 | 0.8998 |
| 0.4108 | 7.9551 | 708 | 0.8056 | 0.6923 | 0.8056 | 0.8975 |
| 0.4108 | 7.9775 | 710 | 0.7782 | 0.6933 | 0.7782 | 0.8821 |
| 0.4108 | 8.0 | 712 | 0.7368 | 0.7582 | 0.7368 | 0.8583 |
| 0.4108 | 8.0225 | 714 | 0.7258 | 0.7682 | 0.7258 | 0.8520 |
| 0.4108 | 8.0449 | 716 | 0.7294 | 0.7682 | 0.7294 | 0.8541 |
| 0.4108 | 8.0674 | 718 | 0.7498 | 0.7162 | 0.7498 | 0.8659 |
| 0.4108 | 8.0899 | 720 | 0.8073 | 0.6667 | 0.8073 | 0.8985 |
| 0.4108 | 8.1124 | 722 | 0.8861 | 0.6395 | 0.8861 | 0.9413 |
| 0.4108 | 8.1348 | 724 | 0.8883 | 0.6438 | 0.8883 | 0.9425 |
| 0.4108 | 8.1573 | 726 | 0.8196 | 0.6338 | 0.8196 | 0.9053 |
| 0.4108 | 8.1798 | 728 | 0.7533 | 0.6950 | 0.7533 | 0.8679 |
| 0.4108 | 8.2022 | 730 | 0.7504 | 0.7234 | 0.7504 | 0.8662 |
| 0.4108 | 8.2247 | 732 | 0.7668 | 0.7234 | 0.7668 | 0.8757 |
| 0.4108 | 8.2472 | 734 | 0.7634 | 0.7234 | 0.7634 | 0.8737 |
| 0.4108 | 8.2697 | 736 | 0.7420 | 0.7143 | 0.7420 | 0.8614 |
| 0.4108 | 8.2921 | 738 | 0.7211 | 0.7586 | 0.7211 | 0.8492 |
| 0.4108 | 8.3146 | 740 | 0.7228 | 0.7534 | 0.7228 | 0.8502 |
| 0.4108 | 8.3371 | 742 | 0.7666 | 0.6944 | 0.7666 | 0.8756 |
| 0.4108 | 8.3596 | 744 | 0.7875 | 0.6286 | 0.7875 | 0.8874 |
| 0.4108 | 8.3820 | 746 | 0.7802 | 0.6475 | 0.7802 | 0.8833 |
| 0.4108 | 8.4045 | 748 | 0.7427 | 0.7092 | 0.7427 | 0.8618 |
| 0.4108 | 8.4270 | 750 | 0.7343 | 0.7234 | 0.7343 | 0.8569 |
| 0.4108 | 8.4494 | 752 | 0.7432 | 0.7234 | 0.7432 | 0.8621 |
| 0.4108 | 8.4719 | 754 | 0.7555 | 0.7310 | 0.7555 | 0.8692 |
| 0.4108 | 8.4944 | 756 | 0.7775 | 0.7467 | 0.7775 | 0.8818 |
| 0.4108 | 8.5169 | 758 | 0.7997 | 0.7582 | 0.7997 | 0.8943 |
| 0.4108 | 8.5393 | 760 | 0.8284 | 0.6923 | 0.8284 | 0.9102 |
| 0.4108 | 8.5618 | 762 | 0.8917 | 0.6875 | 0.8917 | 0.9443 |
| 0.4108 | 8.5843 | 764 | 0.9260 | 0.6624 | 0.9260 | 0.9623 |
| 0.4108 | 8.6067 | 766 | 0.8834 | 0.6711 | 0.8834 | 0.9399 |
| 0.4108 | 8.6292 | 768 | 0.7681 | 0.7152 | 0.7681 | 0.8764 |
| 0.4108 | 8.6517 | 770 | 0.6925 | 0.7843 | 0.6925 | 0.8322 |
| 0.4108 | 8.6742 | 772 | 0.6811 | 0.7733 | 0.6811 | 0.8253 |
| 0.4108 | 8.6966 | 774 | 0.6814 | 0.7949 | 0.6814 | 0.8255 |
| 0.4108 | 8.7191 | 776 | 0.6917 | 0.7949 | 0.6917 | 0.8317 |
| 0.4108 | 8.7416 | 778 | 0.7070 | 0.7742 | 0.7070 | 0.8408 |
| 0.4108 | 8.7640 | 780 | 0.7199 | 0.8 | 0.7199 | 0.8484 |
| 0.4108 | 8.7865 | 782 | 0.7148 | 0.8 | 0.7148 | 0.8454 |
| 0.4108 | 8.8090 | 784 | 0.7058 | 0.7949 | 0.7058 | 0.8401 |
| 0.4108 | 8.8315 | 786 | 0.7053 | 0.8050 | 0.7053 | 0.8398 |
| 0.4108 | 8.8539 | 788 | 0.7390 | 0.7853 | 0.7390 | 0.8596 |
| 0.4108 | 8.8764 | 790 | 0.7897 | 0.7407 | 0.7897 | 0.8887 |
| 0.4108 | 8.8989 | 792 | 0.8494 | 0.6941 | 0.8494 | 0.9216 |
| 0.4108 | 8.9213 | 794 | 0.8960 | 0.6667 | 0.8960 | 0.9466 |
| 0.4108 | 8.9438 | 796 | 0.9026 | 0.6746 | 0.9026 | 0.9501 |
| 0.4108 | 8.9663 | 798 | 0.8892 | 0.7018 | 0.8892 | 0.9430 |
| 0.4108 | 8.9888 | 800 | 0.8590 | 0.7135 | 0.8590 | 0.9268 |
| 0.4108 | 9.0112 | 802 | 0.8189 | 0.7024 | 0.8189 | 0.9049 |
| 0.4108 | 9.0337 | 804 | 0.8683 | 0.6941 | 0.8683 | 0.9318 |
| 0.4108 | 9.0562 | 806 | 0.8950 | 0.6747 | 0.8950 | 0.9461 |
| 0.4108 | 9.0787 | 808 | 0.9466 | 0.6386 | 0.9466 | 0.9729 |
| 0.4108 | 9.1011 | 810 | 0.8961 | 0.6709 | 0.8961 | 0.9466 |
| 0.4108 | 9.1236 | 812 | 0.7832 | 0.7190 | 0.7832 | 0.8850 |
| 0.4108 | 9.1461 | 814 | 0.7159 | 0.7451 | 0.7159 | 0.8461 |
| 0.4108 | 9.1685 | 816 | 0.7033 | 0.7582 | 0.7033 | 0.8386 |
| 0.4108 | 9.1910 | 818 | 0.7130 | 0.7582 | 0.7130 | 0.8444 |
| 0.4108 | 9.2135 | 820 | 0.7455 | 0.7237 | 0.7455 | 0.8634 |
| 0.4108 | 9.2360 | 822 | 0.7779 | 0.7190 | 0.7779 | 0.8820 |
| 0.4108 | 9.2584 | 824 | 0.7932 | 0.7237 | 0.7932 | 0.8906 |
| 0.4108 | 9.2809 | 826 | 0.7900 | 0.7237 | 0.7900 | 0.8888 |
| 0.4108 | 9.3034 | 828 | 0.7522 | 0.7105 | 0.7522 | 0.8673 |
| 0.4108 | 9.3258 | 830 | 0.7069 | 0.7333 | 0.7069 | 0.8408 |
| 0.4108 | 9.3483 | 832 | 0.7039 | 0.7467 | 0.7039 | 0.8390 |
| 0.4108 | 9.3708 | 834 | 0.7188 | 0.7361 | 0.7188 | 0.8478 |
| 0.4108 | 9.3933 | 836 | 0.7636 | 0.6950 | 0.7636 | 0.8738 |
| 0.4108 | 9.4157 | 838 | 0.7806 | 0.6812 | 0.7806 | 0.8835 |
| 0.4108 | 9.4382 | 840 | 0.7602 | 0.6667 | 0.7602 | 0.8719 |
| 0.4108 | 9.4607 | 842 | 0.7421 | 0.7133 | 0.7421 | 0.8614 |
| 0.4108 | 9.4831 | 844 | 0.7280 | 0.7248 | 0.7280 | 0.8532 |
| 0.4108 | 9.5056 | 846 | 0.7300 | 0.7248 | 0.7300 | 0.8544 |
| 0.4108 | 9.5281 | 848 | 0.7692 | 0.7190 | 0.7692 | 0.8770 |
| 0.4108 | 9.5506 | 850 | 0.7942 | 0.7190 | 0.7942 | 0.8912 |
| 0.4108 | 9.5730 | 852 | 0.7760 | 0.7075 | 0.7760 | 0.8809 |
| 0.4108 | 9.5955 | 854 | 0.7386 | 0.7248 | 0.7386 | 0.8594 |
| 0.4108 | 9.6180 | 856 | 0.7129 | 0.7248 | 0.7129 | 0.8444 |
| 0.4108 | 9.6404 | 858 | 0.7050 | 0.7333 | 0.7050 | 0.8396 |
| 0.4108 | 9.6629 | 860 | 0.7291 | 0.7368 | 0.7291 | 0.8539 |
| 0.4108 | 9.6854 | 862 | 0.7425 | 0.7451 | 0.7425 | 0.8617 |
| 0.4108 | 9.7079 | 864 | 0.7430 | 0.7368 | 0.7430 | 0.8620 |
| 0.4108 | 9.7303 | 866 | 0.7808 | 0.7368 | 0.7808 | 0.8837 |
| 0.4108 | 9.7528 | 868 | 0.9250 | 0.6429 | 0.9250 | 0.9618 |
| 0.4108 | 9.7753 | 870 | 0.9775 | 0.6347 | 0.9775 | 0.9887 |
| 0.4108 | 9.7978 | 872 | 0.8964 | 0.6503 | 0.8964 | 0.9468 |
| 0.4108 | 9.8202 | 874 | 0.7878 | 0.7436 | 0.7878 | 0.8876 |
| 0.4108 | 9.8427 | 876 | 0.7188 | 0.7662 | 0.7188 | 0.8478 |
| 0.4108 | 9.8652 | 878 | 0.6850 | 0.7871 | 0.6850 | 0.8276 |
| 0.4108 | 9.8876 | 880 | 0.6926 | 0.7662 | 0.6926 | 0.8322 |
| 0.4108 | 9.9101 | 882 | 0.7107 | 0.7550 | 0.7107 | 0.8430 |
| 0.4108 | 9.9326 | 884 | 0.7245 | 0.7432 | 0.7245 | 0.8512 |
| 0.4108 | 9.9551 | 886 | 0.7322 | 0.7310 | 0.7322 | 0.8557 |
| 0.4108 | 9.9775 | 888 | 0.7489 | 0.7361 | 0.7489 | 0.8654 |
| 0.4108 | 10.0 | 890 | 0.7577 | 0.7361 | 0.7577 | 0.8705 |
| 0.4108 | 10.0225 | 892 | 0.7513 | 0.7361 | 0.7513 | 0.8668 |
| 0.4108 | 10.0449 | 894 | 0.7598 | 0.7133 | 0.7598 | 0.8717 |
| 0.4108 | 10.0674 | 896 | 0.7931 | 0.6522 | 0.7931 | 0.8906 |
| 0.4108 | 10.0899 | 898 | 0.8341 | 0.6423 | 0.8341 | 0.9133 |
| 0.4108 | 10.1124 | 900 | 0.8211 | 0.6522 | 0.8211 | 0.9061 |
| 0.4108 | 10.1348 | 902 | 0.7625 | 0.6763 | 0.7625 | 0.8732 |
| 0.4108 | 10.1573 | 904 | 0.7206 | 0.7234 | 0.7206 | 0.8489 |
| 0.4108 | 10.1798 | 906 | 0.6887 | 0.7234 | 0.6887 | 0.8299 |
| 0.4108 | 10.2022 | 908 | 0.6599 | 0.7448 | 0.6599 | 0.8124 |
| 0.4108 | 10.2247 | 910 | 0.6414 | 0.7671 | 0.6414 | 0.8009 |
| 0.4108 | 10.2472 | 912 | 0.6461 | 0.7586 | 0.6461 | 0.8038 |
| 0.4108 | 10.2697 | 914 | 0.6742 | 0.7413 | 0.6742 | 0.8211 |
| 0.4108 | 10.2921 | 916 | 0.7835 | 0.6809 | 0.7835 | 0.8851 |
| 0.4108 | 10.3146 | 918 | 0.8354 | 0.6475 | 0.8354 | 0.9140 |
| 0.4108 | 10.3371 | 920 | 0.7677 | 0.6761 | 0.7677 | 0.8762 |
| 0.4108 | 10.3596 | 922 | 0.6541 | 0.7222 | 0.6541 | 0.8088 |
| 0.4108 | 10.3820 | 924 | 0.6368 | 0.7733 | 0.6368 | 0.7980 |
| 0.4108 | 10.4045 | 926 | 0.6619 | 0.7949 | 0.6619 | 0.8136 |
| 0.4108 | 10.4270 | 928 | 0.7138 | 0.7975 | 0.7138 | 0.8449 |
| 0.4108 | 10.4494 | 930 | 0.7559 | 0.7692 | 0.7559 | 0.8694 |
| 0.4108 | 10.4719 | 932 | 0.7835 | 0.75 | 0.7836 | 0.8852 |
| 0.4108 | 10.4944 | 934 | 0.7897 | 0.7425 | 0.7897 | 0.8887 |
| 0.4108 | 10.5169 | 936 | 0.8144 | 0.7349 | 0.8144 | 0.9024 |
| 0.4108 | 10.5393 | 938 | 0.8365 | 0.7051 | 0.8365 | 0.9146 |
| 0.4108 | 10.5618 | 940 | 0.8405 | 0.6928 | 0.8405 | 0.9168 |
| 0.4108 | 10.5843 | 942 | 0.8292 | 0.6579 | 0.8292 | 0.9106 |
| 0.4108 | 10.6067 | 944 | 0.7814 | 0.7143 | 0.7814 | 0.8840 |
| 0.4108 | 10.6292 | 946 | 0.7252 | 0.7516 | 0.7252 | 0.8516 |
| 0.4108 | 10.6517 | 948 | 0.6647 | 0.7722 | 0.6647 | 0.8153 |
| 0.4108 | 10.6742 | 950 | 0.6261 | 0.7843 | 0.6261 | 0.7913 |
| 0.4108 | 10.6966 | 952 | 0.6143 | 0.7517 | 0.6143 | 0.7838 |
| 0.4108 | 10.7191 | 954 | 0.6057 | 0.7733 | 0.6057 | 0.7783 |
| 0.4108 | 10.7416 | 956 | 0.5955 | 0.7799 | 0.5955 | 0.7717 |
| 0.4108 | 10.7640 | 958 | 0.6243 | 0.7975 | 0.6243 | 0.7901 |
| 0.4108 | 10.7865 | 960 | 0.7047 | 0.7453 | 0.7047 | 0.8395 |
| 0.4108 | 10.8090 | 962 | 0.8036 | 0.725 | 0.8036 | 0.8964 |
| 0.4108 | 10.8315 | 964 | 0.8204 | 0.7006 | 0.8204 | 0.9058 |
| 0.4108 | 10.8539 | 966 | 0.7663 | 0.6974 | 0.7663 | 0.8754 |
| 0.4108 | 10.8764 | 968 | 0.6950 | 0.7397 | 0.6950 | 0.8336 |
| 0.4108 | 10.8989 | 970 | 0.6610 | 0.7397 | 0.6610 | 0.8130 |
| 0.4108 | 10.9213 | 972 | 0.6401 | 0.7733 | 0.6401 | 0.8001 |
| 0.4108 | 10.9438 | 974 | 0.6417 | 0.8050 | 0.6417 | 0.8011 |
| 0.4108 | 10.9663 | 976 | 0.7302 | 0.7362 | 0.7302 | 0.8545 |
| 0.4108 | 10.9888 | 978 | 0.8548 | 0.7186 | 0.8548 | 0.9245 |
| 0.4108 | 11.0112 | 980 | 0.9575 | 0.6747 | 0.9575 | 0.9785 |
| 0.4108 | 11.0337 | 982 | 0.9180 | 0.6875 | 0.9180 | 0.9581 |
| 0.4108 | 11.0562 | 984 | 0.8190 | 0.7308 | 0.8190 | 0.9050 |
| 0.4108 | 11.0787 | 986 | 0.7186 | 0.7226 | 0.7186 | 0.8477 |
| 0.4108 | 11.1011 | 988 | 0.6561 | 0.7534 | 0.6561 | 0.8100 |
| 0.4108 | 11.1236 | 990 | 0.6681 | 0.7586 | 0.6681 | 0.8174 |
| 0.4108 | 11.1461 | 992 | 0.6943 | 0.7234 | 0.6943 | 0.8332 |
| 0.4108 | 11.1685 | 994 | 0.7027 | 0.7234 | 0.7027 | 0.8383 |
| 0.4108 | 11.1910 | 996 | 0.7129 | 0.75 | 0.7129 | 0.8444 |
| 0.4108 | 11.2135 | 998 | 0.7362 | 0.7133 | 0.7362 | 0.8580 |
| 0.0902 | 11.2360 | 1000 | 0.7913 | 0.6331 | 0.7913 | 0.8895 |
| 0.0902 | 11.2584 | 1002 | 0.8354 | 0.6197 | 0.8354 | 0.9140 |
| 0.0902 | 11.2809 | 1004 | 0.8424 | 0.6438 | 0.8424 | 0.9178 |
| 0.0902 | 11.3034 | 1006 | 0.7741 | 0.6846 | 0.7741 | 0.8798 |
| 0.0902 | 11.3258 | 1008 | 0.6986 | 0.7222 | 0.6986 | 0.8358 |
| 0.0902 | 11.3483 | 1010 | 0.6959 | 0.7222 | 0.6959 | 0.8342 |
| 0.0902 | 11.3708 | 1012 | 0.7097 | 0.7222 | 0.7097 | 0.8424 |
| 0.0902 | 11.3933 | 1014 | 0.7227 | 0.7222 | 0.7227 | 0.8501 |
| 0.0902 | 11.4157 | 1016 | 0.7240 | 0.7222 | 0.7240 | 0.8509 |
| 0.0902 | 11.4382 | 1018 | 0.7231 | 0.7183 | 0.7231 | 0.8504 |
| 0.0902 | 11.4607 | 1020 | 0.7425 | 0.7183 | 0.7425 | 0.8617 |
| 0.0902 | 11.4831 | 1022 | 0.7399 | 0.6857 | 0.7399 | 0.8602 |
| 0.0902 | 11.5056 | 1024 | 0.7269 | 0.6950 | 0.7269 | 0.8526 |
| 0.0902 | 11.5281 | 1026 | 0.7119 | 0.6901 | 0.7119 | 0.8437 |
| 0.0902 | 11.5506 | 1028 | 0.7208 | 0.6939 | 0.7208 | 0.8490 |
| 0.0902 | 11.5730 | 1030 | 0.7919 | 0.6839 | 0.7919 | 0.8899 |
| 0.0902 | 11.5955 | 1032 | 0.8472 | 0.6753 | 0.8472 | 0.9204 |
| 0.0902 | 11.6180 | 1034 | 0.8890 | 0.6623 | 0.8890 | 0.9428 |
| 0.0902 | 11.6404 | 1036 | 0.8065 | 0.6839 | 0.8065 | 0.8980 |
| 0.0902 | 11.6629 | 1038 | 0.7126 | 0.7403 | 0.7126 | 0.8442 |
| 0.0902 | 11.6854 | 1040 | 0.6552 | 0.7742 | 0.6552 | 0.8095 |
| 0.0902 | 11.7079 | 1042 | 0.6468 | 0.7949 | 0.6468 | 0.8043 |
| 0.0902 | 11.7303 | 1044 | 0.6622 | 0.7949 | 0.6622 | 0.8137 |
| 0.0902 | 11.7528 | 1046 | 0.6878 | 0.7692 | 0.6878 | 0.8294 |
| 0.0902 | 11.7753 | 1048 | 0.6873 | 0.7742 | 0.6873 | 0.8290 |
| 0.0902 | 11.7978 | 1050 | 0.6694 | 0.7662 | 0.6694 | 0.8182 |
| 0.0902 | 11.8202 | 1052 | 0.6394 | 0.7843 | 0.6394 | 0.7996 |
| 0.0902 | 11.8427 | 1054 | 0.6176 | 0.7843 | 0.6176 | 0.7859 |
| 0.0902 | 11.8652 | 1056 | 0.6236 | 0.7763 | 0.6236 | 0.7897 |
| 0.0902 | 11.8876 | 1058 | 0.6418 | 0.7792 | 0.6418 | 0.8011 |
| 0.0902 | 11.9101 | 1060 | 0.6360 | 0.7722 | 0.6360 | 0.7975 |
| 0.0902 | 11.9326 | 1062 | 0.6413 | 0.775 | 0.6413 | 0.8008 |
| 0.0902 | 11.9551 | 1064 | 0.6599 | 0.775 | 0.6599 | 0.8123 |
| 0.0902 | 11.9775 | 1066 | 0.6832 | 0.775 | 0.6832 | 0.8266 |
| 0.0902 | 12.0 | 1068 | 0.7074 | 0.7799 | 0.7074 | 0.8411 |
| 0.0902 | 12.0225 | 1070 | 0.7394 | 0.7703 | 0.7394 | 0.8599 |
| 0.0902 | 12.0449 | 1072 | 0.7905 | 0.6944 | 0.7905 | 0.8891 |
| 0.0902 | 12.0674 | 1074 | 0.8131 | 0.7133 | 0.8131 | 0.9017 |
| 0.0902 | 12.0899 | 1076 | 0.7863 | 0.7273 | 0.7863 | 0.8867 |
| 0.0902 | 12.1124 | 1078 | 0.7514 | 0.7534 | 0.7514 | 0.8668 |
| 0.0902 | 12.1348 | 1080 | 0.7385 | 0.6667 | 0.7385 | 0.8594 |
| 0.0902 | 12.1573 | 1082 | 0.7411 | 0.6667 | 0.7411 | 0.8609 |
| 0.0902 | 12.1798 | 1084 | 0.7129 | 0.6757 | 0.7129 | 0.8443 |
| 0.0902 | 12.2022 | 1086 | 0.6768 | 0.6892 | 0.6768 | 0.8227 |
| 0.0902 | 12.2247 | 1088 | 0.6493 | 0.7413 | 0.6493 | 0.8058 |
| 0.0902 | 12.2472 | 1090 | 0.6430 | 0.7671 | 0.6430 | 0.8019 |
| 0.0902 | 12.2697 | 1092 | 0.6493 | 0.7586 | 0.6493 | 0.8058 |
| 0.0902 | 12.2921 | 1094 | 0.6675 | 0.7413 | 0.6675 | 0.8170 |
| 0.0902 | 12.3146 | 1096 | 0.7245 | 0.6901 | 0.7245 | 0.8512 |
| 0.0902 | 12.3371 | 1098 | 0.8463 | 0.6575 | 0.8463 | 0.9199 |
| 0.0902 | 12.3596 | 1100 | 0.9309 | 0.6483 | 0.9309 | 0.9648 |
| 0.0902 | 12.3820 | 1102 | 0.9031 | 0.6667 | 0.9031 | 0.9503 |
| 0.0902 | 12.4045 | 1104 | 0.7976 | 0.6761 | 0.7976 | 0.8931 |
| 0.0902 | 12.4270 | 1106 | 0.6976 | 0.7448 | 0.6976 | 0.8352 |
| 0.0902 | 12.4494 | 1108 | 0.6741 | 0.7671 | 0.6741 | 0.8210 |
| 0.0902 | 12.4719 | 1110 | 0.6870 | 0.7050 | 0.6870 | 0.8289 |
| 0.0902 | 12.4944 | 1112 | 0.6879 | 0.7413 | 0.6879 | 0.8294 |
| 0.0902 | 12.5169 | 1114 | 0.6805 | 0.75 | 0.6805 | 0.8249 |
| 0.0902 | 12.5393 | 1116 | 0.6757 | 0.7586 | 0.6757 | 0.8220 |
| 0.0902 | 12.5618 | 1118 | 0.6882 | 0.7534 | 0.6882 | 0.8296 |
| 0.0902 | 12.5843 | 1120 | 0.7040 | 0.7123 | 0.7040 | 0.8391 |
| 0.0902 | 12.6067 | 1122 | 0.6870 | 0.7432 | 0.6870 | 0.8289 |
| 0.0902 | 12.6292 | 1124 | 0.6693 | 0.7619 | 0.6693 | 0.8181 |
| 0.0902 | 12.6517 | 1126 | 0.6815 | 0.7361 | 0.6815 | 0.8256 |
| 0.0902 | 12.6742 | 1128 | 0.7148 | 0.7133 | 0.7148 | 0.8455 |
| 0.0902 | 12.6966 | 1130 | 0.7334 | 0.6901 | 0.7334 | 0.8564 |
| 0.0902 | 12.7191 | 1132 | 0.7092 | 0.6944 | 0.7092 | 0.8421 |
| 0.0902 | 12.7416 | 1134 | 0.6664 | 0.7361 | 0.6664 | 0.8164 |
| 0.0902 | 12.7640 | 1136 | 0.6453 | 0.7703 | 0.6453 | 0.8033 |
| 0.0902 | 12.7865 | 1138 | 0.6592 | 0.7682 | 0.6592 | 0.8119 |
| 0.0902 | 12.8090 | 1140 | 0.6804 | 0.7448 | 0.6804 | 0.8249 |
| 0.0902 | 12.8315 | 1142 | 0.7077 | 0.75 | 0.7077 | 0.8413 |
| 0.0902 | 12.8539 | 1144 | 0.7501 | 0.6950 | 0.7501 | 0.8661 |
| 0.0902 | 12.8764 | 1146 | 0.7861 | 0.6620 | 0.7861 | 0.8866 |
| 0.0902 | 12.8989 | 1148 | 0.7880 | 0.6620 | 0.7880 | 0.8877 |
| 0.0902 | 12.9213 | 1150 | 0.7739 | 0.6986 | 0.7739 | 0.8797 |
| 0.0902 | 12.9438 | 1152 | 0.7630 | 0.7114 | 0.7630 | 0.8735 |
| 0.0902 | 12.9663 | 1154 | 0.7492 | 0.7067 | 0.7492 | 0.8656 |
| 0.0902 | 12.9888 | 1156 | 0.7518 | 0.7105 | 0.7518 | 0.8670 |
| 0.0902 | 13.0112 | 1158 | 0.7827 | 0.6968 | 0.7827 | 0.8847 |
| 0.0902 | 13.0337 | 1160 | 0.7917 | 0.7037 | 0.7917 | 0.8898 |
| 0.0902 | 13.0562 | 1162 | 0.7680 | 0.7108 | 0.7680 | 0.8764 |
| 0.0902 | 13.0787 | 1164 | 0.7283 | 0.7545 | 0.7283 | 0.8534 |
| 0.0902 | 13.1011 | 1166 | 0.6815 | 0.7619 | 0.6815 | 0.8255 |
| 0.0902 | 13.1236 | 1168 | 0.6469 | 0.7799 | 0.6469 | 0.8043 |
| 0.0902 | 13.1461 | 1170 | 0.6309 | 0.7949 | 0.6309 | 0.7943 |
| 0.0902 | 13.1685 | 1172 | 0.6314 | 0.7949 | 0.6314 | 0.7946 |
| 0.0902 | 13.1910 | 1174 | 0.6307 | 0.7949 | 0.6307 | 0.7942 |
| 0.0902 | 13.2135 | 1176 | 0.6444 | 0.7871 | 0.6444 | 0.8028 |
| 0.0902 | 13.2360 | 1178 | 0.6747 | 0.7792 | 0.6747 | 0.8214 |
| 0.0902 | 13.2584 | 1180 | 0.6891 | 0.7712 | 0.6891 | 0.8301 |
| 0.0902 | 13.2809 | 1182 | 0.6867 | 0.7712 | 0.6867 | 0.8287 |
| 0.0902 | 13.3034 | 1184 | 0.6621 | 0.76 | 0.6621 | 0.8137 |
| 0.0902 | 13.3258 | 1186 | 0.6470 | 0.7682 | 0.6470 | 0.8043 |
| 0.0902 | 13.3483 | 1188 | 0.6416 | 0.7763 | 0.6416 | 0.8010 |
| 0.0902 | 13.3708 | 1190 | 0.6360 | 0.7763 | 0.6360 | 0.7975 |
| 0.0902 | 13.3933 | 1192 | 0.6297 | 0.7682 | 0.6297 | 0.7936 |
| 0.0902 | 13.4157 | 1194 | 0.6456 | 0.7682 | 0.6456 | 0.8035 |
| 0.0902 | 13.4382 | 1196 | 0.6563 | 0.7771 | 0.6563 | 0.8101 |
| 0.0902 | 13.4607 | 1198 | 0.6641 | 0.7692 | 0.6641 | 0.8149 |
| 0.0902 | 13.4831 | 1200 | 0.6807 | 0.7484 | 0.6807 | 0.8251 |
| 0.0902 | 13.5056 | 1202 | 0.6735 | 0.7692 | 0.6735 | 0.8207 |
| 0.0902 | 13.5281 | 1204 | 0.6445 | 0.7582 | 0.6445 | 0.8028 |
| 0.0902 | 13.5506 | 1206 | 0.6326 | 0.7792 | 0.6326 | 0.7953 |
| 0.0902 | 13.5730 | 1208 | 0.6318 | 0.7792 | 0.6318 | 0.7948 |
| 0.0902 | 13.5955 | 1210 | 0.6403 | 0.7692 | 0.6403 | 0.8002 |
| 0.0902 | 13.6180 | 1212 | 0.6905 | 0.7547 | 0.6905 | 0.8310 |
| 0.0902 | 13.6404 | 1214 | 0.7628 | 0.7425 | 0.7628 | 0.8734 |
| 0.0902 | 13.6629 | 1216 | 0.8088 | 0.7045 | 0.8088 | 0.8994 |
| 0.0902 | 13.6854 | 1218 | 0.8127 | 0.7293 | 0.8127 | 0.9015 |
| 0.0902 | 13.7079 | 1220 | 0.8216 | 0.7363 | 0.8216 | 0.9064 |
| 0.0902 | 13.7303 | 1222 | 0.8397 | 0.7363 | 0.8397 | 0.9164 |
| 0.0902 | 13.7528 | 1224 | 0.8094 | 0.7541 | 0.8094 | 0.8997 |
| 0.0902 | 13.7753 | 1226 | 0.7963 | 0.7444 | 0.7963 | 0.8923 |
| 0.0902 | 13.7978 | 1228 | 0.8017 | 0.7444 | 0.8017 | 0.8954 |
| 0.0902 | 13.8202 | 1230 | 0.8622 | 0.7263 | 0.8622 | 0.9286 |
| 0.0902 | 13.8427 | 1232 | 0.9431 | 0.6742 | 0.9431 | 0.9711 |
| 0.0902 | 13.8652 | 1234 | 0.9269 | 0.6857 | 0.9269 | 0.9628 |
| 0.0902 | 13.8876 | 1236 | 0.8206 | 0.7093 | 0.8206 | 0.9059 |
| 0.0902 | 13.9101 | 1238 | 0.7411 | 0.7468 | 0.7411 | 0.8609 |
| 0.0902 | 13.9326 | 1240 | 0.7126 | 0.7368 | 0.7126 | 0.8442 |
| 0.0902 | 13.9551 | 1242 | 0.7678 | 0.6713 | 0.7678 | 0.8762 |
| 0.0902 | 13.9775 | 1244 | 0.8350 | 0.6475 | 0.8350 | 0.9138 |
| 0.0902 | 14.0 | 1246 | 0.8394 | 0.6475 | 0.8394 | 0.9162 |
| 0.0902 | 14.0225 | 1248 | 0.8266 | 0.5985 | 0.8266 | 0.9092 |
| 0.0902 | 14.0449 | 1250 | 0.7328 | 0.6618 | 0.7328 | 0.8561 |
| 0.0902 | 14.0674 | 1252 | 0.6881 | 0.7 | 0.6881 | 0.8295 |
| 0.0902 | 14.0899 | 1254 | 0.6485 | 0.6993 | 0.6485 | 0.8053 |
| 0.0902 | 14.1124 | 1256 | 0.6188 | 0.7568 | 0.6188 | 0.7867 |
| 0.0902 | 14.1348 | 1258 | 0.6284 | 0.7763 | 0.6284 | 0.7927 |
| 0.0902 | 14.1573 | 1260 | 0.6399 | 0.7763 | 0.6399 | 0.8000 |
| 0.0902 | 14.1798 | 1262 | 0.6428 | 0.7843 | 0.6428 | 0.8017 |
| 0.0902 | 14.2022 | 1264 | 0.6348 | 0.7843 | 0.6348 | 0.7967 |
| 0.0902 | 14.2247 | 1266 | 0.6232 | 0.7733 | 0.6232 | 0.7894 |
| 0.0902 | 14.2472 | 1268 | 0.6153 | 0.7534 | 0.6153 | 0.7844 |
| 0.0902 | 14.2697 | 1270 | 0.6003 | 0.7534 | 0.6003 | 0.7748 |
| 0.0902 | 14.2921 | 1272 | 0.5946 | 0.7619 | 0.5946 | 0.7711 |
| 0.0902 | 14.3146 | 1274 | 0.5945 | 0.7619 | 0.5945 | 0.7711 |
| 0.0902 | 14.3371 | 1276 | 0.5865 | 0.7619 | 0.5865 | 0.7658 |
| 0.0902 | 14.3596 | 1278 | 0.5915 | 0.7534 | 0.5915 | 0.7691 |
| 0.0902 | 14.3820 | 1280 | 0.6255 | 0.7534 | 0.6255 | 0.7909 |
| 0.0902 | 14.4045 | 1282 | 0.6337 | 0.7534 | 0.6337 | 0.7961 |
| 0.0902 | 14.4270 | 1284 | 0.6298 | 0.7534 | 0.6298 | 0.7936 |
| 0.0902 | 14.4494 | 1286 | 0.6248 | 0.7651 | 0.6248 | 0.7905 |
| 0.0902 | 14.4719 | 1288 | 0.6185 | 0.7871 | 0.6185 | 0.7865 |
| 0.0902 | 14.4944 | 1290 | 0.6037 | 0.7949 | 0.6037 | 0.7770 |
| 0.0902 | 14.5169 | 1292 | 0.5979 | 0.7949 | 0.5979 | 0.7733 |
| 0.0902 | 14.5393 | 1294 | 0.6044 | 0.7949 | 0.6044 | 0.7774 |
| 0.0902 | 14.5618 | 1296 | 0.6157 | 0.8050 | 0.6157 | 0.7846 |
| 0.0902 | 14.5843 | 1298 | 0.6444 | 0.7879 | 0.6444 | 0.8028 |
| 0.0902 | 14.6067 | 1300 | 0.6733 | 0.7738 | 0.6733 | 0.8205 |
| 0.0902 | 14.6292 | 1302 | 0.7218 | 0.7674 | 0.7218 | 0.8496 |
| 0.0902 | 14.6517 | 1304 | 0.7525 | 0.7543 | 0.7525 | 0.8675 |
| 0.0902 | 14.6742 | 1306 | 0.7656 | 0.7640 | 0.7656 | 0.8750 |
| 0.0902 | 14.6966 | 1308 | 0.7987 | 0.7086 | 0.7987 | 0.8937 |
| 0.0902 | 14.7191 | 1310 | 0.8112 | 0.7045 | 0.8112 | 0.9007 |
| 0.0902 | 14.7416 | 1312 | 0.8417 | 0.6932 | 0.8417 | 0.9174 |
| 0.0902 | 14.7640 | 1314 | 0.7978 | 0.6941 | 0.7978 | 0.8932 |
| 0.0902 | 14.7865 | 1316 | 0.7087 | 0.7179 | 0.7087 | 0.8419 |
| 0.0902 | 14.8090 | 1318 | 0.6274 | 0.7792 | 0.6274 | 0.7921 |
| 0.0902 | 14.8315 | 1320 | 0.6026 | 0.7733 | 0.6026 | 0.7762 |
| 0.0902 | 14.8539 | 1322 | 0.6035 | 0.7651 | 0.6035 | 0.7769 |
| 0.0902 | 14.8764 | 1324 | 0.6007 | 0.7651 | 0.6007 | 0.7751 |
| 0.0902 | 14.8989 | 1326 | 0.5988 | 0.7534 | 0.5988 | 0.7738 |
| 0.0902 | 14.9213 | 1328 | 0.6025 | 0.7534 | 0.6025 | 0.7762 |
| 0.0902 | 14.9438 | 1330 | 0.6074 | 0.7534 | 0.6074 | 0.7794 |
| 0.0902 | 14.9663 | 1332 | 0.6149 | 0.7534 | 0.6149 | 0.7842 |
| 0.0902 | 14.9888 | 1334 | 0.6369 | 0.7534 | 0.6369 | 0.7981 |
| 0.0902 | 15.0112 | 1336 | 0.6929 | 0.6901 | 0.6929 | 0.8324 |
| 0.0902 | 15.0337 | 1338 | 0.7589 | 0.6429 | 0.7589 | 0.8712 |
| 0.0902 | 15.0562 | 1340 | 0.8354 | 0.6429 | 0.8354 | 0.9140 |
| 0.0902 | 15.0787 | 1342 | 0.8459 | 0.6429 | 0.8459 | 0.9197 |
| 0.0902 | 15.1011 | 1344 | 0.7767 | 0.6429 | 0.7767 | 0.8813 |
| 0.0902 | 15.1236 | 1346 | 0.6776 | 0.7133 | 0.6776 | 0.8232 |
| 0.0902 | 15.1461 | 1348 | 0.6352 | 0.7534 | 0.6352 | 0.7970 |
| 0.0902 | 15.1685 | 1350 | 0.6148 | 0.7534 | 0.6148 | 0.7841 |
| 0.0902 | 15.1910 | 1352 | 0.6157 | 0.7651 | 0.6157 | 0.7847 |
| 0.0902 | 15.2135 | 1354 | 0.6375 | 0.7448 | 0.6375 | 0.7984 |
| 0.0902 | 15.2360 | 1356 | 0.6892 | 0.6993 | 0.6892 | 0.8302 |
| 0.0902 | 15.2584 | 1358 | 0.7375 | 0.6429 | 0.7375 | 0.8588 |
| 0.0902 | 15.2809 | 1360 | 0.7440 | 0.6277 | 0.7440 | 0.8626 |
| 0.0902 | 15.3034 | 1362 | 0.7051 | 0.6857 | 0.7051 | 0.8397 |
| 0.0902 | 15.3258 | 1364 | 0.6784 | 0.7324 | 0.6784 | 0.8236 |
| 0.0902 | 15.3483 | 1366 | 0.6693 | 0.7413 | 0.6693 | 0.8181 |
| 0.0902 | 15.3708 | 1368 | 0.6865 | 0.7092 | 0.6865 | 0.8285 |
| 0.0902 | 15.3933 | 1370 | 0.7129 | 0.7034 | 0.7129 | 0.8443 |
| 0.0902 | 15.4157 | 1372 | 0.7180 | 0.6806 | 0.7180 | 0.8473 |
| 0.0902 | 15.4382 | 1374 | 0.7093 | 0.6713 | 0.7093 | 0.8422 |
| 0.0902 | 15.4607 | 1376 | 0.7080 | 0.6939 | 0.7080 | 0.8414 |
| 0.0902 | 15.4831 | 1378 | 0.7029 | 0.6939 | 0.7029 | 0.8384 |
| 0.0902 | 15.5056 | 1380 | 0.6650 | 0.7467 | 0.6650 | 0.8155 |
| 0.0902 | 15.5281 | 1382 | 0.6446 | 0.7792 | 0.6446 | 0.8029 |
| 0.0902 | 15.5506 | 1384 | 0.6390 | 0.7871 | 0.6390 | 0.7994 |
| 0.0902 | 15.5730 | 1386 | 0.6562 | 0.7975 | 0.6562 | 0.8100 |
| 0.0902 | 15.5955 | 1388 | 0.6837 | 0.7771 | 0.6837 | 0.8269 |
| 0.0902 | 15.6180 | 1390 | 0.7191 | 0.7342 | 0.7191 | 0.8480 |
| 0.0902 | 15.6404 | 1392 | 0.7462 | 0.6957 | 0.7462 | 0.8638 |
| 0.0902 | 15.6629 | 1394 | 0.7638 | 0.6957 | 0.7638 | 0.8740 |
| 0.0902 | 15.6854 | 1396 | 0.7883 | 0.6914 | 0.7883 | 0.8879 |
| 0.0902 | 15.7079 | 1398 | 0.7537 | 0.7073 | 0.7537 | 0.8682 |
| 0.0902 | 15.7303 | 1400 | 0.6628 | 0.7561 | 0.6628 | 0.8141 |
| 0.0902 | 15.7528 | 1402 | 0.5868 | 0.7901 | 0.5868 | 0.7660 |
| 0.0902 | 15.7753 | 1404 | 0.5742 | 0.7853 | 0.5742 | 0.7577 |
| 0.0902 | 15.7978 | 1406 | 0.5841 | 0.8 | 0.5841 | 0.7642 |
| 0.0902 | 15.8202 | 1408 | 0.5942 | 0.7853 | 0.5942 | 0.7709 |
| 0.0902 | 15.8427 | 1410 | 0.6140 | 0.7647 | 0.6140 | 0.7836 |
| 0.0902 | 15.8652 | 1412 | 0.6513 | 0.7836 | 0.6513 | 0.8070 |
| 0.0902 | 15.8876 | 1414 | 0.6937 | 0.7791 | 0.6937 | 0.8329 |
| 0.0902 | 15.9101 | 1416 | 0.7189 | 0.7791 | 0.7189 | 0.8479 |
| 0.0902 | 15.9326 | 1418 | 0.7041 | 0.7619 | 0.7041 | 0.8391 |
| 0.0902 | 15.9551 | 1420 | 0.6912 | 0.7619 | 0.6912 | 0.8314 |
| 0.0902 | 15.9775 | 1422 | 0.6785 | 0.7654 | 0.6785 | 0.8237 |
| 0.0902 | 16.0 | 1424 | 0.6705 | 0.7898 | 0.6705 | 0.8188 |
| 0.0902 | 16.0225 | 1426 | 0.6644 | 0.7975 | 0.6644 | 0.8151 |
| 0.0902 | 16.0449 | 1428 | 0.6601 | 0.7654 | 0.6601 | 0.8125 |
| 0.0902 | 16.0674 | 1430 | 0.6674 | 0.7470 | 0.6674 | 0.8169 |
| 0.0902 | 16.0899 | 1432 | 0.6789 | 0.7317 | 0.6789 | 0.8240 |
| 0.0902 | 16.1124 | 1434 | 0.7078 | 0.6957 | 0.7078 | 0.8413 |
| 0.0902 | 16.1348 | 1436 | 0.7160 | 0.6957 | 0.7160 | 0.8462 |
| 0.0902 | 16.1573 | 1438 | 0.7170 | 0.6957 | 0.7170 | 0.8468 |
| 0.0902 | 16.1798 | 1440 | 0.6967 | 0.75 | 0.6967 | 0.8347 |
| 0.0902 | 16.2022 | 1442 | 0.6968 | 0.7643 | 0.6968 | 0.8348 |
| 0.0902 | 16.2247 | 1444 | 0.6936 | 0.7792 | 0.6936 | 0.8328 |
| 0.0902 | 16.2472 | 1446 | 0.7068 | 0.7792 | 0.7068 | 0.8407 |
| 0.0902 | 16.2697 | 1448 | 0.7058 | 0.7582 | 0.7058 | 0.8401 |
| 0.0902 | 16.2921 | 1450 | 0.7087 | 0.7582 | 0.7087 | 0.8418 |
| 0.0902 | 16.3146 | 1452 | 0.7318 | 0.7582 | 0.7318 | 0.8554 |
| 0.0902 | 16.3371 | 1454 | 0.7466 | 0.7417 | 0.7466 | 0.8641 |
| 0.0902 | 16.3596 | 1456 | 0.7406 | 0.7417 | 0.7406 | 0.8606 |
| 0.0902 | 16.3820 | 1458 | 0.7358 | 0.75 | 0.7358 | 0.8578 |
| 0.0902 | 16.4045 | 1460 | 0.7026 | 0.7285 | 0.7026 | 0.8382 |
| 0.0902 | 16.4270 | 1462 | 0.6924 | 0.7162 | 0.6923 | 0.8321 |
| 0.0902 | 16.4494 | 1464 | 0.7048 | 0.7285 | 0.7048 | 0.8395 |
| 0.0902 | 16.4719 | 1466 | 0.7310 | 0.7285 | 0.7310 | 0.8550 |
| 0.0902 | 16.4944 | 1468 | 0.7694 | 0.75 | 0.7694 | 0.8772 |
| 0.0902 | 16.5169 | 1470 | 0.8124 | 0.7067 | 0.8124 | 0.9013 |
| 0.0902 | 16.5393 | 1472 | 0.8399 | 0.6839 | 0.8399 | 0.9165 |
| 0.0902 | 16.5618 | 1474 | 0.8193 | 0.725 | 0.8193 | 0.9051 |
| 0.0902 | 16.5843 | 1476 | 0.7882 | 0.75 | 0.7882 | 0.8878 |
| 0.0902 | 16.6067 | 1478 | 0.7818 | 0.75 | 0.7818 | 0.8842 |
| 0.0902 | 16.6292 | 1480 | 0.7713 | 0.75 | 0.7713 | 0.8782 |
| 0.0902 | 16.6517 | 1482 | 0.7593 | 0.75 | 0.7593 | 0.8714 |
| 0.0902 | 16.6742 | 1484 | 0.7528 | 0.75 | 0.7528 | 0.8676 |
| 0.0902 | 16.6966 | 1486 | 0.7568 | 0.7582 | 0.7568 | 0.8699 |
| 0.0902 | 16.7191 | 1488 | 0.7323 | 0.7582 | 0.7323 | 0.8558 |
| 0.0902 | 16.7416 | 1490 | 0.7138 | 0.7662 | 0.7138 | 0.8449 |
| 0.0902 | 16.7640 | 1492 | 0.7014 | 0.7662 | 0.7014 | 0.8375 |
| 0.0902 | 16.7865 | 1494 | 0.6971 | 0.7368 | 0.6971 | 0.8349 |
| 0.0902 | 16.8090 | 1496 | 0.6984 | 0.7451 | 0.6984 | 0.8357 |
| 0.0902 | 16.8315 | 1498 | 0.7058 | 0.7368 | 0.7058 | 0.8401 |
| 0.054 | 16.8539 | 1500 | 0.7201 | 0.7368 | 0.7201 | 0.8486 |
| 0.054 | 16.8764 | 1502 | 0.7380 | 0.7285 | 0.7380 | 0.8591 |
| 0.054 | 16.8989 | 1504 | 0.7601 | 0.7285 | 0.7601 | 0.8718 |
| 0.054 | 16.9213 | 1506 | 0.7823 | 0.7285 | 0.7823 | 0.8845 |
| 0.054 | 16.9438 | 1508 | 0.8008 | 0.7285 | 0.8008 | 0.8949 |
| 0.054 | 16.9663 | 1510 | 0.8235 | 0.6980 | 0.8235 | 0.9075 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
shopitalic/fragance-set-roomspray-rafael
|
shopitalic
| 2025-01-15T15:33:20Z
| 35
| 0
|
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T15:33:09Z
|
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# fragance set roomspray rafael
<Gallery />
## Model description
## Trigger words
No trigger word is configured for this model (the `instance_prompt` field in the metadata is empty).
## Download model
Weights for this model are available in Safetensors format.
[Download](/shopitalic/fragance-set-roomspray-rafael/tree/main) them in the Files & versions tab.
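Since the weights follow the usual diffusers LoRA layout, loading them should look roughly like the sketch below. This is a minimal, untested example: the weight file name `pytorch_lora_weights.safetensors` and the prompt are assumptions, so adjust them to the actual repository contents.
```python
# Minimal sketch for loading this LoRA on top of FLUX.1-dev with diffusers.
# The weight_name and the prompt are placeholders; check the Files tab for
# the actual file name.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights(
    "shopitalic/fragance-set-roomspray-rafael",
    weight_name="pytorch_lora_weights.safetensors",
)
image = pipeline("a fragrance set with a room spray bottle").images[0]
image.save("roomspray.png")
```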
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
nhung03/26927434-adaa-4a69-bc76-b2ea296c9bfc
|
nhung03
| 2025-01-15T15:32:22Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:unsloth/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T15:09:24Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 26927434-adaa-4a69-bc76-b2ea296c9bfc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/OpenHermes-2.5-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 87f5261f601511ad_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/87f5261f601511ad_train_data.json
type:
field_input: HS
field_instruction: prefix
field_output: CN
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/26927434-adaa-4a69-bc76-b2ea296c9bfc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/87f5261f601511ad_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cefa18a5-e417-4b7f-b50e-7cfa90696def
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cefa18a5-e417-4b7f-b50e-7cfa90696def
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 26927434-adaa-4a69-bc76-b2ea296c9bfc
This model is a fine-tuned version of [unsloth/OpenHermes-2.5-Mistral-7B](https://huggingface.co/unsloth/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8336
## Model description
More information needed
## Intended uses & limitations
More information needed
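In the absence of further documentation, a minimal loading sketch is given below. It assumes this repository contains only the LoRA adapter produced by the axolotl run above and that the adapter attaches to the stated base model; the prompt is a placeholder.
```python
# Minimal sketch: attach the LoRA adapter to the base model with peft.
# Assumes the adapter targets unsloth/OpenHermes-2.5-Mistral-7B as in the
# axolotl config above; the prompt is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/OpenHermes-2.5-Mistral-7B"
adapter_id = "nhung03/26927434-adaa-4a69-bc76-b2ea296c9bfc"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```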
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 12.0856 | 0.1287 | 200 | 2.8336 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ell-hol/FTR170-flux-dbth-lr
|
ell-hol
| 2025-01-15T15:32:08Z
| 8
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"replicate",
"flux",
"flux-diffusers",
"template:sd-lora",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T14:43:46Z
|
---
license: other
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- replicate
- flux
- flux-diffusers
- template:sd-lora
base_model: FLUX.1-dev
instance_prompt: a photo of the FTR170 building
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux DreamBooth LoRA - ell-hol/FTR170-flux-dbth-lr
<Gallery />
## Model description
These are ell-hol/FTR170-flux-dbth-lr DreamBooth LoRA weights for FLUX.1-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md) on [Replicate](https://replicate.com/lucataco/diffusers-dreambooth-lora).
LoRA for the text encoder was not enabled.
## Trigger words
You should use `a photo of the FTR170 building` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](/ell-hol/FTR170-flux-dbth-lr/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('ell-hol/FTR170-flux-dbth-lr', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('a photo of the FTR170 building').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
AntonioTH/Layout-finetuned-fr-model-50instances20-10epochs-5e-05lr-V2
|
AntonioTH
| 2025-01-15T15:29:57Z
| 8
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"base_model:microsoft/layoutxlm-base",
"base_model:finetune:microsoft/layoutxlm-base",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
document-question-answering
| 2025-01-15T14:35:02Z
|
---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutxlm-base
tags:
- generated_from_trainer
model-index:
- name: Layout-finetuned-fr-model-50instances20-10epochs-5e-05lr-V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Layout-finetuned-fr-model-50instances20-10epochs-5e-05lr-V2
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
## Model description
More information needed
## Intended uses & limitations
More information needed
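As a starting point, a minimal inference sketch is given below. It assumes the checkpoint exposes a document question answering head that works with the transformers pipeline, that pytesseract and the Tesseract binary are installed for OCR, and that the image path and question are placeholders.
```python
# Minimal sketch with the transformers document-question-answering pipeline.
# Requires pytesseract and the Tesseract binary for OCR; the image path and
# the question are placeholders.
from transformers import pipeline

qa = pipeline(
    "document-question-answering",
    model="AntonioTH/Layout-finetuned-fr-model-50instances20-10epochs-5e-05lr-V2",
)
print(qa(image="document.png", question="Quel est le montant total ?"))
```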
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (PyTorch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: reduce_lr_on_plateau
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.2231 | 0.7692 | 10 | 0.7135 |
| 0.1961 | 1.5385 | 20 | 0.0019 |
| 0.002 | 2.3077 | 30 | 0.0004 |
| 0.0005 | 3.0769 | 40 | 0.0002 |
| 0.0003 | 3.8462 | 50 | 0.0001 |
| 0.0003 | 4.6154 | 60 | 0.0001 |
| 0.0002 | 5.3846 | 70 | 0.0001 |
| 0.0002 | 6.1538 | 80 | 0.0001 |
| 0.0002 | 6.9231 | 90 | 0.0001 |
| 0.0002 | 7.6923 | 100 | 0.0001 |
| 0.0002 | 8.4615 | 110 | 0.0001 |
| 0.0001 | 9.2308 | 120 | 0.0001 |
| 0.0001 | 10.0 | 130 | 0.0001 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.4.1.post100
- Datasets 3.2.0
- Tokenizers 0.21.0
|
exala/db_aca2_7.2
|
exala
| 2025-01-15T15:28:55Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T15:28:43Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
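A minimal sketch is shown below, assuming the checkpoint ships a fine-tuned classification head and label mapping; the example text is a placeholder.
```python
# Minimal sketch: run the classifier through the transformers pipeline.
# Assumes the repository includes a fine-tuned classification head; the
# input text is a placeholder.
from transformers import pipeline

classifier = pipeline("text-classification", model="exala/db_aca2_7.2")
print(classifier("Replace this with the text you want to classify."))
```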
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k15_task1_organization
|
MayBashendy
| 2025-01-15T15:28:24Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T15:10:59Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k15_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k15_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7167
- Qwk: 0.3022
- Mse: 1.7167
- Rmse: 1.3102
## Model description
More information needed
## Intended uses & limitations
More information needed
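Pending further documentation, a minimal inference sketch follows. It assumes the checkpoint loads as a standard AraBERT sequence-classification head (the Qwk/Mse/Rmse metrics reported above suggest an essay-scoring task); the Arabic input text is a placeholder.
```python
# Minimal sketch: score an Arabic essay with the fine-tuned AraBERT head.
# The input text is a placeholder; interpret the output according to the
# task's label/score scheme.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k15_task1_organization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("ضع نص المقال هنا", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```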
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0303 | 2 | 6.9944 | 0.0057 | 6.9944 | 2.6447 |
| No log | 0.0606 | 4 | 4.5153 | 0.0407 | 4.5153 | 2.1249 |
| No log | 0.0909 | 6 | 3.7162 | -0.0635 | 3.7162 | 1.9278 |
| No log | 0.1212 | 8 | 3.7143 | -0.1047 | 3.7143 | 1.9273 |
| No log | 0.1515 | 10 | 3.2285 | 0.0118 | 3.2285 | 1.7968 |
| No log | 0.1818 | 12 | 2.4530 | 0.0930 | 2.4530 | 1.5662 |
| No log | 0.2121 | 14 | 2.2921 | 0.1538 | 2.2921 | 1.5140 |
| No log | 0.2424 | 16 | 2.2405 | 0.2628 | 2.2405 | 1.4968 |
| No log | 0.2727 | 18 | 2.3051 | 0.1497 | 2.3051 | 1.5183 |
| No log | 0.3030 | 20 | 2.3160 | 0.1208 | 2.3160 | 1.5219 |
| No log | 0.3333 | 22 | 2.3371 | 0.1208 | 2.3371 | 1.5288 |
| No log | 0.3636 | 24 | 2.3923 | 0.0933 | 2.3923 | 1.5467 |
| No log | 0.3939 | 26 | 2.3991 | 0.0933 | 2.3991 | 1.5489 |
| No log | 0.4242 | 28 | 2.4278 | 0.1224 | 2.4278 | 1.5581 |
| No log | 0.4545 | 30 | 2.2243 | 0.2464 | 2.2243 | 1.4914 |
| No log | 0.4848 | 32 | 1.9640 | 0.2080 | 1.9640 | 1.4014 |
| No log | 0.5152 | 34 | 1.7294 | 0.0708 | 1.7294 | 1.3151 |
| No log | 0.5455 | 36 | 1.6212 | 0.1714 | 1.6212 | 1.2732 |
| No log | 0.5758 | 38 | 1.5596 | 0.1887 | 1.5596 | 1.2488 |
| No log | 0.6061 | 40 | 1.5678 | 0.2385 | 1.5678 | 1.2521 |
| No log | 0.6364 | 42 | 1.6246 | 0.2385 | 1.6246 | 1.2746 |
| No log | 0.6667 | 44 | 1.7130 | 0.2807 | 1.7130 | 1.3088 |
| No log | 0.6970 | 46 | 1.8307 | 0.2185 | 1.8307 | 1.3530 |
| No log | 0.7273 | 48 | 1.8662 | 0.1008 | 1.8662 | 1.3661 |
| No log | 0.7576 | 50 | 2.2266 | 0.0167 | 2.2266 | 1.4922 |
| No log | 0.7879 | 52 | 2.3084 | -0.0164 | 2.3084 | 1.5194 |
| No log | 0.8182 | 54 | 2.4142 | -0.0813 | 2.4142 | 1.5538 |
| No log | 0.8485 | 56 | 1.8711 | 0.0536 | 1.8711 | 1.3679 |
| No log | 0.8788 | 58 | 1.4019 | 0.2593 | 1.4019 | 1.1840 |
| No log | 0.9091 | 60 | 1.3756 | 0.3361 | 1.3756 | 1.1729 |
| No log | 0.9394 | 62 | 1.3331 | 0.3248 | 1.3331 | 1.1546 |
| No log | 0.9697 | 64 | 1.4205 | 0.4071 | 1.4205 | 1.1918 |
| No log | 1.0 | 66 | 1.7671 | 0.2679 | 1.7671 | 1.3293 |
| No log | 1.0303 | 68 | 1.8373 | 0.2364 | 1.8373 | 1.3555 |
| No log | 1.0606 | 70 | 1.5523 | 0.3393 | 1.5523 | 1.2459 |
| No log | 1.0909 | 72 | 1.3863 | 0.3243 | 1.3863 | 1.1774 |
| No log | 1.1212 | 74 | 1.3750 | 0.3540 | 1.3750 | 1.1726 |
| No log | 1.1515 | 76 | 1.3629 | 0.3860 | 1.3629 | 1.1674 |
| No log | 1.1818 | 78 | 1.4439 | 0.3393 | 1.4439 | 1.2016 |
| No log | 1.2121 | 80 | 1.5159 | 0.3604 | 1.5159 | 1.2312 |
| No log | 1.2424 | 82 | 1.7802 | 0.2321 | 1.7802 | 1.3342 |
| No log | 1.2727 | 84 | 1.8063 | 0.2456 | 1.8063 | 1.3440 |
| No log | 1.3030 | 86 | 1.5623 | 0.3793 | 1.5623 | 1.2499 |
| No log | 1.3333 | 88 | 1.4173 | 0.3478 | 1.4173 | 1.1905 |
| No log | 1.3636 | 90 | 1.2854 | 0.3684 | 1.2854 | 1.1338 |
| No log | 1.3939 | 92 | 1.3589 | 0.2936 | 1.3589 | 1.1657 |
| No log | 1.4242 | 94 | 1.3796 | 0.3393 | 1.3796 | 1.1745 |
| No log | 1.4545 | 96 | 1.4248 | 0.3966 | 1.4248 | 1.1937 |
| No log | 1.4848 | 98 | 1.4145 | 0.4035 | 1.4145 | 1.1893 |
| No log | 1.5152 | 100 | 1.5414 | 0.4068 | 1.5414 | 1.2415 |
| No log | 1.5455 | 102 | 1.6869 | 0.3051 | 1.6869 | 1.2988 |
| No log | 1.5758 | 104 | 1.6587 | 0.3115 | 1.6587 | 1.2879 |
| No log | 1.6061 | 106 | 1.7339 | 0.2810 | 1.7339 | 1.3168 |
| No log | 1.6364 | 108 | 1.6304 | 0.3415 | 1.6304 | 1.2769 |
| No log | 1.6667 | 110 | 1.7599 | 0.2835 | 1.7599 | 1.3266 |
| No log | 1.6970 | 112 | 1.8480 | 0.2258 | 1.8480 | 1.3594 |
| No log | 1.7273 | 114 | 1.5999 | 0.3333 | 1.5999 | 1.2649 |
| No log | 1.7576 | 116 | 1.4627 | 0.5079 | 1.4627 | 1.2094 |
| No log | 1.7879 | 118 | 1.4149 | 0.4 | 1.4149 | 1.1895 |
| No log | 1.8182 | 120 | 1.5311 | 0.5077 | 1.5311 | 1.2374 |
| No log | 1.8485 | 122 | 1.8821 | 0.2609 | 1.8821 | 1.3719 |
| No log | 1.8788 | 124 | 1.9892 | 0.1630 | 1.9892 | 1.4104 |
| No log | 1.9091 | 126 | 1.8187 | 0.3088 | 1.8187 | 1.3486 |
| No log | 1.9394 | 128 | 1.4761 | 0.4715 | 1.4761 | 1.2149 |
| No log | 1.9697 | 130 | 1.3188 | 0.4068 | 1.3188 | 1.1484 |
| No log | 2.0 | 132 | 1.3135 | 0.5082 | 1.3135 | 1.1461 |
| No log | 2.0303 | 134 | 1.4589 | 0.4733 | 1.4589 | 1.2079 |
| No log | 2.0606 | 136 | 1.6805 | 0.3284 | 1.6805 | 1.2963 |
| No log | 2.0909 | 138 | 2.0007 | 0.0916 | 2.0007 | 1.4144 |
| No log | 2.1212 | 140 | 1.9180 | 0.1493 | 1.9180 | 1.3849 |
| No log | 2.1515 | 142 | 1.5706 | 0.3556 | 1.5706 | 1.2532 |
| No log | 2.1818 | 144 | 1.2024 | 0.5873 | 1.2024 | 1.0965 |
| No log | 2.2121 | 146 | 1.0618 | 0.4370 | 1.0618 | 1.0305 |
| No log | 2.2424 | 148 | 1.0357 | 0.5484 | 1.0357 | 1.0177 |
| No log | 2.2727 | 150 | 1.1690 | 0.5512 | 1.1690 | 1.0812 |
| No log | 2.3030 | 152 | 1.3789 | 0.4496 | 1.3789 | 1.1743 |
| No log | 2.3333 | 154 | 1.2727 | 0.5197 | 1.2727 | 1.1282 |
| No log | 2.3636 | 156 | 1.1478 | 0.5528 | 1.1478 | 1.0713 |
| No log | 2.3939 | 158 | 1.1685 | 0.5323 | 1.1685 | 1.0810 |
| No log | 2.4242 | 160 | 1.2396 | 0.5161 | 1.2396 | 1.1134 |
| No log | 2.4545 | 162 | 1.3491 | 0.5039 | 1.3491 | 1.1615 |
| No log | 2.4848 | 164 | 1.3579 | 0.5156 | 1.3579 | 1.1653 |
| No log | 2.5152 | 166 | 1.4688 | 0.3721 | 1.4688 | 1.2119 |
| No log | 2.5455 | 168 | 1.4561 | 0.4127 | 1.4561 | 1.2067 |
| No log | 2.5758 | 170 | 1.4176 | 0.5 | 1.4176 | 1.1906 |
| No log | 2.6061 | 172 | 1.4266 | 0.4754 | 1.4266 | 1.1944 |
| No log | 2.6364 | 174 | 1.6812 | 0.2835 | 1.6812 | 1.2966 |
| No log | 2.6667 | 176 | 1.9125 | 0.2114 | 1.9125 | 1.3829 |
| No log | 2.6970 | 178 | 2.0021 | 0.1667 | 2.0021 | 1.4150 |
| No log | 2.7273 | 180 | 1.8599 | 0.2131 | 1.8599 | 1.3638 |
| No log | 2.7576 | 182 | 1.5087 | 0.4496 | 1.5087 | 1.2283 |
| No log | 2.7879 | 184 | 1.3008 | 0.5385 | 1.3008 | 1.1405 |
| No log | 2.8182 | 186 | 1.2706 | 0.5271 | 1.2706 | 1.1272 |
| No log | 2.8485 | 188 | 1.3617 | 0.5116 | 1.3617 | 1.1669 |
| No log | 2.8788 | 190 | 1.3683 | 0.4844 | 1.3683 | 1.1697 |
| No log | 2.9091 | 192 | 1.4496 | 0.4361 | 1.4496 | 1.2040 |
| No log | 2.9394 | 194 | 1.4260 | 0.4462 | 1.4260 | 1.1942 |
| No log | 2.9697 | 196 | 1.3760 | 0.5197 | 1.3760 | 1.1730 |
| No log | 3.0 | 198 | 1.3927 | 0.5156 | 1.3927 | 1.1801 |
| No log | 3.0303 | 200 | 1.4339 | 0.4844 | 1.4339 | 1.1974 |
| No log | 3.0606 | 202 | 1.4469 | 0.4961 | 1.4469 | 1.2029 |
| No log | 3.0909 | 204 | 1.4224 | 0.5344 | 1.4224 | 1.1926 |
| No log | 3.1212 | 206 | 1.4897 | 0.4179 | 1.4897 | 1.2205 |
| No log | 3.1515 | 208 | 1.5812 | 0.3485 | 1.5812 | 1.2575 |
| No log | 3.1818 | 210 | 1.6296 | 0.3256 | 1.6296 | 1.2766 |
| No log | 3.2121 | 212 | 1.4779 | 0.4252 | 1.4779 | 1.2157 |
| No log | 3.2424 | 214 | 1.3430 | 0.496 | 1.3430 | 1.1589 |
| No log | 3.2727 | 216 | 1.2147 | 0.528 | 1.2147 | 1.1021 |
| No log | 3.3030 | 218 | 1.3675 | 0.4844 | 1.3675 | 1.1694 |
| No log | 3.3333 | 220 | 1.5121 | 0.4 | 1.5121 | 1.2297 |
| No log | 3.3636 | 222 | 1.3824 | 0.4878 | 1.3824 | 1.1757 |
| No log | 3.3939 | 224 | 1.1374 | 0.4957 | 1.1374 | 1.0665 |
| No log | 3.4242 | 226 | 1.1127 | 0.4696 | 1.1127 | 1.0548 |
| No log | 3.4545 | 228 | 1.1710 | 0.4522 | 1.1710 | 1.0821 |
| No log | 3.4848 | 230 | 1.3114 | 0.4522 | 1.3114 | 1.1452 |
| No log | 3.5152 | 232 | 1.3567 | 0.4667 | 1.3567 | 1.1648 |
| No log | 3.5455 | 234 | 1.3247 | 0.4921 | 1.3247 | 1.1510 |
| No log | 3.5758 | 236 | 1.2507 | 0.5312 | 1.2507 | 1.1183 |
| No log | 3.6061 | 238 | 1.1976 | 0.5714 | 1.1976 | 1.0944 |
| No log | 3.6364 | 240 | 1.1839 | 0.5736 | 1.1839 | 1.0881 |
| No log | 3.6667 | 242 | 1.3254 | 0.5255 | 1.3254 | 1.1513 |
| No log | 3.6970 | 244 | 1.3419 | 0.4889 | 1.3419 | 1.1584 |
| No log | 3.7273 | 246 | 1.4214 | 0.4148 | 1.4214 | 1.1922 |
| No log | 3.7576 | 248 | 1.5377 | 0.3704 | 1.5377 | 1.2400 |
| No log | 3.7879 | 250 | 1.5166 | 0.3359 | 1.5166 | 1.2315 |
| No log | 3.8182 | 252 | 1.5304 | 0.368 | 1.5304 | 1.2371 |
| No log | 3.8485 | 254 | 1.5101 | 0.4034 | 1.5101 | 1.2289 |
| No log | 3.8788 | 256 | 1.4454 | 0.4202 | 1.4454 | 1.2022 |
| No log | 3.9091 | 258 | 1.5006 | 0.4034 | 1.5006 | 1.2250 |
| No log | 3.9394 | 260 | 1.6493 | 0.2581 | 1.6493 | 1.2843 |
| No log | 3.9697 | 262 | 1.8253 | 0.1760 | 1.8253 | 1.3510 |
| No log | 4.0 | 264 | 1.8408 | 0.2047 | 1.8408 | 1.3568 |
| No log | 4.0303 | 266 | 1.7825 | 0.2308 | 1.7825 | 1.3351 |
| No log | 4.0606 | 268 | 1.6237 | 0.3008 | 1.6237 | 1.2742 |
| No log | 4.0909 | 270 | 1.5478 | 0.4211 | 1.5478 | 1.2441 |
| No log | 4.1212 | 272 | 1.4616 | 0.4242 | 1.4616 | 1.2089 |
| No log | 4.1515 | 274 | 1.4935 | 0.4361 | 1.4935 | 1.2221 |
| No log | 4.1818 | 276 | 1.6308 | 0.3664 | 1.6308 | 1.2770 |
| No log | 4.2121 | 278 | 1.7634 | 0.2462 | 1.7634 | 1.3279 |
| No log | 4.2424 | 280 | 1.8587 | 0.2406 | 1.8587 | 1.3633 |
| No log | 4.2727 | 282 | 1.8221 | 0.2836 | 1.8221 | 1.3499 |
| No log | 4.3030 | 284 | 1.5922 | 0.3158 | 1.5922 | 1.2618 |
| No log | 4.3333 | 286 | 1.3365 | 0.5039 | 1.3365 | 1.1561 |
| No log | 4.3636 | 288 | 1.3022 | 0.5039 | 1.3022 | 1.1411 |
| No log | 4.3939 | 290 | 1.4342 | 0.4444 | 1.4342 | 1.1976 |
| No log | 4.4242 | 292 | 1.6800 | 0.3212 | 1.6800 | 1.2961 |
| No log | 4.4545 | 294 | 1.8339 | 0.2482 | 1.8339 | 1.3542 |
| No log | 4.4848 | 296 | 2.0354 | 0.1176 | 2.0354 | 1.4267 |
| No log | 4.5152 | 298 | 2.0186 | 0.0746 | 2.0186 | 1.4208 |
| No log | 4.5455 | 300 | 1.7992 | 0.2556 | 1.7992 | 1.3413 |
| No log | 4.5758 | 302 | 1.5955 | 0.3411 | 1.5955 | 1.2631 |
| No log | 4.6061 | 304 | 1.4671 | 0.3810 | 1.4671 | 1.2113 |
| No log | 4.6364 | 306 | 1.3656 | 0.5041 | 1.3656 | 1.1686 |
| No log | 4.6667 | 308 | 1.3630 | 0.5397 | 1.3630 | 1.1675 |
| No log | 4.6970 | 310 | 1.4873 | 0.4296 | 1.4873 | 1.2196 |
| No log | 4.7273 | 312 | 1.7610 | 0.2446 | 1.7610 | 1.3270 |
| No log | 4.7576 | 314 | 2.0182 | 0.1460 | 2.0182 | 1.4207 |
| No log | 4.7879 | 316 | 2.1348 | 0.1714 | 2.1348 | 1.4611 |
| No log | 4.8182 | 318 | 2.1244 | 0.1831 | 2.1244 | 1.4575 |
| No log | 4.8485 | 320 | 1.9065 | 0.2857 | 1.9065 | 1.3808 |
| No log | 4.8788 | 322 | 1.5997 | 0.3597 | 1.5997 | 1.2648 |
| No log | 4.9091 | 324 | 1.4691 | 0.4925 | 1.4691 | 1.2121 |
| No log | 4.9394 | 326 | 1.5112 | 0.4328 | 1.5112 | 1.2293 |
| No log | 4.9697 | 328 | 1.6980 | 0.3333 | 1.6980 | 1.3031 |
| No log | 5.0 | 330 | 1.9604 | 0.2286 | 1.9604 | 1.4001 |
| No log | 5.0303 | 332 | 1.9863 | 0.1986 | 1.9863 | 1.4094 |
| No log | 5.0606 | 334 | 1.9663 | 0.1986 | 1.9663 | 1.4022 |
| No log | 5.0909 | 336 | 1.9781 | 0.1986 | 1.9781 | 1.4064 |
| No log | 5.1212 | 338 | 2.0225 | 0.2000 | 2.0225 | 1.4221 |
| No log | 5.1515 | 340 | 2.0287 | 0.2000 | 2.0287 | 1.4243 |
| No log | 5.1818 | 342 | 2.0276 | 0.2000 | 2.0276 | 1.4239 |
| No log | 5.2121 | 344 | 1.9562 | 0.1844 | 1.9562 | 1.3986 |
| No log | 5.2424 | 346 | 1.7553 | 0.3333 | 1.7553 | 1.3249 |
| No log | 5.2727 | 348 | 1.6211 | 0.4058 | 1.6211 | 1.2732 |
| No log | 5.3030 | 350 | 1.6287 | 0.3796 | 1.6287 | 1.2762 |
| No log | 5.3333 | 352 | 1.5323 | 0.3676 | 1.5323 | 1.2378 |
| No log | 5.3636 | 354 | 1.4494 | 0.4328 | 1.4494 | 1.2039 |
| No log | 5.3939 | 356 | 1.4895 | 0.4148 | 1.4895 | 1.2204 |
| No log | 5.4242 | 358 | 1.7277 | 0.3143 | 1.7277 | 1.3144 |
| No log | 5.4545 | 360 | 2.0436 | 0.2098 | 2.0436 | 1.4295 |
| No log | 5.4848 | 362 | 2.0502 | 0.2222 | 2.0502 | 1.4319 |
| No log | 5.5152 | 364 | 1.9849 | 0.2778 | 1.9849 | 1.4089 |
| No log | 5.5455 | 366 | 1.7435 | 0.3497 | 1.7435 | 1.3204 |
| No log | 5.5758 | 368 | 1.5414 | 0.3597 | 1.5414 | 1.2415 |
| No log | 5.6061 | 370 | 1.4444 | 0.4328 | 1.4444 | 1.2018 |
| No log | 5.6364 | 372 | 1.5102 | 0.4203 | 1.5102 | 1.2289 |
| No log | 5.6667 | 374 | 1.7301 | 0.2878 | 1.7301 | 1.3153 |
| No log | 5.6970 | 376 | 1.8386 | 0.2754 | 1.8386 | 1.3559 |
| No log | 5.7273 | 378 | 1.7688 | 0.2590 | 1.7688 | 1.3299 |
| No log | 5.7576 | 380 | 1.6764 | 0.3333 | 1.6764 | 1.2948 |
| No log | 5.7879 | 382 | 1.6558 | 0.3212 | 1.6558 | 1.2868 |
| No log | 5.8182 | 384 | 1.6592 | 0.3212 | 1.6592 | 1.2881 |
| No log | 5.8485 | 386 | 1.8444 | 0.3022 | 1.8444 | 1.3581 |
| No log | 5.8788 | 388 | 2.0224 | 0.2695 | 2.0224 | 1.4221 |
| No log | 5.9091 | 390 | 1.9895 | 0.2695 | 1.9895 | 1.4105 |
| No log | 5.9394 | 392 | 1.7949 | 0.3022 | 1.7949 | 1.3397 |
| No log | 5.9697 | 394 | 1.5034 | 0.4029 | 1.5034 | 1.2261 |
| No log | 6.0 | 396 | 1.3198 | 0.4672 | 1.3198 | 1.1488 |
| No log | 6.0303 | 398 | 1.2589 | 0.4925 | 1.2589 | 1.1220 |
| No log | 6.0606 | 400 | 1.2109 | 0.5231 | 1.2109 | 1.1004 |
| No log | 6.0909 | 402 | 1.2751 | 0.4848 | 1.2751 | 1.1292 |
| No log | 6.1212 | 404 | 1.4672 | 0.4088 | 1.4672 | 1.2113 |
| No log | 6.1515 | 406 | 1.7512 | 0.3066 | 1.7512 | 1.3233 |
| No log | 6.1818 | 408 | 1.8199 | 0.2899 | 1.8199 | 1.3490 |
| No log | 6.2121 | 410 | 1.8737 | 0.2286 | 1.8737 | 1.3688 |
| No log | 6.2424 | 412 | 1.7951 | 0.3022 | 1.7951 | 1.3398 |
| No log | 6.2727 | 414 | 1.6056 | 0.3913 | 1.6056 | 1.2671 |
| No log | 6.3030 | 416 | 1.5029 | 0.4234 | 1.5029 | 1.2259 |
| No log | 6.3333 | 418 | 1.4873 | 0.4526 | 1.4873 | 1.2196 |
| No log | 6.3636 | 420 | 1.5242 | 0.4444 | 1.5242 | 1.2346 |
| No log | 6.3939 | 422 | 1.5626 | 0.4627 | 1.5626 | 1.2501 |
| No log | 6.4242 | 424 | 1.5605 | 0.4122 | 1.5605 | 1.2492 |
| No log | 6.4545 | 426 | 1.5478 | 0.4122 | 1.5478 | 1.2441 |
| No log | 6.4848 | 428 | 1.5280 | 0.4122 | 1.5280 | 1.2361 |
| No log | 6.5152 | 430 | 1.5320 | 0.4122 | 1.5320 | 1.2378 |
| No log | 6.5455 | 432 | 1.5495 | 0.4122 | 1.5495 | 1.2448 |
| No log | 6.5758 | 434 | 1.6572 | 0.3788 | 1.6571 | 1.2873 |
| No log | 6.6061 | 436 | 1.7076 | 0.3382 | 1.7076 | 1.3068 |
| No log | 6.6364 | 438 | 1.6395 | 0.4148 | 1.6395 | 1.2804 |
| No log | 6.6667 | 440 | 1.5910 | 0.4148 | 1.5910 | 1.2614 |
| No log | 6.6970 | 442 | 1.5514 | 0.4148 | 1.5514 | 1.2456 |
| No log | 6.7273 | 444 | 1.5695 | 0.3971 | 1.5695 | 1.2528 |
| No log | 6.7576 | 446 | 1.6529 | 0.3478 | 1.6529 | 1.2856 |
| No log | 6.7879 | 448 | 1.6667 | 0.3478 | 1.6667 | 1.2910 |
| No log | 6.8182 | 450 | 1.5943 | 0.3529 | 1.5943 | 1.2626 |
| No log | 6.8485 | 452 | 1.6232 | 0.3358 | 1.6232 | 1.2741 |
| No log | 6.8788 | 454 | 1.7026 | 0.3143 | 1.7026 | 1.3048 |
| No log | 6.9091 | 456 | 1.8190 | 0.2553 | 1.8190 | 1.3487 |
| No log | 6.9394 | 458 | 1.8363 | 0.2553 | 1.8363 | 1.3551 |
| No log | 6.9697 | 460 | 1.6069 | 0.3478 | 1.6069 | 1.2676 |
| No log | 7.0 | 462 | 1.3671 | 0.4662 | 1.3671 | 1.1692 |
| No log | 7.0303 | 464 | 1.2786 | 0.5736 | 1.2786 | 1.1308 |
| No log | 7.0606 | 466 | 1.3161 | 0.5469 | 1.3161 | 1.1472 |
| No log | 7.0909 | 468 | 1.4248 | 0.4427 | 1.4248 | 1.1937 |
| No log | 7.1212 | 470 | 1.6490 | 0.3212 | 1.6490 | 1.2841 |
| No log | 7.1515 | 472 | 1.8276 | 0.2553 | 1.8276 | 1.3519 |
| No log | 7.1818 | 474 | 1.8325 | 0.2014 | 1.8325 | 1.3537 |
| No log | 7.2121 | 476 | 1.7131 | 0.2687 | 1.7131 | 1.3089 |
| No log | 7.2424 | 478 | 1.6029 | 0.3840 | 1.6029 | 1.2660 |
| No log | 7.2727 | 480 | 1.6119 | 0.3500 | 1.6119 | 1.2696 |
| No log | 7.3030 | 482 | 1.6414 | 0.3802 | 1.6414 | 1.2812 |
| No log | 7.3333 | 484 | 1.7304 | 0.3226 | 1.7304 | 1.3154 |
| No log | 7.3636 | 486 | 1.7827 | 0.2370 | 1.7827 | 1.3352 |
| No log | 7.3939 | 488 | 1.6803 | 0.3556 | 1.6803 | 1.2963 |
| No log | 7.4242 | 490 | 1.5603 | 0.3609 | 1.5603 | 1.2491 |
| No log | 7.4545 | 492 | 1.4283 | 0.4531 | 1.4283 | 1.1951 |
| No log | 7.4848 | 494 | 1.3342 | 0.4160 | 1.3342 | 1.1551 |
| No log | 7.5152 | 496 | 1.3632 | 0.4409 | 1.3632 | 1.1675 |
| No log | 7.5455 | 498 | 1.5263 | 0.4091 | 1.5263 | 1.2354 |
| 0.4132 | 7.5758 | 500 | 1.7767 | 0.3066 | 1.7767 | 1.3329 |
| 0.4132 | 7.6061 | 502 | 1.8679 | 0.2899 | 1.8679 | 1.3667 |
| 0.4132 | 7.6364 | 504 | 1.7845 | 0.2899 | 1.7845 | 1.3359 |
| 0.4132 | 7.6667 | 506 | 1.6449 | 0.3759 | 1.6449 | 1.2825 |
| 0.4132 | 7.6970 | 508 | 1.5565 | 0.4091 | 1.5565 | 1.2476 |
| 0.4132 | 7.7273 | 510 | 1.4505 | 0.4462 | 1.4505 | 1.2044 |
| 0.4132 | 7.7576 | 512 | 1.4850 | 0.4462 | 1.4850 | 1.2186 |
| 0.4132 | 7.7879 | 514 | 1.6438 | 0.3529 | 1.6438 | 1.2821 |
| 0.4132 | 7.8182 | 516 | 1.8453 | 0.2734 | 1.8453 | 1.3584 |
| 0.4132 | 7.8485 | 518 | 1.9250 | 0.2128 | 1.9250 | 1.3874 |
| 0.4132 | 7.8788 | 520 | 1.9368 | 0.2254 | 1.9368 | 1.3917 |
| 0.4132 | 7.9091 | 522 | 1.8844 | 0.2571 | 1.8844 | 1.3727 |
| 0.4132 | 7.9394 | 524 | 1.7167 | 0.3022 | 1.7167 | 1.3102 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
pvduy/Qwen2.7-7B-Instruct-QwQ
|
pvduy
| 2025-01-15T15:27:50Z
| 1,626
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-15T15:22:24Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
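A minimal sketch with 🤗 Transformers (untested against this checkpoint; it assumes the tokenizer ships a chat template, as Qwen2 instruct models typically do):

```python
# Minimal sketch (assumption: the standard Transformers chat-template workflow applies to this checkpoint).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pvduy/Qwen2.7-7B-Instruct-QwQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```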
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lesso07/5f8a3450-51fd-403d-9d1a-57a04bedf50c
|
lesso07
| 2025-01-15T15:22:59Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T10:31:19Z
|
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5f8a3450-51fd-403d-9d1a-57a04bedf50c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B
bf16: true
chat_template: llama3
datasets:
- data_files:
- a54dfbc141335bb5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a54dfbc141335bb5_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso07/5f8a3450-51fd-403d-9d1a-57a04bedf50c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/a54dfbc141335bb5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e302ed25-6563-4139-a251-885ddd901a8d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e302ed25-6563-4139-a251-885ddd901a8d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5f8a3450-51fd-403d-9d1a-57a04bedf50c
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7504 | 0.0000 | 1 | 2.8080 |
| 2.5122 | 0.0001 | 5 | 2.7556 |
| 2.3674 | 0.0002 | 10 | 2.5045 |
| 2.6639 | 0.0002 | 15 | 2.4447 |
| 1.8371 | 0.0003 | 20 | 2.4358 |
| 2.2204 | 0.0004 | 25 | 2.4338 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
chauhoang/5d7d72f6-1494-4bc7-888a-74bf6c08aad0
|
chauhoang
| 2025-01-15T15:21:55Z
| 11
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B",
"base_model:adapter:unsloth/Qwen2-1.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-15T15:16:00Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5d7d72f6-1494-4bc7-888a-74bf6c08aad0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d01e93bb4785c31e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d01e93bb4785c31e_train_data.json
type:
field_input: domain
field_instruction: question
field_output: query
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: chauhoang/5d7d72f6-1494-4bc7-888a-74bf6c08aad0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/d01e93bb4785c31e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b42bd8e6-00f4-4388-b8c9-71f792e66bc4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b42bd8e6-00f4-4388-b8c9-71f792e66bc4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5d7d72f6-1494-4bc7-888a-74bf6c08aad0
This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | nan |
| 0.0 | 0.0082 | 10 | nan |
| 0.0 | 0.0163 | 20 | nan |
| 0.0 | 0.0245 | 30 | nan |
| 0.0 | 0.0327 | 40 | nan |
| 0.0 | 0.0408 | 50 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
thakkkkkk/ac914070-92e4-4882-aab1-2dae1cc8f023
|
thakkkkkk
| 2025-01-15T15:17:26Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:elyza/Llama-3-ELYZA-JP-8B",
"base_model:adapter:elyza/Llama-3-ELYZA-JP-8B",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T14:23:23Z
|
---
library_name: peft
license: llama3
base_model: elyza/Llama-3-ELYZA-JP-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ac914070-92e4-4882-aab1-2dae1cc8f023
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: elyza/Llama-3-ELYZA-JP-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fda69805a6ba006d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fda69805a6ba006d_train_data.json
type:
field_instruction: prompt
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thakkkkkk/ac914070-92e4-4882-aab1-2dae1cc8f023
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/fda69805a6ba006d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f2e1c4d6-26a9-4626-844d-fb1f5d0379e3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f2e1c4d6-26a9-4626-844d-fb1f5d0379e3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ac914070-92e4-4882-aab1-2dae1cc8f023
This model is a fine-tuned version of [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7168
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7364 | 0.0187 | 200 | 0.7168 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
hollmann/andre-dev
|
hollmann
| 2025-01-15T15:15:11Z
| 12
| 0
|
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T15:15:04Z
|
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: foo
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# andre dev
<Gallery />
## Model description
## Trigger words
You should use `foo` to trigger the image generation.
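For example, a minimal sketch with 🤗 Diffusers (untested; it assumes the Safetensors LoRA in this repo loads directly onto the FLUX.1-dev pipeline):

```python
# Minimal sketch (assumption: the repo's LoRA weights are compatible with FluxPipeline.load_lora_weights).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("hollmann/andre-dev")
pipe.to("cuda")

# `foo` is the trigger word from this card; the rest of the prompt is illustrative only.
image = pipe("a portrait photo of foo", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("andre-dev-sample.png")
```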
## Download model
Weights for this model are available in Safetensors format.
[Download](/hollmann/andre-dev/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
tarabukinivan/b421da53-ff70-4e6d-b92e-a790e19e8999
|
tarabukinivan
| 2025-01-15T15:13:31Z
| 6
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"license:llama3",
"region:us"
] | null | 2025-01-15T14:14:08Z
|
---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b421da53-ff70-4e6d-b92e-a790e19e8999
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0431498a24ad26e2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0431498a24ad26e2_train_data.json
type:
field_input: text
field_instruction: prompt
field_output: responseA
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tarabukinivan/b421da53-ff70-4e6d-b92e-a790e19e8999
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/0431498a24ad26e2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 21027872-1320-4917-9c1c-3ca7864e1be9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 21027872-1320-4917-9c1c-3ca7864e1be9
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b421da53-ff70-4e6d-b92e-a790e19e8999
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0006 | 13 | nan |
| 0.0 | 0.0012 | 26 | nan |
| 0.0 | 0.0018 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso05/cb24c657-616f-497d-82aa-f52913f3c89f
|
lesso05
| 2025-01-15T15:13:11Z
| 9
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T13:24:13Z
|
---
library_name: peft
license: llama3
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cb24c657-616f-497d-82aa-f52913f3c89f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
bf16: true
chat_template: llama3
datasets:
- data_files:
- 3c33873967a336a3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3c33873967a336a3_train_data.json
type:
field_input: context
field_instruction: question
field_output: answers
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso05/cb24c657-616f-497d-82aa-f52913f3c89f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/3c33873967a336a3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fd5b49f1-57e4-4c15-bca0-7eed58aadb30
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fd5b49f1-57e4-4c15-bca0-7eed58aadb30
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cb24c657-616f-497d-82aa-f52913f3c89f
This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4272
## Model description
More information needed
## Intended uses & limitations
More information needed
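As a minimal usage sketch (an assumption based on the peft/axolotl setup above, not verified against this adapter), the LoRA weights can be applied on top of the base model like so:

```python
# Minimal sketch (assumption: the repo contains a standard peft LoRA adapter for the listed base model).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0"
adapter_id = "lesso05/cb24c657-616f-497d-82aa-f52913f3c89f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# The training config formats examples as '{question} {context}'; this prompt is illustrative only.
inputs = tokenizer("What is mentioned in the passage? The passage says hello.", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```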
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.0196 | 0.0001 | 1 | 3.4767 |
| 2.5522 | 0.0003 | 5 | 2.9957 |
| 1.1031 | 0.0006 | 10 | 1.1841 |
| 0.323 | 0.0009 | 15 | 0.3350 |
| 0.3152 | 0.0012 | 20 | 0.4184 |
| 0.359 | 0.0015 | 25 | 0.4272 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
fedovtt/b8cd5a6b-26c8-44b2-bfd1-a975c77263d2
|
fedovtt
| 2025-01-15T15:10:48Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"license:llama3",
"region:us"
] | null | 2025-01-15T14:12:47Z
|
---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b8cd5a6b-26c8-44b2-bfd1-a975c77263d2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0431498a24ad26e2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0431498a24ad26e2_train_data.json
type:
field_input: text
field_instruction: prompt
field_output: responseA
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fedovtt/b8cd5a6b-26c8-44b2-bfd1-a975c77263d2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/0431498a24ad26e2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 21027872-1320-4917-9c1c-3ca7864e1be9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 21027872-1320-4917-9c1c-3ca7864e1be9
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b8cd5a6b-26c8-44b2-bfd1-a975c77263d2
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0005 | 10 | nan |
| 0.0 | 0.0007 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Nekuromento/watt-tool-8B-Q6_K-GGUF
|
Nekuromento
| 2025-01-15T15:10:37Z
| 23
| 0
| null |
[
"gguf",
"function-calling",
"tool-use",
"llama",
"bfcl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:watt-ai/watt-tool-8B",
"base_model:quantized:watt-ai/watt-tool-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-15T15:10:06Z
|
---
license: apache-2.0
language:
- en
base_model: watt-ai/watt-tool-8B
tags:
- function-calling
- tool-use
- llama
- bfcl
- llama-cpp
- gguf-my-repo
---
# Nekuromento/watt-tool-8B-Q6_K-GGUF
This model was converted to GGUF format from [`watt-ai/watt-tool-8B`](https://huggingface.co/watt-ai/watt-tool-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/watt-ai/watt-tool-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Nekuromento/watt-tool-8B-Q6_K-GGUF --hf-file watt-tool-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Nekuromento/watt-tool-8B-Q6_K-GGUF --hf-file watt-tool-8b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Nekuromento/watt-tool-8B-Q6_K-GGUF --hf-file watt-tool-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Nekuromento/watt-tool-8B-Q6_K-GGUF --hf-file watt-tool-8b-q6_k.gguf -c 2048
```
|
Keltezaa/Shione_Cooper_Actress
|
Keltezaa
| 2025-01-15T15:10:32Z
| 71
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
text-to-image
| 2025-01-15T14:32:12Z
|
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
A glamour Breathtaking medium shot photography of a young woman with a
friendly and calm expression, dressed in a turtleneck. Her style reflects
effortless sophistication, blending laid-back comfort with subtle
refinement. She is smiling warmly, radiating confidence and approachability.
The background is solid and softly lit, enhancing the natural tones and
textures of the image. The lighting is diffused and flattering, emphasizing
her features while creating a professional and polished aesthetic, she is a
natural redhead with pale skin and freckles, standing naked in a lush forest
output:
url: images/example_ndl7kpc55.png
- text: >-
A glamour Breathtaking medium shot photography of a young woman with a
friendly and calm expression, dressed in a turtleneck. Her style reflects
effortless sophistication, blending laid-back comfort with subtle
refinement. She is smiling warmly, radiating confidence and approachability.
The background is solid and softly lit, enhancing the natural tones and
textures of the image. The lighting is diffused and flattering, emphasizing
her features while creating a professional and polished aesthetic,
output:
url: images/example_rdnlqg3w5.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: cc-by-nc-sa-4.0
---
# Shione_Cooper_Actress
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/Shione_Cooper_Actress/tree/main) them in the Files & versions tab.
|
ClarenceDan/93993d99-8ea8-47e0-849f-d360c7481cb7
|
ClarenceDan
| 2025-01-15T15:09:54Z
| 32
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-13b-v1.5",
"base_model:adapter:lmsys/vicuna-13b-v1.5",
"license:llama2",
"region:us"
] | null | 2025-01-15T14:21:24Z
|
---
library_name: peft
license: llama2
base_model: lmsys/vicuna-13b-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 93993d99-8ea8-47e0-849f-d360c7481cb7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lmsys/vicuna-13b-v1.5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4a8c0890df315567_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4a8c0890df315567_train_data.json
type:
field_instruction: original_text
field_output: correct_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/93993d99-8ea8-47e0-849f-d360c7481cb7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/4a8c0890df315567_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a6ac3cc6-bcf7-42ba-814a-539ed29d5e1d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a6ac3cc6-bcf7-42ba-814a-539ed29d5e1d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 93993d99-8ea8-47e0-849f-d360c7481cb7
This model is a fine-tuned version of [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4302 | 0.0000 | 1 | 0.4490 |
| 0.448 | 0.0001 | 3 | 0.4463 |
| 0.4044 | 0.0002 | 6 | 0.4110 |
| 0.2998 | 0.0003 | 9 | 0.2676 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
thaffggg/47549b63-3cef-4fdc-babf-e37cc823fc95
|
thaffggg
| 2025-01-15T15:03:52Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B",
"base_model:adapter:unsloth/Qwen2-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T14:50:54Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 47549b63-3cef-4fdc-babf-e37cc823fc95
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d01e93bb4785c31e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d01e93bb4785c31e_train_data.json
type:
field_input: domain
field_instruction: question
field_output: query
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thaffggg/47549b63-3cef-4fdc-babf-e37cc823fc95
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d01e93bb4785c31e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b42bd8e6-00f4-4388-b8c9-71f792e66bc4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b42bd8e6-00f4-4388-b8c9-71f792e66bc4
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 47549b63-3cef-4fdc-babf-e37cc823fc95
This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3918 | 0.1633 | 200 | 0.2828 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhung01/99cbf546-cfba-4383-aa91-9122d9181457
|
nhung01
| 2025-01-15T15:03:48Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B",
"base_model:adapter:unsloth/Qwen2-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T14:51:04Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 99cbf546-cfba-4383-aa91-9122d9181457
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d01e93bb4785c31e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d01e93bb4785c31e_train_data.json
type:
field_input: domain
field_instruction: question
field_output: query
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/99cbf546-cfba-4383-aa91-9122d9181457
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d01e93bb4785c31e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b42bd8e6-00f4-4388-b8c9-71f792e66bc4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b42bd8e6-00f4-4388-b8c9-71f792e66bc4
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 99cbf546-cfba-4383-aa91-9122d9181457
This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3922 | 0.1633 | 200 | 0.2814 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
cunghoctienganh/a798d10d-dee8-4bf8-a46e-6c45dbb791b5
|
cunghoctienganh
| 2025-01-15T15:03:18Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B",
"base_model:adapter:unsloth/Qwen2-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T14:50:50Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a798d10d-dee8-4bf8-a46e-6c45dbb791b5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d01e93bb4785c31e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d01e93bb4785c31e_train_data.json
type:
field_input: domain
field_instruction: question
field_output: query
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/a798d10d-dee8-4bf8-a46e-6c45dbb791b5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d01e93bb4785c31e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b42bd8e6-00f4-4388-b8c9-71f792e66bc4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b42bd8e6-00f4-4388-b8c9-71f792e66bc4
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a798d10d-dee8-4bf8-a46e-6c45dbb791b5
This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2802
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3897 | 0.1633 | 200 | 0.2802 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
augustinjianu/whisper-tiny-ro
|
augustinjianu
| 2025-01-15T15:00:52Z
| 21
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ro",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-01-14T22:58:36Z
|
---
library_name: transformers
language:
- ro
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Tiny Ro (local) - Augustin Jianu
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: ro
split: test
args: 'config: ro, split: test'
metrics:
- name: Wer
type: wer
value: 37.48352861569144
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Ro (local) - Augustin Jianu
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5978
- Wer: 37.4835
## Model description
More information needed
## Intended uses & limitations
More information needed
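As a minimal usage sketch (untested; the audio path is a placeholder), the checkpoint can be used through the ASR pipeline:

```python
# Minimal sketch (assumption: a local Romanian audio file exists at the placeholder path).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="augustinjianu/whisper-tiny-ro")
print(asr("sample_ro.wav")["text"])
```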
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|
| 0.4417 | 1.7730 | 1000 | 0.5327 | 43.8513 |
| 0.1813 | 3.5461 | 2000 | 0.4666 | 38.8689 |
| 0.0751 | 5.3191 | 3000 | 0.4645 | 36.5006 |
| 0.0326 | 7.0922 | 4000 | 0.4803 | 36.4614 |
| 0.0234 | 8.8652 | 5000 | 0.5087 | 36.5148 |
| 0.0082 | 10.6383 | 6000 | 0.5424 | 36.6252 |
| 0.0042 | 12.4113 | 7000 | 0.5650 | 37.6509 |
| 0.0029 | 14.1844 | 8000 | 0.5809 | 36.8710 |
| 0.0025 | 15.9574 | 9000 | 0.5922 | 38.1495 |
| 0.0021 | 17.7305 | 10000 | 0.5978 | 37.4835 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF
|
mradermacher
| 2025-01-15T15:00:04Z
| 555
| 0
|
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"mergekit",
"merge",
"en",
"base_model:pipihand01/QwQ-32B-Preview-abliterated-linear25",
"base_model:quantized:pipihand01/QwQ-32B-Preview-abliterated-linear25",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-15T10:23:55Z
|
---
base_model: pipihand01/QwQ-32B-Preview-abliterated-linear25
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/pipihand01/QwQ-32B-Preview-abliterated-linear25/blob/main/LICENSE
quantized_by: mradermacher
tags:
- chat
- abliterated
- uncensored
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/pipihand01/QwQ-32B-Preview-abliterated-linear25
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
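As a hedged sketch (assuming `huggingface_hub` is installed; the filename is the Q4_K_M entry from the table below), a single quant can also be downloaded programmatically and then passed to any GGUF-capable runtime such as llama.cpp:
```py
from huggingface_hub import hf_hub_download

# Fetch one quant file from this repository (filename taken from the table below)
gguf_path = hf_hub_download(
    repo_id="mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF",
    filename="QwQ-32B-Preview-abliterated-linear25.i1-Q4_K_M.gguf",
)
print(gguf_path)  # local path to hand to a GGUF runtime
```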
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-linear25-i1-GGUF/resolve/main/QwQ-32B-Preview-abliterated-linear25.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to offer.
<!-- end -->
|
hendrydong/ckpt-m-1115-1
|
hendrydong
| 2025-01-15T14:59:58Z
| 47
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-15T14:54:46Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
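Pending that information, a minimal sketch (assuming the checkpoint loads with the standard 🤗 text-generation pipeline, as the repository tags suggest; the prompt format is not documented):
```py
from transformers import pipeline

# Hypothetical usage; the card does not document a prompt or chat format
generator = pipeline("text-generation", model="hendrydong/ckpt-m-1115-1")
print(generator("Hello, how are you?", max_new_tokens=50)[0]["generated_text"])
```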
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LocaleNLP/English_bambara
|
LocaleNLP
| 2025-01-15T14:59:17Z
| 263
| 0
|
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-01-15T14:28:35Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
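Pending that information, a minimal sketch (assuming, from the MarianMT architecture and the repository name, an English-to-Bambara translation direction; this is not confirmed by the card):
```py
from transformers import pipeline

# Translation direction (English -> Bambara) is inferred from the repository name only
translator = pipeline("text2text-generation", model="LocaleNLP/English_bambara")
print(translator("Good morning, how are you?")[0]["generated_text"])
```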
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kokovova/1f7a1df9-f33f-4437-9e3c-c5a5ac50d5c7
|
kokovova
| 2025-01-15T14:58:59Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-01-15T14:58:05Z
|
---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1f7a1df9-f33f-4437-9e3c-c5a5ac50d5c7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 986b94c9dc58c18c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/986b94c9dc58c18c_train_data.json
type:
field_instruction: prompt
field_output: response_a
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kokovova/1f7a1df9-f33f-4437-9e3c-c5a5ac50d5c7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/986b94c9dc58c18c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 69e5af40-c6f1-445f-a934-e367beed4132
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 69e5af40-c6f1-445f-a934-e367beed4132
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1f7a1df9-f33f-4437-9e3c-c5a5ac50d5c7
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3785
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 10.3803 |
| 10.3806 | 0.0012 | 8 | 10.3800 |
| 10.3797 | 0.0024 | 16 | 10.3791 |
| 10.3786 | 0.0035 | 24 | 10.3785 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
minsangK/20250115-bge-m3-8192-bs-8-1-epoch-5e-6-hn-1
|
minsangK
| 2025-01-15T14:58:53Z
| 23
| 0
|
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"generated_from_trainer",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-01-15T11:06:31Z
|
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: 20250115-bge-m3-8192-bs-8-1-epoch-5e-6-hn-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20250115-bge-m3-8192-bs-8-1-epoch-5e-6-hn-1
This model was trained from scratch on an unknown dataset.
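A rough embedding-extraction sketch (assuming CLS pooling, which is common for BGE-style retrievers but not confirmed by this card):
```py
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "minsangK/20250115-bge-m3-8192-bs-8-1-epoch-5e-6-hn-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer(["a query", "a passage"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
# CLS pooling (assumption), then L2-normalise so dot products are cosine similarities
emb = torch.nn.functional.normalize(hidden[:, 0], dim=-1)
print(emb @ emb.T)
```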
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.2.2
- Datasets 2.19.0
- Tokenizers 0.19.1
|
bbytxt/11002e5d-55bc-4350-a995-08db41105f07
|
bbytxt
| 2025-01-15T14:58:20Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM-1.7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-15T14:32:11Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 11002e5d-55bc-4350-a995-08db41105f07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 19944eee78b6e7aa_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/19944eee78b6e7aa_train_data.json
type:
field_input: background
field_instruction: context
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: bbytxt/11002e5d-55bc-4350-a995-08db41105f07
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/19944eee78b6e7aa_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3d13b11f-b0e7-4077-be87-c28a4188a998
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3d13b11f-b0e7-4077-be87-c28a4188a998
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 11002e5d-55bc-4350-a995-08db41105f07
This model is a fine-tuned version of [unsloth/SmolLM-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM-1.7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7661
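As a rough sketch (assuming the adapter is applied on top of the base model with the standard PEFT API; this is not an officially documented workflow for this checkpoint):
```py
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and attach the LoRA adapter from this repository
base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM-1.7B-Instruct")
model = PeftModel.from_pretrained(base, "bbytxt/11002e5d-55bc-4350-a995-08db41105f07")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM-1.7B-Instruct")

inputs = tokenizer("What is a question?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```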
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.078 | 0.0004 | 1 | 4.2947 |
| 2.1366 | 0.0186 | 50 | 2.0422 |
| 2.0896 | 0.0371 | 100 | 1.8271 |
| 1.8261 | 0.0557 | 150 | 1.7759 |
| 2.0419 | 0.0743 | 200 | 1.7661 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
visdata/bc15
|
visdata
| 2025-01-15T14:58:14Z
| 57
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-15T14:46:23Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
youssefguessous/hautdela-lora-flux-dev
|
youssefguessous
| 2025-01-15T14:57:17Z
| 44
| 1
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T12:54:04Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: HAUTDELA
---
# Hautdela Lora Flux Dev
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `HAUTDELA` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision on the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Attach the LoRA weights from this repository
pipeline.load_lora_weights('youssefguessous/hautdela-lora-flux-dev', weight_name='lora.safetensors')
# Include the trigger word `HAUTDELA` in your prompt (see "Trigger words" above)
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
dimasik87/388f1eb7-1d9e-47b8-9fb1-dcead681c9df
|
dimasik87
| 2025-01-15T14:57:06Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"license:llama3",
"region:us"
] | null | 2025-01-15T14:15:12Z
|
---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 388f1eb7-1d9e-47b8-9fb1-dcead681c9df
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0431498a24ad26e2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0431498a24ad26e2_train_data.json
type:
field_input: text
field_instruction: prompt
field_output: responseA
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: dimasik87/388f1eb7-1d9e-47b8-9fb1-dcead681c9df
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/0431498a24ad26e2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 21027872-1320-4917-9c1c-3ca7864e1be9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 21027872-1320-4917-9c1c-3ca7864e1be9
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 388f1eb7-1d9e-47b8-9fb1-dcead681c9df
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0005 | 10 | nan |
| 0.0 | 0.0007 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits8_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k10_task2_organization
|
MayBashendy
| 2025-01-15T14:56:51Z
| 5
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T14:08:55Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits8_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k10_task2_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits8_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k10_task2_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6956
- Qwk: 0.4065
- Mse: 0.6956
- Rmse: 0.8340
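For reference, the Qwk, Mse and Rmse columns can be reproduced from gold/predicted label arrays roughly as follows (a sketch using scikit-learn on hypothetical `y_true`/`y_pred` values):
```py
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = [0, 1, 2, 1]  # hypothetical gold organization scores
y_pred = [0, 2, 2, 1]  # hypothetical model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # quadratic weighted kappa
mse = mean_squared_error(y_true, y_pred)
print(qwk, mse, mse ** 0.5)  # Qwk, Mse, Rmse
```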
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0370 | 2 | 4.2649 | -0.0228 | 4.2649 | 2.0652 |
| No log | 0.0741 | 4 | 2.4048 | 0.0659 | 2.4048 | 1.5507 |
| No log | 0.1111 | 6 | 1.2037 | 0.0258 | 1.2037 | 1.0971 |
| No log | 0.1481 | 8 | 0.8782 | 0.1424 | 0.8782 | 0.9371 |
| No log | 0.1852 | 10 | 0.8781 | 0.1210 | 0.8781 | 0.9371 |
| No log | 0.2222 | 12 | 0.9623 | 0.0461 | 0.9623 | 0.9810 |
| No log | 0.2593 | 14 | 1.1572 | -0.0959 | 1.1572 | 1.0757 |
| No log | 0.2963 | 16 | 1.0446 | -0.0058 | 1.0446 | 1.0221 |
| No log | 0.3333 | 18 | 0.8847 | 0.0712 | 0.8847 | 0.9406 |
| No log | 0.3704 | 20 | 0.9010 | 0.0447 | 0.9010 | 0.9492 |
| No log | 0.4074 | 22 | 0.8834 | 0.1596 | 0.8834 | 0.9399 |
| No log | 0.4444 | 24 | 0.9201 | 0.1204 | 0.9201 | 0.9592 |
| No log | 0.4815 | 26 | 0.9268 | 0.1865 | 0.9268 | 0.9627 |
| No log | 0.5185 | 28 | 1.0286 | 0.0219 | 1.0286 | 1.0142 |
| No log | 0.5556 | 30 | 1.0217 | 0.0724 | 1.0217 | 1.0108 |
| No log | 0.5926 | 32 | 1.0320 | 0.0384 | 1.0320 | 1.0159 |
| No log | 0.6296 | 34 | 0.9380 | 0.2221 | 0.9380 | 0.9685 |
| No log | 0.6667 | 36 | 0.8912 | 0.2712 | 0.8912 | 0.9441 |
| No log | 0.7037 | 38 | 0.8365 | 0.2929 | 0.8365 | 0.9146 |
| No log | 0.7407 | 40 | 0.8275 | 0.2618 | 0.8275 | 0.9096 |
| No log | 0.7778 | 42 | 0.8612 | 0.2517 | 0.8612 | 0.9280 |
| No log | 0.8148 | 44 | 0.9083 | 0.1655 | 0.9083 | 0.9530 |
| No log | 0.8519 | 46 | 0.9579 | 0.0189 | 0.9579 | 0.9787 |
| No log | 0.8889 | 48 | 0.8705 | 0.2135 | 0.8705 | 0.9330 |
| No log | 0.9259 | 50 | 0.8026 | 0.2723 | 0.8026 | 0.8959 |
| No log | 0.9630 | 52 | 0.8513 | 0.1171 | 0.8513 | 0.9227 |
| No log | 1.0 | 54 | 0.8504 | 0.1725 | 0.8504 | 0.9222 |
| No log | 1.0370 | 56 | 0.8050 | 0.2772 | 0.8050 | 0.8972 |
| No log | 1.0741 | 58 | 0.7881 | 0.3398 | 0.7881 | 0.8877 |
| No log | 1.1111 | 60 | 0.8618 | 0.3128 | 0.8618 | 0.9283 |
| No log | 1.1481 | 62 | 1.0342 | 0.3109 | 1.0342 | 1.0169 |
| No log | 1.1852 | 64 | 0.8242 | 0.2916 | 0.8242 | 0.9078 |
| No log | 1.2222 | 66 | 0.7361 | 0.2098 | 0.7361 | 0.8580 |
| No log | 1.2593 | 68 | 0.7477 | 0.2240 | 0.7477 | 0.8647 |
| No log | 1.2963 | 70 | 0.8031 | 0.2975 | 0.8031 | 0.8962 |
| No log | 1.3333 | 72 | 1.0369 | 0.2641 | 1.0369 | 1.0183 |
| No log | 1.3704 | 74 | 0.8289 | 0.3314 | 0.8289 | 0.9104 |
| No log | 1.4074 | 76 | 0.7565 | 0.2932 | 0.7565 | 0.8698 |
| No log | 1.4444 | 78 | 0.7453 | 0.3283 | 0.7453 | 0.8633 |
| No log | 1.4815 | 80 | 0.9273 | 0.2649 | 0.9273 | 0.9630 |
| No log | 1.5185 | 82 | 0.9151 | 0.2909 | 0.9151 | 0.9566 |
| No log | 1.5556 | 84 | 0.7353 | 0.2598 | 0.7353 | 0.8575 |
| No log | 1.5926 | 86 | 0.7402 | 0.2537 | 0.7402 | 0.8603 |
| No log | 1.6296 | 88 | 0.7210 | 0.3183 | 0.7210 | 0.8491 |
| No log | 1.6667 | 90 | 0.7265 | 0.2319 | 0.7265 | 0.8523 |
| No log | 1.7037 | 92 | 0.9780 | 0.1547 | 0.9780 | 0.9890 |
| No log | 1.7407 | 94 | 1.1386 | 0.1581 | 1.1386 | 1.0670 |
| No log | 1.7778 | 96 | 0.9872 | 0.1703 | 0.9872 | 0.9936 |
| No log | 1.8148 | 98 | 0.7693 | 0.2500 | 0.7693 | 0.8771 |
| No log | 1.8519 | 100 | 0.6808 | 0.3121 | 0.6808 | 0.8251 |
| No log | 1.8889 | 102 | 0.7115 | 0.2345 | 0.7115 | 0.8435 |
| No log | 1.9259 | 104 | 0.7080 | 0.3626 | 0.7080 | 0.8414 |
| No log | 1.9630 | 106 | 0.7251 | 0.3840 | 0.7251 | 0.8515 |
| No log | 2.0 | 108 | 0.8961 | 0.3583 | 0.8961 | 0.9466 |
| No log | 2.0370 | 110 | 0.8678 | 0.3421 | 0.8678 | 0.9315 |
| No log | 2.0741 | 112 | 0.7853 | 0.4160 | 0.7853 | 0.8862 |
| No log | 2.1111 | 114 | 0.7847 | 0.3450 | 0.7847 | 0.8858 |
| No log | 2.1481 | 116 | 0.7820 | 0.2879 | 0.7820 | 0.8843 |
| No log | 2.1852 | 118 | 0.7886 | 0.2819 | 0.7886 | 0.8880 |
| No log | 2.2222 | 120 | 0.8080 | 0.3674 | 0.8080 | 0.8989 |
| No log | 2.2593 | 122 | 0.7933 | 0.3608 | 0.7933 | 0.8907 |
| No log | 2.2963 | 124 | 0.8356 | 0.1799 | 0.8356 | 0.9141 |
| No log | 2.3333 | 126 | 0.7891 | 0.3746 | 0.7891 | 0.8883 |
| No log | 2.3704 | 128 | 1.0069 | 0.2719 | 1.0069 | 1.0034 |
| No log | 2.4074 | 130 | 1.1976 | 0.2507 | 1.1976 | 1.0944 |
| No log | 2.4444 | 132 | 0.9833 | 0.2422 | 0.9833 | 0.9916 |
| No log | 2.4815 | 134 | 0.7235 | 0.3557 | 0.7235 | 0.8506 |
| No log | 2.5185 | 136 | 0.7984 | 0.2655 | 0.7984 | 0.8935 |
| No log | 2.5556 | 138 | 0.7571 | 0.2872 | 0.7571 | 0.8701 |
| No log | 2.5926 | 140 | 0.6836 | 0.3905 | 0.6836 | 0.8268 |
| No log | 2.6296 | 142 | 0.7576 | 0.2694 | 0.7576 | 0.8704 |
| No log | 2.6667 | 144 | 0.8193 | 0.2553 | 0.8193 | 0.9052 |
| No log | 2.7037 | 146 | 0.7582 | 0.3387 | 0.7582 | 0.8708 |
| No log | 2.7407 | 148 | 0.6818 | 0.4172 | 0.6818 | 0.8257 |
| No log | 2.7778 | 150 | 0.6817 | 0.3847 | 0.6817 | 0.8256 |
| No log | 2.8148 | 152 | 0.6857 | 0.4001 | 0.6857 | 0.8280 |
| No log | 2.8519 | 154 | 0.6964 | 0.3905 | 0.6964 | 0.8345 |
| No log | 2.8889 | 156 | 0.7135 | 0.3795 | 0.7135 | 0.8447 |
| No log | 2.9259 | 158 | 0.7874 | 0.3589 | 0.7874 | 0.8874 |
| No log | 2.9630 | 160 | 0.8083 | 0.3097 | 0.8083 | 0.8990 |
| No log | 3.0 | 162 | 0.7531 | 0.3002 | 0.7531 | 0.8678 |
| No log | 3.0370 | 164 | 0.7420 | 0.3057 | 0.7420 | 0.8614 |
| No log | 3.0741 | 166 | 0.7428 | 0.2769 | 0.7428 | 0.8619 |
| No log | 3.1111 | 168 | 0.7424 | 0.2735 | 0.7424 | 0.8616 |
| No log | 3.1481 | 170 | 0.6882 | 0.3449 | 0.6882 | 0.8296 |
| No log | 3.1852 | 172 | 0.7197 | 0.4562 | 0.7197 | 0.8484 |
| No log | 3.2222 | 174 | 0.7287 | 0.4128 | 0.7287 | 0.8536 |
| No log | 3.2593 | 176 | 0.7444 | 0.4239 | 0.7444 | 0.8628 |
| No log | 3.2963 | 178 | 0.7850 | 0.3941 | 0.7850 | 0.8860 |
| No log | 3.3333 | 180 | 0.7861 | 0.4062 | 0.7861 | 0.8866 |
| No log | 3.3704 | 182 | 0.7008 | 0.4332 | 0.7008 | 0.8371 |
| No log | 3.4074 | 184 | 0.6862 | 0.4233 | 0.6862 | 0.8284 |
| No log | 3.4444 | 186 | 0.6707 | 0.3905 | 0.6707 | 0.8189 |
| No log | 3.4815 | 188 | 0.6858 | 0.3590 | 0.6858 | 0.8281 |
| No log | 3.5185 | 190 | 0.8277 | 0.3560 | 0.8277 | 0.9098 |
| No log | 3.5556 | 192 | 0.8980 | 0.3155 | 0.8980 | 0.9476 |
| No log | 3.5926 | 194 | 0.7492 | 0.3088 | 0.7492 | 0.8655 |
| No log | 3.6296 | 196 | 0.6773 | 0.3358 | 0.6773 | 0.8230 |
| No log | 3.6667 | 198 | 0.7539 | 0.3898 | 0.7539 | 0.8683 |
| No log | 3.7037 | 200 | 0.7836 | 0.3404 | 0.7836 | 0.8852 |
| No log | 3.7407 | 202 | 0.7252 | 0.4127 | 0.7252 | 0.8516 |
| No log | 3.7778 | 204 | 0.7269 | 0.3153 | 0.7269 | 0.8526 |
| No log | 3.8148 | 206 | 0.8672 | 0.2295 | 0.8672 | 0.9312 |
| No log | 3.8519 | 208 | 0.8732 | 0.2705 | 0.8732 | 0.9345 |
| No log | 3.8889 | 210 | 0.7278 | 0.3419 | 0.7278 | 0.8531 |
| No log | 3.9259 | 212 | 0.6975 | 0.4351 | 0.6975 | 0.8351 |
| No log | 3.9630 | 214 | 0.7183 | 0.4552 | 0.7183 | 0.8476 |
| No log | 4.0 | 216 | 0.7031 | 0.4728 | 0.7031 | 0.8385 |
| No log | 4.0370 | 218 | 0.7066 | 0.3930 | 0.7066 | 0.8406 |
| No log | 4.0741 | 220 | 0.7424 | 0.3812 | 0.7424 | 0.8617 |
| No log | 4.1111 | 222 | 0.7145 | 0.3401 | 0.7145 | 0.8453 |
| No log | 4.1481 | 224 | 0.7027 | 0.3419 | 0.7027 | 0.8382 |
| No log | 4.1852 | 226 | 0.6740 | 0.3789 | 0.6740 | 0.8210 |
| No log | 4.2222 | 228 | 0.6646 | 0.3990 | 0.6646 | 0.8152 |
| No log | 4.2593 | 230 | 0.6643 | 0.4154 | 0.6643 | 0.8151 |
| No log | 4.2963 | 232 | 0.6807 | 0.4047 | 0.6807 | 0.8250 |
| No log | 4.3333 | 234 | 0.7314 | 0.4496 | 0.7314 | 0.8552 |
| No log | 4.3704 | 236 | 0.7605 | 0.4808 | 0.7605 | 0.8721 |
| No log | 4.4074 | 238 | 0.7136 | 0.4234 | 0.7136 | 0.8447 |
| No log | 4.4444 | 240 | 0.7071 | 0.4520 | 0.7071 | 0.8409 |
| No log | 4.4815 | 242 | 0.7238 | 0.4571 | 0.7238 | 0.8508 |
| No log | 4.5185 | 244 | 0.7134 | 0.4346 | 0.7134 | 0.8446 |
| No log | 4.5556 | 246 | 0.7427 | 0.4241 | 0.7427 | 0.8618 |
| No log | 4.5926 | 248 | 0.7480 | 0.4094 | 0.7480 | 0.8648 |
| No log | 4.6296 | 250 | 0.7522 | 0.4079 | 0.7522 | 0.8673 |
| No log | 4.6667 | 252 | 0.7522 | 0.4296 | 0.7522 | 0.8673 |
| No log | 4.7037 | 254 | 0.7353 | 0.3845 | 0.7353 | 0.8575 |
| No log | 4.7407 | 256 | 0.7388 | 0.3514 | 0.7388 | 0.8595 |
| No log | 4.7778 | 258 | 0.7709 | 0.3659 | 0.7709 | 0.8780 |
| No log | 4.8148 | 260 | 0.7823 | 0.3613 | 0.7823 | 0.8845 |
| No log | 4.8519 | 262 | 0.7626 | 0.3703 | 0.7626 | 0.8733 |
| No log | 4.8889 | 264 | 0.7646 | 0.4058 | 0.7646 | 0.8744 |
| No log | 4.9259 | 266 | 0.7517 | 0.4111 | 0.7517 | 0.8670 |
| No log | 4.9630 | 268 | 0.7591 | 0.3921 | 0.7591 | 0.8713 |
| No log | 5.0 | 270 | 0.7818 | 0.4122 | 0.7818 | 0.8842 |
| No log | 5.0370 | 272 | 0.7745 | 0.4303 | 0.7745 | 0.8800 |
| No log | 5.0741 | 274 | 0.7880 | 0.4521 | 0.7880 | 0.8877 |
| No log | 5.1111 | 276 | 0.8725 | 0.3928 | 0.8725 | 0.9341 |
| No log | 5.1481 | 278 | 0.8682 | 0.3761 | 0.8682 | 0.9318 |
| No log | 5.1852 | 280 | 0.7397 | 0.4129 | 0.7397 | 0.8600 |
| No log | 5.2222 | 282 | 0.7005 | 0.3669 | 0.7005 | 0.8370 |
| No log | 5.2593 | 284 | 0.7074 | 0.3585 | 0.7074 | 0.8410 |
| No log | 5.2963 | 286 | 0.7026 | 0.3227 | 0.7026 | 0.8382 |
| No log | 5.3333 | 288 | 0.7122 | 0.3655 | 0.7122 | 0.8439 |
| No log | 5.3704 | 290 | 0.7270 | 0.3693 | 0.7270 | 0.8526 |
| No log | 5.4074 | 292 | 0.7674 | 0.4339 | 0.7674 | 0.8760 |
| No log | 5.4444 | 294 | 0.7428 | 0.3853 | 0.7428 | 0.8618 |
| No log | 5.4815 | 296 | 0.7304 | 0.3568 | 0.7304 | 0.8546 |
| No log | 5.5185 | 298 | 0.7671 | 0.2821 | 0.7671 | 0.8758 |
| No log | 5.5556 | 300 | 0.7282 | 0.3860 | 0.7282 | 0.8534 |
| No log | 5.5926 | 302 | 0.7099 | 0.3877 | 0.7099 | 0.8426 |
| No log | 5.6296 | 304 | 0.7235 | 0.3827 | 0.7235 | 0.8506 |
| No log | 5.6667 | 306 | 0.7305 | 0.4462 | 0.7305 | 0.8547 |
| No log | 5.7037 | 308 | 0.6874 | 0.4306 | 0.6874 | 0.8291 |
| No log | 5.7407 | 310 | 0.7730 | 0.4009 | 0.7730 | 0.8792 |
| No log | 5.7778 | 312 | 0.8971 | 0.3048 | 0.8971 | 0.9472 |
| No log | 5.8148 | 314 | 0.8102 | 0.3699 | 0.8102 | 0.9001 |
| No log | 5.8519 | 316 | 0.6982 | 0.4026 | 0.6982 | 0.8356 |
| No log | 5.8889 | 318 | 0.6854 | 0.3977 | 0.6854 | 0.8279 |
| No log | 5.9259 | 320 | 0.6907 | 0.4542 | 0.6907 | 0.8311 |
| No log | 5.9630 | 322 | 0.7088 | 0.4649 | 0.7088 | 0.8419 |
| No log | 6.0 | 324 | 0.7319 | 0.4862 | 0.7319 | 0.8555 |
| No log | 6.0370 | 326 | 0.7332 | 0.4812 | 0.7332 | 0.8563 |
| No log | 6.0741 | 328 | 0.7223 | 0.4940 | 0.7223 | 0.8499 |
| No log | 6.1111 | 330 | 0.7190 | 0.3858 | 0.7190 | 0.8479 |
| No log | 6.1481 | 332 | 0.7488 | 0.3590 | 0.7488 | 0.8653 |
| No log | 6.1852 | 334 | 0.7076 | 0.2821 | 0.7076 | 0.8412 |
| No log | 6.2222 | 336 | 0.6852 | 0.2982 | 0.6852 | 0.8278 |
| No log | 6.2593 | 338 | 0.7207 | 0.2521 | 0.7207 | 0.8490 |
| No log | 6.2963 | 340 | 0.7623 | 0.2642 | 0.7623 | 0.8731 |
| No log | 6.3333 | 342 | 0.7289 | 0.3507 | 0.7289 | 0.8538 |
| No log | 6.3704 | 344 | 0.6978 | 0.3791 | 0.6978 | 0.8353 |
| No log | 6.4074 | 346 | 0.6939 | 0.3435 | 0.6939 | 0.8330 |
| No log | 6.4444 | 348 | 0.7328 | 0.3696 | 0.7328 | 0.8561 |
| No log | 6.4815 | 350 | 0.7141 | 0.3840 | 0.7141 | 0.8450 |
| No log | 6.5185 | 352 | 0.6893 | 0.3894 | 0.6893 | 0.8302 |
| No log | 6.5556 | 354 | 0.6849 | 0.3443 | 0.6849 | 0.8276 |
| No log | 6.5926 | 356 | 0.7324 | 0.3791 | 0.7324 | 0.8558 |
| No log | 6.6296 | 358 | 0.7436 | 0.3755 | 0.7436 | 0.8623 |
| No log | 6.6667 | 360 | 0.7026 | 0.3604 | 0.7026 | 0.8382 |
| No log | 6.7037 | 362 | 0.7192 | 0.3814 | 0.7192 | 0.8481 |
| No log | 6.7407 | 364 | 0.7593 | 0.3607 | 0.7593 | 0.8714 |
| No log | 6.7778 | 366 | 0.7241 | 0.3776 | 0.7241 | 0.8509 |
| No log | 6.8148 | 368 | 0.7035 | 0.3373 | 0.7035 | 0.8388 |
| No log | 6.8519 | 370 | 0.7332 | 0.3985 | 0.7332 | 0.8563 |
| No log | 6.8889 | 372 | 0.7415 | 0.3983 | 0.7415 | 0.8611 |
| No log | 6.9259 | 374 | 0.7157 | 0.3970 | 0.7157 | 0.8460 |
| No log | 6.9630 | 376 | 0.7331 | 0.3808 | 0.7331 | 0.8562 |
| No log | 7.0 | 378 | 0.7475 | 0.4068 | 0.7475 | 0.8646 |
| No log | 7.0370 | 380 | 0.7260 | 0.3782 | 0.7260 | 0.8521 |
| No log | 7.0741 | 382 | 0.7003 | 0.3363 | 0.7003 | 0.8369 |
| No log | 7.1111 | 384 | 0.7012 | 0.4209 | 0.7012 | 0.8374 |
| No log | 7.1481 | 386 | 0.7126 | 0.4273 | 0.7126 | 0.8442 |
| No log | 7.1852 | 388 | 0.7068 | 0.4146 | 0.7068 | 0.8407 |
| No log | 7.2222 | 390 | 0.6893 | 0.3976 | 0.6893 | 0.8302 |
| No log | 7.2593 | 392 | 0.7194 | 0.3353 | 0.7194 | 0.8482 |
| No log | 7.2963 | 394 | 0.7559 | 0.3323 | 0.7559 | 0.8694 |
| No log | 7.3333 | 396 | 0.7225 | 0.3618 | 0.7225 | 0.8500 |
| No log | 7.3704 | 398 | 0.7170 | 0.3776 | 0.7170 | 0.8468 |
| No log | 7.4074 | 400 | 0.7965 | 0.3963 | 0.7965 | 0.8925 |
| No log | 7.4444 | 402 | 0.9136 | 0.3449 | 0.9136 | 0.9558 |
| No log | 7.4815 | 404 | 0.8872 | 0.3363 | 0.8872 | 0.9419 |
| No log | 7.5185 | 406 | 0.7623 | 0.3967 | 0.7623 | 0.8731 |
| No log | 7.5556 | 408 | 0.7050 | 0.3184 | 0.7050 | 0.8396 |
| No log | 7.5926 | 410 | 0.6974 | 0.3655 | 0.6974 | 0.8351 |
| No log | 7.6296 | 412 | 0.7093 | 0.2859 | 0.7093 | 0.8422 |
| No log | 7.6667 | 414 | 0.6987 | 0.3212 | 0.6987 | 0.8359 |
| No log | 7.7037 | 416 | 0.6966 | 0.3645 | 0.6966 | 0.8346 |
| No log | 7.7407 | 418 | 0.6930 | 0.3961 | 0.6930 | 0.8324 |
| No log | 7.7778 | 420 | 0.6835 | 0.4450 | 0.6835 | 0.8267 |
| No log | 7.8148 | 422 | 0.6783 | 0.4712 | 0.6783 | 0.8236 |
| No log | 7.8519 | 424 | 0.7015 | 0.4629 | 0.7015 | 0.8376 |
| No log | 7.8889 | 426 | 0.6753 | 0.4621 | 0.6753 | 0.8218 |
| No log | 7.9259 | 428 | 0.6774 | 0.3902 | 0.6774 | 0.8230 |
| No log | 7.9630 | 430 | 0.7012 | 0.4128 | 0.7012 | 0.8374 |
| No log | 8.0 | 432 | 0.6767 | 0.4186 | 0.6767 | 0.8226 |
| No log | 8.0370 | 434 | 0.6702 | 0.4079 | 0.6702 | 0.8187 |
| No log | 8.0741 | 436 | 0.6775 | 0.4211 | 0.6775 | 0.8231 |
| No log | 8.1111 | 438 | 0.6938 | 0.4465 | 0.6938 | 0.8330 |
| No log | 8.1481 | 440 | 0.7008 | 0.4403 | 0.7008 | 0.8372 |
| No log | 8.1852 | 442 | 0.6938 | 0.4465 | 0.6938 | 0.8329 |
| No log | 8.2222 | 444 | 0.7169 | 0.4451 | 0.7169 | 0.8467 |
| No log | 8.2593 | 446 | 0.7140 | 0.4504 | 0.7140 | 0.8450 |
| No log | 8.2963 | 448 | 0.7267 | 0.4444 | 0.7267 | 0.8525 |
| No log | 8.3333 | 450 | 0.7111 | 0.4713 | 0.7111 | 0.8433 |
| No log | 8.3704 | 452 | 0.7098 | 0.4960 | 0.7098 | 0.8425 |
| No log | 8.4074 | 454 | 0.7196 | 0.4569 | 0.7196 | 0.8483 |
| No log | 8.4444 | 456 | 0.7759 | 0.4233 | 0.7759 | 0.8808 |
| No log | 8.4815 | 458 | 0.7809 | 0.4237 | 0.7809 | 0.8837 |
| No log | 8.5185 | 460 | 0.7744 | 0.4066 | 0.7744 | 0.8800 |
| No log | 8.5556 | 462 | 0.7148 | 0.3988 | 0.7148 | 0.8455 |
| No log | 8.5926 | 464 | 0.7066 | 0.4181 | 0.7066 | 0.8406 |
| No log | 8.6296 | 466 | 0.7049 | 0.3904 | 0.7049 | 0.8396 |
| No log | 8.6667 | 468 | 0.7050 | 0.4147 | 0.7050 | 0.8397 |
| No log | 8.7037 | 470 | 0.7628 | 0.3665 | 0.7628 | 0.8734 |
| No log | 8.7407 | 472 | 0.8046 | 0.2797 | 0.8046 | 0.8970 |
| No log | 8.7778 | 474 | 0.7751 | 0.3621 | 0.7751 | 0.8804 |
| No log | 8.8148 | 476 | 0.7161 | 0.3629 | 0.7161 | 0.8462 |
| No log | 8.8519 | 478 | 0.7246 | 0.3481 | 0.7246 | 0.8512 |
| No log | 8.8889 | 480 | 0.7615 | 0.4345 | 0.7615 | 0.8727 |
| No log | 8.9259 | 482 | 0.7338 | 0.3665 | 0.7338 | 0.8566 |
| No log | 8.9630 | 484 | 0.7104 | 0.4143 | 0.7104 | 0.8428 |
| No log | 9.0 | 486 | 0.7510 | 0.3212 | 0.7510 | 0.8666 |
| No log | 9.0370 | 488 | 0.7680 | 0.3347 | 0.7680 | 0.8764 |
| No log | 9.0741 | 490 | 0.7204 | 0.3338 | 0.7204 | 0.8488 |
| No log | 9.1111 | 492 | 0.6770 | 0.3475 | 0.6770 | 0.8228 |
| No log | 9.1481 | 494 | 0.6943 | 0.3249 | 0.6943 | 0.8332 |
| No log | 9.1852 | 496 | 0.6981 | 0.3090 | 0.6981 | 0.8356 |
| No log | 9.2222 | 498 | 0.6866 | 0.3215 | 0.6866 | 0.8286 |
| 0.3989 | 9.2593 | 500 | 0.6787 | 0.2617 | 0.6787 | 0.8238 |
| 0.3989 | 9.2963 | 502 | 0.7103 | 0.3806 | 0.7103 | 0.8428 |
| 0.3989 | 9.3333 | 504 | 0.7781 | 0.3991 | 0.7781 | 0.8821 |
| 0.3989 | 9.3704 | 506 | 0.7649 | 0.4175 | 0.7649 | 0.8746 |
| 0.3989 | 9.4074 | 508 | 0.7344 | 0.3725 | 0.7344 | 0.8570 |
| 0.3989 | 9.4444 | 510 | 0.6900 | 0.2849 | 0.6900 | 0.8306 |
| 0.3989 | 9.4815 | 512 | 0.6813 | 0.2647 | 0.6813 | 0.8254 |
| 0.3989 | 9.5185 | 514 | 0.6820 | 0.2647 | 0.6820 | 0.8258 |
| 0.3989 | 9.5556 | 516 | 0.6890 | 0.2912 | 0.6890 | 0.8300 |
| 0.3989 | 9.5926 | 518 | 0.6841 | 0.2912 | 0.6841 | 0.8271 |
| 0.3989 | 9.6296 | 520 | 0.6673 | 0.3036 | 0.6673 | 0.8169 |
| 0.3989 | 9.6667 | 522 | 0.6629 | 0.3024 | 0.6629 | 0.8142 |
| 0.3989 | 9.7037 | 524 | 0.6666 | 0.3090 | 0.6666 | 0.8165 |
| 0.3989 | 9.7407 | 526 | 0.6590 | 0.3307 | 0.6590 | 0.8118 |
| 0.3989 | 9.7778 | 528 | 0.6633 | 0.3598 | 0.6633 | 0.8144 |
| 0.3989 | 9.8148 | 530 | 0.6790 | 0.3652 | 0.6790 | 0.8240 |
| 0.3989 | 9.8519 | 532 | 0.6762 | 0.3866 | 0.6762 | 0.8223 |
| 0.3989 | 9.8889 | 534 | 0.6872 | 0.4164 | 0.6872 | 0.8290 |
| 0.3989 | 9.9259 | 536 | 0.6703 | 0.4307 | 0.6703 | 0.8187 |
| 0.3989 | 9.9630 | 538 | 0.6707 | 0.4018 | 0.6707 | 0.8190 |
| 0.3989 | 10.0 | 540 | 0.6676 | 0.3584 | 0.6676 | 0.8171 |
| 0.3989 | 10.0370 | 542 | 0.6725 | 0.4244 | 0.6725 | 0.8201 |
| 0.3989 | 10.0741 | 544 | 0.7166 | 0.3911 | 0.7166 | 0.8465 |
| 0.3989 | 10.1111 | 546 | 0.6942 | 0.3875 | 0.6942 | 0.8332 |
| 0.3989 | 10.1481 | 548 | 0.6521 | 0.3847 | 0.6521 | 0.8076 |
| 0.3989 | 10.1852 | 550 | 0.6613 | 0.3423 | 0.6613 | 0.8132 |
| 0.3989 | 10.2222 | 552 | 0.6784 | 0.3811 | 0.6784 | 0.8236 |
| 0.3989 | 10.2593 | 554 | 0.6593 | 0.3961 | 0.6593 | 0.8120 |
| 0.3989 | 10.2963 | 556 | 0.6501 | 0.3639 | 0.6501 | 0.8063 |
| 0.3989 | 10.3333 | 558 | 0.6681 | 0.3637 | 0.6681 | 0.8174 |
| 0.3989 | 10.3704 | 560 | 0.7092 | 0.4351 | 0.7092 | 0.8421 |
| 0.3989 | 10.4074 | 562 | 0.6952 | 0.4476 | 0.6952 | 0.8338 |
| 0.3989 | 10.4444 | 564 | 0.6861 | 0.3861 | 0.6861 | 0.8283 |
| 0.3989 | 10.4815 | 566 | 0.7003 | 0.4138 | 0.7003 | 0.8368 |
| 0.3989 | 10.5185 | 568 | 0.6967 | 0.3646 | 0.6967 | 0.8347 |
| 0.3989 | 10.5556 | 570 | 0.6775 | 0.3455 | 0.6775 | 0.8231 |
| 0.3989 | 10.5926 | 572 | 0.6778 | 0.3455 | 0.6778 | 0.8233 |
| 0.3989 | 10.6296 | 574 | 0.6976 | 0.3124 | 0.6976 | 0.8352 |
| 0.3989 | 10.6667 | 576 | 0.7550 | 0.3738 | 0.7550 | 0.8689 |
| 0.3989 | 10.7037 | 578 | 0.7780 | 0.4051 | 0.7780 | 0.8820 |
| 0.3989 | 10.7407 | 580 | 0.7216 | 0.3524 | 0.7216 | 0.8495 |
| 0.3989 | 10.7778 | 582 | 0.6932 | 0.3696 | 0.6932 | 0.8326 |
| 0.3989 | 10.8148 | 584 | 0.6912 | 0.4042 | 0.6912 | 0.8314 |
| 0.3989 | 10.8519 | 586 | 0.6984 | 0.3805 | 0.6984 | 0.8357 |
| 0.3989 | 10.8889 | 588 | 0.7320 | 0.3074 | 0.7320 | 0.8556 |
| 0.3989 | 10.9259 | 590 | 0.7950 | 0.4049 | 0.7950 | 0.8916 |
| 0.3989 | 10.9630 | 592 | 0.8450 | 0.3658 | 0.8450 | 0.9192 |
| 0.3989 | 11.0 | 594 | 0.8332 | 0.3711 | 0.8332 | 0.9128 |
| 0.3989 | 11.0370 | 596 | 0.7693 | 0.3146 | 0.7693 | 0.8771 |
| 0.3989 | 11.0741 | 598 | 0.7161 | 0.2683 | 0.7161 | 0.8462 |
| 0.3989 | 11.1111 | 600 | 0.7131 | 0.2764 | 0.7131 | 0.8445 |
| 0.3989 | 11.1481 | 602 | 0.7296 | 0.2635 | 0.7296 | 0.8542 |
| 0.3989 | 11.1852 | 604 | 0.7695 | 0.2883 | 0.7695 | 0.8772 |
| 0.3989 | 11.2222 | 606 | 0.7810 | 0.3082 | 0.7810 | 0.8837 |
| 0.3989 | 11.2593 | 608 | 0.7489 | 0.3211 | 0.7489 | 0.8654 |
| 0.3989 | 11.2963 | 610 | 0.6990 | 0.3722 | 0.6990 | 0.8361 |
| 0.3989 | 11.3333 | 612 | 0.6908 | 0.3637 | 0.6908 | 0.8312 |
| 0.3989 | 11.3704 | 614 | 0.6900 | 0.4327 | 0.6900 | 0.8307 |
| 0.3989 | 11.4074 | 616 | 0.6778 | 0.3593 | 0.6778 | 0.8233 |
| 0.3989 | 11.4444 | 618 | 0.6911 | 0.3123 | 0.6911 | 0.8313 |
| 0.3989 | 11.4815 | 620 | 0.7727 | 0.3614 | 0.7727 | 0.8790 |
| 0.3989 | 11.5185 | 622 | 0.8277 | 0.3627 | 0.8277 | 0.9098 |
| 0.3989 | 11.5556 | 624 | 0.7816 | 0.3516 | 0.7816 | 0.8841 |
| 0.3989 | 11.5926 | 626 | 0.6957 | 0.3840 | 0.6957 | 0.8341 |
| 0.3989 | 11.6296 | 628 | 0.6688 | 0.2742 | 0.6688 | 0.8178 |
| 0.3989 | 11.6667 | 630 | 0.6807 | 0.3720 | 0.6807 | 0.8250 |
| 0.3989 | 11.7037 | 632 | 0.6776 | 0.3833 | 0.6776 | 0.8232 |
| 0.3989 | 11.7407 | 634 | 0.6928 | 0.3954 | 0.6928 | 0.8324 |
| 0.3989 | 11.7778 | 636 | 0.7114 | 0.3921 | 0.7114 | 0.8434 |
| 0.3989 | 11.8148 | 638 | 0.6843 | 0.4111 | 0.6843 | 0.8272 |
| 0.3989 | 11.8519 | 640 | 0.6685 | 0.4915 | 0.6685 | 0.8176 |
| 0.3989 | 11.8889 | 642 | 0.6649 | 0.4915 | 0.6649 | 0.8154 |
| 0.3989 | 11.9259 | 644 | 0.6607 | 0.4582 | 0.6607 | 0.8128 |
| 0.3989 | 11.9630 | 646 | 0.6603 | 0.4367 | 0.6603 | 0.8126 |
| 0.3989 | 12.0 | 648 | 0.6609 | 0.3799 | 0.6609 | 0.8130 |
| 0.3989 | 12.0370 | 650 | 0.6551 | 0.3786 | 0.6551 | 0.8094 |
| 0.3989 | 12.0741 | 652 | 0.6515 | 0.3552 | 0.6515 | 0.8072 |
| 0.3989 | 12.1111 | 654 | 0.6478 | 0.3962 | 0.6478 | 0.8048 |
| 0.3989 | 12.1481 | 656 | 0.6561 | 0.3962 | 0.6561 | 0.8100 |
| 0.3989 | 12.1852 | 658 | 0.6658 | 0.3795 | 0.6658 | 0.8159 |
| 0.3989 | 12.2222 | 660 | 0.6709 | 0.3794 | 0.6709 | 0.8191 |
| 0.3989 | 12.2593 | 662 | 0.7029 | 0.4209 | 0.7029 | 0.8384 |
| 0.3989 | 12.2963 | 664 | 0.7678 | 0.4051 | 0.7678 | 0.8763 |
| 0.3989 | 12.3333 | 666 | 0.7786 | 0.4165 | 0.7786 | 0.8824 |
| 0.3989 | 12.3704 | 668 | 0.7494 | 0.4387 | 0.7494 | 0.8657 |
| 0.3989 | 12.4074 | 670 | 0.6842 | 0.3376 | 0.6842 | 0.8272 |
| 0.3989 | 12.4444 | 672 | 0.6557 | 0.4136 | 0.6557 | 0.8098 |
| 0.3989 | 12.4815 | 674 | 0.6543 | 0.4013 | 0.6543 | 0.8089 |
| 0.3989 | 12.5185 | 676 | 0.6561 | 0.4185 | 0.6561 | 0.8100 |
| 0.3989 | 12.5556 | 678 | 0.6593 | 0.4337 | 0.6593 | 0.8120 |
| 0.3989 | 12.5926 | 680 | 0.6632 | 0.3989 | 0.6632 | 0.8144 |
| 0.3989 | 12.6296 | 682 | 0.6686 | 0.3850 | 0.6686 | 0.8177 |
| 0.3989 | 12.6667 | 684 | 0.6555 | 0.3717 | 0.6555 | 0.8096 |
| 0.3989 | 12.7037 | 686 | 0.6537 | 0.3961 | 0.6537 | 0.8085 |
| 0.3989 | 12.7407 | 688 | 0.6672 | 0.3931 | 0.6672 | 0.8168 |
| 0.3989 | 12.7778 | 690 | 0.6930 | 0.4439 | 0.6930 | 0.8325 |
| 0.3989 | 12.8148 | 692 | 0.7478 | 0.4464 | 0.7478 | 0.8647 |
| 0.3989 | 12.8519 | 694 | 0.7656 | 0.4464 | 0.7656 | 0.8750 |
| 0.3989 | 12.8889 | 696 | 0.7161 | 0.4330 | 0.7161 | 0.8462 |
| 0.3989 | 12.9259 | 698 | 0.6732 | 0.4321 | 0.6732 | 0.8205 |
| 0.3989 | 12.9630 | 700 | 0.6765 | 0.3894 | 0.6765 | 0.8225 |
| 0.3989 | 13.0 | 702 | 0.6678 | 0.3949 | 0.6678 | 0.8172 |
| 0.3989 | 13.0370 | 704 | 0.6724 | 0.4450 | 0.6724 | 0.8200 |
| 0.3989 | 13.0741 | 706 | 0.6971 | 0.4169 | 0.6971 | 0.8349 |
| 0.3989 | 13.1111 | 708 | 0.6941 | 0.4017 | 0.6941 | 0.8331 |
| 0.3989 | 13.1481 | 710 | 0.6841 | 0.4369 | 0.6841 | 0.8271 |
| 0.3989 | 13.1852 | 712 | 0.7099 | 0.4153 | 0.7099 | 0.8426 |
| 0.3989 | 13.2222 | 714 | 0.7661 | 0.4165 | 0.7661 | 0.8753 |
| 0.3989 | 13.2593 | 716 | 0.7600 | 0.4194 | 0.7600 | 0.8718 |
| 0.3989 | 13.2963 | 718 | 0.7098 | 0.4773 | 0.7098 | 0.8425 |
| 0.3989 | 13.3333 | 720 | 0.6987 | 0.4060 | 0.6987 | 0.8359 |
| 0.3989 | 13.3704 | 722 | 0.7065 | 0.3861 | 0.7065 | 0.8405 |
| 0.3989 | 13.4074 | 724 | 0.6807 | 0.3993 | 0.6807 | 0.8250 |
| 0.3989 | 13.4444 | 726 | 0.6681 | 0.3951 | 0.6681 | 0.8174 |
| 0.3989 | 13.4815 | 728 | 0.6905 | 0.4164 | 0.6905 | 0.8309 |
| 0.3989 | 13.5185 | 730 | 0.6999 | 0.4100 | 0.6999 | 0.8366 |
| 0.3989 | 13.5556 | 732 | 0.6771 | 0.3959 | 0.6771 | 0.8228 |
| 0.3989 | 13.5926 | 734 | 0.6639 | 0.3849 | 0.6639 | 0.8148 |
| 0.3989 | 13.6296 | 736 | 0.6664 | 0.4125 | 0.6664 | 0.8163 |
| 0.3989 | 13.6667 | 738 | 0.6685 | 0.4125 | 0.6685 | 0.8176 |
| 0.3989 | 13.7037 | 740 | 0.6685 | 0.4514 | 0.6685 | 0.8176 |
| 0.3989 | 13.7407 | 742 | 0.6763 | 0.3486 | 0.6763 | 0.8224 |
| 0.3989 | 13.7778 | 744 | 0.6774 | 0.3447 | 0.6774 | 0.8230 |
| 0.3989 | 13.8148 | 746 | 0.6772 | 0.4196 | 0.6772 | 0.8229 |
| 0.3989 | 13.8519 | 748 | 0.7201 | 0.3721 | 0.7201 | 0.8486 |
| 0.3989 | 13.8889 | 750 | 0.7478 | 0.3967 | 0.7478 | 0.8647 |
| 0.3989 | 13.9259 | 752 | 0.7129 | 0.3842 | 0.7129 | 0.8443 |
| 0.3989 | 13.9630 | 754 | 0.6736 | 0.3720 | 0.6736 | 0.8207 |
| 0.3989 | 14.0 | 756 | 0.6597 | 0.3260 | 0.6597 | 0.8122 |
| 0.3989 | 14.0370 | 758 | 0.6661 | 0.3535 | 0.6661 | 0.8161 |
| 0.3989 | 14.0741 | 760 | 0.6634 | 0.3261 | 0.6634 | 0.8145 |
| 0.3989 | 14.1111 | 762 | 0.6619 | 0.3932 | 0.6619 | 0.8135 |
| 0.3989 | 14.1481 | 764 | 0.7034 | 0.3979 | 0.7034 | 0.8387 |
| 0.3989 | 14.1852 | 766 | 0.7503 | 0.4186 | 0.7503 | 0.8662 |
| 0.3989 | 14.2222 | 768 | 0.7448 | 0.4291 | 0.7448 | 0.8630 |
| 0.3989 | 14.2593 | 770 | 0.7038 | 0.3582 | 0.7038 | 0.8390 |
| 0.3989 | 14.2963 | 772 | 0.6633 | 0.3481 | 0.6633 | 0.8145 |
| 0.3989 | 14.3333 | 774 | 0.6669 | 0.3535 | 0.6669 | 0.8166 |
| 0.3989 | 14.3704 | 776 | 0.6649 | 0.3427 | 0.6649 | 0.8154 |
| 0.3989 | 14.4074 | 778 | 0.6675 | 0.3868 | 0.6675 | 0.8170 |
| 0.3989 | 14.4444 | 780 | 0.6906 | 0.3825 | 0.6906 | 0.8310 |
| 0.3989 | 14.4815 | 782 | 0.6904 | 0.3923 | 0.6904 | 0.8309 |
| 0.3989 | 14.5185 | 784 | 0.6962 | 0.4358 | 0.6962 | 0.8344 |
| 0.3989 | 14.5556 | 786 | 0.6914 | 0.4262 | 0.6914 | 0.8315 |
| 0.3989 | 14.5926 | 788 | 0.6870 | 0.4385 | 0.6870 | 0.8288 |
| 0.3989 | 14.6296 | 790 | 0.6940 | 0.4323 | 0.6940 | 0.8331 |
| 0.3989 | 14.6667 | 792 | 0.6840 | 0.4330 | 0.6840 | 0.8270 |
| 0.3989 | 14.7037 | 794 | 0.6704 | 0.3710 | 0.6704 | 0.8188 |
| 0.3989 | 14.7407 | 796 | 0.6604 | 0.4066 | 0.6604 | 0.8127 |
| 0.3989 | 14.7778 | 798 | 0.6668 | 0.3904 | 0.6668 | 0.8166 |
| 0.3989 | 14.8148 | 800 | 0.6755 | 0.4607 | 0.6755 | 0.8219 |
| 0.3989 | 14.8519 | 802 | 0.6829 | 0.4445 | 0.6829 | 0.8264 |
| 0.3989 | 14.8889 | 804 | 0.6616 | 0.4246 | 0.6616 | 0.8134 |
| 0.3989 | 14.9259 | 806 | 0.6458 | 0.4192 | 0.6458 | 0.8036 |
| 0.3989 | 14.9630 | 808 | 0.6336 | 0.4225 | 0.6336 | 0.7960 |
| 0.3989 | 15.0 | 810 | 0.6256 | 0.4268 | 0.6256 | 0.7909 |
| 0.3989 | 15.0370 | 812 | 0.6270 | 0.4053 | 0.6270 | 0.7919 |
| 0.3989 | 15.0741 | 814 | 0.6710 | 0.3810 | 0.6710 | 0.8191 |
| 0.3989 | 15.1111 | 816 | 0.6770 | 0.3746 | 0.6770 | 0.8228 |
| 0.3989 | 15.1481 | 818 | 0.6395 | 0.4224 | 0.6395 | 0.7997 |
| 0.3989 | 15.1852 | 820 | 0.6250 | 0.4482 | 0.6250 | 0.7905 |
| 0.3989 | 15.2222 | 822 | 0.6428 | 0.4607 | 0.6428 | 0.8017 |
| 0.3989 | 15.2593 | 824 | 0.6432 | 0.4553 | 0.6432 | 0.8020 |
| 0.3989 | 15.2963 | 826 | 0.6369 | 0.4203 | 0.6369 | 0.7981 |
| 0.3989 | 15.3333 | 828 | 0.6731 | 0.4489 | 0.6731 | 0.8205 |
| 0.3989 | 15.3704 | 830 | 0.7253 | 0.4014 | 0.7253 | 0.8517 |
| 0.3989 | 15.4074 | 832 | 0.7206 | 0.3982 | 0.7206 | 0.8489 |
| 0.3989 | 15.4444 | 834 | 0.6697 | 0.4266 | 0.6697 | 0.8183 |
| 0.3989 | 15.4815 | 836 | 0.6468 | 0.4696 | 0.6468 | 0.8042 |
| 0.3989 | 15.5185 | 838 | 0.6553 | 0.4499 | 0.6553 | 0.8095 |
| 0.3989 | 15.5556 | 840 | 0.6369 | 0.4286 | 0.6369 | 0.7980 |
| 0.3989 | 15.5926 | 842 | 0.6288 | 0.4072 | 0.6288 | 0.7930 |
| 0.3989 | 15.6296 | 844 | 0.6593 | 0.4202 | 0.6593 | 0.8120 |
| 0.3989 | 15.6667 | 846 | 0.7051 | 0.4073 | 0.7051 | 0.8397 |
| 0.3989 | 15.7037 | 848 | 0.7109 | 0.3975 | 0.7109 | 0.8432 |
| 0.3989 | 15.7407 | 850 | 0.6783 | 0.4004 | 0.6783 | 0.8236 |
| 0.3989 | 15.7778 | 852 | 0.6444 | 0.4189 | 0.6444 | 0.8028 |
| 0.3989 | 15.8148 | 854 | 0.6317 | 0.4308 | 0.6317 | 0.7948 |
| 0.3989 | 15.8519 | 856 | 0.6378 | 0.4386 | 0.6378 | 0.7986 |
| 0.3989 | 15.8889 | 858 | 0.6511 | 0.4498 | 0.6511 | 0.8069 |
| 0.3989 | 15.9259 | 860 | 0.6488 | 0.4529 | 0.6488 | 0.8055 |
| 0.3989 | 15.9630 | 862 | 0.6421 | 0.4538 | 0.6421 | 0.8013 |
| 0.3989 | 16.0 | 864 | 0.6372 | 0.4544 | 0.6372 | 0.7982 |
| 0.3989 | 16.0370 | 866 | 0.6646 | 0.4705 | 0.6646 | 0.8152 |
| 0.3989 | 16.0741 | 868 | 0.6611 | 0.4567 | 0.6611 | 0.8131 |
| 0.3989 | 16.1111 | 870 | 0.6353 | 0.4394 | 0.6353 | 0.7970 |
| 0.3989 | 16.1481 | 872 | 0.6305 | 0.4618 | 0.6305 | 0.7941 |
| 0.3989 | 16.1852 | 874 | 0.6310 | 0.4587 | 0.6310 | 0.7943 |
| 0.3989 | 16.2222 | 876 | 0.6236 | 0.4335 | 0.6236 | 0.7897 |
| 0.3989 | 16.2593 | 878 | 0.6222 | 0.4613 | 0.6222 | 0.7888 |
| 0.3989 | 16.2963 | 880 | 0.6210 | 0.4623 | 0.6210 | 0.7880 |
| 0.3989 | 16.3333 | 882 | 0.6215 | 0.4505 | 0.6215 | 0.7884 |
| 0.3989 | 16.3704 | 884 | 0.6234 | 0.4530 | 0.6234 | 0.7896 |
| 0.3989 | 16.4074 | 886 | 0.6229 | 0.4273 | 0.6229 | 0.7892 |
| 0.3989 | 16.4444 | 888 | 0.6284 | 0.4365 | 0.6284 | 0.7927 |
| 0.3989 | 16.4815 | 890 | 0.6267 | 0.4345 | 0.6267 | 0.7917 |
| 0.3989 | 16.5185 | 892 | 0.6296 | 0.4368 | 0.6296 | 0.7935 |
| 0.3989 | 16.5556 | 894 | 0.6307 | 0.4368 | 0.6307 | 0.7942 |
| 0.3989 | 16.5926 | 896 | 0.6319 | 0.4328 | 0.6319 | 0.7949 |
| 0.3989 | 16.6296 | 898 | 0.6265 | 0.4433 | 0.6265 | 0.7915 |
| 0.3989 | 16.6667 | 900 | 0.6221 | 0.4341 | 0.6221 | 0.7887 |
| 0.3989 | 16.7037 | 902 | 0.6184 | 0.4380 | 0.6184 | 0.7864 |
| 0.3989 | 16.7407 | 904 | 0.6211 | 0.4331 | 0.6211 | 0.7881 |
| 0.3989 | 16.7778 | 906 | 0.6329 | 0.3923 | 0.6329 | 0.7956 |
| 0.3989 | 16.8148 | 908 | 0.6370 | 0.3923 | 0.6370 | 0.7981 |
| 0.3989 | 16.8519 | 910 | 0.6440 | 0.4124 | 0.6440 | 0.8025 |
| 0.3989 | 16.8889 | 912 | 0.6364 | 0.4415 | 0.6364 | 0.7978 |
| 0.3989 | 16.9259 | 914 | 0.6324 | 0.4827 | 0.6324 | 0.7952 |
| 0.3989 | 16.9630 | 916 | 0.6367 | 0.4712 | 0.6367 | 0.7980 |
| 0.3989 | 17.0 | 918 | 0.6374 | 0.4868 | 0.6374 | 0.7984 |
| 0.3989 | 17.0370 | 920 | 0.6298 | 0.4433 | 0.6298 | 0.7936 |
| 0.3989 | 17.0741 | 922 | 0.6262 | 0.4212 | 0.6262 | 0.7913 |
| 0.3989 | 17.1111 | 924 | 0.6226 | 0.4212 | 0.6226 | 0.7891 |
| 0.3989 | 17.1481 | 926 | 0.6201 | 0.4569 | 0.6201 | 0.7875 |
| 0.3989 | 17.1852 | 928 | 0.6240 | 0.4614 | 0.6240 | 0.7899 |
| 0.3989 | 17.2222 | 930 | 0.6161 | 0.4693 | 0.6161 | 0.7849 |
| 0.3989 | 17.2593 | 932 | 0.6410 | 0.4292 | 0.6410 | 0.8006 |
| 0.3989 | 17.2963 | 934 | 0.6465 | 0.4488 | 0.6465 | 0.8041 |
| 0.3989 | 17.3333 | 936 | 0.6256 | 0.4163 | 0.6256 | 0.7909 |
| 0.3989 | 17.3704 | 938 | 0.6211 | 0.4322 | 0.6211 | 0.7881 |
| 0.3989 | 17.4074 | 940 | 0.6531 | 0.4476 | 0.6531 | 0.8081 |
| 0.3989 | 17.4444 | 942 | 0.6888 | 0.4726 | 0.6888 | 0.8299 |
| 0.3989 | 17.4815 | 944 | 0.6918 | 0.4726 | 0.6918 | 0.8318 |
| 0.3989 | 17.5185 | 946 | 0.6702 | 0.4946 | 0.6702 | 0.8186 |
| 0.3989 | 17.5556 | 948 | 0.6504 | 0.4861 | 0.6504 | 0.8065 |
| 0.3989 | 17.5926 | 950 | 0.6541 | 0.4818 | 0.6541 | 0.8088 |
| 0.3989 | 17.6296 | 952 | 0.6489 | 0.4783 | 0.6489 | 0.8056 |
| 0.3989 | 17.6667 | 954 | 0.6567 | 0.4948 | 0.6567 | 0.8104 |
| 0.3989 | 17.7037 | 956 | 0.6684 | 0.5438 | 0.6684 | 0.8175 |
| 0.3989 | 17.7407 | 958 | 0.6579 | 0.4967 | 0.6579 | 0.8111 |
| 0.3989 | 17.7778 | 960 | 0.6376 | 0.4900 | 0.6376 | 0.7985 |
| 0.3989 | 17.8148 | 962 | 0.6296 | 0.4788 | 0.6296 | 0.7935 |
| 0.3989 | 17.8519 | 964 | 0.6319 | 0.4193 | 0.6319 | 0.7949 |
| 0.3989 | 17.8889 | 966 | 0.6270 | 0.4440 | 0.6270 | 0.7918 |
| 0.3989 | 17.9259 | 968 | 0.6294 | 0.4592 | 0.6294 | 0.7934 |
| 0.3989 | 17.9630 | 970 | 0.6303 | 0.4614 | 0.6303 | 0.7939 |
| 0.3989 | 18.0 | 972 | 0.6321 | 0.4614 | 0.6321 | 0.7951 |
| 0.3989 | 18.0370 | 974 | 0.6270 | 0.4474 | 0.6270 | 0.7918 |
| 0.3989 | 18.0741 | 976 | 0.6202 | 0.4815 | 0.6202 | 0.7875 |
| 0.3989 | 18.1111 | 978 | 0.6292 | 0.5075 | 0.6292 | 0.7932 |
| 0.3989 | 18.1481 | 980 | 0.6572 | 0.4963 | 0.6572 | 0.8107 |
| 0.3989 | 18.1852 | 982 | 0.6353 | 0.4780 | 0.6353 | 0.7971 |
| 0.3989 | 18.2222 | 984 | 0.6217 | 0.4474 | 0.6217 | 0.7885 |
| 0.3989 | 18.2593 | 986 | 0.6191 | 0.4556 | 0.6191 | 0.7868 |
| 0.3989 | 18.2963 | 988 | 0.6139 | 0.4196 | 0.6139 | 0.7835 |
| 0.3989 | 18.3333 | 990 | 0.6329 | 0.4905 | 0.6329 | 0.7956 |
| 0.3989 | 18.3704 | 992 | 0.6486 | 0.5086 | 0.6486 | 0.8053 |
| 0.3989 | 18.4074 | 994 | 0.6243 | 0.4934 | 0.6243 | 0.7901 |
| 0.3989 | 18.4444 | 996 | 0.6024 | 0.4497 | 0.6024 | 0.7761 |
| 0.3989 | 18.4815 | 998 | 0.6024 | 0.4433 | 0.6024 | 0.7761 |
| 0.0722 | 18.5185 | 1000 | 0.6044 | 0.4539 | 0.6044 | 0.7774 |
| 0.0722 | 18.5556 | 1002 | 0.6345 | 0.4999 | 0.6345 | 0.7966 |
| 0.0722 | 18.5926 | 1004 | 0.6415 | 0.4876 | 0.6415 | 0.8009 |
| 0.0722 | 18.6296 | 1006 | 0.6148 | 0.4063 | 0.6148 | 0.7841 |
| 0.0722 | 18.6667 | 1008 | 0.6138 | 0.4063 | 0.6138 | 0.7835 |
| 0.0722 | 18.7037 | 1010 | 0.6392 | 0.4840 | 0.6392 | 0.7995 |
| 0.0722 | 18.7407 | 1012 | 0.6662 | 0.4804 | 0.6662 | 0.8162 |
| 0.0722 | 18.7778 | 1014 | 0.6590 | 0.4739 | 0.6590 | 0.8118 |
| 0.0722 | 18.8148 | 1016 | 0.6394 | 0.4921 | 0.6394 | 0.7996 |
| 0.0722 | 18.8519 | 1018 | 0.6215 | 0.4457 | 0.6215 | 0.7884 |
| 0.0722 | 18.8889 | 1020 | 0.6133 | 0.4446 | 0.6133 | 0.7832 |
| 0.0722 | 18.9259 | 1022 | 0.6121 | 0.4354 | 0.6121 | 0.7824 |
| 0.0722 | 18.9630 | 1024 | 0.6167 | 0.4623 | 0.6167 | 0.7853 |
| 0.0722 | 19.0 | 1026 | 0.6212 | 0.4352 | 0.6212 | 0.7882 |
| 0.0722 | 19.0370 | 1028 | 0.6566 | 0.3747 | 0.6566 | 0.8103 |
| 0.0722 | 19.0741 | 1030 | 0.7234 | 0.4190 | 0.7234 | 0.8505 |
| 0.0722 | 19.1111 | 1032 | 0.7400 | 0.4149 | 0.7400 | 0.8602 |
| 0.0722 | 19.1481 | 1034 | 0.6956 | 0.4065 | 0.6956 | 0.8340 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Kuongan/CS221-mdeberta-v3-base-randomdrop
|
Kuongan
| 2025-01-15T14:56:47Z
| 24
| 0
|
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T14:08:24Z
|
---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: CS221-mdeberta-v3-base-randomdrop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS221-mdeberta-v3-base-randomdrop
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5440
- F1: 0.6741
- Roc Auc: 0.7756
- Accuracy: 0.4071
## Model description
More information needed
## Intended uses & limitations
More information needed
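A hedged usage sketch (not part of the original card): the checkpoint id below comes from this card, but the multi-label (sigmoid) head, the 0.5 threshold, and the example sentence are assumptions, since the card does not document the label set or task formulation.
```python
# Hypothetical inference sketch; the multi-label head and 0.5 threshold are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "Kuongan/CS221-mdeberta-v3-base-randomdrop"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "I can't believe how well this turned out!"  # example input is an assumption
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# The F1 / ROC AUC / accuracy metrics reported above suggest a multi-label setup,
# so score each label independently with a sigmoid.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```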
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
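For reference, a minimal `TrainingArguments` sketch mirroring the values listed above; the output directory and anything not stated in the card (evaluation strategy, weight decay, and so on) are assumptions.
```python
# Hedged sketch of the hyperparameters listed above; output_dir and any setting
# not named in the card are assumptions, not the authors' actual configuration.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="CS221-mdeberta-v3-base-randomdrop",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=20,
)
```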
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.5661 | 1.0 | 99 | 0.5434 | 0.0 | 0.5 | 0.1425 |
| 0.5054 | 2.0 | 198 | 0.4744 | 0.4852 | 0.6560 | 0.2621 |
| 0.4409 | 3.0 | 297 | 0.4436 | 0.5766 | 0.7104 | 0.3308 |
| 0.3975 | 4.0 | 396 | 0.4284 | 0.6071 | 0.7316 | 0.3588 |
| 0.2827 | 5.0 | 495 | 0.4228 | 0.6095 | 0.7296 | 0.3562 |
| 0.2831 | 6.0 | 594 | 0.4540 | 0.6467 | 0.7642 | 0.3715 |
| 0.1846 | 7.0 | 693 | 0.4519 | 0.6325 | 0.7459 | 0.3893 |
| 0.1752 | 8.0 | 792 | 0.4538 | 0.6426 | 0.7535 | 0.3740 |
| 0.1547 | 9.0 | 891 | 0.4799 | 0.6541 | 0.7642 | 0.3791 |
| 0.1046 | 10.0 | 990 | 0.4793 | 0.6667 | 0.7687 | 0.4020 |
| 0.1052 | 11.0 | 1089 | 0.5001 | 0.6593 | 0.7658 | 0.4046 |
| 0.0843 | 12.0 | 1188 | 0.5069 | 0.6647 | 0.7705 | 0.3893 |
| 0.0653 | 13.0 | 1287 | 0.5275 | 0.6681 | 0.7669 | 0.4097 |
| 0.0575 | 14.0 | 1386 | 0.5455 | 0.6617 | 0.7632 | 0.3944 |
| 0.0503 | 15.0 | 1485 | 0.5440 | 0.6741 | 0.7756 | 0.4071 |
| 0.0499 | 16.0 | 1584 | 0.5555 | 0.6653 | 0.7660 | 0.4097 |
| 0.0431 | 17.0 | 1683 | 0.5557 | 0.6660 | 0.7675 | 0.4020 |
| 0.0422 | 18.0 | 1782 | 0.5599 | 0.6632 | 0.7664 | 0.3944 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
rsicproject/mnasnet-GPT-UCM-captioning
|
rsicproject
| 2025-01-15T14:55:57Z
| 39
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-01-15T14:54:18Z
|
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mnasnet-GPT-UCM-captioning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mnasnet-GPT-UCM-captioning
This model was fine-tuned from an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1599
- Rouge: 0.7565
- Bleu1: 0.8056
- Bleu2: 0.7427
- Bleu3: 0.6876
- Bleu4: 0.6411
- Meteor: 0.7599
## Model description
More information needed
## Intended uses & limitations
More information needed
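A hedged inference sketch (not part of the original card): it assumes the repository bundles a compatible image processor and tokenizer alongside the vision-encoder-decoder weights, which the card does not confirm; the image path and generation length are placeholders.
```python
# Hypothetical captioning sketch; the repo contents (image processor, tokenizer)
# and the input image are assumptions.
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

repo_id = "rsicproject/mnasnet-GPT-UCM-captioning"
model = VisionEncoderDecoderModel.from_pretrained(repo_id)
image_processor = AutoImageProcessor.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

image = Image.open("aerial_scene.jpg").convert("RGB")  # placeholder image path
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_new_tokens=32)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```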
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1024
- num_epochs: 128
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge | Bleu1 | Bleu2 | Bleu3 | Bleu4 | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|:------:|:------:|
| No log | 1.0 | 132 | 1.9151 | 0.4839 | 0.5481 | 0.4544 | 0.3734 | 0.3128 | 0.4642 |
| No log | 2.0 | 264 | 1.5925 | 0.5534 | 0.5757 | 0.4913 | 0.4205 | 0.3644 | 0.5538 |
| No log | 3.0 | 396 | 1.3495 | 0.5927 | 0.6183 | 0.5338 | 0.4640 | 0.4116 | 0.5908 |
| No log | 4.0 | 528 | 1.2230 | 0.5476 | 0.6251 | 0.5351 | 0.4665 | 0.4141 | 0.5342 |
| No log | 5.0 | 660 | 1.0534 | 0.6380 | 0.6833 | 0.6038 | 0.5417 | 0.4909 | 0.6387 |
| No log | 6.0 | 792 | 1.1554 | 0.6127 | 0.6740 | 0.5783 | 0.5065 | 0.4487 | 0.5980 |
| No log | 7.0 | 924 | 1.0466 | 0.5886 | 0.6646 | 0.5867 | 0.5250 | 0.4784 | 0.5726 |
| 0.8231 | 8.0 | 1056 | 1.0427 | 0.6528 | 0.7305 | 0.6468 | 0.5849 | 0.5415 | 0.6299 |
| 0.8231 | 9.0 | 1188 | 0.9752 | 0.6546 | 0.7104 | 0.6276 | 0.5647 | 0.5149 | 0.6534 |
| 0.8231 | 10.0 | 1320 | 0.9692 | 0.6850 | 0.7409 | 0.6665 | 0.6060 | 0.5589 | 0.7002 |
| 0.8231 | 11.0 | 1452 | 1.0092 | 0.6708 | 0.7441 | 0.6592 | 0.5922 | 0.5382 | 0.6744 |
| 0.8231 | 12.0 | 1584 | 0.9588 | 0.7100 | 0.7872 | 0.7174 | 0.6573 | 0.6057 | 0.7064 |
| 0.8231 | 13.0 | 1716 | 1.0117 | 0.6975 | 0.7612 | 0.6820 | 0.6170 | 0.5607 | 0.6904 |
| 0.8231 | 14.0 | 1848 | 0.9922 | 0.7642 | 0.8057 | 0.7403 | 0.6830 | 0.6321 | 0.7646 |
| 0.8231 | 15.0 | 1980 | 1.0432 | 0.7051 | 0.7689 | 0.6954 | 0.6346 | 0.5875 | 0.6945 |
| 0.2276 | 16.0 | 2112 | 1.0168 | 0.7263 | 0.7839 | 0.7197 | 0.6627 | 0.6130 | 0.7216 |
| 0.2276 | 17.0 | 2244 | 0.9891 | 0.7684 | 0.8315 | 0.7710 | 0.7150 | 0.6689 | 0.7671 |
| 0.2276 | 18.0 | 2376 | 0.9994 | 0.7563 | 0.8113 | 0.7474 | 0.6967 | 0.6514 | 0.7514 |
| 0.2276 | 19.0 | 2508 | 1.0163 | 0.7709 | 0.8105 | 0.7480 | 0.6924 | 0.6433 | 0.7779 |
| 0.2276 | 20.0 | 2640 | 1.0252 | 0.7643 | 0.8030 | 0.7419 | 0.6868 | 0.6379 | 0.7782 |
| 0.2276 | 21.0 | 2772 | 1.0315 | 0.7764 | 0.8350 | 0.7695 | 0.7125 | 0.6612 | 0.7814 |
| 0.2276 | 22.0 | 2904 | 1.0395 | 0.7695 | 0.8167 | 0.7632 | 0.7180 | 0.6781 | 0.7768 |
| 0.2276 | 23.0 | 3036 | 1.0521 | 0.7353 | 0.7927 | 0.7278 | 0.6731 | 0.6262 | 0.7424 |
| 0.1659 | 24.0 | 3168 | 1.0567 | 0.7760 | 0.8122 | 0.7512 | 0.7009 | 0.6583 | 0.7793 |
| 0.1659 | 25.0 | 3300 | 1.0346 | 0.7586 | 0.8060 | 0.7456 | 0.6933 | 0.6520 | 0.7633 |
| 0.1659 | 26.0 | 3432 | 1.0450 | 0.7731 | 0.8191 | 0.7544 | 0.7031 | 0.6597 | 0.7738 |
| 0.1659 | 27.0 | 3564 | 1.0612 | 0.7768 | 0.8142 | 0.7523 | 0.6951 | 0.6423 | 0.7883 |
| 0.1659 | 28.0 | 3696 | 1.0530 | 0.7643 | 0.8098 | 0.7543 | 0.7066 | 0.6671 | 0.7639 |
| 0.1659 | 29.0 | 3828 | 1.0286 | 0.7877 | 0.8337 | 0.7801 | 0.7344 | 0.6944 | 0.7977 |
| 0.1659 | 30.0 | 3960 | 1.0435 | 0.7741 | 0.8154 | 0.7506 | 0.6967 | 0.6483 | 0.7790 |
| 0.1659 | 31.0 | 4092 | 1.0730 | 0.7393 | 0.7980 | 0.7362 | 0.6846 | 0.6403 | 0.7382 |
| 0.1503 | 32.0 | 4224 | 1.0879 | 0.7519 | 0.7933 | 0.7289 | 0.6759 | 0.6291 | 0.7550 |
| 0.1503 | 33.0 | 4356 | 1.0987 | 0.7719 | 0.8107 | 0.7470 | 0.6948 | 0.6498 | 0.7759 |
| 0.1503 | 34.0 | 4488 | 1.0784 | 0.7725 | 0.8143 | 0.7475 | 0.6920 | 0.6443 | 0.7743 |
| 0.1503 | 35.0 | 4620 | 1.0775 | 0.7863 | 0.8176 | 0.7646 | 0.7201 | 0.6807 | 0.7955 |
| 0.1503 | 36.0 | 4752 | 1.1081 | 0.7746 | 0.8070 | 0.7499 | 0.7004 | 0.6576 | 0.7832 |
| 0.1503 | 37.0 | 4884 | 1.1177 | 0.7682 | 0.8246 | 0.7658 | 0.7128 | 0.6665 | 0.7746 |
| 0.1503 | 38.0 | 5016 | 1.1341 | 0.7608 | 0.8068 | 0.7442 | 0.6927 | 0.6477 | 0.7603 |
| 0.1427 | 39.0 | 5148 | 1.1599 | 0.7565 | 0.8056 | 0.7427 | 0.6876 | 0.6411 | 0.7599 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.20.3
|
SenhorDasMoscas/acho-classification-15-01-2025
|
SenhorDasMoscas
| 2025-01-15T14:54:48Z
| 38
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T14:53:48Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
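As an illustrative sketch only (the card itself provides no usage details), the checkpoint can presumably be loaded with the standard text-classification pipeline; the example input is an assumption and the label set is undocumented.
```python
# Hypothetical sketch; labels and expected input language are undocumented in this card.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="SenhorDasMoscas/acho-classification-15-01-2025",
)
print(classifier("quero comprar um tenis de corrida"))  # example input is an assumption
```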
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kk-aivio/4b965d6b-b099-462c-a0fa-8d0e5ec1fc3a
|
kk-aivio
| 2025-01-15T14:53:21Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-mistral",
"base_model:adapter:echarlaix/tiny-random-mistral",
"license:apache-2.0",
"region:us"
] | null | 2025-01-15T14:51:47Z
|
---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-mistral
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4b965d6b-b099-462c-a0fa-8d0e5ec1fc3a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-mistral
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 960a93b2c47c666b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/960a93b2c47c666b_train_data.json
type:
field_instruction: prompt
field_output: model_completion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/4b965d6b-b099-462c-a0fa-8d0e5ec1fc3a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/960a93b2c47c666b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b43660c7-01f6-4d83-9a12-686ad5ff81c2
wandb_project: birthday-sn56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: b43660c7-01f6-4d83-9a12-686ad5ff81c2
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4b965d6b-b099-462c-a0fa-8d0e5ec1fc3a
This model is a fine-tuned version of [echarlaix/tiny-random-mistral](https://huggingface.co/echarlaix/tiny-random-mistral) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
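A hedged loading sketch (not part of the original card): the LoRA adapter is applied on top of the base model named in the config above; note the reported evaluation loss is NaN, so this is purely illustrative.
```python
# Hypothetical sketch; given the NaN loss reported above, this is illustrative only.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "echarlaix/tiny-random-mistral"
adapter_id = "kk-aivio/4b965d6b-b099-462c-a0fa-8d0e5ec1fc3a"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```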
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0002 | 3 | nan |
| 0.0 | 0.0004 | 6 | nan |
| 0.0 | 0.0006 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|