| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| great0001/07033457-a0a9-421e-b394-e7c68192121a | great0001 | 2025-01-15T13:05:32Z | 17 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct", "base_model:adapter:VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct", "license:llama3.1", "region:us"] | null | 2025-01-15T12:59:03Z |
---
library_name: peft
license: llama3.1
base_model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 07033457-a0a9-421e-b394-e7c68192121a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 076b950e817955d5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/076b950e817955d5_train_data.json
type:
field_instruction: document_description
field_output: generated_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/07033457-a0a9-421e-b394-e7c68192121a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/076b950e817955d5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22a9c3e5-2d1c-4708-a824-5e8c2b882e45
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22a9c3e5-2d1c-4708-a824-5e8c2b882e45
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 07033457-a0a9-421e-b394-e7c68192121a
This model is a fine-tuned version of [VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2990
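Since the tags above mark this repository as a PEFT LoRA adapter for the base model, a minimal loading sketch might look like the following; the use of `peft` and `transformers` here is an assumption, not something documented in the card:
```python
# Hedged sketch, not from the card: attach the LoRA adapter to its base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct"
adapter_id = "great0001/07033457-a0a9-421e-b394-e7c68192121a"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # loads the adapter weights
```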
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1896 | 0.0002 | 1 | 1.3592 |
| 1.5352 | 0.0005 | 3 | 1.3583 |
| 1.5301 | 0.0009 | 6 | 1.3463 |
| 1.2542 | 0.0014 | 9 | 1.2990 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| roleplaiapp/Llama-3.3-70B-Instruct-Q4_K_S-GGUF | roleplaiapp | 2025-01-15T13:04:27Z | 29 | 0 | transformers | ["transformers", "gguf", "llama-cpp", "Llama-3.3-70B-Instruct", "Q4_K_S", "meta-llama", "code", "math", "chat", "roleplay", "text-generation", "safetensors", "nlp", "en", "fr", "it", "pt", "hi", "es", "th", "de", "base_model:meta-llama/Llama-3.1-70B", "base_model:quantized:meta-llama/Llama-3.1-70B", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2025-01-15T12:58:15Z |
---
library_name: transformers
language:
- en
- fr
- it
- pt
- hi
- es
- th
- de
base_model:
- meta-llama/Llama-3.1-70B
tags:
- llama-cpp
- Llama-3.3-70B-Instruct
- gguf
- Q4_K_S
- llama-cpp
- gguf
- meta-llama
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
- code
pipeline_tag: text-generation
---
# roleplaiapp/Llama-3.3-70B-Instruct-Q4_K_S-GGUF
**Repo:** `roleplaiapp/Llama-3.3-70B-Instruct-Q4_K_S-GGUF`
**Original Model:** `Llama-3.3-70B-Instruct`
**Organization:** `meta-llama`
**Quantized File:** `llama-3.3-70b-instruct-q4_k_s.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q4_K_S`
**Use Imatrix:** `False`
**Split Model:** `False`
## Overview
This is a GGUF Q4_K_S quantized version of [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).
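As a usage sketch only (the card itself does not document a loader), the quantized file named above could be downloaded and run with `llama-cpp-python`; the context size and prompt below are placeholders:
```python
# Hedged sketch: fetch the GGUF file listed above and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="roleplaiapp/Llama-3.3-70B-Instruct-Q4_K_S-GGUF",
    filename="llama-3.3-70b-instruct-q4_k_s.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)  # context length is an arbitrary choice
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what Q4_K_S quantization trades off."}]
)
print(out["choices"][0]["message"]["content"])
```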
## Quantization By
I often have idle A100 GPUs while building/testing and training the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/)
|
| Kort/Cm12 | Kort | 2025-01-15T13:04:08Z | 52 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-01-15T12:09:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
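Pending details from the author, a hedged starting point, inferred only from the repo tags (`llama`, `text-generation`, `safetensors`), might be:
```python
# Hedged sketch: load Kort/Cm12 as a causal language model, per the repo tags.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kort/Cm12"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```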
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| prxy5608/01a1a26c-2d52-45e8-839d-acf6ac4d32f0 | prxy5608 | 2025-01-15T13:04:03Z | 6 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:scb10x/llama-3-typhoon-v1.5-8b-instruct", "base_model:adapter:scb10x/llama-3-typhoon-v1.5-8b-instruct", "license:llama3", "region:us"] | null | 2025-01-15T12:01:30Z |
---
library_name: peft
license: llama3
base_model: scb10x/llama-3-typhoon-v1.5-8b-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 01a1a26c-2d52-45e8-839d-acf6ac4d32f0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: scb10x/llama-3-typhoon-v1.5-8b-instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- dce9896700f0ce5f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dce9896700f0ce5f_train_data.json
type:
field_instruction: src
field_output: tgt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5608/01a1a26c-2d52-45e8-839d-acf6ac4d32f0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/dce9896700f0ce5f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 52ef6789-f01a-4c8b-8f1e-8b1f2647b104
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 52ef6789-f01a-4c8b-8f1e-8b1f2647b104
warmup_steps: 20
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 01a1a26c-2d52-45e8-839d-acf6ac4d32f0
This model is a fine-tuned version of [scb10x/llama-3-typhoon-v1.5-8b-instruct](https://huggingface.co/scb10x/llama-3-typhoon-v1.5-8b-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4535
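If the adapter is to be deployed without a separate PEFT dependency, one possible (undocumented) route is merging the LoRA weights into the base model; the output directory name below is arbitrary:
```python
# Hedged sketch: merge the LoRA adapter into the base model and save a standalone copy.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "scb10x/llama-3-typhoon-v1.5-8b-instruct"
adapter_id = "prxy5608/01a1a26c-2d52-45e8-839d-acf6ac4d32f0"

model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)
merged = model.merge_and_unload()           # folds the LoRA deltas into the base weights
merged.save_pretrained("typhoon-merged")    # output directory name is arbitrary
AutoTokenizer.from_pretrained(base_id).save_pretrained("typhoon-merged")
```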
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2967 | 0.0004 | 1 | 2.6983 |
| 1.6368 | 0.0185 | 50 | 1.5613 |
| 1.3982 | 0.0370 | 100 | 1.4993 |
| 1.2559 | 0.0554 | 150 | 1.4673 |
| 1.477 | 0.0739 | 200 | 1.4535 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| nhoxinh/4c244fe0-342d-412f-83d2-817a14228fcd | nhoxinh | 2025-01-15T13:03:14Z | 8 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Vikhrmodels/Vikhr-7B-instruct_0.4", "base_model:adapter:Vikhrmodels/Vikhr-7B-instruct_0.4", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-15T12:32:46Z |
---
library_name: peft
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4c244fe0-342d-412f-83d2-817a14228fcd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0c711a1480f59d37_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0c711a1480f59d37_train_data.json
type:
field_instruction: source
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/4c244fe0-342d-412f-83d2-817a14228fcd
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/0c711a1480f59d37_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c88fd593-37b5-4dc0-be79-680dbbc06811
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c88fd593-37b5-4dc0-be79-680dbbc06811
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4c244fe0-342d-412f-83d2-817a14228fcd
This model is a fine-tuned version of [Vikhrmodels/Vikhr-7B-instruct_0.4](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.4) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5657
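Because the config above sets `load_in_8bit: true`, a plausible (but undocumented) way to mirror that setup at inference time is to quantize the base model with bitsandbytes before attaching the adapter:
```python
# Hedged sketch: load the base model in 8-bit (mirroring load_in_8bit: true), then attach the adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

base_id = "Vikhrmodels/Vikhr-7B-instruct_0.4"
adapter_id = "nhoxinh/4c244fe0-342d-412f-83d2-817a14228fcd"

base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)
```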
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3866 | 0.0841 | 200 | 1.5657 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| 0x1202/c593a074-ad0a-4259-bc18-4ac31e2ac0aa | 0x1202 | 2025-01-15T13:02:57Z | 6 | 0 | peft | ["peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3-mini-128k-instruct", "base_model:adapter:microsoft/Phi-3-mini-128k-instruct", "license:mit", "region:us"] | null | 2025-01-15T12:33:21Z |
---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-128k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c593a074-ad0a-4259-bc18-4ac31e2ac0aa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-128k-instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 7b96ec4cd4e81767_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7b96ec4cd4e81767_train_data.json
type:
field_instruction: text
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: 0x1202/c593a074-ad0a-4259-bc18-4ac31e2ac0aa
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/7b96ec4cd4e81767_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7097860e-e08a-4e11-a1f3-efd6d3f0bd8f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7097860e-e08a-4e11-a1f3-efd6d3f0bd8f
warmup_steps: 20
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c593a074-ad0a-4259-bc18-4ac31e2ac0aa
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 16.3855 | 0.0006 | 1 | nan |
| 0.0 | 0.0306 | 50 | nan |
| 0.0 | 0.0613 | 100 | nan |
| 0.0 | 0.0919 | 150 | nan |
| 0.0 | 0.1226 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| dimasik87/7778942c-1cac-4374-85e6-e1e4e3e568e2 | dimasik87 | 2025-01-15T13:02:41Z | 6 | 0 | peft | ["peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/Phi-3-mini-4k-instruct", "base_model:adapter:unsloth/Phi-3-mini-4k-instruct", "license:mit", "region:us"] | null | 2025-01-15T12:55:22Z |
---
library_name: peft
license: mit
base_model: unsloth/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7778942c-1cac-4374-85e6-e1e4e3e568e2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1fe352d62e94ba0a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1fe352d62e94ba0a_train_data.json
type:
field_instruction: input persona
field_output: synthesized text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: dimasik87/7778942c-1cac-4374-85e6-e1e4e3e568e2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/1fe352d62e94ba0a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 30e72060-83cb-4d3e-8303-a4de21bdca5c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 30e72060-83cb-4d3e-8303-a4de21bdca5c
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7778942c-1cac-4374-85e6-e1e4e3e568e2
This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 0.0 | 0.0008 | 5 | nan |
| 0.0 | 0.0017 | 10 | nan |
| 0.0 | 0.0025 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| cunghoctienganh/87369a10-86ee-40c2-ba09-e64f285fd967 | cunghoctienganh | 2025-01-15T13:02:15Z | 7 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Vikhrmodels/Vikhr-7B-instruct_0.4", "base_model:adapter:Vikhrmodels/Vikhr-7B-instruct_0.4", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-15T12:32:12Z |
---
library_name: peft
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 87369a10-86ee-40c2-ba09-e64f285fd967
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0c711a1480f59d37_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0c711a1480f59d37_train_data.json
type:
field_instruction: source
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/87369a10-86ee-40c2-ba09-e64f285fd967
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/0c711a1480f59d37_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c88fd593-37b5-4dc0-be79-680dbbc06811
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c88fd593-37b5-4dc0-be79-680dbbc06811
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 87369a10-86ee-40c2-ba09-e64f285fd967
This model is a fine-tuned version of [Vikhrmodels/Vikhr-7B-instruct_0.4](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.4) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3954 | 0.0841 | 200 | 1.5655 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| nttx/3d38c258-f124-449b-a214-658d9eba7889 | nttx | 2025-01-15T13:02:13Z | 6 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us"] | null | 2025-01-15T12:47:59Z |
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3d38c258-f124-449b-a214-658d9eba7889
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 81e3afe9598012d3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/81e3afe9598012d3_train_data.json
type:
field_input: article_url
field_instruction: lang
field_output: article_title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/3d38c258-f124-449b-a214-658d9eba7889
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/81e3afe9598012d3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d099f438-67f0-42ef-85d5-1117981da441
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d099f438-67f0-42ef-85d5-1117981da441
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3d38c258-f124-449b-a214-658d9eba7889
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.039 | 0.0005 | 1 | 1.8852 |
| 1.6283 | 0.0250 | 50 | 0.8585 |
| 1.6549 | 0.0500 | 100 | 0.8010 |
| 1.5797 | 0.0750 | 150 | 0.7674 |
| 1.6391 | 0.1000 | 200 | 0.7563 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| trangtrannnnn/519e190e-89bc-4f70-a74e-9cea6d7931b3 | trangtrannnnn | 2025-01-15T13:02:10Z | 8 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Vikhrmodels/Vikhr-7B-instruct_0.4", "base_model:adapter:Vikhrmodels/Vikhr-7B-instruct_0.4", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-15T12:31:11Z |
---
library_name: peft
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 519e190e-89bc-4f70-a74e-9cea6d7931b3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0c711a1480f59d37_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0c711a1480f59d37_train_data.json
type:
field_instruction: source
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: trangtrannnnn/519e190e-89bc-4f70-a74e-9cea6d7931b3
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/0c711a1480f59d37_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c88fd593-37b5-4dc0-be79-680dbbc06811
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c88fd593-37b5-4dc0-be79-680dbbc06811
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 519e190e-89bc-4f70-a74e-9cea6d7931b3
This model is a fine-tuned version of [Vikhrmodels/Vikhr-7B-instruct_0.4](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.4) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3807 | 0.0841 | 200 | 1.5642 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| lesso01/ed0e8564-b196-4f10-ade5-828737cacb29 | lesso01 | 2025-01-15T13:01:50Z | 11 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-1.7B", "base_model:adapter:unsloth/SmolLM2-1.7B", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-15T12:48:16Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ed0e8564-b196-4f10-ade5-828737cacb29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: true
chat_template: llama3
datasets:
- data_files:
- f580d7de8a316926_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f580d7de8a316926_train_data.json
type:
field_input: Context
field_instruction: Question
field_output: Answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso01/ed0e8564-b196-4f10-ade5-828737cacb29
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/f580d7de8a316926_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cad08d50-283a-4923-8652-b01a3766bb86
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cad08d50-283a-4923-8652-b01a3766bb86
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ed0e8564-b196-4f10-ade5-828737cacb29
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0009 | 5 | nan |
| 0.0 | 0.0018 | 10 | nan |
| 0.0 | 0.0026 | 15 | nan |
| 0.0 | 0.0035 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| thaffggg/42ddee19-3646-45ba-b1d9-7e28b1b8926e | thaffggg | 2025-01-15T13:00:55Z | 8 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-1.7B", "base_model:adapter:unsloth/SmolLM2-1.7B", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-15T12:47:52Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 42ddee19-3646-45ba-b1d9-7e28b1b8926e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f580d7de8a316926_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f580d7de8a316926_train_data.json
type:
field_input: Context
field_instruction: Question
field_output: Answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thaffggg/42ddee19-3646-45ba-b1d9-7e28b1b8926e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f580d7de8a316926_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cad08d50-283a-4923-8652-b01a3766bb86
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cad08d50-283a-4923-8652-b01a3766bb86
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 42ddee19-3646-45ba-b1d9-7e28b1b8926e
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2163
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3232 | 0.0352 | 200 | 2.2163 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| lesso04/8519eb17-9998-41d5-ac3f-0da7827ed541 | lesso04 | 2025-01-15T12:55:32Z | 7 | 0 | peft | ["peft", "safetensors", "phi", "axolotl", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:mit", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-15T12:41:24Z |
---
library_name: peft
license: mit
base_model: microsoft/phi-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8519eb17-9998-41d5-ac3f-0da7827ed541
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/phi-2
bf16: true
chat_template: llama3
datasets:
- data_files:
- 590fd4cbceee3791_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/590fd4cbceee3791_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso04/8519eb17-9998-41d5-ac3f-0da7827ed541
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/590fd4cbceee3791_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 78bbc8a0-78c1-4557-a1dd-2fa1b271760f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 78bbc8a0-78c1-4557-a1dd-2fa1b271760f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8519eb17-9998-41d5-ac3f-0da7827ed541
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0683
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.0992 | 0.0006 | 1 | 4.7135 |
| 4.0554 | 0.0028 | 5 | 4.6688 |
| 4.0266 | 0.0057 | 10 | 4.2980 |
| 3.9737 | 0.0085 | 15 | 3.4516 |
| 2.776 | 0.0114 | 20 | 3.1267 |
| 3.7507 | 0.0142 | 25 | 3.0683 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| lesso14/006fe55c-031f-470b-a252-4c282ecff834 | lesso14 | 2025-01-15T12:53:33Z | 11 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Vikhrmodels/Vikhr-7B-instruct_0.4", "base_model:adapter:Vikhrmodels/Vikhr-7B-instruct_0.4", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-15T12:31:55Z |
---
library_name: peft
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 006fe55c-031f-470b-a252-4c282ecff834
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
bf16: true
chat_template: llama3
datasets:
- data_files:
- 0c711a1480f59d37_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0c711a1480f59d37_train_data.json
type:
field_instruction: source
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso14/006fe55c-031f-470b-a252-4c282ecff834
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/0c711a1480f59d37_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c88fd593-37b5-4dc0-be79-680dbbc06811
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c88fd593-37b5-4dc0-be79-680dbbc06811
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 006fe55c-031f-470b-a252-4c282ecff834
This model is a fine-tuned version of [Vikhrmodels/Vikhr-7B-instruct_0.4](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.4) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0004 | 1 | nan |
| 0.0 | 0.0021 | 5 | nan |
| 0.0 | 0.0042 | 10 | nan |
| 0.0 | 0.0063 | 15 | nan |
| 0.0 | 0.0084 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| AminSharif/FineTunedSnappFoodReg | AminSharif | 2025-01-15T12:52:33Z | 5 | 0 | transformers | ["transformers", "safetensors", "bert", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-01-15T12:52:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
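In the absence of card details, a minimal, hedged sketch using the generic 🤗 Transformers Auto classes (the task head and label mapping are not documented, so `AutoModel` is assumed here rather than a specific classification or regression head):

```py
# Hedged sketch: load the checkpoint generically; outputs are raw hidden states.
from transformers import AutoTokenizer, AutoModel

repo_id = "AminSharif/FineTunedSnappFoodReg"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

inputs = tokenizer("A sample review text.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```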
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kk-aivio/645df65a-0012-4614-8a68-7850e782f052
|
kk-aivio
| 2025-01-15T12:51:49Z
| 14
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Vikhrmodels/Vikhr-7B-instruct_0.4",
"base_model:adapter:Vikhrmodels/Vikhr-7B-instruct_0.4",
"region:us"
] | null | 2025-01-15T12:48:00Z
|
---
library_name: peft
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 645df65a-0012-4614-8a68-7850e782f052
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0c711a1480f59d37_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0c711a1480f59d37_train_data.json
type:
field_instruction: source
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/645df65a-0012-4614-8a68-7850e782f052
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/0c711a1480f59d37_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c88fd593-37b5-4dc0-be79-680dbbc06811
wandb_project: birthday-sn56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: c88fd593-37b5-4dc0-be79-680dbbc06811
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 645df65a-0012-4614-8a68-7850e782f052
This model is a fine-tuned version of [Vikhrmodels/Vikhr-7B-instruct_0.4](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0004 | 1 | nan |
| 0.0 | 0.0013 | 3 | nan |
| 0.0 | 0.0025 | 6 | nan |
| 0.0 | 0.0038 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k4_task1_organization
|
MayBashendy
| 2025-01-15T12:51:28Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T12:34:24Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k4_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k4_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1682
- Qwk: 0.5663
- Mse: 1.1682
- Rmse: 1.0808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0667 | 2 | 6.8411 | 0.0290 | 6.8411 | 2.6155 |
| No log | 0.1333 | 4 | 4.5055 | 0.0794 | 4.5055 | 2.1226 |
| No log | 0.2 | 6 | 2.7652 | 0.0870 | 2.7652 | 1.6629 |
| No log | 0.2667 | 8 | 2.0825 | 0.1594 | 2.0825 | 1.4431 |
| No log | 0.3333 | 10 | 1.7389 | 0.2264 | 1.7389 | 1.3187 |
| No log | 0.4 | 12 | 1.6371 | 0.1905 | 1.6371 | 1.2795 |
| No log | 0.4667 | 14 | 1.6820 | 0.1495 | 1.6820 | 1.2969 |
| No log | 0.5333 | 16 | 1.7122 | 0.3548 | 1.7122 | 1.3085 |
| No log | 0.6 | 18 | 1.7297 | 0.2645 | 1.7297 | 1.3152 |
| No log | 0.6667 | 20 | 1.7308 | 0.2456 | 1.7308 | 1.3156 |
| No log | 0.7333 | 22 | 1.9098 | 0.1565 | 1.9098 | 1.3819 |
| No log | 0.8 | 24 | 1.9363 | 0.0847 | 1.9363 | 1.3915 |
| No log | 0.8667 | 26 | 1.7360 | 0.1667 | 1.7360 | 1.3176 |
| No log | 0.9333 | 28 | 1.4894 | 0.1947 | 1.4894 | 1.2204 |
| No log | 1.0 | 30 | 1.4493 | 0.3740 | 1.4493 | 1.2039 |
| No log | 1.0667 | 32 | 1.5090 | 0.3333 | 1.5090 | 1.2284 |
| No log | 1.1333 | 34 | 1.6048 | 0.2835 | 1.6048 | 1.2668 |
| No log | 1.2 | 36 | 1.5969 | 0.3333 | 1.5969 | 1.2637 |
| No log | 1.2667 | 38 | 1.5607 | 0.3256 | 1.5607 | 1.2493 |
| No log | 1.3333 | 40 | 1.5836 | 0.3053 | 1.5836 | 1.2584 |
| No log | 1.4 | 42 | 1.5397 | 0.3382 | 1.5397 | 1.2408 |
| No log | 1.4667 | 44 | 1.5278 | 0.3913 | 1.5278 | 1.2361 |
| No log | 1.5333 | 46 | 1.4911 | 0.4160 | 1.4911 | 1.2211 |
| No log | 1.6 | 48 | 1.3562 | 0.4286 | 1.3562 | 1.1645 |
| No log | 1.6667 | 50 | 1.3315 | 0.4769 | 1.3315 | 1.1539 |
| No log | 1.7333 | 52 | 1.7800 | 0.2901 | 1.7800 | 1.3342 |
| No log | 1.8 | 54 | 1.6626 | 0.3077 | 1.6626 | 1.2894 |
| No log | 1.8667 | 56 | 1.2296 | 0.4426 | 1.2296 | 1.1089 |
| No log | 1.9333 | 58 | 1.1936 | 0.3571 | 1.1936 | 1.0925 |
| No log | 2.0 | 60 | 1.2190 | 0.4407 | 1.2190 | 1.1041 |
| No log | 2.0667 | 62 | 1.2760 | 0.4961 | 1.2760 | 1.1296 |
| No log | 2.1333 | 64 | 1.3799 | 0.4462 | 1.3799 | 1.1747 |
| No log | 2.2 | 66 | 1.4315 | 0.4857 | 1.4315 | 1.1964 |
| No log | 2.2667 | 68 | 1.4433 | 0.4242 | 1.4433 | 1.2014 |
| No log | 2.3333 | 70 | 1.3643 | 0.4648 | 1.3643 | 1.1681 |
| No log | 2.4 | 72 | 1.4124 | 0.4460 | 1.4124 | 1.1884 |
| No log | 2.4667 | 74 | 1.3953 | 0.4148 | 1.3953 | 1.1812 |
| No log | 2.5333 | 76 | 1.4697 | 0.4444 | 1.4697 | 1.2123 |
| No log | 2.6 | 78 | 1.4942 | 0.4211 | 1.4942 | 1.2224 |
| No log | 2.6667 | 80 | 1.3468 | 0.4806 | 1.3468 | 1.1605 |
| No log | 2.7333 | 82 | 1.1612 | 0.5512 | 1.1612 | 1.0776 |
| No log | 2.8 | 84 | 1.0878 | 0.528 | 1.0878 | 1.0430 |
| No log | 2.8667 | 86 | 1.1252 | 0.6119 | 1.1252 | 1.0608 |
| No log | 2.9333 | 88 | 1.1308 | 0.5778 | 1.1308 | 1.0634 |
| No log | 3.0 | 90 | 1.1582 | 0.5143 | 1.1582 | 1.0762 |
| No log | 3.0667 | 92 | 1.3523 | 0.5576 | 1.3523 | 1.1629 |
| No log | 3.1333 | 94 | 1.4646 | 0.4815 | 1.4646 | 1.2102 |
| No log | 3.2 | 96 | 1.3034 | 0.4783 | 1.3034 | 1.1417 |
| No log | 3.2667 | 98 | 1.2189 | 0.5538 | 1.2189 | 1.1041 |
| No log | 3.3333 | 100 | 1.1976 | 0.5954 | 1.1976 | 1.0943 |
| No log | 3.4 | 102 | 1.2458 | 0.5 | 1.2458 | 1.1162 |
| No log | 3.4667 | 104 | 1.1685 | 0.5294 | 1.1685 | 1.0810 |
| No log | 3.5333 | 106 | 1.0799 | 0.5735 | 1.0799 | 1.0392 |
| No log | 3.6 | 108 | 0.9817 | 0.6277 | 0.9817 | 0.9908 |
| No log | 3.6667 | 110 | 0.9840 | 0.6619 | 0.9840 | 0.9920 |
| No log | 3.7333 | 112 | 1.0359 | 0.6571 | 1.0359 | 1.0178 |
| No log | 3.8 | 114 | 1.1027 | 0.6 | 1.1027 | 1.0501 |
| No log | 3.8667 | 116 | 1.1618 | 0.5734 | 1.1618 | 1.0779 |
| No log | 3.9333 | 118 | 1.3260 | 0.5325 | 1.3260 | 1.1515 |
| No log | 4.0 | 120 | 1.5194 | 0.5031 | 1.5194 | 1.2326 |
| No log | 4.0667 | 122 | 1.4042 | 0.5 | 1.4042 | 1.1850 |
| No log | 4.1333 | 124 | 1.2303 | 0.5395 | 1.2303 | 1.1092 |
| No log | 4.2 | 126 | 1.1265 | 0.5547 | 1.1265 | 1.0614 |
| No log | 4.2667 | 128 | 1.0583 | 0.6567 | 1.0583 | 1.0288 |
| No log | 4.3333 | 130 | 1.0741 | 0.6212 | 1.0741 | 1.0364 |
| No log | 4.4 | 132 | 1.1067 | 0.6107 | 1.1067 | 1.0520 |
| No log | 4.4667 | 134 | 1.1166 | 0.6165 | 1.1166 | 1.0567 |
| No log | 4.5333 | 136 | 1.2331 | 0.5175 | 1.2331 | 1.1105 |
| No log | 4.6 | 138 | 1.2278 | 0.5068 | 1.2278 | 1.1081 |
| No log | 4.6667 | 140 | 1.1185 | 0.5755 | 1.1185 | 1.0576 |
| No log | 4.7333 | 142 | 1.0165 | 0.6418 | 1.0165 | 1.0082 |
| No log | 4.8 | 144 | 0.9863 | 0.6619 | 0.9863 | 0.9931 |
| No log | 4.8667 | 146 | 1.0320 | 0.6174 | 1.0320 | 1.0159 |
| No log | 4.9333 | 148 | 1.1882 | 0.5897 | 1.1882 | 1.0901 |
| No log | 5.0 | 150 | 1.1527 | 0.5987 | 1.1527 | 1.0736 |
| No log | 5.0667 | 152 | 1.0692 | 0.6577 | 1.0692 | 1.0340 |
| No log | 5.1333 | 154 | 0.9798 | 0.6761 | 0.9798 | 0.9899 |
| No log | 5.2 | 156 | 1.0669 | 0.6423 | 1.0669 | 1.0329 |
| No log | 5.2667 | 158 | 1.2467 | 0.5135 | 1.2467 | 1.1166 |
| No log | 5.3333 | 160 | 1.3934 | 0.4845 | 1.3934 | 1.1804 |
| No log | 5.4 | 162 | 1.3303 | 0.4935 | 1.3303 | 1.1534 |
| No log | 5.4667 | 164 | 1.1582 | 0.5333 | 1.1582 | 1.0762 |
| No log | 5.5333 | 166 | 1.0258 | 0.6131 | 1.0258 | 1.0128 |
| No log | 5.6 | 168 | 0.9112 | 0.6567 | 0.9112 | 0.9546 |
| No log | 5.6667 | 170 | 0.8672 | 0.6716 | 0.8672 | 0.9312 |
| No log | 5.7333 | 172 | 0.8603 | 0.6861 | 0.8603 | 0.9275 |
| No log | 5.8 | 174 | 0.9279 | 0.6370 | 0.9279 | 0.9633 |
| No log | 5.8667 | 176 | 1.0725 | 0.6099 | 1.0725 | 1.0356 |
| No log | 5.9333 | 178 | 1.1306 | 0.5612 | 1.1306 | 1.0633 |
| No log | 6.0 | 180 | 1.1694 | 0.5714 | 1.1694 | 1.0814 |
| No log | 6.0667 | 182 | 1.1824 | 0.5571 | 1.1824 | 1.0874 |
| No log | 6.1333 | 184 | 1.1753 | 0.5547 | 1.1753 | 1.0841 |
| No log | 6.2 | 186 | 1.1645 | 0.5734 | 1.1645 | 1.0791 |
| No log | 6.2667 | 188 | 1.0933 | 0.5588 | 1.0933 | 1.0456 |
| No log | 6.3333 | 190 | 1.0788 | 0.6131 | 1.0788 | 1.0387 |
| No log | 6.4 | 192 | 1.0192 | 0.6423 | 1.0192 | 1.0096 |
| No log | 6.4667 | 194 | 0.9612 | 0.6667 | 0.9612 | 0.9804 |
| No log | 6.5333 | 196 | 1.0047 | 0.6667 | 1.0047 | 1.0023 |
| No log | 6.6 | 198 | 1.0488 | 0.6423 | 1.0488 | 1.0241 |
| No log | 6.6667 | 200 | 1.1262 | 0.5578 | 1.1262 | 1.0612 |
| No log | 6.7333 | 202 | 1.1493 | 0.5625 | 1.1493 | 1.0720 |
| No log | 6.8 | 204 | 1.0030 | 0.6241 | 1.0030 | 1.0015 |
| No log | 6.8667 | 206 | 0.7999 | 0.6857 | 0.7999 | 0.8944 |
| No log | 6.9333 | 208 | 0.6923 | 0.7153 | 0.6923 | 0.8320 |
| No log | 7.0 | 210 | 0.7225 | 0.7101 | 0.7225 | 0.8500 |
| No log | 7.0667 | 212 | 0.7961 | 0.6765 | 0.7961 | 0.8922 |
| No log | 7.1333 | 214 | 0.9400 | 0.6418 | 0.9400 | 0.9695 |
| No log | 7.2 | 216 | 1.1221 | 0.5758 | 1.1221 | 1.0593 |
| No log | 7.2667 | 218 | 1.1054 | 0.5954 | 1.1054 | 1.0514 |
| No log | 7.3333 | 220 | 1.1653 | 0.6119 | 1.1653 | 1.0795 |
| No log | 7.4 | 222 | 1.2698 | 0.4966 | 1.2698 | 1.1268 |
| No log | 7.4667 | 224 | 1.3247 | 0.4969 | 1.3247 | 1.1509 |
| No log | 7.5333 | 226 | 1.1623 | 0.6053 | 1.1623 | 1.0781 |
| No log | 7.6 | 228 | 1.0819 | 0.6301 | 1.0819 | 1.0401 |
| No log | 7.6667 | 230 | 0.9733 | 0.6525 | 0.9733 | 0.9866 |
| No log | 7.7333 | 232 | 1.1109 | 0.6207 | 1.1109 | 1.0540 |
| No log | 7.8 | 234 | 1.3954 | 0.5562 | 1.3954 | 1.1813 |
| No log | 7.8667 | 236 | 1.5114 | 0.5059 | 1.5114 | 1.2294 |
| No log | 7.9333 | 238 | 1.3694 | 0.525 | 1.3694 | 1.1702 |
| No log | 8.0 | 240 | 1.1268 | 0.5906 | 1.1268 | 1.0615 |
| No log | 8.0667 | 242 | 0.9815 | 0.6993 | 0.9815 | 0.9907 |
| No log | 8.1333 | 244 | 0.9961 | 0.6667 | 0.9961 | 0.9980 |
| No log | 8.2 | 246 | 1.1060 | 0.5578 | 1.1060 | 1.0517 |
| No log | 8.2667 | 248 | 1.1312 | 0.5616 | 1.1312 | 1.0636 |
| No log | 8.3333 | 250 | 1.0797 | 0.6241 | 1.0797 | 1.0391 |
| No log | 8.4 | 252 | 1.0239 | 0.6277 | 1.0239 | 1.0119 |
| No log | 8.4667 | 254 | 0.9819 | 0.6763 | 0.9819 | 0.9909 |
| No log | 8.5333 | 256 | 0.9788 | 0.6667 | 0.9788 | 0.9893 |
| No log | 8.6 | 258 | 1.0502 | 0.6418 | 1.0502 | 1.0248 |
| No log | 8.6667 | 260 | 1.1559 | 0.6029 | 1.1559 | 1.0751 |
| No log | 8.7333 | 262 | 1.2523 | 0.5109 | 1.2523 | 1.1191 |
| No log | 8.8 | 264 | 1.2236 | 0.5109 | 1.2236 | 1.1062 |
| No log | 8.8667 | 266 | 1.1008 | 0.5865 | 1.1008 | 1.0492 |
| No log | 8.9333 | 268 | 1.0131 | 0.5865 | 1.0131 | 1.0065 |
| No log | 9.0 | 270 | 0.9567 | 0.6316 | 0.9567 | 0.9781 |
| No log | 9.0667 | 272 | 0.9227 | 0.6277 | 0.9227 | 0.9606 |
| No log | 9.1333 | 274 | 0.9190 | 0.6522 | 0.9190 | 0.9586 |
| No log | 9.2 | 276 | 0.9918 | 0.6405 | 0.9918 | 0.9959 |
| No log | 9.2667 | 278 | 1.0030 | 0.6667 | 1.0030 | 1.0015 |
| No log | 9.3333 | 280 | 0.8790 | 0.6324 | 0.8790 | 0.9375 |
| No log | 9.4 | 282 | 0.8730 | 0.6324 | 0.8730 | 0.9343 |
| No log | 9.4667 | 284 | 0.9293 | 0.6187 | 0.9293 | 0.9640 |
| No log | 9.5333 | 286 | 0.9091 | 0.6324 | 0.9091 | 0.9535 |
| No log | 9.6 | 288 | 0.8488 | 0.6615 | 0.8488 | 0.9213 |
| No log | 9.6667 | 290 | 0.8677 | 0.6815 | 0.8677 | 0.9315 |
| No log | 9.7333 | 292 | 0.9591 | 0.6812 | 0.9591 | 0.9794 |
| No log | 9.8 | 294 | 1.1017 | 0.5882 | 1.1017 | 1.0496 |
| No log | 9.8667 | 296 | 1.2618 | 0.5509 | 1.2618 | 1.1233 |
| No log | 9.9333 | 298 | 1.3770 | 0.5325 | 1.3770 | 1.1734 |
| No log | 10.0 | 300 | 1.3422 | 0.5238 | 1.3422 | 1.1585 |
| No log | 10.0667 | 302 | 1.1776 | 0.5578 | 1.1776 | 1.0852 |
| No log | 10.1333 | 304 | 0.9751 | 0.6716 | 0.9751 | 0.9875 |
| No log | 10.2 | 306 | 0.8607 | 0.6406 | 0.8607 | 0.9277 |
| No log | 10.2667 | 308 | 0.8602 | 0.5984 | 0.8602 | 0.9275 |
| No log | 10.3333 | 310 | 0.8909 | 0.625 | 0.8909 | 0.9439 |
| No log | 10.4 | 312 | 0.9768 | 0.6906 | 0.9768 | 0.9883 |
| No log | 10.4667 | 314 | 1.1032 | 0.6225 | 1.1032 | 1.0503 |
| No log | 10.5333 | 316 | 1.1913 | 0.6265 | 1.1913 | 1.0915 |
| No log | 10.6 | 318 | 1.1459 | 0.5860 | 1.1459 | 1.0705 |
| No log | 10.6667 | 320 | 0.9666 | 0.6892 | 0.9666 | 0.9831 |
| No log | 10.7333 | 322 | 0.8932 | 0.6269 | 0.8932 | 0.9451 |
| No log | 10.8 | 324 | 0.8935 | 0.6269 | 0.8935 | 0.9452 |
| No log | 10.8667 | 326 | 0.9379 | 0.6912 | 0.9379 | 0.9684 |
| No log | 10.9333 | 328 | 1.0794 | 0.6309 | 1.0794 | 1.0389 |
| No log | 11.0 | 330 | 1.1860 | 0.6296 | 1.1860 | 1.0891 |
| No log | 11.0667 | 332 | 1.1891 | 0.6429 | 1.1891 | 1.0904 |
| No log | 11.1333 | 334 | 1.0631 | 0.6581 | 1.0631 | 1.0311 |
| No log | 11.2 | 336 | 0.9568 | 0.6763 | 0.9568 | 0.9782 |
| No log | 11.2667 | 338 | 0.9640 | 0.6324 | 0.9640 | 0.9818 |
| No log | 11.3333 | 340 | 0.9778 | 0.6324 | 0.9778 | 0.9889 |
| No log | 11.4 | 342 | 1.0438 | 0.6519 | 1.0438 | 1.0217 |
| No log | 11.4667 | 344 | 1.2155 | 0.5655 | 1.2155 | 1.1025 |
| No log | 11.5333 | 346 | 1.3660 | 0.5256 | 1.3660 | 1.1688 |
| No log | 11.6 | 348 | 1.2987 | 0.5325 | 1.2987 | 1.1396 |
| No log | 11.6667 | 350 | 1.0831 | 0.6029 | 1.0831 | 1.0407 |
| No log | 11.7333 | 352 | 0.9538 | 0.6260 | 0.9538 | 0.9766 |
| No log | 11.8 | 354 | 0.9300 | 0.6269 | 0.9300 | 0.9644 |
| No log | 11.8667 | 356 | 0.9813 | 0.6324 | 0.9813 | 0.9906 |
| No log | 11.9333 | 358 | 1.1209 | 0.6232 | 1.1209 | 1.0587 |
| No log | 12.0 | 360 | 1.3313 | 0.5170 | 1.3313 | 1.1538 |
| No log | 12.0667 | 362 | 1.4482 | 0.4196 | 1.4482 | 1.2034 |
| No log | 12.1333 | 364 | 1.3954 | 0.4265 | 1.3954 | 1.1813 |
| No log | 12.2 | 366 | 1.2580 | 0.5294 | 1.2580 | 1.1216 |
| No log | 12.2667 | 368 | 1.1873 | 0.5882 | 1.1873 | 1.0896 |
| No log | 12.3333 | 370 | 1.1016 | 0.6466 | 1.1016 | 1.0496 |
| No log | 12.4 | 372 | 1.0789 | 0.6519 | 1.0789 | 1.0387 |
| No log | 12.4667 | 374 | 1.1158 | 0.5915 | 1.1158 | 1.0563 |
| No log | 12.5333 | 376 | 1.2124 | 0.5806 | 1.2124 | 1.1011 |
| No log | 12.6 | 378 | 1.2120 | 0.5974 | 1.2120 | 1.1009 |
| No log | 12.6667 | 380 | 1.1104 | 0.6207 | 1.1104 | 1.0538 |
| No log | 12.7333 | 382 | 1.0596 | 0.6197 | 1.0596 | 1.0294 |
| No log | 12.8 | 384 | 1.0240 | 0.6197 | 1.0240 | 1.0119 |
| No log | 12.8667 | 386 | 1.0451 | 0.6197 | 1.0451 | 1.0223 |
| No log | 12.9333 | 388 | 1.0846 | 0.6197 | 1.0846 | 1.0415 |
| No log | 13.0 | 390 | 1.0829 | 0.6241 | 1.0829 | 1.0406 |
| No log | 13.0667 | 392 | 1.0886 | 0.6241 | 1.0886 | 1.0434 |
| No log | 13.1333 | 394 | 1.0738 | 0.6197 | 1.0738 | 1.0362 |
| No log | 13.2 | 396 | 1.1437 | 0.6144 | 1.1437 | 1.0694 |
| No log | 13.2667 | 398 | 1.1341 | 0.6174 | 1.1341 | 1.0649 |
| No log | 13.3333 | 400 | 1.0499 | 0.6197 | 1.0499 | 1.0246 |
| No log | 13.4 | 402 | 1.0067 | 0.6277 | 1.0067 | 1.0033 |
| No log | 13.4667 | 404 | 1.0055 | 0.6061 | 1.0055 | 1.0027 |
| No log | 13.5333 | 406 | 1.0464 | 0.6061 | 1.0464 | 1.0229 |
| No log | 13.6 | 408 | 1.0631 | 0.5909 | 1.0631 | 1.0311 |
| No log | 13.6667 | 410 | 1.0617 | 0.6061 | 1.0617 | 1.0304 |
| No log | 13.7333 | 412 | 1.0727 | 0.6260 | 1.0727 | 1.0357 |
| No log | 13.8 | 414 | 1.1313 | 0.6119 | 1.1313 | 1.0636 |
| No log | 13.8667 | 416 | 1.2340 | 0.5503 | 1.2340 | 1.1109 |
| No log | 13.9333 | 418 | 1.3273 | 0.5350 | 1.3273 | 1.1521 |
| No log | 14.0 | 420 | 1.3200 | 0.5385 | 1.3200 | 1.1489 |
| No log | 14.0667 | 422 | 1.3247 | 0.5223 | 1.3247 | 1.1510 |
| No log | 14.1333 | 424 | 1.2907 | 0.5170 | 1.2907 | 1.1361 |
| No log | 14.2 | 426 | 1.2642 | 0.5468 | 1.2642 | 1.1244 |
| No log | 14.2667 | 428 | 1.2646 | 0.5401 | 1.2646 | 1.1245 |
| No log | 14.3333 | 430 | 1.1767 | 0.5909 | 1.1767 | 1.0848 |
| No log | 14.4 | 432 | 1.1276 | 0.5802 | 1.1276 | 1.0619 |
| No log | 14.4667 | 434 | 1.0721 | 0.5512 | 1.0721 | 1.0354 |
| No log | 14.5333 | 436 | 1.0481 | 0.5649 | 1.0481 | 1.0237 |
| No log | 14.6 | 438 | 1.0613 | 0.6131 | 1.0613 | 1.0302 |
| No log | 14.6667 | 440 | 1.1536 | 0.5811 | 1.1536 | 1.0741 |
| No log | 14.7333 | 442 | 1.3342 | 0.5395 | 1.3342 | 1.1551 |
| No log | 14.8 | 444 | 1.3475 | 0.5548 | 1.3475 | 1.1608 |
| No log | 14.8667 | 446 | 1.2069 | 0.5714 | 1.2069 | 1.0986 |
| No log | 14.9333 | 448 | 1.0777 | 0.6232 | 1.0777 | 1.0381 |
| No log | 15.0 | 450 | 0.9441 | 0.6277 | 0.9441 | 0.9717 |
| No log | 15.0667 | 452 | 0.8969 | 0.6212 | 0.8969 | 0.9470 |
| No log | 15.1333 | 454 | 0.9045 | 0.6569 | 0.9045 | 0.9511 |
| No log | 15.2 | 456 | 1.0174 | 0.6395 | 1.0174 | 1.0086 |
| No log | 15.2667 | 458 | 1.2439 | 0.6182 | 1.2439 | 1.1153 |
| No log | 15.3333 | 460 | 1.3134 | 0.6024 | 1.3134 | 1.1460 |
| No log | 15.4 | 462 | 1.2133 | 0.5897 | 1.2133 | 1.1015 |
| No log | 15.4667 | 464 | 1.0636 | 0.6525 | 1.0636 | 1.0313 |
| No log | 15.5333 | 466 | 0.9961 | 0.6316 | 0.9961 | 0.9981 |
| No log | 15.6 | 468 | 1.0037 | 0.6316 | 1.0037 | 1.0019 |
| No log | 15.6667 | 470 | 1.0793 | 0.6569 | 1.0793 | 1.0389 |
| No log | 15.7333 | 472 | 1.2025 | 0.6099 | 1.2025 | 1.0966 |
| No log | 15.8 | 474 | 1.2963 | 0.5890 | 1.2963 | 1.1386 |
| No log | 15.8667 | 476 | 1.2393 | 0.5899 | 1.2393 | 1.1132 |
| No log | 15.9333 | 478 | 1.1624 | 0.6324 | 1.1624 | 1.0781 |
| No log | 16.0 | 480 | 1.1372 | 0.6119 | 1.1372 | 1.0664 |
| No log | 16.0667 | 482 | 1.1336 | 0.6131 | 1.1336 | 1.0647 |
| No log | 16.1333 | 484 | 1.0423 | 0.6331 | 1.0423 | 1.0209 |
| No log | 16.2 | 486 | 0.9645 | 0.6620 | 0.9645 | 0.9821 |
| No log | 16.2667 | 488 | 0.9296 | 0.6755 | 0.9296 | 0.9641 |
| No log | 16.3333 | 490 | 0.8918 | 0.7006 | 0.8918 | 0.9444 |
| No log | 16.4 | 492 | 0.8814 | 0.7 | 0.8814 | 0.9388 |
| No log | 16.4667 | 494 | 0.9504 | 0.6424 | 0.9504 | 0.9749 |
| No log | 16.5333 | 496 | 1.0877 | 0.6509 | 1.0877 | 1.0429 |
| No log | 16.6 | 498 | 1.0546 | 0.6220 | 1.0546 | 1.0270 |
| 0.372 | 16.6667 | 500 | 0.9086 | 0.6667 | 0.9086 | 0.9532 |
| 0.372 | 16.7333 | 502 | 0.8080 | 0.6806 | 0.8080 | 0.8989 |
| 0.372 | 16.8 | 504 | 0.8219 | 0.7206 | 0.8219 | 0.9066 |
| 0.372 | 16.8667 | 506 | 0.8541 | 0.6667 | 0.8541 | 0.9242 |
| 0.372 | 16.9333 | 508 | 0.8858 | 0.6667 | 0.8858 | 0.9412 |
| 0.372 | 17.0 | 510 | 0.9483 | 0.6119 | 0.9483 | 0.9738 |
| 0.372 | 17.0667 | 512 | 1.0789 | 0.6438 | 1.0789 | 1.0387 |
| 0.372 | 17.1333 | 514 | 1.1682 | 0.5663 | 1.1682 | 1.0808 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
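A usage snippet is not part of the generated card; as a hedged sketch, the fine-tuned AraBERT checkpoint could be run through the `text-classification` pipeline (the label-to-score mapping is not documented, so the printed labels are whatever the checkpoint's config defines):

```py
# Hedged sketch: score an essay excerpt with the fine-tuned AraBERT classifier.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="MayBashendy/ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k4_task1_organization",
)
print(clf("An example essay paragraph to score."))
```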
|
nbninh/0d47e552-fa12-456d-acb3-29792f1b41d3
|
nbninh
| 2025-01-15T12:48:49Z
| 6
| 0
|
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T12:34:45Z
|
---
library_name: peft
license: gemma
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0d47e552-fa12-456d-acb3-29792f1b41d3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 34a8ecc953c9036c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/34a8ecc953c9036c_train_data.json
type:
field_input: title_main
field_instruction: texte
field_output: texteHtml
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/0d47e552-fa12-456d-acb3-29792f1b41d3
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/34a8ecc953c9036c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4cab3fe6-17ad-4a26-9515-f652b7692aed
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4cab3fe6-17ad-4a26-9515-f652b7692aed
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0d47e552-fa12-456d-acb3-29792f1b41d3
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1282 | 0.2454 | 200 | 0.1003 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhung01/1bc7a2fd-8257-45c0-a2ad-46fa41293d91
|
nhung01
| 2025-01-15T12:47:33Z
| 7
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T12:26:36Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1bc7a2fd-8257-45c0-a2ad-46fa41293d91
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8492dab84eb4a872_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8492dab84eb4a872_train_data.json
type:
field_input: Abstract
field_instruction: Title
field_output: Hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/1bc7a2fd-8257-45c0-a2ad-46fa41293d91
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/8492dab84eb4a872_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bb4dd990-a545-4b1a-8ec7-8e5c79e135a9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bb4dd990-a545-4b1a-8ec7-8e5c79e135a9
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1bc7a2fd-8257-45c0-a2ad-46fa41293d91
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4799
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3891 | 0.0120 | 200 | 0.4799 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhung03/595181d3-f1cb-438b-a2e0-5e773a889fca
|
nhung03
| 2025-01-15T12:46:55Z
| 7
| 0
|
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T12:33:15Z
|
---
library_name: peft
license: gemma
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 595181d3-f1cb-438b-a2e0-5e773a889fca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 34a8ecc953c9036c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/34a8ecc953c9036c_train_data.json
type:
field_input: title_main
field_instruction: texte
field_output: texteHtml
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/595181d3-f1cb-438b-a2e0-5e773a889fca
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/34a8ecc953c9036c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4cab3fe6-17ad-4a26-9515-f652b7692aed
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4cab3fe6-17ad-4a26-9515-f652b7692aed
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 595181d3-f1cb-438b-a2e0-5e773a889fca
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.127 | 0.2454 | 200 | 0.0998 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
growwithgarry/gwg
|
growwithgarry
| 2025-01-15T12:46:29Z
| 22
| 1
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T11:46:11Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: garry
---
# Gwg
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `garry` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('growwithgarry/gwg', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
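Because `garry` is the trigger word, it should appear in the prompt; continuing the snippet above (the prompt text here is only an illustrative placeholder):

```py
# Hedged example: include the trigger word `garry` in the prompt.
image = pipeline('a studio portrait photo of garry, soft natural lighting').images[0]
image.save('garry.png')
```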
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
ivangrapher/c2f92ade-8d4b-48de-8cd4-be397d723092
|
ivangrapher
| 2025-01-15T12:45:34Z
| 11
| 0
|
peft
|
[
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2025-01-15T12:41:07Z
|
---
library_name: peft
license: mit
base_model: microsoft/phi-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c2f92ade-8d4b-48de-8cd4-be397d723092
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/phi-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 590fd4cbceee3791_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/590fd4cbceee3791_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 6
gradient_checkpointing: false
group_by_length: false
hub_model_id: ivangrapher/c2f92ade-8d4b-48de-8cd4-be397d723092
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/590fd4cbceee3791_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 78bbc8a0-78c1-4557-a1dd-2fa1b271760f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 78bbc8a0-78c1-4557-a1dd-2fa1b271760f
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c2f92ade-8d4b-48de-8cd4-be397d723092
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 24
- optimizer: AdamW (torch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0017 | 1 | 3.5257 |
| 3.4061 | 0.0137 | 8 | 3.1505 |
| 2.563 | 0.0273 | 16 | 2.2606 |
| 1.9834 | 0.0410 | 24 | 1.8462 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mrHungddddh/a82cdf3b-5f09-4f16-93eb-d3fffb95ea55
|
mrHungddddh
| 2025-01-15T12:45:29Z
| 6
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna2-13b-hf",
"base_model:adapter:heegyu/WizardVicuna2-13b-hf",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T09:10:39Z
|
---
library_name: peft
base_model: heegyu/WizardVicuna2-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a82cdf3b-5f09-4f16-93eb-d3fffb95ea55
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna2-13b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ace8777dda8ac20a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ace8777dda8ac20a_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHungddddh/a82cdf3b-5f09-4f16-93eb-d3fffb95ea55
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ace8777dda8ac20a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c1949657-d260-41e7-bb7f-22bfda51db95
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c1949657-d260-41e7-bb7f-22bfda51db95
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a82cdf3b-5f09-4f16-93eb-d3fffb95ea55
This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3496 | 0.0032 | 200 | 1.0714 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso05/a3b98087-b7e3-454e-9cf0-f213d1cfa238
|
lesso05
| 2025-01-15T12:44:10Z
| 9
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Vikhrmodels/Vikhr-7B-instruct_0.4",
"base_model:adapter:Vikhrmodels/Vikhr-7B-instruct_0.4",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T12:32:05Z
|
---
library_name: peft
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a3b98087-b7e3-454e-9cf0-f213d1cfa238
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
bf16: true
chat_template: llama3
datasets:
- data_files:
- 0c711a1480f59d37_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0c711a1480f59d37_train_data.json
type:
field_instruction: source
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso05/a3b98087-b7e3-454e-9cf0-f213d1cfa238
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/0c711a1480f59d37_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c88fd593-37b5-4dc0-be79-680dbbc06811
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c88fd593-37b5-4dc0-be79-680dbbc06811
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a3b98087-b7e3-454e-9cf0-f213d1cfa238
This model is a fine-tuned version of [Vikhrmodels/Vikhr-7B-instruct_0.4](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0004 | 1 | nan |
| 0.0 | 0.0021 | 5 | nan |
| 0.0 | 0.0042 | 10 | nan |
| 0.0 | 0.0063 | 15 | nan |
| 0.0 | 0.0084 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k4_task2_organization
|
MayBashendy
| 2025-01-15T12:43:50Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T12:20:24Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k4_task2_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k4_task2_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5702
- Qwk: 0.5276
- Mse: 0.5702
- Rmse: 0.7551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0909 | 2 | 4.1289 | -0.0163 | 4.1289 | 2.0320 |
| No log | 0.1818 | 4 | 2.3335 | 0.0928 | 2.3335 | 1.5276 |
| No log | 0.2727 | 6 | 1.3547 | 0.1304 | 1.3547 | 1.1639 |
| No log | 0.3636 | 8 | 0.8243 | 0.1612 | 0.8243 | 0.9079 |
| No log | 0.4545 | 10 | 0.8167 | 0.1543 | 0.8167 | 0.9037 |
| No log | 0.5455 | 12 | 0.8909 | 0.0600 | 0.8909 | 0.9439 |
| No log | 0.6364 | 14 | 0.8382 | 0.0797 | 0.8382 | 0.9155 |
| No log | 0.7273 | 16 | 0.9294 | 0.0710 | 0.9295 | 0.9641 |
| No log | 0.8182 | 18 | 0.9013 | 0.0364 | 0.9013 | 0.9494 |
| No log | 0.9091 | 20 | 0.7705 | 0.1758 | 0.7705 | 0.8778 |
| No log | 1.0 | 22 | 0.8491 | 0.1837 | 0.8491 | 0.9215 |
| No log | 1.0909 | 24 | 1.0730 | -0.0064 | 1.0730 | 1.0359 |
| No log | 1.1818 | 26 | 0.9685 | 0.1752 | 0.9685 | 0.9841 |
| No log | 1.2727 | 28 | 0.8001 | 0.2758 | 0.8001 | 0.8945 |
| No log | 1.3636 | 30 | 0.6932 | 0.3324 | 0.6932 | 0.8326 |
| No log | 1.4545 | 32 | 0.6469 | 0.3085 | 0.6469 | 0.8043 |
| No log | 1.5455 | 34 | 0.7003 | 0.3825 | 0.7003 | 0.8369 |
| No log | 1.6364 | 36 | 0.9571 | 0.3413 | 0.9571 | 0.9783 |
| No log | 1.7273 | 38 | 0.8095 | 0.3397 | 0.8095 | 0.8997 |
| No log | 1.8182 | 40 | 0.6484 | 0.3199 | 0.6484 | 0.8052 |
| No log | 1.9091 | 42 | 0.6724 | 0.3422 | 0.6724 | 0.8200 |
| No log | 2.0 | 44 | 0.9376 | 0.3737 | 0.9376 | 0.9683 |
| No log | 2.0909 | 46 | 0.9724 | 0.2971 | 0.9724 | 0.9861 |
| No log | 2.1818 | 48 | 0.6714 | 0.3809 | 0.6714 | 0.8194 |
| No log | 2.2727 | 50 | 0.6992 | 0.4827 | 0.6992 | 0.8362 |
| No log | 2.3636 | 52 | 0.6801 | 0.4829 | 0.6801 | 0.8247 |
| No log | 2.4545 | 54 | 0.7007 | 0.3582 | 0.7007 | 0.8371 |
| No log | 2.5455 | 56 | 0.8167 | 0.2229 | 0.8167 | 0.9037 |
| No log | 2.6364 | 58 | 0.8054 | 0.3327 | 0.8054 | 0.8975 |
| No log | 2.7273 | 60 | 0.7523 | 0.4179 | 0.7523 | 0.8674 |
| No log | 2.8182 | 62 | 0.7476 | 0.3990 | 0.7476 | 0.8646 |
| No log | 2.9091 | 64 | 0.7602 | 0.4638 | 0.7602 | 0.8719 |
| No log | 3.0 | 66 | 0.7520 | 0.4320 | 0.7520 | 0.8672 |
| No log | 3.0909 | 68 | 0.7639 | 0.4721 | 0.7639 | 0.8740 |
| No log | 3.1818 | 70 | 0.7820 | 0.4843 | 0.7820 | 0.8843 |
| No log | 3.2727 | 72 | 0.7551 | 0.4786 | 0.7551 | 0.8689 |
| No log | 3.3636 | 74 | 0.7043 | 0.4932 | 0.7043 | 0.8392 |
| No log | 3.4545 | 76 | 0.6817 | 0.5159 | 0.6817 | 0.8257 |
| No log | 3.5455 | 78 | 0.8143 | 0.5015 | 0.8143 | 0.9024 |
| No log | 3.6364 | 80 | 0.7963 | 0.4933 | 0.7963 | 0.8924 |
| No log | 3.7273 | 82 | 0.6655 | 0.4843 | 0.6655 | 0.8158 |
| No log | 3.8182 | 84 | 0.6994 | 0.5257 | 0.6994 | 0.8363 |
| No log | 3.9091 | 86 | 0.7376 | 0.5171 | 0.7376 | 0.8589 |
| No log | 4.0 | 88 | 0.6865 | 0.5336 | 0.6865 | 0.8286 |
| No log | 4.0909 | 90 | 0.6946 | 0.5355 | 0.6946 | 0.8334 |
| No log | 4.1818 | 92 | 0.6506 | 0.5309 | 0.6506 | 0.8066 |
| No log | 4.2727 | 94 | 0.7794 | 0.4715 | 0.7794 | 0.8829 |
| No log | 4.3636 | 96 | 0.9749 | 0.4178 | 0.9749 | 0.9874 |
| No log | 4.4545 | 98 | 0.7843 | 0.4764 | 0.7843 | 0.8856 |
| No log | 4.5455 | 100 | 0.6275 | 0.5193 | 0.6275 | 0.7921 |
| No log | 4.6364 | 102 | 0.6422 | 0.5641 | 0.6422 | 0.8013 |
| No log | 4.7273 | 104 | 0.6895 | 0.5382 | 0.6895 | 0.8303 |
| No log | 4.8182 | 106 | 0.6924 | 0.5332 | 0.6924 | 0.8321 |
| No log | 4.9091 | 108 | 0.7141 | 0.5619 | 0.7141 | 0.8451 |
| No log | 5.0 | 110 | 0.7168 | 0.5298 | 0.7168 | 0.8466 |
| No log | 5.0909 | 112 | 0.7726 | 0.5584 | 0.7726 | 0.8790 |
| No log | 5.1818 | 114 | 0.6981 | 0.5476 | 0.6981 | 0.8355 |
| No log | 5.2727 | 116 | 0.7291 | 0.5198 | 0.7291 | 0.8539 |
| No log | 5.3636 | 118 | 0.6943 | 0.5069 | 0.6943 | 0.8332 |
| No log | 5.4545 | 120 | 0.6622 | 0.5123 | 0.6622 | 0.8138 |
| No log | 5.5455 | 122 | 0.6204 | 0.5496 | 0.6204 | 0.7877 |
| No log | 5.6364 | 124 | 0.6236 | 0.4926 | 0.6236 | 0.7897 |
| No log | 5.7273 | 126 | 0.6298 | 0.5655 | 0.6298 | 0.7936 |
| No log | 5.8182 | 128 | 0.6542 | 0.5416 | 0.6542 | 0.8088 |
| No log | 5.9091 | 130 | 0.6566 | 0.5531 | 0.6566 | 0.8103 |
| No log | 6.0 | 132 | 0.6911 | 0.5088 | 0.6911 | 0.8313 |
| No log | 6.0909 | 134 | 0.6520 | 0.5248 | 0.6520 | 0.8075 |
| No log | 6.1818 | 136 | 0.6579 | 0.5327 | 0.6579 | 0.8111 |
| No log | 6.2727 | 138 | 0.6381 | 0.5191 | 0.6381 | 0.7988 |
| No log | 6.3636 | 140 | 0.7690 | 0.5085 | 0.7690 | 0.8769 |
| No log | 6.4545 | 142 | 0.7417 | 0.5101 | 0.7417 | 0.8612 |
| No log | 6.5455 | 144 | 0.7300 | 0.5151 | 0.7300 | 0.8544 |
| No log | 6.6364 | 146 | 0.6336 | 0.5285 | 0.6336 | 0.7960 |
| No log | 6.7273 | 148 | 0.6141 | 0.5239 | 0.6141 | 0.7836 |
| No log | 6.8182 | 150 | 0.6567 | 0.4857 | 0.6567 | 0.8104 |
| No log | 6.9091 | 152 | 0.8897 | 0.4287 | 0.8897 | 0.9432 |
| No log | 7.0 | 154 | 0.8958 | 0.4398 | 0.8958 | 0.9465 |
| No log | 7.0909 | 156 | 0.6126 | 0.5023 | 0.6126 | 0.7827 |
| No log | 7.1818 | 158 | 0.6669 | 0.4960 | 0.6669 | 0.8166 |
| No log | 7.2727 | 160 | 0.7190 | 0.5039 | 0.7190 | 0.8479 |
| No log | 7.3636 | 162 | 0.5995 | 0.5081 | 0.5995 | 0.7742 |
| No log | 7.4545 | 164 | 0.6595 | 0.4752 | 0.6595 | 0.8121 |
| No log | 7.5455 | 166 | 0.6808 | 0.4536 | 0.6808 | 0.8251 |
| No log | 7.6364 | 168 | 0.6300 | 0.5618 | 0.6300 | 0.7937 |
| No log | 7.7273 | 170 | 0.8434 | 0.4557 | 0.8434 | 0.9184 |
| No log | 7.8182 | 172 | 0.9728 | 0.4074 | 0.9728 | 0.9863 |
| No log | 7.9091 | 174 | 0.7747 | 0.4881 | 0.7747 | 0.8802 |
| No log | 8.0 | 176 | 0.6230 | 0.5875 | 0.6230 | 0.7893 |
| No log | 8.0909 | 178 | 0.6791 | 0.4978 | 0.6791 | 0.8241 |
| No log | 8.1818 | 180 | 0.6281 | 0.4657 | 0.6281 | 0.7925 |
| No log | 8.2727 | 182 | 0.5617 | 0.5652 | 0.5617 | 0.7495 |
| No log | 8.3636 | 184 | 0.5606 | 0.5505 | 0.5606 | 0.7487 |
| No log | 8.4545 | 186 | 0.5504 | 0.5796 | 0.5504 | 0.7419 |
| No log | 8.5455 | 188 | 0.5510 | 0.5837 | 0.5510 | 0.7423 |
| No log | 8.6364 | 190 | 0.5509 | 0.5607 | 0.5509 | 0.7423 |
| No log | 8.7273 | 192 | 0.5612 | 0.6065 | 0.5612 | 0.7491 |
| No log | 8.8182 | 194 | 0.5691 | 0.5554 | 0.5691 | 0.7544 |
| No log | 8.9091 | 196 | 0.5771 | 0.5593 | 0.5771 | 0.7596 |
| No log | 9.0 | 198 | 0.5954 | 0.5587 | 0.5954 | 0.7716 |
| No log | 9.0909 | 200 | 0.5969 | 0.5646 | 0.5969 | 0.7726 |
| No log | 9.1818 | 202 | 0.6034 | 0.5788 | 0.6034 | 0.7768 |
| No log | 9.2727 | 204 | 0.5833 | 0.5182 | 0.5833 | 0.7637 |
| No log | 9.3636 | 206 | 0.5934 | 0.5093 | 0.5934 | 0.7703 |
| No log | 9.4545 | 208 | 0.6044 | 0.5188 | 0.6044 | 0.7774 |
| No log | 9.5455 | 210 | 0.6236 | 0.5380 | 0.6236 | 0.7897 |
| No log | 9.6364 | 212 | 0.5629 | 0.5505 | 0.5629 | 0.7503 |
| No log | 9.7273 | 214 | 0.6084 | 0.5056 | 0.6084 | 0.7800 |
| No log | 9.8182 | 216 | 0.6008 | 0.5297 | 0.6008 | 0.7751 |
| No log | 9.9091 | 218 | 0.5704 | 0.5538 | 0.5704 | 0.7552 |
| No log | 10.0 | 220 | 0.7712 | 0.4946 | 0.7712 | 0.8782 |
| No log | 10.0909 | 222 | 0.8232 | 0.4794 | 0.8232 | 0.9073 |
| No log | 10.1818 | 224 | 0.6717 | 0.5239 | 0.6717 | 0.8196 |
| No log | 10.2727 | 226 | 0.5981 | 0.5822 | 0.5981 | 0.7734 |
| No log | 10.3636 | 228 | 0.6178 | 0.5023 | 0.6178 | 0.7860 |
| No log | 10.4545 | 230 | 0.5906 | 0.5396 | 0.5906 | 0.7685 |
| No log | 10.5455 | 232 | 0.5917 | 0.5563 | 0.5917 | 0.7692 |
| No log | 10.6364 | 234 | 0.6051 | 0.5379 | 0.6051 | 0.7779 |
| No log | 10.7273 | 236 | 0.6386 | 0.5126 | 0.6386 | 0.7991 |
| No log | 10.8182 | 238 | 0.6946 | 0.5119 | 0.6946 | 0.8334 |
| No log | 10.9091 | 240 | 0.6808 | 0.5139 | 0.6808 | 0.8251 |
| No log | 11.0 | 242 | 0.6435 | 0.5568 | 0.6435 | 0.8022 |
| No log | 11.0909 | 244 | 0.6000 | 0.6099 | 0.6000 | 0.7746 |
| No log | 11.1818 | 246 | 0.6166 | 0.4752 | 0.6166 | 0.7852 |
| No log | 11.2727 | 248 | 0.6222 | 0.4687 | 0.6222 | 0.7888 |
| No log | 11.3636 | 250 | 0.6081 | 0.4735 | 0.6081 | 0.7798 |
| No log | 11.4545 | 252 | 0.5607 | 0.5700 | 0.5607 | 0.7488 |
| No log | 11.5455 | 254 | 0.5646 | 0.5525 | 0.5646 | 0.7514 |
| No log | 11.6364 | 256 | 0.6002 | 0.5763 | 0.6002 | 0.7747 |
| No log | 11.7273 | 258 | 0.6733 | 0.5753 | 0.6733 | 0.8205 |
| No log | 11.8182 | 260 | 0.6699 | 0.5775 | 0.6699 | 0.8185 |
| No log | 11.9091 | 262 | 0.5880 | 0.5019 | 0.5880 | 0.7668 |
| No log | 12.0 | 264 | 0.5843 | 0.5121 | 0.5843 | 0.7644 |
| No log | 12.0909 | 266 | 0.6040 | 0.5077 | 0.6040 | 0.7772 |
| No log | 12.1818 | 268 | 0.6133 | 0.4956 | 0.6133 | 0.7831 |
| No log | 12.2727 | 270 | 0.6544 | 0.5354 | 0.6544 | 0.8089 |
| No log | 12.3636 | 272 | 0.6838 | 0.5656 | 0.6838 | 0.8269 |
| No log | 12.4545 | 274 | 0.7079 | 0.5411 | 0.7079 | 0.8414 |
| No log | 12.5455 | 276 | 0.7326 | 0.5266 | 0.7326 | 0.8559 |
| No log | 12.6364 | 278 | 0.7464 | 0.5266 | 0.7464 | 0.8640 |
| No log | 12.7273 | 280 | 0.6757 | 0.5393 | 0.6757 | 0.8220 |
| No log | 12.8182 | 282 | 0.6755 | 0.4877 | 0.6755 | 0.8219 |
| No log | 12.9091 | 284 | 0.6310 | 0.5049 | 0.6310 | 0.7944 |
| No log | 13.0 | 286 | 0.5879 | 0.5054 | 0.5879 | 0.7667 |
| No log | 13.0909 | 288 | 0.6253 | 0.5079 | 0.6253 | 0.7907 |
| No log | 13.1818 | 290 | 0.6147 | 0.5212 | 0.6147 | 0.7840 |
| No log | 13.2727 | 292 | 0.6433 | 0.5331 | 0.6433 | 0.8021 |
| No log | 13.3636 | 294 | 0.6557 | 0.5281 | 0.6557 | 0.8098 |
| No log | 13.4545 | 296 | 0.6136 | 0.5360 | 0.6136 | 0.7833 |
| No log | 13.5455 | 298 | 0.6470 | 0.4562 | 0.6470 | 0.8044 |
| No log | 13.6364 | 300 | 0.6222 | 0.5155 | 0.6222 | 0.7888 |
| No log | 13.7273 | 302 | 0.6224 | 0.5463 | 0.6224 | 0.7889 |
| No log | 13.8182 | 304 | 0.6831 | 0.5348 | 0.6831 | 0.8265 |
| No log | 13.9091 | 306 | 0.7109 | 0.4983 | 0.7109 | 0.8432 |
| No log | 14.0 | 308 | 0.8132 | 0.5264 | 0.8132 | 0.9018 |
| No log | 14.0909 | 310 | 0.8106 | 0.5281 | 0.8106 | 0.9003 |
| No log | 14.1818 | 312 | 0.8155 | 0.5224 | 0.8155 | 0.9030 |
| No log | 14.2727 | 314 | 0.8112 | 0.5047 | 0.8112 | 0.9007 |
| No log | 14.3636 | 316 | 0.7349 | 0.5357 | 0.7349 | 0.8573 |
| No log | 14.4545 | 318 | 0.6635 | 0.5399 | 0.6635 | 0.8146 |
| No log | 14.5455 | 320 | 0.6214 | 0.5192 | 0.6214 | 0.7883 |
| No log | 14.6364 | 322 | 0.5802 | 0.4865 | 0.5802 | 0.7617 |
| No log | 14.7273 | 324 | 0.5878 | 0.4999 | 0.5878 | 0.7667 |
| No log | 14.8182 | 326 | 0.6374 | 0.4750 | 0.6374 | 0.7984 |
| No log | 14.9091 | 328 | 0.6116 | 0.5086 | 0.6116 | 0.7820 |
| No log | 15.0 | 330 | 0.5774 | 0.5774 | 0.5774 | 0.7599 |
| No log | 15.0909 | 332 | 0.5823 | 0.5630 | 0.5823 | 0.7631 |
| No log | 15.1818 | 334 | 0.5991 | 0.5491 | 0.5991 | 0.7740 |
| No log | 15.2727 | 336 | 0.6026 | 0.5310 | 0.6026 | 0.7763 |
| No log | 15.3636 | 338 | 0.6061 | 0.5100 | 0.6061 | 0.7785 |
| No log | 15.4545 | 340 | 0.6039 | 0.5768 | 0.6039 | 0.7771 |
| No log | 15.5455 | 342 | 0.5981 | 0.5768 | 0.5981 | 0.7734 |
| No log | 15.6364 | 344 | 0.5939 | 0.5581 | 0.5939 | 0.7707 |
| No log | 15.7273 | 346 | 0.5895 | 0.5097 | 0.5895 | 0.7678 |
| No log | 15.8182 | 348 | 0.6569 | 0.4596 | 0.6569 | 0.8105 |
| No log | 15.9091 | 350 | 0.7011 | 0.4467 | 0.7011 | 0.8373 |
| No log | 16.0 | 352 | 0.6249 | 0.4118 | 0.6249 | 0.7905 |
| No log | 16.0909 | 354 | 0.6025 | 0.5829 | 0.6025 | 0.7762 |
| No log | 16.1818 | 356 | 0.6402 | 0.5698 | 0.6402 | 0.8001 |
| No log | 16.2727 | 358 | 0.6214 | 0.5536 | 0.6214 | 0.7883 |
| No log | 16.3636 | 360 | 0.6119 | 0.5828 | 0.6119 | 0.7822 |
| No log | 16.4545 | 362 | 0.6061 | 0.5514 | 0.6061 | 0.7785 |
| No log | 16.5455 | 364 | 0.6042 | 0.5487 | 0.6042 | 0.7773 |
| No log | 16.6364 | 366 | 0.6121 | 0.5191 | 0.6121 | 0.7824 |
| No log | 16.7273 | 368 | 0.6307 | 0.4672 | 0.6307 | 0.7942 |
| No log | 16.8182 | 370 | 0.6117 | 0.4934 | 0.6117 | 0.7821 |
| No log | 16.9091 | 372 | 0.6233 | 0.5046 | 0.6233 | 0.7895 |
| No log | 17.0 | 374 | 0.6423 | 0.5361 | 0.6423 | 0.8014 |
| No log | 17.0909 | 376 | 0.6115 | 0.5180 | 0.6115 | 0.7820 |
| No log | 17.1818 | 378 | 0.5932 | 0.4978 | 0.5932 | 0.7702 |
| No log | 17.2727 | 380 | 0.5943 | 0.4733 | 0.5943 | 0.7709 |
| No log | 17.3636 | 382 | 0.5906 | 0.4686 | 0.5906 | 0.7685 |
| No log | 17.4545 | 384 | 0.5974 | 0.5132 | 0.5974 | 0.7729 |
| No log | 17.5455 | 386 | 0.6258 | 0.5490 | 0.6258 | 0.7911 |
| No log | 17.6364 | 388 | 0.6177 | 0.5638 | 0.6177 | 0.7860 |
| No log | 17.7273 | 390 | 0.5997 | 0.4885 | 0.5997 | 0.7744 |
| No log | 17.8182 | 392 | 0.6634 | 0.4593 | 0.6634 | 0.8145 |
| No log | 17.9091 | 394 | 0.6374 | 0.4895 | 0.6374 | 0.7983 |
| No log | 18.0 | 396 | 0.5936 | 0.5303 | 0.5936 | 0.7705 |
| No log | 18.0909 | 398 | 0.6190 | 0.5291 | 0.6190 | 0.7867 |
| No log | 18.1818 | 400 | 0.5982 | 0.5314 | 0.5982 | 0.7734 |
| No log | 18.2727 | 402 | 0.5839 | 0.5270 | 0.5839 | 0.7641 |
| No log | 18.3636 | 404 | 0.5840 | 0.5200 | 0.5840 | 0.7642 |
| No log | 18.4545 | 406 | 0.6128 | 0.5291 | 0.6128 | 0.7828 |
| No log | 18.5455 | 408 | 0.6391 | 0.5301 | 0.6391 | 0.7994 |
| No log | 18.6364 | 410 | 0.6248 | 0.5260 | 0.6248 | 0.7905 |
| No log | 18.7273 | 412 | 0.6061 | 0.5223 | 0.6061 | 0.7785 |
| No log | 18.8182 | 414 | 0.6178 | 0.5208 | 0.6178 | 0.7860 |
| No log | 18.9091 | 416 | 0.6389 | 0.5228 | 0.6389 | 0.7993 |
| No log | 19.0 | 418 | 0.6360 | 0.5523 | 0.6360 | 0.7975 |
| No log | 19.0909 | 420 | 0.6266 | 0.5758 | 0.6266 | 0.7916 |
| No log | 19.1818 | 422 | 0.6141 | 0.6061 | 0.6141 | 0.7836 |
| No log | 19.2727 | 424 | 0.6449 | 0.5323 | 0.6449 | 0.8031 |
| No log | 19.3636 | 426 | 0.7252 | 0.5203 | 0.7252 | 0.8516 |
| No log | 19.4545 | 428 | 0.6828 | 0.5255 | 0.6828 | 0.8263 |
| No log | 19.5455 | 430 | 0.5977 | 0.5641 | 0.5977 | 0.7731 |
| No log | 19.6364 | 432 | 0.5565 | 0.5570 | 0.5565 | 0.7460 |
| No log | 19.7273 | 434 | 0.5654 | 0.6175 | 0.5654 | 0.7519 |
| No log | 19.8182 | 436 | 0.5554 | 0.5818 | 0.5554 | 0.7453 |
| No log | 19.9091 | 438 | 0.5352 | 0.5562 | 0.5352 | 0.7316 |
| No log | 20.0 | 440 | 0.5657 | 0.4958 | 0.5657 | 0.7521 |
| No log | 20.0909 | 442 | 0.5662 | 0.4867 | 0.5662 | 0.7524 |
| No log | 20.1818 | 444 | 0.5347 | 0.5535 | 0.5347 | 0.7312 |
| No log | 20.2727 | 446 | 0.5350 | 0.5589 | 0.5350 | 0.7315 |
| No log | 20.3636 | 448 | 0.5352 | 0.5453 | 0.5352 | 0.7316 |
| No log | 20.4545 | 450 | 0.5375 | 0.5543 | 0.5375 | 0.7332 |
| No log | 20.5455 | 452 | 0.5444 | 0.5676 | 0.5444 | 0.7378 |
| No log | 20.6364 | 454 | 0.5717 | 0.6178 | 0.5717 | 0.7561 |
| No log | 20.7273 | 456 | 0.5784 | 0.5864 | 0.5784 | 0.7605 |
| No log | 20.8182 | 458 | 0.5722 | 0.5709 | 0.5722 | 0.7564 |
| No log | 20.9091 | 460 | 0.6167 | 0.5341 | 0.6167 | 0.7853 |
| No log | 21.0 | 462 | 0.5992 | 0.5503 | 0.5992 | 0.7741 |
| No log | 21.0909 | 464 | 0.5725 | 0.5087 | 0.5725 | 0.7566 |
| No log | 21.1818 | 466 | 0.5554 | 0.5020 | 0.5554 | 0.7453 |
| No log | 21.2727 | 468 | 0.5592 | 0.4928 | 0.5592 | 0.7478 |
| No log | 21.3636 | 470 | 0.5891 | 0.4863 | 0.5891 | 0.7675 |
| No log | 21.4545 | 472 | 0.6507 | 0.4865 | 0.6507 | 0.8067 |
| No log | 21.5455 | 474 | 0.6651 | 0.5132 | 0.6651 | 0.8155 |
| No log | 21.6364 | 476 | 0.6183 | 0.4811 | 0.6183 | 0.7863 |
| No log | 21.7273 | 478 | 0.5829 | 0.5549 | 0.5829 | 0.7635 |
| No log | 21.8182 | 480 | 0.6007 | 0.5189 | 0.6007 | 0.7751 |
| No log | 21.9091 | 482 | 0.5915 | 0.5553 | 0.5915 | 0.7691 |
| No log | 22.0 | 484 | 0.5931 | 0.4946 | 0.5931 | 0.7701 |
| No log | 22.0909 | 486 | 0.5943 | 0.5091 | 0.5943 | 0.7709 |
| No log | 22.1818 | 488 | 0.6062 | 0.5314 | 0.6062 | 0.7786 |
| No log | 22.2727 | 490 | 0.5855 | 0.5155 | 0.5855 | 0.7652 |
| No log | 22.3636 | 492 | 0.5791 | 0.5081 | 0.5791 | 0.7610 |
| No log | 22.4545 | 494 | 0.5685 | 0.5339 | 0.5685 | 0.7540 |
| No log | 22.5455 | 496 | 0.5644 | 0.5155 | 0.5644 | 0.7512 |
| No log | 22.6364 | 498 | 0.5690 | 0.5502 | 0.5690 | 0.7544 |
| 0.3089 | 22.7273 | 500 | 0.5745 | 0.5685 | 0.5745 | 0.7579 |
| 0.3089 | 22.8182 | 502 | 0.5815 | 0.5549 | 0.5815 | 0.7625 |
| 0.3089 | 22.9091 | 504 | 0.5910 | 0.5373 | 0.5910 | 0.7688 |
| 0.3089 | 23.0 | 506 | 0.5781 | 0.5402 | 0.5781 | 0.7603 |
| 0.3089 | 23.0909 | 508 | 0.5665 | 0.5440 | 0.5665 | 0.7526 |
| 0.3089 | 23.1818 | 510 | 0.5702 | 0.5276 | 0.5702 | 0.7551 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
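The Qwk, Mse, and Rmse columns in the training results table above can be reproduced with standard metrics. The exact evaluation code is not included in this card, so the sketch below is only a plausible reconstruction: function names are illustrative, and it assumes ratings are rounded to integers before computing quadratic weighted kappa.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def eval_metrics(labels, preds):
    """Qwk / Mse / Rmse as reported in the training results table."""
    labels = np.asarray(labels, dtype=float)
    preds = np.asarray(preds, dtype=float)
    # Quadratic weighted kappa is defined over discrete ratings, so round first.
    qwk = cohen_kappa_score(labels.round().astype(int), preds.round().astype(int), weights="quadratic")
    mse = mean_squared_error(labels, preds)
    return {"qwk": qwk, "mse": mse, "rmse": float(np.sqrt(mse))}

# Perfect agreement gives qwk = 1.0 and mse = rmse = 0.0.
print(eval_metrics([0, 1, 2, 3], [0, 1, 2, 3]))
```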
|
phungkhaccuong/568ec1ab-a12a-4b2f-913f-a3d1fdbd003e
|
phungkhaccuong
| 2025-01-15T12:42:49Z
| 12
| 0
|
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"license:gemma",
"region:us"
] | null | 2025-01-15T12:33:01Z
|
---
library_name: peft
license: gemma
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 568ec1ab-a12a-4b2f-913f-a3d1fdbd003e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 34a8ecc953c9036c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/34a8ecc953c9036c_train_data.json
type:
field_input: title_main
field_instruction: texte
field_output: texteHtml
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: phungkhaccuong/568ec1ab-a12a-4b2f-913f-a3d1fdbd003e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/34a8ecc953c9036c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4cab3fe6-17ad-4a26-9515-f652b7692aed
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4cab3fe6-17ad-4a26-9515-f652b7692aed
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 568ec1ab-a12a-4b2f-913f-a3d1fdbd003e
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
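For reference, the total train batch size listed above follows from the per-device batch size and gradient accumulation, assuming a single device:

$$
\text{total\_train\_batch\_size} = \text{train\_batch\_size} \times \text{gradient\_accumulation\_steps} \times \text{num\_devices} = 2 \times 4 \times 1 = 8
$$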
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0012 | 1 | 0.8833 |
| 0.6431 | 0.0123 | 10 | 0.4092 |
| 0.2016 | 0.0245 | 20 | 0.1782 |
| 0.1453 | 0.0368 | 30 | 0.1392 |
| 0.129 | 0.0491 | 40 | 0.1237 |
| 0.1143 | 0.0613 | 50 | 0.1213 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
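Since this checkpoint is a LoRA adapter, it would normally be loaded on top of its base model with PEFT. A minimal sketch, assuming the standard `transformers` + `peft` loading path; repo ids are taken from the card above, and the prompt and generation settings are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "zake7749/gemma-2-2b-it-chinese-kyara-dpo"                # base model from the card
adapter_id = "phungkhaccuong/568ec1ab-a12a-4b2f-913f-a3d1fdbd003e"  # this LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned LoRA weights

prompt = "..."  # an instruction formatted as in the axolotl config above
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```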
|
duyphu/9a662880-dd97-4b40-ae74-f6c55cb8b3de
|
duyphu
| 2025-01-15T12:41:56Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:adapter:Qwen/Qwen1.5-0.5B-Chat",
"license:other",
"region:us"
] | null | 2025-01-15T12:39:01Z
|
---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9a662880-dd97-4b40-ae74-f6c55cb8b3de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 311f161d394c0f20_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/311f161d394c0f20_train_data.json
type:
field_input: answer_1
field_instruction: question
field_output: answer_2
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/9a662880-dd97-4b40-ae74-f6c55cb8b3de
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/311f161d394c0f20_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d87d5d3f-f10b-4275-a4e4-bfe0eaa3d151
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d87d5d3f-f10b-4275-a4e4-bfe0eaa3d151
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9a662880-dd97-4b40-ae74-f6c55cb8b3de
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0043 | 1 | 2.7217 |
| 2.5721 | 0.0434 | 10 | 2.6530 |
| 2.6293 | 0.0869 | 20 | 2.5852 |
| 2.3779 | 0.1303 | 30 | 2.5555 |
| 2.3164 | 0.1737 | 40 | 2.5445 |
| 2.5845 | 0.2172 | 50 | 2.5425 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
bustamiyusoef/_Arabic_nougat_AHR
|
bustamiyusoef
| 2025-01-15T12:41:46Z
| 14
| 0
|
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:MohamedRashad/arabic-base-nougat",
"base_model:finetune:MohamedRashad/arabic-base-nougat",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-01-15T12:40:21Z
|
---
library_name: transformers
license: gpl-3.0
base_model: MohamedRashad/arabic-base-nougat
tags:
- generated_from_trainer
model-index:
- name: _Arabic_nougat_AHR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# _Arabic_nougat_AHR
This model is a fine-tuned version of [MohamedRashad/arabic-base-nougat](https://huggingface.co/MohamedRashad/arabic-base-nougat) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 48
- optimizer: adamw_torch (AdamW) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.3594 | 1.0 | 77 | 0.3758 |
| 1.6401 | 2.0 | 154 | 0.3143 |
| 1.4332 | 3.0 | 231 | 0.2986 |
| 0.9917 | 4.0 | 308 | 0.2993 |
| 0.8202 | 5.0 | 385 | 0.3082 |
| 0.669 | 6.0 | 462 | 0.3103 |
| 0.5788 | 7.0 | 539 | 0.3233 |
| 0.4873 | 8.0 | 616 | 0.3336 |
| 0.4865 | 9.0 | 693 | 0.3366 |
| 0.3386 | 10.0 | 770 | 0.3503 |
| 0.3643 | 11.0 | 847 | 0.3476 |
| 0.3229 | 12.0 | 924 | 0.3546 |
| 0.3406 | 13.0 | 1001 | 0.3536 |
| 0.331 | 14.0 | 1078 | 0.3534 |
| 0.3016 | 14.8140 | 1140 | 0.3533 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
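A minimal OCR sketch for this vision encoder-decoder checkpoint, assuming the repository ships a Nougat-style processor alongside the model weights; the image path and generation length are illustrative.

```python
from PIL import Image
from transformers import NougatProcessor, VisionEncoderDecoderModel

repo_id = "bustamiyusoef/_Arabic_nougat_AHR"  # repo id from this entry; processor availability is an assumption
processor = NougatProcessor.from_pretrained(repo_id)
model = VisionEncoderDecoderModel.from_pretrained(repo_id)

image = Image.open("page.png").convert("RGB")  # a scanned page to transcribe
pixel_values = processor(images=image, return_tensors="pt").pixel_values
outputs = model.generate(pixel_values, max_new_tokens=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```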
|
lesso03/3fceefbb-ef0f-4e97-afe5-1fd1691eb752
|
lesso03
| 2025-01-15T12:41:32Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T12:34:57Z
|
---
library_name: peft
license: gemma
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3fceefbb-ef0f-4e97-afe5-1fd1691eb752
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
bf16: true
chat_template: llama3
datasets:
- data_files:
- 34a8ecc953c9036c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/34a8ecc953c9036c_train_data.json
type:
field_input: title_main
field_instruction: texte
field_output: texteHtml
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso03/3fceefbb-ef0f-4e97-afe5-1fd1691eb752
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/34a8ecc953c9036c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4cab3fe6-17ad-4a26-9515-f652b7692aed
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4cab3fe6-17ad-4a26-9515-f652b7692aed
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3fceefbb-ef0f-4e97-afe5-1fd1691eb752
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1601
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6828 | 0.0012 | 1 | 0.8920 |
| 0.6502 | 0.0061 | 5 | 0.6842 |
| 0.4316 | 0.0123 | 10 | 0.2525 |
| 0.1992 | 0.0184 | 15 | 0.1788 |
| 0.1733 | 0.0245 | 20 | 0.1621 |
| 0.1932 | 0.0307 | 25 | 0.1601 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Best000/2fd95b9e-76a3-44ff-8ec5-6dfff15f2c7f
|
Best000
| 2025-01-15T12:40:22Z
| 15
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Vikhrmodels/Vikhr-7B-instruct_0.4",
"base_model:adapter:Vikhrmodels/Vikhr-7B-instruct_0.4",
"region:us"
] | null | 2025-01-15T12:36:37Z
|
---
library_name: peft
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2fd95b9e-76a3-44ff-8ec5-6dfff15f2c7f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0c711a1480f59d37_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0c711a1480f59d37_train_data.json
type:
field_instruction: source
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/2fd95b9e-76a3-44ff-8ec5-6dfff15f2c7f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/0c711a1480f59d37_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c88fd593-37b5-4dc0-be79-680dbbc06811
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c88fd593-37b5-4dc0-be79-680dbbc06811
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2fd95b9e-76a3-44ff-8ec5-6dfff15f2c7f
This model is a fine-tuned version of [Vikhrmodels/Vikhr-7B-instruct_0.4](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0004 | 1 | nan |
| 0.0 | 0.0013 | 3 | nan |
| 0.0 | 0.0025 | 6 | nan |
| 0.0 | 0.0038 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
tensorwa/c3_q5_1
|
tensorwa
| 2025-01-15T12:40:18Z
| 51
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-15T12:36:52Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
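No snippet is provided, so the following is only a minimal sketch for a causal-LM checkpoint of this kind; the repo id is taken from this entry, and it is an assumption that the weights load directly with `AutoModelForCausalLM`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "tensorwa/c3_q5_1"  # repo id from this entry; loading details are an assumption
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```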
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
duyphu/380bc76e-f068-4938-8563-66819cf1d4af
|
duyphu
| 2025-01-15T12:37:56Z
| 14
| 0
|
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
] | null | 2025-01-15T12:23:11Z
|
---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-128k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 380bc76e-f068-4938-8563-66819cf1d4af
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-128k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a46d02fc217c0509_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a46d02fc217c0509_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/380bc76e-f068-4938-8563-66819cf1d4af
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/a46d02fc217c0509_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: da5a59e4-b4e2-40cf-906c-59fc8d44af6a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: da5a59e4-b4e2-40cf-906c-59fc8d44af6a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 380bc76e-f068-4938-8563-66819cf1d4af
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 5.1141 |
| 19.9015 | 0.0068 | 10 | 4.7637 |
| 15.6149 | 0.0137 | 20 | 3.2703 |
| 9.8552 | 0.0205 | 30 | 2.5106 |
| 7.8637 | 0.0274 | 40 | 2.3464 |
| 10.2621 | 0.0342 | 50 | 2.2988 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
John6666/naixl-mmmmix-v45-sdxl
|
John6666
| 2025-01-15T12:37:46Z
| 393
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"detail",
"color",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.0",
"base_model:finetune:Laxhar/noobai-XL-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-01-15T12:32:01Z
|
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- detail
- color
- illustrious
base_model: Laxhar/noobai-XL-1.0
---
The original model is [here](https://civitai.com/models/997769/naixlmmmmix?modelVersionId=1286203).
This model was created by [xiaofan941297](https://civitai.com/user/xiaofan941297).
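A minimal text-to-image sketch, assuming the repository loads with the standard `StableDiffusionXLPipeline` listed in its tags; the prompt and sampler settings are illustrative.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/naixl-mmmmix-v45-sdxl",
    torch_dtype=torch.float16,
).to("cuda")  # assumes a CUDA-capable GPU

image = pipe(
    "1girl, detailed illustration, vibrant colors",  # illustrative prompt
    num_inference_steps=28,
    guidance_scale=5.0,
).images[0]
image.save("sample.png")
```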
|
ClarenceDan/fd4e7993-fd51-4e4c-967c-2c3418dace26
|
ClarenceDan
| 2025-01-15T12:36:23Z
| 15
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-15T12:26:38Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fd4e7993-fd51-4e4c-967c-2c3418dace26
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8492dab84eb4a872_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8492dab84eb4a872_train_data.json
type:
field_input: Abstract
field_instruction: Title
field_output: Hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/fd4e7993-fd51-4e4c-967c-2c3418dace26
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/8492dab84eb4a872_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bb4dd990-a545-4b1a-8ec7-8e5c79e135a9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bb4dd990-a545-4b1a-8ec7-8e5c79e135a9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fd4e7993-fd51-4e4c-967c-2c3418dace26
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 4.1641 | 0.0002 | 3 | nan |
| 6.308 | 0.0004 | 6 | nan |
| 3.1081 | 0.0005 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kostiantynk-out/ffeb230c-753e-4b7d-b713-b1159dd0866b
|
kostiantynk-out
| 2025-01-15T12:33:56Z
| 18
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"license:llama3",
"region:us"
] | null | 2025-01-15T12:19:39Z
|
---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ffeb230c-753e-4b7d-b713-b1159dd0866b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22898ffc4e984aab_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22898ffc4e984aab_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk-out/ffeb230c-753e-4b7d-b713-b1159dd0866b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/22898ffc4e984aab_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 537ea366-0d00-4215-a8bf-12dd9968edce
wandb_project: Mine-SN56-1-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 537ea366-0d00-4215-a8bf-12dd9968edce
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ffeb230c-753e-4b7d-b713-b1159dd0866b
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0002 | 3 | nan |
| 0.0 | 0.0004 | 6 | nan |
| 0.0 | 0.0006 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k3_task1_organization
|
MayBashendy
| 2025-01-15T12:33:54Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T12:16:01Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k3_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k3_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6083
- Qwk: 0.7613
- Mse: 0.6083
- Rmse: 0.7800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0870 | 2 | 6.7965 | 0.0242 | 6.7965 | 2.6070 |
| No log | 0.1739 | 4 | 5.2676 | 0.0366 | 5.2676 | 2.2951 |
| No log | 0.2609 | 6 | 3.0917 | 0.0595 | 3.0917 | 1.7583 |
| No log | 0.3478 | 8 | 2.5803 | 0.0132 | 2.5803 | 1.6063 |
| No log | 0.4348 | 10 | 3.5005 | 0.0787 | 3.5005 | 1.8710 |
| No log | 0.5217 | 12 | 2.6461 | 0.0141 | 2.6461 | 1.6267 |
| No log | 0.6087 | 14 | 1.7385 | 0.2478 | 1.7385 | 1.3185 |
| No log | 0.6957 | 16 | 1.6891 | 0.0583 | 1.6891 | 1.2996 |
| No log | 0.7826 | 18 | 1.6908 | 0.1296 | 1.6908 | 1.3003 |
| No log | 0.8696 | 20 | 1.6275 | 0.1308 | 1.6275 | 1.2757 |
| No log | 0.9565 | 22 | 1.4872 | 0.1538 | 1.4872 | 1.2195 |
| No log | 1.0435 | 24 | 1.5368 | 0.3158 | 1.5368 | 1.2397 |
| No log | 1.1304 | 26 | 1.3818 | 0.3158 | 1.3818 | 1.1755 |
| No log | 1.2174 | 28 | 1.2881 | 0.3932 | 1.2881 | 1.1349 |
| No log | 1.3043 | 30 | 1.1907 | 0.3621 | 1.1907 | 1.0912 |
| No log | 1.3913 | 32 | 1.1694 | 0.3652 | 1.1694 | 1.0814 |
| No log | 1.4783 | 34 | 1.1661 | 0.3214 | 1.1661 | 1.0798 |
| No log | 1.5652 | 36 | 1.0867 | 0.4138 | 1.0867 | 1.0425 |
| No log | 1.6522 | 38 | 1.0225 | 0.4959 | 1.0225 | 1.0112 |
| No log | 1.7391 | 40 | 0.9464 | 0.6970 | 0.9464 | 0.9728 |
| No log | 1.8261 | 42 | 0.8481 | 0.7482 | 0.8481 | 0.9209 |
| No log | 1.9130 | 44 | 0.7664 | 0.7260 | 0.7664 | 0.8754 |
| No log | 2.0 | 46 | 0.7207 | 0.7651 | 0.7207 | 0.8489 |
| No log | 2.0870 | 48 | 0.9562 | 0.5926 | 0.9562 | 0.9779 |
| No log | 2.1739 | 50 | 1.2811 | 0.5224 | 1.2811 | 1.1319 |
| No log | 2.2609 | 52 | 1.1457 | 0.5481 | 1.1457 | 1.0704 |
| No log | 2.3478 | 54 | 1.0695 | 0.5630 | 1.0695 | 1.0342 |
| No log | 2.4348 | 56 | 1.2533 | 0.5180 | 1.2533 | 1.1195 |
| No log | 2.5217 | 58 | 1.4337 | 0.4714 | 1.4337 | 1.1974 |
| No log | 2.6087 | 60 | 1.1399 | 0.5693 | 1.1399 | 1.0677 |
| No log | 2.6957 | 62 | 0.8858 | 0.5926 | 0.8858 | 0.9412 |
| No log | 2.7826 | 64 | 0.9129 | 0.6567 | 0.9129 | 0.9555 |
| No log | 2.8696 | 66 | 1.2352 | 0.5 | 1.2352 | 1.1114 |
| No log | 2.9565 | 68 | 1.1034 | 0.5606 | 1.1034 | 1.0504 |
| No log | 3.0435 | 70 | 0.9522 | 0.5891 | 0.9522 | 0.9758 |
| No log | 3.1304 | 72 | 0.9261 | 0.6142 | 0.9261 | 0.9624 |
| No log | 3.2174 | 74 | 0.9060 | 0.6515 | 0.9060 | 0.9519 |
| No log | 3.3043 | 76 | 0.8631 | 0.6667 | 0.8631 | 0.9290 |
| No log | 3.3913 | 78 | 0.8798 | 0.6522 | 0.8798 | 0.9380 |
| No log | 3.4783 | 80 | 0.8796 | 0.6324 | 0.8796 | 0.9379 |
| No log | 3.5652 | 82 | 0.9746 | 0.6222 | 0.9746 | 0.9872 |
| No log | 3.6522 | 84 | 1.0278 | 0.5899 | 1.0278 | 1.0138 |
| No log | 3.7391 | 86 | 0.9362 | 0.6479 | 0.9362 | 0.9676 |
| No log | 3.8261 | 88 | 0.9420 | 0.6383 | 0.9420 | 0.9706 |
| No log | 3.9130 | 90 | 1.0282 | 0.6383 | 1.0282 | 1.0140 |
| No log | 4.0 | 92 | 1.3175 | 0.5 | 1.3175 | 1.1478 |
| No log | 4.0870 | 94 | 1.2619 | 0.5324 | 1.2619 | 1.1234 |
| No log | 4.1739 | 96 | 0.9363 | 0.6716 | 0.9363 | 0.9676 |
| No log | 4.2609 | 98 | 0.7244 | 0.7194 | 0.7244 | 0.8511 |
| No log | 4.3478 | 100 | 0.7534 | 0.7586 | 0.7534 | 0.8680 |
| No log | 4.4348 | 102 | 0.7001 | 0.7286 | 0.7001 | 0.8367 |
| No log | 4.5217 | 104 | 0.7011 | 0.7324 | 0.7011 | 0.8373 |
| No log | 4.6087 | 106 | 0.7827 | 0.6897 | 0.7827 | 0.8847 |
| No log | 4.6957 | 108 | 0.7922 | 0.7034 | 0.7922 | 0.8900 |
| No log | 4.7826 | 110 | 0.8235 | 0.6377 | 0.8235 | 0.9075 |
| No log | 4.8696 | 112 | 0.8034 | 0.6331 | 0.8034 | 0.8963 |
| No log | 4.9565 | 114 | 0.7506 | 0.6897 | 0.7506 | 0.8664 |
| No log | 5.0435 | 116 | 0.8669 | 0.6525 | 0.8669 | 0.9311 |
| No log | 5.1304 | 118 | 0.9777 | 0.6241 | 0.9777 | 0.9888 |
| No log | 5.2174 | 120 | 0.8365 | 0.6525 | 0.8365 | 0.9146 |
| No log | 5.3043 | 122 | 0.7675 | 0.7034 | 0.7675 | 0.8761 |
| No log | 5.3913 | 124 | 0.8405 | 0.7222 | 0.8405 | 0.9168 |
| No log | 5.4783 | 126 | 0.8176 | 0.7020 | 0.8176 | 0.9042 |
| No log | 5.5652 | 128 | 0.7173 | 0.7027 | 0.7173 | 0.8469 |
| No log | 5.6522 | 130 | 0.7593 | 0.6897 | 0.7593 | 0.8714 |
| No log | 5.7391 | 132 | 0.7562 | 0.6806 | 0.7562 | 0.8696 |
| No log | 5.8261 | 134 | 0.7676 | 0.6475 | 0.7676 | 0.8761 |
| No log | 5.9130 | 136 | 0.7785 | 0.6812 | 0.7785 | 0.8823 |
| No log | 6.0 | 138 | 0.8224 | 0.6667 | 0.8224 | 0.9069 |
| No log | 6.0870 | 140 | 0.7774 | 0.6963 | 0.7774 | 0.8817 |
| No log | 6.1739 | 142 | 0.7749 | 0.6906 | 0.7749 | 0.8803 |
| No log | 6.2609 | 144 | 0.7668 | 0.6906 | 0.7668 | 0.8757 |
| No log | 6.3478 | 146 | 0.7373 | 0.7273 | 0.7373 | 0.8587 |
| No log | 6.4348 | 148 | 0.7192 | 0.7310 | 0.7192 | 0.8481 |
| No log | 6.5217 | 150 | 0.7090 | 0.7333 | 0.7090 | 0.8420 |
| No log | 6.6087 | 152 | 0.7434 | 0.7248 | 0.7434 | 0.8622 |
| No log | 6.6957 | 154 | 0.9131 | 0.6957 | 0.9131 | 0.9555 |
| No log | 6.7826 | 156 | 1.0124 | 0.6497 | 1.0124 | 1.0062 |
| No log | 6.8696 | 158 | 1.0566 | 0.6364 | 1.0566 | 1.0279 |
| No log | 6.9565 | 160 | 0.9694 | 0.6316 | 0.9694 | 0.9846 |
| No log | 7.0435 | 162 | 0.8067 | 0.7042 | 0.8067 | 0.8982 |
| No log | 7.1304 | 164 | 0.7349 | 0.6944 | 0.7349 | 0.8573 |
| No log | 7.2174 | 166 | 0.6986 | 0.7310 | 0.6986 | 0.8358 |
| No log | 7.3043 | 168 | 0.6657 | 0.7582 | 0.6657 | 0.8159 |
| No log | 7.3913 | 170 | 0.6363 | 0.7516 | 0.6363 | 0.7977 |
| No log | 7.4783 | 172 | 0.6082 | 0.7625 | 0.6082 | 0.7799 |
| No log | 7.5652 | 174 | 0.7006 | 0.7248 | 0.7006 | 0.8370 |
| No log | 7.6522 | 176 | 0.7965 | 0.6906 | 0.7965 | 0.8925 |
| No log | 7.7391 | 178 | 0.8408 | 0.6715 | 0.8408 | 0.9169 |
| No log | 7.8261 | 180 | 0.8113 | 0.6815 | 0.8113 | 0.9007 |
| No log | 7.9130 | 182 | 0.7450 | 0.7101 | 0.7450 | 0.8631 |
| No log | 8.0 | 184 | 0.7569 | 0.6765 | 0.7569 | 0.8700 |
| No log | 8.0870 | 186 | 0.8218 | 0.6364 | 0.8218 | 0.9065 |
| No log | 8.1739 | 188 | 0.8568 | 0.7353 | 0.8568 | 0.9256 |
| No log | 8.2609 | 190 | 0.8843 | 0.7007 | 0.8843 | 0.9404 |
| No log | 8.3478 | 192 | 0.8448 | 0.7092 | 0.8448 | 0.9191 |
| No log | 8.4348 | 194 | 0.8084 | 0.7133 | 0.8084 | 0.8991 |
| No log | 8.5217 | 196 | 0.8005 | 0.6901 | 0.8005 | 0.8947 |
| No log | 8.6087 | 198 | 0.8825 | 0.6099 | 0.8825 | 0.9394 |
| No log | 8.6957 | 200 | 0.9992 | 0.5899 | 0.9992 | 0.9996 |
| No log | 8.7826 | 202 | 0.9517 | 0.6471 | 0.9517 | 0.9756 |
| No log | 8.8696 | 204 | 0.7884 | 0.6619 | 0.7884 | 0.8879 |
| No log | 8.9565 | 206 | 0.6833 | 0.7483 | 0.6833 | 0.8266 |
| No log | 9.0435 | 208 | 0.6564 | 0.7534 | 0.6564 | 0.8102 |
| No log | 9.1304 | 210 | 0.6524 | 0.7260 | 0.6524 | 0.8077 |
| No log | 9.2174 | 212 | 0.7133 | 0.7114 | 0.7133 | 0.8446 |
| No log | 9.3043 | 214 | 0.7107 | 0.7097 | 0.7107 | 0.8430 |
| No log | 9.3913 | 216 | 0.6151 | 0.7248 | 0.6151 | 0.7843 |
| No log | 9.4783 | 218 | 0.5884 | 0.7632 | 0.5884 | 0.7671 |
| No log | 9.5652 | 220 | 0.6203 | 0.7453 | 0.6203 | 0.7876 |
| No log | 9.6522 | 222 | 0.6408 | 0.7586 | 0.6408 | 0.8005 |
| No log | 9.7391 | 224 | 0.5502 | 0.8136 | 0.5502 | 0.7417 |
| No log | 9.8261 | 226 | 0.5383 | 0.8095 | 0.5383 | 0.7337 |
| No log | 9.9130 | 228 | 0.5737 | 0.7738 | 0.5737 | 0.7575 |
| No log | 10.0 | 230 | 0.7389 | 0.7317 | 0.7389 | 0.8596 |
| No log | 10.0870 | 232 | 0.7915 | 0.6933 | 0.7915 | 0.8897 |
| No log | 10.1739 | 234 | 0.7053 | 0.7042 | 0.7053 | 0.8398 |
| No log | 10.2609 | 236 | 0.6509 | 0.7639 | 0.6509 | 0.8068 |
| No log | 10.3478 | 238 | 0.6944 | 0.7448 | 0.6944 | 0.8333 |
| No log | 10.4348 | 240 | 0.6575 | 0.7867 | 0.6575 | 0.8109 |
| No log | 10.5217 | 242 | 0.6312 | 0.8077 | 0.6312 | 0.7945 |
| No log | 10.6087 | 244 | 0.5962 | 0.7413 | 0.5962 | 0.7721 |
| No log | 10.6957 | 246 | 0.7272 | 0.7117 | 0.7272 | 0.8528 |
| No log | 10.7826 | 248 | 0.9181 | 0.7159 | 0.9181 | 0.9582 |
| No log | 10.8696 | 250 | 0.9041 | 0.7273 | 0.9041 | 0.9509 |
| No log | 10.9565 | 252 | 0.7246 | 0.7261 | 0.7246 | 0.8512 |
| No log | 11.0435 | 254 | 0.6210 | 0.7123 | 0.6210 | 0.7880 |
| No log | 11.1304 | 256 | 0.6441 | 0.7975 | 0.6441 | 0.8026 |
| No log | 11.2174 | 258 | 0.6400 | 0.7722 | 0.6400 | 0.8000 |
| No log | 11.3043 | 260 | 0.6608 | 0.7133 | 0.6608 | 0.8129 |
| No log | 11.3913 | 262 | 0.6943 | 0.7050 | 0.6943 | 0.8333 |
| No log | 11.4783 | 264 | 0.7331 | 0.7194 | 0.7331 | 0.8562 |
| No log | 11.5652 | 266 | 0.8334 | 0.6619 | 0.8334 | 0.9129 |
| No log | 11.6522 | 268 | 0.8575 | 0.6277 | 0.8575 | 0.9260 |
| No log | 11.7391 | 270 | 0.7644 | 0.6765 | 0.7644 | 0.8743 |
| No log | 11.8261 | 272 | 0.6426 | 0.7552 | 0.6426 | 0.8016 |
| No log | 11.9130 | 274 | 0.6004 | 0.7792 | 0.6004 | 0.7749 |
| No log | 12.0 | 276 | 0.5728 | 0.7871 | 0.5728 | 0.7569 |
| No log | 12.0870 | 278 | 0.5652 | 0.7831 | 0.5652 | 0.7518 |
| No log | 12.1739 | 280 | 0.6229 | 0.7727 | 0.6229 | 0.7893 |
| No log | 12.2609 | 282 | 0.6352 | 0.7574 | 0.6352 | 0.7970 |
| No log | 12.3478 | 284 | 0.6230 | 0.7712 | 0.6230 | 0.7893 |
| No log | 12.4348 | 286 | 0.6832 | 0.7143 | 0.6832 | 0.8266 |
| No log | 12.5217 | 288 | 0.7361 | 0.7429 | 0.7361 | 0.8580 |
| No log | 12.6087 | 290 | 0.7906 | 0.7092 | 0.7906 | 0.8892 |
| No log | 12.6957 | 292 | 0.7854 | 0.6901 | 0.7854 | 0.8863 |
| No log | 12.7826 | 294 | 0.7310 | 0.7172 | 0.7310 | 0.8550 |
| No log | 12.8696 | 296 | 0.6754 | 0.7432 | 0.6754 | 0.8219 |
| No log | 12.9565 | 298 | 0.6937 | 0.7347 | 0.6937 | 0.8329 |
| No log | 13.0435 | 300 | 0.7777 | 0.7172 | 0.7777 | 0.8819 |
| No log | 13.1304 | 302 | 0.8698 | 0.6806 | 0.8698 | 0.9326 |
| No log | 13.2174 | 304 | 0.8839 | 0.6806 | 0.8839 | 0.9402 |
| No log | 13.3043 | 306 | 0.8234 | 0.6901 | 0.8234 | 0.9074 |
| No log | 13.3913 | 308 | 0.7608 | 0.6950 | 0.7608 | 0.8722 |
| No log | 13.4783 | 310 | 0.6967 | 0.7222 | 0.6967 | 0.8347 |
| No log | 13.5652 | 312 | 0.6828 | 0.7083 | 0.6828 | 0.8263 |
| No log | 13.6522 | 314 | 0.7175 | 0.7342 | 0.7175 | 0.8470 |
| No log | 13.7391 | 316 | 0.7009 | 0.7545 | 0.7009 | 0.8372 |
| No log | 13.8261 | 318 | 0.5856 | 0.7619 | 0.5856 | 0.7653 |
| No log | 13.9130 | 320 | 0.5438 | 0.7516 | 0.5438 | 0.7374 |
| No log | 14.0 | 322 | 0.5434 | 0.8075 | 0.5434 | 0.7371 |
| No log | 14.0870 | 324 | 0.5506 | 0.7949 | 0.5506 | 0.7420 |
| No log | 14.1739 | 326 | 0.5678 | 0.7722 | 0.5678 | 0.7535 |
| No log | 14.2609 | 328 | 0.6371 | 0.6986 | 0.6371 | 0.7982 |
| No log | 14.3478 | 330 | 0.7045 | 0.7034 | 0.7045 | 0.8394 |
| No log | 14.4348 | 332 | 0.6817 | 0.6901 | 0.6817 | 0.8256 |
| No log | 14.5217 | 334 | 0.6294 | 0.7222 | 0.6294 | 0.7933 |
| No log | 14.6087 | 336 | 0.6336 | 0.7361 | 0.6336 | 0.7960 |
| No log | 14.6957 | 338 | 0.6504 | 0.7222 | 0.6504 | 0.8065 |
| No log | 14.7826 | 340 | 0.7387 | 0.7034 | 0.7387 | 0.8595 |
| No log | 14.8696 | 342 | 0.7726 | 0.7485 | 0.7726 | 0.8790 |
| No log | 14.9565 | 344 | 0.6830 | 0.7389 | 0.6830 | 0.8264 |
| No log | 15.0435 | 346 | 0.5874 | 0.7582 | 0.5874 | 0.7664 |
| No log | 15.1304 | 348 | 0.5782 | 0.7895 | 0.5782 | 0.7604 |
| No log | 15.2174 | 350 | 0.5740 | 0.7895 | 0.5740 | 0.7576 |
| No log | 15.3043 | 352 | 0.5592 | 0.7821 | 0.5592 | 0.7478 |
| No log | 15.3913 | 354 | 0.5783 | 0.7643 | 0.5783 | 0.7605 |
| No log | 15.4783 | 356 | 0.6894 | 0.7152 | 0.6894 | 0.8303 |
| No log | 15.5652 | 358 | 0.8083 | 0.7152 | 0.8083 | 0.8991 |
| No log | 15.6522 | 360 | 0.7990 | 0.7034 | 0.7990 | 0.8938 |
| No log | 15.7391 | 362 | 0.7269 | 0.7391 | 0.7269 | 0.8526 |
| No log | 15.8261 | 364 | 0.6937 | 0.7286 | 0.6937 | 0.8329 |
| No log | 15.9130 | 366 | 0.6747 | 0.7413 | 0.6747 | 0.8214 |
| No log | 16.0 | 368 | 0.7028 | 0.7391 | 0.7028 | 0.8384 |
| No log | 16.0870 | 370 | 0.7329 | 0.7042 | 0.7329 | 0.8561 |
| No log | 16.1739 | 372 | 0.7263 | 0.7042 | 0.7263 | 0.8522 |
| No log | 16.2609 | 374 | 0.6680 | 0.7234 | 0.6680 | 0.8173 |
| No log | 16.3478 | 376 | 0.6554 | 0.7324 | 0.6554 | 0.8096 |
| No log | 16.4348 | 378 | 0.6463 | 0.7092 | 0.6463 | 0.8039 |
| No log | 16.5217 | 380 | 0.6519 | 0.6993 | 0.6519 | 0.8074 |
| No log | 16.6087 | 382 | 0.7800 | 0.7355 | 0.7800 | 0.8832 |
| No log | 16.6957 | 384 | 0.8796 | 0.7135 | 0.8796 | 0.9379 |
| No log | 16.7826 | 386 | 0.8313 | 0.7412 | 0.8313 | 0.9117 |
| No log | 16.8696 | 388 | 0.7209 | 0.7403 | 0.7209 | 0.8491 |
| No log | 16.9565 | 390 | 0.6469 | 0.6993 | 0.6469 | 0.8043 |
| No log | 17.0435 | 392 | 0.6308 | 0.7429 | 0.6308 | 0.7942 |
| No log | 17.1304 | 394 | 0.6557 | 0.7391 | 0.6557 | 0.8098 |
| No log | 17.2174 | 396 | 0.6678 | 0.7206 | 0.6678 | 0.8172 |
| No log | 17.3043 | 398 | 0.6687 | 0.7299 | 0.6687 | 0.8177 |
| No log | 17.3913 | 400 | 0.6655 | 0.7299 | 0.6655 | 0.8158 |
| No log | 17.4783 | 402 | 0.6923 | 0.7286 | 0.6923 | 0.8321 |
| No log | 17.5652 | 404 | 0.6951 | 0.7092 | 0.6951 | 0.8337 |
| No log | 17.6522 | 406 | 0.6981 | 0.7299 | 0.6981 | 0.8355 |
| No log | 17.7391 | 408 | 0.6696 | 0.7259 | 0.6696 | 0.8183 |
| No log | 17.8261 | 410 | 0.6481 | 0.7338 | 0.6481 | 0.8051 |
| No log | 17.9130 | 412 | 0.6289 | 0.7465 | 0.6289 | 0.7930 |
| No log | 18.0 | 414 | 0.6130 | 0.7448 | 0.6130 | 0.7829 |
| No log | 18.0870 | 416 | 0.6391 | 0.7172 | 0.6391 | 0.7995 |
| No log | 18.1739 | 418 | 0.6525 | 0.7285 | 0.6525 | 0.8078 |
| No log | 18.2609 | 420 | 0.6984 | 0.6993 | 0.6984 | 0.8357 |
| No log | 18.3478 | 422 | 0.7015 | 0.6993 | 0.7015 | 0.8376 |
| No log | 18.4348 | 424 | 0.6479 | 0.7083 | 0.6479 | 0.8049 |
| No log | 18.5217 | 426 | 0.6102 | 0.7568 | 0.6102 | 0.7812 |
| No log | 18.6087 | 428 | 0.6221 | 0.7619 | 0.6221 | 0.7887 |
| No log | 18.6957 | 430 | 0.6196 | 0.7682 | 0.6196 | 0.7871 |
| No log | 18.7826 | 432 | 0.6425 | 0.7465 | 0.6425 | 0.8016 |
| No log | 18.8696 | 434 | 0.7384 | 0.7564 | 0.7384 | 0.8593 |
| No log | 18.9565 | 436 | 0.7683 | 0.7468 | 0.7683 | 0.8765 |
| No log | 19.0435 | 438 | 0.7516 | 0.7355 | 0.7516 | 0.8670 |
| No log | 19.1304 | 440 | 0.6607 | 0.76 | 0.6607 | 0.8129 |
| No log | 19.2174 | 442 | 0.6014 | 0.7552 | 0.6014 | 0.7755 |
| No log | 19.3043 | 444 | 0.5984 | 0.7552 | 0.5984 | 0.7736 |
| No log | 19.3913 | 446 | 0.5899 | 0.7273 | 0.5899 | 0.7681 |
| No log | 19.4783 | 448 | 0.6044 | 0.7632 | 0.6044 | 0.7774 |
| No log | 19.5652 | 450 | 0.6503 | 0.7333 | 0.6503 | 0.8064 |
| No log | 19.6522 | 452 | 0.6377 | 0.7333 | 0.6377 | 0.7985 |
| No log | 19.7391 | 454 | 0.6340 | 0.7451 | 0.6340 | 0.7962 |
| No log | 19.8261 | 456 | 0.6477 | 0.7451 | 0.6477 | 0.8048 |
| No log | 19.9130 | 458 | 0.6841 | 0.7448 | 0.6841 | 0.8271 |
| No log | 20.0 | 460 | 0.6696 | 0.7376 | 0.6696 | 0.8183 |
| No log | 20.0870 | 462 | 0.6422 | 0.7586 | 0.6422 | 0.8014 |
| No log | 20.1739 | 464 | 0.6321 | 0.7483 | 0.6321 | 0.7951 |
| No log | 20.2609 | 466 | 0.6653 | 0.7586 | 0.6653 | 0.8157 |
| No log | 20.3478 | 468 | 0.7282 | 0.7162 | 0.7282 | 0.8533 |
| No log | 20.4348 | 470 | 0.8254 | 0.6939 | 0.8254 | 0.9085 |
| No log | 20.5217 | 472 | 0.8982 | 0.675 | 0.8982 | 0.9478 |
| No log | 20.6087 | 474 | 0.9288 | 0.6667 | 0.9288 | 0.9638 |
| No log | 20.6957 | 476 | 0.8928 | 0.6757 | 0.8928 | 0.9449 |
| No log | 20.7826 | 478 | 0.7766 | 0.6993 | 0.7766 | 0.8812 |
| No log | 20.8696 | 480 | 0.7318 | 0.6861 | 0.7318 | 0.8554 |
| No log | 20.9565 | 482 | 0.7215 | 0.7299 | 0.7215 | 0.8494 |
| No log | 21.0435 | 484 | 0.7576 | 0.7050 | 0.7576 | 0.8704 |
| No log | 21.1304 | 486 | 0.8378 | 0.6901 | 0.8378 | 0.9153 |
| No log | 21.2174 | 488 | 0.8329 | 0.6713 | 0.8329 | 0.9126 |
| No log | 21.3043 | 490 | 0.7573 | 0.6806 | 0.7573 | 0.8702 |
| No log | 21.3913 | 492 | 0.6638 | 0.7534 | 0.6638 | 0.8147 |
| No log | 21.4783 | 494 | 0.6208 | 0.7712 | 0.6208 | 0.7879 |
| No log | 21.5652 | 496 | 0.5694 | 0.7771 | 0.5694 | 0.7546 |
| No log | 21.6522 | 498 | 0.5471 | 0.7643 | 0.5471 | 0.7397 |
| 0.3686 | 21.7391 | 500 | 0.5431 | 0.7643 | 0.5431 | 0.7369 |
| 0.3686 | 21.8261 | 502 | 0.5557 | 0.7875 | 0.5557 | 0.7455 |
| 0.3686 | 21.9130 | 504 | 0.6282 | 0.7907 | 0.6282 | 0.7926 |
| 0.3686 | 22.0 | 506 | 0.6934 | 0.7746 | 0.6934 | 0.8327 |
| 0.3686 | 22.0870 | 508 | 0.7206 | 0.7283 | 0.7206 | 0.8489 |
| 0.3686 | 22.1739 | 510 | 0.6874 | 0.775 | 0.6874 | 0.8291 |
| 0.3686 | 22.2609 | 512 | 0.6474 | 0.7517 | 0.6474 | 0.8046 |
| 0.3686 | 22.3478 | 514 | 0.6205 | 0.7660 | 0.6205 | 0.7877 |
| 0.3686 | 22.4348 | 516 | 0.6134 | 0.7660 | 0.6134 | 0.7832 |
| 0.3686 | 22.5217 | 518 | 0.6145 | 0.7606 | 0.6145 | 0.7839 |
| 0.3686 | 22.6087 | 520 | 0.6451 | 0.7552 | 0.6451 | 0.8032 |
| 0.3686 | 22.6957 | 522 | 0.7010 | 0.7564 | 0.7010 | 0.8372 |
| 0.3686 | 22.7826 | 524 | 0.6915 | 0.7673 | 0.6915 | 0.8316 |
| 0.3686 | 22.8696 | 526 | 0.6316 | 0.775 | 0.6316 | 0.7947 |
| 0.3686 | 22.9565 | 528 | 0.5589 | 0.7925 | 0.5589 | 0.7476 |
| 0.3686 | 23.0435 | 530 | 0.5466 | 0.8101 | 0.5466 | 0.7393 |
| 0.3686 | 23.1304 | 532 | 0.5849 | 0.7922 | 0.5849 | 0.7648 |
| 0.3686 | 23.2174 | 534 | 0.6205 | 0.7397 | 0.6205 | 0.7877 |
| 0.3686 | 23.3043 | 536 | 0.6262 | 0.7586 | 0.6262 | 0.7913 |
| 0.3686 | 23.3913 | 538 | 0.6058 | 0.7778 | 0.6058 | 0.7783 |
| 0.3686 | 23.4783 | 540 | 0.5877 | 0.7862 | 0.5877 | 0.7666 |
| 0.3686 | 23.5652 | 542 | 0.5880 | 0.7838 | 0.5880 | 0.7668 |
| 0.3686 | 23.6522 | 544 | 0.6254 | 0.7534 | 0.6254 | 0.7908 |
| 0.3686 | 23.7391 | 546 | 0.6427 | 0.7417 | 0.6427 | 0.8017 |
| 0.3686 | 23.8261 | 548 | 0.6254 | 0.75 | 0.6254 | 0.7909 |
| 0.3686 | 23.9130 | 550 | 0.6155 | 0.7550 | 0.6155 | 0.7845 |
| 0.3686 | 24.0 | 552 | 0.6083 | 0.7613 | 0.6083 | 0.7800 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
MayBashendy/ArabicNewSplits8_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k6_task2_organization
|
MayBashendy
| 2025-01-15T12:32:22Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T11:45:03Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits8_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k6_task2_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits8_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k6_task2_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5851
- Qwk: 0.5238
- Mse: 0.5851
- Rmse: 0.7649
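
Qwk above presumably denotes quadratic weighted kappa, and Rmse is the square root of Mse. A minimal sketch of how such values can be computed with scikit-learn follows; the label arrays are placeholders, not data from this run:

```python
# Illustrative metric computation only; y_true / y_pred are placeholder ordinal
# scores, not outputs of this training run.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([0, 1, 2, 2, 1])   # gold scores (example values)
y_pred = np.array([0, 2, 2, 1, 1])   # predictions rounded to the same label scale

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # quadratic weighted kappa
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)

print(f"Qwk: {qwk:.4f}  Mse: {mse:.4f}  Rmse: {rmse:.4f}")
```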
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
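
The training script itself is not included in this card. As a rough sketch, the hyperparameters above map onto Hugging Face `TrainingArguments` as shown below; this is illustrative, not the exact configuration used:

```python
# Rough sketch of TrainingArguments mirroring the hyperparameters listed above;
# the actual training script is not part of this card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs",
    learning_rate=2e-5,              # learning_rate: 2e-05
    per_device_train_batch_size=8,   # train_batch_size: 8
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,                         # seed: 42
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    num_train_epochs=100,            # num_epochs: 100
)
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the TrainingArguments defaults
# (adam_beta1, adam_beta2, adam_epsilon), so they need no explicit override here.
```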
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0588 | 2 | 4.2485 | -0.0156 | 4.2485 | 2.0612 |
| No log | 0.1176 | 4 | 2.5637 | -0.0104 | 2.5637 | 1.6012 |
| No log | 0.1765 | 6 | 1.5789 | -0.0456 | 1.5789 | 1.2566 |
| No log | 0.2353 | 8 | 1.2683 | -0.0259 | 1.2683 | 1.1262 |
| No log | 0.2941 | 10 | 1.1621 | -0.0337 | 1.1621 | 1.0780 |
| No log | 0.3529 | 12 | 1.0159 | -0.0140 | 1.0159 | 1.0079 |
| No log | 0.4118 | 14 | 0.8383 | 0.2268 | 0.8383 | 0.9156 |
| No log | 0.4706 | 16 | 0.8062 | 0.2054 | 0.8062 | 0.8979 |
| No log | 0.5294 | 18 | 0.8402 | 0.1922 | 0.8402 | 0.9166 |
| No log | 0.5882 | 20 | 1.0113 | 0.0292 | 1.0113 | 1.0056 |
| No log | 0.6471 | 22 | 0.9396 | 0.0680 | 0.9396 | 0.9693 |
| No log | 0.7059 | 24 | 0.7996 | 0.2435 | 0.7996 | 0.8942 |
| No log | 0.7647 | 26 | 0.7495 | 0.2155 | 0.7495 | 0.8657 |
| No log | 0.8235 | 28 | 0.7557 | 0.2328 | 0.7557 | 0.8693 |
| No log | 0.8824 | 30 | 0.7473 | 0.2713 | 0.7473 | 0.8645 |
| No log | 0.9412 | 32 | 0.8925 | 0.1834 | 0.8925 | 0.9447 |
| No log | 1.0 | 34 | 1.0835 | 0.2175 | 1.0835 | 1.0409 |
| No log | 1.0588 | 36 | 1.0439 | 0.2112 | 1.0439 | 1.0217 |
| No log | 1.1176 | 38 | 0.8152 | 0.2243 | 0.8152 | 0.9029 |
| No log | 1.1765 | 40 | 0.7194 | 0.3018 | 0.7194 | 0.8481 |
| No log | 1.2353 | 42 | 0.7743 | 0.2588 | 0.7743 | 0.8799 |
| No log | 1.2941 | 44 | 0.8231 | 0.1781 | 0.8231 | 0.9073 |
| No log | 1.3529 | 46 | 0.7435 | 0.3122 | 0.7435 | 0.8623 |
| No log | 1.4118 | 48 | 0.7604 | 0.2666 | 0.7604 | 0.8720 |
| No log | 1.4706 | 50 | 0.9134 | 0.2599 | 0.9134 | 0.9557 |
| No log | 1.5294 | 52 | 0.8661 | 0.2888 | 0.8661 | 0.9307 |
| No log | 1.5882 | 54 | 0.7149 | 0.2841 | 0.7149 | 0.8455 |
| No log | 1.6471 | 56 | 0.7210 | 0.2769 | 0.7210 | 0.8491 |
| No log | 1.7059 | 58 | 0.8713 | 0.1589 | 0.8713 | 0.9334 |
| No log | 1.7647 | 60 | 0.8904 | 0.1283 | 0.8904 | 0.9436 |
| No log | 1.8235 | 62 | 0.7626 | 0.2642 | 0.7626 | 0.8733 |
| No log | 1.8824 | 64 | 0.7187 | 0.3132 | 0.7187 | 0.8478 |
| No log | 1.9412 | 66 | 0.8067 | 0.2907 | 0.8067 | 0.8982 |
| No log | 2.0 | 68 | 0.8208 | 0.2830 | 0.8208 | 0.9060 |
| No log | 2.0588 | 70 | 0.7667 | 0.2983 | 0.7667 | 0.8756 |
| No log | 2.1176 | 72 | 0.7410 | 0.3213 | 0.7410 | 0.8608 |
| No log | 2.1765 | 74 | 0.7136 | 0.2980 | 0.7136 | 0.8447 |
| No log | 2.2353 | 76 | 0.6835 | 0.3174 | 0.6835 | 0.8267 |
| No log | 2.2941 | 78 | 0.6724 | 0.3509 | 0.6724 | 0.8200 |
| No log | 2.3529 | 80 | 0.6937 | 0.3915 | 0.6937 | 0.8329 |
| No log | 2.4118 | 82 | 0.7255 | 0.3296 | 0.7255 | 0.8518 |
| No log | 2.4706 | 84 | 0.7434 | 0.3053 | 0.7434 | 0.8622 |
| No log | 2.5294 | 86 | 0.7694 | 0.3353 | 0.7694 | 0.8772 |
| No log | 2.5882 | 88 | 0.8723 | 0.3636 | 0.8723 | 0.9339 |
| No log | 2.6471 | 90 | 0.8411 | 0.3477 | 0.8411 | 0.9171 |
| No log | 2.7059 | 92 | 0.6901 | 0.4623 | 0.6901 | 0.8307 |
| No log | 2.7647 | 94 | 0.6437 | 0.4589 | 0.6437 | 0.8023 |
| No log | 2.8235 | 96 | 0.6571 | 0.4623 | 0.6571 | 0.8106 |
| No log | 2.8824 | 98 | 0.7280 | 0.4235 | 0.7280 | 0.8532 |
| No log | 2.9412 | 100 | 0.8504 | 0.3167 | 0.8504 | 0.9221 |
| No log | 3.0 | 102 | 0.8681 | 0.3267 | 0.8681 | 0.9317 |
| No log | 3.0588 | 104 | 0.8187 | 0.3179 | 0.8187 | 0.9048 |
| No log | 3.1176 | 106 | 0.7494 | 0.3920 | 0.7494 | 0.8657 |
| No log | 3.1765 | 108 | 0.6629 | 0.4437 | 0.6629 | 0.8142 |
| No log | 3.2353 | 110 | 0.6774 | 0.4997 | 0.6774 | 0.8230 |
| No log | 3.2941 | 112 | 0.7155 | 0.4949 | 0.7155 | 0.8459 |
| No log | 3.3529 | 114 | 0.8639 | 0.4722 | 0.8639 | 0.9294 |
| No log | 3.4118 | 116 | 0.9335 | 0.4192 | 0.9335 | 0.9662 |
| No log | 3.4706 | 118 | 0.7860 | 0.4967 | 0.7860 | 0.8866 |
| No log | 3.5294 | 120 | 0.8043 | 0.4562 | 0.8043 | 0.8968 |
| No log | 3.5882 | 122 | 0.8883 | 0.4266 | 0.8883 | 0.9425 |
| No log | 3.6471 | 124 | 0.8110 | 0.4864 | 0.8110 | 0.9006 |
| No log | 3.7059 | 126 | 0.6893 | 0.4937 | 0.6893 | 0.8303 |
| No log | 3.7647 | 128 | 0.8110 | 0.4875 | 0.8110 | 0.9006 |
| No log | 3.8235 | 130 | 0.9066 | 0.4615 | 0.9066 | 0.9522 |
| No log | 3.8824 | 132 | 0.8055 | 0.5385 | 0.8055 | 0.8975 |
| No log | 3.9412 | 134 | 0.7313 | 0.4896 | 0.7313 | 0.8551 |
| No log | 4.0 | 136 | 0.7418 | 0.4825 | 0.7418 | 0.8613 |
| No log | 4.0588 | 138 | 0.7470 | 0.4723 | 0.7470 | 0.8643 |
| No log | 4.1176 | 140 | 0.7239 | 0.5155 | 0.7239 | 0.8508 |
| No log | 4.1765 | 142 | 0.6995 | 0.5010 | 0.6995 | 0.8363 |
| No log | 4.2353 | 144 | 0.6883 | 0.4595 | 0.6883 | 0.8296 |
| No log | 4.2941 | 146 | 0.7031 | 0.4769 | 0.7031 | 0.8385 |
| No log | 4.3529 | 148 | 0.7061 | 0.4804 | 0.7061 | 0.8403 |
| No log | 4.4118 | 150 | 0.7192 | 0.5183 | 0.7192 | 0.8480 |
| No log | 4.4706 | 152 | 0.7615 | 0.5452 | 0.7615 | 0.8726 |
| No log | 4.5294 | 154 | 0.7200 | 0.5110 | 0.7200 | 0.8485 |
| No log | 4.5882 | 156 | 0.7410 | 0.4783 | 0.7410 | 0.8608 |
| No log | 4.6471 | 158 | 0.7483 | 0.4747 | 0.7483 | 0.8650 |
| No log | 4.7059 | 160 | 0.7235 | 0.5315 | 0.7235 | 0.8506 |
| No log | 4.7647 | 162 | 0.7217 | 0.5064 | 0.7217 | 0.8495 |
| No log | 4.8235 | 164 | 0.7235 | 0.5290 | 0.7235 | 0.8506 |
| No log | 4.8824 | 166 | 0.7096 | 0.5103 | 0.7096 | 0.8424 |
| No log | 4.9412 | 168 | 0.7022 | 0.4665 | 0.7022 | 0.8380 |
| No log | 5.0 | 170 | 0.6739 | 0.4512 | 0.6739 | 0.8209 |
| No log | 5.0588 | 172 | 0.6859 | 0.4587 | 0.6859 | 0.8282 |
| No log | 5.1176 | 174 | 0.7779 | 0.5032 | 0.7779 | 0.8820 |
| No log | 5.1765 | 176 | 0.9455 | 0.4098 | 0.9455 | 0.9724 |
| No log | 5.2353 | 178 | 0.8335 | 0.4479 | 0.8335 | 0.9129 |
| No log | 5.2941 | 180 | 0.7403 | 0.4956 | 0.7403 | 0.8604 |
| No log | 5.3529 | 182 | 0.8064 | 0.4327 | 0.8064 | 0.8980 |
| No log | 5.4118 | 184 | 0.7402 | 0.4811 | 0.7402 | 0.8604 |
| No log | 5.4706 | 186 | 0.6703 | 0.4196 | 0.6703 | 0.8187 |
| No log | 5.5294 | 188 | 0.7171 | 0.4628 | 0.7171 | 0.8468 |
| No log | 5.5882 | 190 | 0.6880 | 0.3967 | 0.6880 | 0.8294 |
| No log | 5.6471 | 192 | 0.6416 | 0.4439 | 0.6416 | 0.8010 |
| No log | 5.7059 | 194 | 0.6648 | 0.4515 | 0.6648 | 0.8153 |
| No log | 5.7647 | 196 | 0.6668 | 0.4902 | 0.6668 | 0.8166 |
| No log | 5.8235 | 198 | 0.7114 | 0.5057 | 0.7114 | 0.8434 |
| No log | 5.8824 | 200 | 0.8108 | 0.4496 | 0.8108 | 0.9004 |
| No log | 5.9412 | 202 | 0.7854 | 0.4541 | 0.7854 | 0.8862 |
| No log | 6.0 | 204 | 0.6801 | 0.4663 | 0.6801 | 0.8247 |
| No log | 6.0588 | 206 | 0.6470 | 0.4712 | 0.6470 | 0.8044 |
| No log | 6.1176 | 208 | 0.6386 | 0.4576 | 0.6386 | 0.7991 |
| No log | 6.1765 | 210 | 0.6587 | 0.4120 | 0.6587 | 0.8116 |
| No log | 6.2353 | 212 | 0.7042 | 0.3974 | 0.7042 | 0.8392 |
| No log | 6.2941 | 214 | 0.7193 | 0.4004 | 0.7193 | 0.8481 |
| No log | 6.3529 | 216 | 0.7123 | 0.4666 | 0.7123 | 0.8440 |
| No log | 6.4118 | 218 | 0.7234 | 0.4607 | 0.7234 | 0.8505 |
| No log | 6.4706 | 220 | 0.7197 | 0.5033 | 0.7197 | 0.8484 |
| No log | 6.5294 | 222 | 0.7207 | 0.4620 | 0.7207 | 0.8490 |
| No log | 6.5882 | 224 | 0.6765 | 0.5343 | 0.6765 | 0.8225 |
| No log | 6.6471 | 226 | 0.7690 | 0.4761 | 0.7690 | 0.8769 |
| No log | 6.7059 | 228 | 1.0657 | 0.3568 | 1.0657 | 1.0323 |
| No log | 6.7647 | 230 | 1.1470 | 0.3344 | 1.1470 | 1.0710 |
| No log | 6.8235 | 232 | 0.9939 | 0.3778 | 0.9939 | 0.9969 |
| No log | 6.8824 | 234 | 0.7471 | 0.4560 | 0.7471 | 0.8643 |
| No log | 6.9412 | 236 | 0.6309 | 0.4554 | 0.6309 | 0.7943 |
| No log | 7.0 | 238 | 0.7472 | 0.4660 | 0.7472 | 0.8644 |
| No log | 7.0588 | 240 | 0.8777 | 0.4507 | 0.8777 | 0.9369 |
| No log | 7.1176 | 242 | 0.8329 | 0.4823 | 0.8329 | 0.9126 |
| No log | 7.1765 | 244 | 0.7236 | 0.5522 | 0.7236 | 0.8506 |
| No log | 7.2353 | 246 | 0.8029 | 0.4660 | 0.8029 | 0.8960 |
| No log | 7.2941 | 248 | 0.8907 | 0.5006 | 0.8907 | 0.9438 |
| No log | 7.3529 | 250 | 0.7899 | 0.4520 | 0.7899 | 0.8887 |
| No log | 7.4118 | 252 | 0.7340 | 0.4031 | 0.7340 | 0.8568 |
| No log | 7.4706 | 254 | 0.6632 | 0.4763 | 0.6632 | 0.8143 |
| No log | 7.5294 | 256 | 0.6525 | 0.4423 | 0.6525 | 0.8078 |
| No log | 7.5882 | 258 | 0.6569 | 0.5359 | 0.6569 | 0.8105 |
| No log | 7.6471 | 260 | 0.6665 | 0.5027 | 0.6665 | 0.8164 |
| No log | 7.7059 | 262 | 0.6648 | 0.4845 | 0.6648 | 0.8153 |
| No log | 7.7647 | 264 | 0.6536 | 0.4870 | 0.6536 | 0.8085 |
| No log | 7.8235 | 266 | 0.6566 | 0.4419 | 0.6566 | 0.8103 |
| No log | 7.8824 | 268 | 0.7273 | 0.4598 | 0.7273 | 0.8528 |
| No log | 7.9412 | 270 | 0.7580 | 0.3981 | 0.7580 | 0.8707 |
| No log | 8.0 | 272 | 0.7426 | 0.4591 | 0.7426 | 0.8617 |
| No log | 8.0588 | 274 | 0.7707 | 0.4424 | 0.7707 | 0.8779 |
| No log | 8.1176 | 276 | 0.7222 | 0.4897 | 0.7222 | 0.8498 |
| No log | 8.1765 | 278 | 0.7096 | 0.5026 | 0.7096 | 0.8424 |
| No log | 8.2353 | 280 | 0.7185 | 0.4843 | 0.7185 | 0.8476 |
| No log | 8.2941 | 282 | 0.7012 | 0.4945 | 0.7012 | 0.8374 |
| No log | 8.3529 | 284 | 0.7021 | 0.5295 | 0.7021 | 0.8379 |
| No log | 8.4118 | 286 | 0.6953 | 0.4943 | 0.6953 | 0.8339 |
| No log | 8.4706 | 288 | 0.7171 | 0.5111 | 0.7171 | 0.8468 |
| No log | 8.5294 | 290 | 0.7915 | 0.4689 | 0.7915 | 0.8897 |
| No log | 8.5882 | 292 | 0.7922 | 0.4562 | 0.7922 | 0.8901 |
| No log | 8.6471 | 294 | 0.7179 | 0.5305 | 0.7179 | 0.8473 |
| No log | 8.7059 | 296 | 0.6863 | 0.4300 | 0.6863 | 0.8285 |
| No log | 8.7647 | 298 | 0.7722 | 0.4761 | 0.7722 | 0.8788 |
| No log | 8.8235 | 300 | 0.7703 | 0.4838 | 0.7703 | 0.8777 |
| No log | 8.8824 | 302 | 0.6793 | 0.4749 | 0.6793 | 0.8242 |
| No log | 8.9412 | 304 | 0.6400 | 0.4825 | 0.6400 | 0.8000 |
| No log | 9.0 | 306 | 0.6309 | 0.4779 | 0.6309 | 0.7943 |
| No log | 9.0588 | 308 | 0.6125 | 0.4797 | 0.6125 | 0.7826 |
| No log | 9.1176 | 310 | 0.6178 | 0.4318 | 0.6178 | 0.7860 |
| No log | 9.1765 | 312 | 0.6304 | 0.4330 | 0.6304 | 0.7940 |
| No log | 9.2353 | 314 | 0.6627 | 0.4711 | 0.6627 | 0.8141 |
| No log | 9.2941 | 316 | 0.7113 | 0.5436 | 0.7113 | 0.8434 |
| No log | 9.3529 | 318 | 0.7140 | 0.5066 | 0.7140 | 0.8450 |
| No log | 9.4118 | 320 | 0.7302 | 0.5275 | 0.7302 | 0.8545 |
| No log | 9.4706 | 322 | 0.7854 | 0.5391 | 0.7854 | 0.8862 |
| No log | 9.5294 | 324 | 0.7996 | 0.5035 | 0.7996 | 0.8942 |
| No log | 9.5882 | 326 | 0.7257 | 0.5931 | 0.7257 | 0.8519 |
| No log | 9.6471 | 328 | 0.6872 | 0.5355 | 0.6872 | 0.8290 |
| No log | 9.7059 | 330 | 0.6687 | 0.5403 | 0.6687 | 0.8177 |
| No log | 9.7647 | 332 | 0.6758 | 0.4822 | 0.6758 | 0.8221 |
| No log | 9.8235 | 334 | 0.6939 | 0.4643 | 0.6939 | 0.8330 |
| No log | 9.8824 | 336 | 0.7078 | 0.4634 | 0.7078 | 0.8413 |
| No log | 9.9412 | 338 | 0.7023 | 0.5081 | 0.7023 | 0.8380 |
| No log | 10.0 | 340 | 0.6937 | 0.4858 | 0.6937 | 0.8329 |
| No log | 10.0588 | 342 | 0.7242 | 0.4408 | 0.7242 | 0.8510 |
| No log | 10.1176 | 344 | 0.7756 | 0.4040 | 0.7756 | 0.8807 |
| No log | 10.1765 | 346 | 0.7637 | 0.3981 | 0.7637 | 0.8739 |
| No log | 10.2353 | 348 | 0.7216 | 0.5171 | 0.7216 | 0.8495 |
| No log | 10.2941 | 350 | 0.7633 | 0.5101 | 0.7633 | 0.8737 |
| No log | 10.3529 | 352 | 0.7913 | 0.4946 | 0.7913 | 0.8895 |
| No log | 10.4118 | 354 | 0.8214 | 0.4734 | 0.8214 | 0.9063 |
| No log | 10.4706 | 356 | 0.8124 | 0.4948 | 0.8124 | 0.9013 |
| No log | 10.5294 | 358 | 0.7774 | 0.5228 | 0.7774 | 0.8817 |
| No log | 10.5882 | 360 | 0.7380 | 0.5405 | 0.7380 | 0.8591 |
| No log | 10.6471 | 362 | 0.7255 | 0.5367 | 0.7255 | 0.8517 |
| No log | 10.7059 | 364 | 0.6853 | 0.4181 | 0.6853 | 0.8278 |
| No log | 10.7647 | 366 | 0.6645 | 0.3809 | 0.6645 | 0.8151 |
| No log | 10.8235 | 368 | 0.6476 | 0.3557 | 0.6476 | 0.8047 |
| No log | 10.8824 | 370 | 0.6435 | 0.3628 | 0.6435 | 0.8022 |
| No log | 10.9412 | 372 | 0.6541 | 0.4212 | 0.6541 | 0.8088 |
| No log | 11.0 | 374 | 0.6862 | 0.5225 | 0.6862 | 0.8284 |
| No log | 11.0588 | 376 | 0.7527 | 0.5572 | 0.7527 | 0.8676 |
| No log | 11.1176 | 378 | 0.7487 | 0.5435 | 0.7487 | 0.8653 |
| No log | 11.1765 | 380 | 0.7236 | 0.5499 | 0.7236 | 0.8506 |
| No log | 11.2353 | 382 | 0.7257 | 0.4765 | 0.7257 | 0.8519 |
| No log | 11.2941 | 384 | 0.7522 | 0.5076 | 0.7522 | 0.8673 |
| No log | 11.3529 | 386 | 0.7128 | 0.4571 | 0.7128 | 0.8443 |
| No log | 11.4118 | 388 | 0.6497 | 0.3925 | 0.6497 | 0.8061 |
| No log | 11.4706 | 390 | 0.6308 | 0.3232 | 0.6308 | 0.7942 |
| No log | 11.5294 | 392 | 0.6387 | 0.3715 | 0.6387 | 0.7992 |
| No log | 11.5882 | 394 | 0.6364 | 0.3847 | 0.6364 | 0.7978 |
| No log | 11.6471 | 396 | 0.6358 | 0.4367 | 0.6358 | 0.7974 |
| No log | 11.7059 | 398 | 0.6556 | 0.4959 | 0.6556 | 0.8097 |
| No log | 11.7647 | 400 | 0.6475 | 0.5051 | 0.6475 | 0.8047 |
| No log | 11.8235 | 402 | 0.6302 | 0.4673 | 0.6302 | 0.7939 |
| No log | 11.8824 | 404 | 0.6903 | 0.4678 | 0.6903 | 0.8308 |
| No log | 11.9412 | 406 | 0.8763 | 0.4185 | 0.8763 | 0.9361 |
| No log | 12.0 | 408 | 0.9604 | 0.3992 | 0.9604 | 0.9800 |
| No log | 12.0588 | 410 | 0.8835 | 0.4185 | 0.8835 | 0.9399 |
| No log | 12.1176 | 412 | 0.7104 | 0.4840 | 0.7104 | 0.8429 |
| No log | 12.1765 | 414 | 0.6346 | 0.5216 | 0.6346 | 0.7966 |
| No log | 12.2353 | 416 | 0.6736 | 0.5743 | 0.6736 | 0.8207 |
| No log | 12.2941 | 418 | 0.6957 | 0.5704 | 0.6957 | 0.8341 |
| No log | 12.3529 | 420 | 0.6756 | 0.5818 | 0.6756 | 0.8220 |
| No log | 12.4118 | 422 | 0.6608 | 0.5818 | 0.6608 | 0.8129 |
| No log | 12.4706 | 424 | 0.6390 | 0.6041 | 0.6390 | 0.7994 |
| No log | 12.5294 | 426 | 0.6535 | 0.4786 | 0.6535 | 0.8084 |
| No log | 12.5882 | 428 | 0.7111 | 0.5341 | 0.7111 | 0.8433 |
| No log | 12.6471 | 430 | 0.6959 | 0.4936 | 0.6959 | 0.8342 |
| No log | 12.7059 | 432 | 0.6518 | 0.5336 | 0.6518 | 0.8073 |
| No log | 12.7647 | 434 | 0.6609 | 0.5686 | 0.6609 | 0.8130 |
| No log | 12.8235 | 436 | 0.7146 | 0.4976 | 0.7146 | 0.8454 |
| No log | 12.8824 | 438 | 0.7310 | 0.4914 | 0.7310 | 0.8550 |
| No log | 12.9412 | 440 | 0.7018 | 0.4922 | 0.7018 | 0.8377 |
| No log | 13.0 | 442 | 0.6455 | 0.5547 | 0.6455 | 0.8034 |
| No log | 13.0588 | 444 | 0.6285 | 0.4903 | 0.6285 | 0.7928 |
| No log | 13.1176 | 446 | 0.6367 | 0.4690 | 0.6367 | 0.7979 |
| No log | 13.1765 | 448 | 0.6682 | 0.5175 | 0.6682 | 0.8174 |
| No log | 13.2353 | 450 | 0.6803 | 0.4860 | 0.6803 | 0.8248 |
| No log | 13.2941 | 452 | 0.6586 | 0.5050 | 0.6586 | 0.8115 |
| No log | 13.3529 | 454 | 0.6274 | 0.4497 | 0.6274 | 0.7921 |
| No log | 13.4118 | 456 | 0.6181 | 0.4465 | 0.6181 | 0.7862 |
| No log | 13.4706 | 458 | 0.6224 | 0.4592 | 0.6224 | 0.7889 |
| No log | 13.5294 | 460 | 0.6388 | 0.4955 | 0.6388 | 0.7993 |
| No log | 13.5882 | 462 | 0.6646 | 0.4935 | 0.6646 | 0.8152 |
| No log | 13.6471 | 464 | 0.6527 | 0.4970 | 0.6527 | 0.8079 |
| No log | 13.7059 | 466 | 0.6422 | 0.5578 | 0.6422 | 0.8014 |
| No log | 13.7647 | 468 | 0.6360 | 0.5618 | 0.6360 | 0.7975 |
| No log | 13.8235 | 470 | 0.6323 | 0.5618 | 0.6323 | 0.7952 |
| No log | 13.8824 | 472 | 0.6272 | 0.5365 | 0.6272 | 0.7920 |
| No log | 13.9412 | 474 | 0.6172 | 0.5451 | 0.6172 | 0.7856 |
| No log | 14.0 | 476 | 0.6331 | 0.5373 | 0.6331 | 0.7957 |
| No log | 14.0588 | 478 | 0.6403 | 0.5440 | 0.6403 | 0.8002 |
| No log | 14.1176 | 480 | 0.6331 | 0.5795 | 0.6331 | 0.7956 |
| No log | 14.1765 | 482 | 0.6445 | 0.5454 | 0.6445 | 0.8028 |
| No log | 14.2353 | 484 | 0.6847 | 0.5153 | 0.6847 | 0.8275 |
| No log | 14.2941 | 486 | 0.6750 | 0.5281 | 0.6750 | 0.8216 |
| No log | 14.3529 | 488 | 0.6337 | 0.5109 | 0.6337 | 0.7960 |
| No log | 14.4118 | 490 | 0.6167 | 0.4863 | 0.6167 | 0.7853 |
| No log | 14.4706 | 492 | 0.6180 | 0.5391 | 0.6180 | 0.7861 |
| No log | 14.5294 | 494 | 0.6047 | 0.4623 | 0.6047 | 0.7776 |
| No log | 14.5882 | 496 | 0.5923 | 0.4450 | 0.5923 | 0.7696 |
| No log | 14.6471 | 498 | 0.5896 | 0.4418 | 0.5896 | 0.7679 |
| 0.3918 | 14.7059 | 500 | 0.5917 | 0.4559 | 0.5917 | 0.7692 |
| 0.3918 | 14.7647 | 502 | 0.5980 | 0.4770 | 0.5980 | 0.7733 |
| 0.3918 | 14.8235 | 504 | 0.6001 | 0.4938 | 0.6001 | 0.7747 |
| 0.3918 | 14.8824 | 506 | 0.6367 | 0.5600 | 0.6367 | 0.7979 |
| 0.3918 | 14.9412 | 508 | 0.7449 | 0.4813 | 0.7449 | 0.8631 |
| 0.3918 | 15.0 | 510 | 0.7550 | 0.4813 | 0.7550 | 0.8689 |
| 0.3918 | 15.0588 | 512 | 0.6861 | 0.5507 | 0.6861 | 0.8283 |
| 0.3918 | 15.1176 | 514 | 0.6178 | 0.4992 | 0.6178 | 0.7860 |
| 0.3918 | 15.1765 | 516 | 0.6035 | 0.4860 | 0.6035 | 0.7768 |
| 0.3918 | 15.2353 | 518 | 0.6112 | 0.4719 | 0.6112 | 0.7818 |
| 0.3918 | 15.2941 | 520 | 0.6147 | 0.5032 | 0.6147 | 0.7840 |
| 0.3918 | 15.3529 | 522 | 0.6193 | 0.5065 | 0.6193 | 0.7870 |
| 0.3918 | 15.4118 | 524 | 0.6320 | 0.4938 | 0.6320 | 0.7950 |
| 0.3918 | 15.4706 | 526 | 0.6353 | 0.4972 | 0.6353 | 0.7970 |
| 0.3918 | 15.5294 | 528 | 0.6292 | 0.4757 | 0.6292 | 0.7932 |
| 0.3918 | 15.5882 | 530 | 0.6250 | 0.4196 | 0.6250 | 0.7906 |
| 0.3918 | 15.6471 | 532 | 0.6276 | 0.4089 | 0.6276 | 0.7922 |
| 0.3918 | 15.7059 | 534 | 0.6329 | 0.4696 | 0.6329 | 0.7956 |
| 0.3918 | 15.7647 | 536 | 0.6420 | 0.4883 | 0.6420 | 0.8012 |
| 0.3918 | 15.8235 | 538 | 0.6517 | 0.5272 | 0.6517 | 0.8073 |
| 0.3918 | 15.8824 | 540 | 0.6652 | 0.5692 | 0.6652 | 0.8156 |
| 0.3918 | 15.9412 | 542 | 0.6888 | 0.5498 | 0.6888 | 0.8299 |
| 0.3918 | 16.0 | 544 | 0.6949 | 0.5377 | 0.6949 | 0.8336 |
| 0.3918 | 16.0588 | 546 | 0.6754 | 0.5366 | 0.6754 | 0.8218 |
| 0.3918 | 16.1176 | 548 | 0.6469 | 0.5408 | 0.6469 | 0.8043 |
| 0.3918 | 16.1765 | 550 | 0.6144 | 0.5503 | 0.6144 | 0.7838 |
| 0.3918 | 16.2353 | 552 | 0.6100 | 0.4808 | 0.6100 | 0.7811 |
| 0.3918 | 16.2941 | 554 | 0.6360 | 0.4962 | 0.6360 | 0.7975 |
| 0.3918 | 16.3529 | 556 | 0.6303 | 0.4286 | 0.6303 | 0.7939 |
| 0.3918 | 16.4118 | 558 | 0.6136 | 0.3695 | 0.6136 | 0.7833 |
| 0.3918 | 16.4706 | 560 | 0.5994 | 0.3584 | 0.5994 | 0.7742 |
| 0.3918 | 16.5294 | 562 | 0.6010 | 0.4234 | 0.6010 | 0.7753 |
| 0.3918 | 16.5882 | 564 | 0.6085 | 0.4677 | 0.6085 | 0.7801 |
| 0.3918 | 16.6471 | 566 | 0.6252 | 0.4803 | 0.6252 | 0.7907 |
| 0.3918 | 16.7059 | 568 | 0.6603 | 0.5610 | 0.6603 | 0.8126 |
| 0.3918 | 16.7647 | 570 | 0.6710 | 0.5707 | 0.6710 | 0.8191 |
| 0.3918 | 16.8235 | 572 | 0.6614 | 0.5907 | 0.6614 | 0.8133 |
| 0.3918 | 16.8824 | 574 | 0.6368 | 0.5139 | 0.6368 | 0.7980 |
| 0.3918 | 16.9412 | 576 | 0.6066 | 0.5026 | 0.6066 | 0.7789 |
| 0.3918 | 17.0 | 578 | 0.5901 | 0.4018 | 0.5901 | 0.7682 |
| 0.3918 | 17.0588 | 580 | 0.5974 | 0.4403 | 0.5974 | 0.7729 |
| 0.3918 | 17.1176 | 582 | 0.5963 | 0.4538 | 0.5963 | 0.7722 |
| 0.3918 | 17.1765 | 584 | 0.5798 | 0.4545 | 0.5798 | 0.7614 |
| 0.3918 | 17.2353 | 586 | 0.5773 | 0.4731 | 0.5773 | 0.7598 |
| 0.3918 | 17.2941 | 588 | 0.5876 | 0.4749 | 0.5876 | 0.7666 |
| 0.3918 | 17.3529 | 590 | 0.5970 | 0.4749 | 0.5970 | 0.7726 |
| 0.3918 | 17.4118 | 592 | 0.6078 | 0.4998 | 0.6078 | 0.7796 |
| 0.3918 | 17.4706 | 594 | 0.6095 | 0.4880 | 0.6095 | 0.7807 |
| 0.3918 | 17.5294 | 596 | 0.6004 | 0.4727 | 0.6004 | 0.7749 |
| 0.3918 | 17.5882 | 598 | 0.6079 | 0.4600 | 0.6079 | 0.7797 |
| 0.3918 | 17.6471 | 600 | 0.6002 | 0.4057 | 0.6002 | 0.7747 |
| 0.3918 | 17.7059 | 602 | 0.5849 | 0.4552 | 0.5849 | 0.7648 |
| 0.3918 | 17.7647 | 604 | 0.5880 | 0.4741 | 0.5880 | 0.7668 |
| 0.3918 | 17.8235 | 606 | 0.6024 | 0.5161 | 0.6024 | 0.7762 |
| 0.3918 | 17.8824 | 608 | 0.6131 | 0.5077 | 0.6131 | 0.7830 |
| 0.3918 | 17.9412 | 610 | 0.6283 | 0.5403 | 0.6283 | 0.7926 |
| 0.3918 | 18.0 | 612 | 0.6478 | 0.5352 | 0.6478 | 0.8049 |
| 0.3918 | 18.0588 | 614 | 0.6582 | 0.5954 | 0.6582 | 0.8113 |
| 0.3918 | 18.1176 | 616 | 0.7024 | 0.5628 | 0.7024 | 0.8381 |
| 0.3918 | 18.1765 | 618 | 0.7590 | 0.4995 | 0.7590 | 0.8712 |
| 0.3918 | 18.2353 | 620 | 0.7485 | 0.4870 | 0.7485 | 0.8651 |
| 0.3918 | 18.2941 | 622 | 0.6925 | 0.5853 | 0.6925 | 0.8322 |
| 0.3918 | 18.3529 | 624 | 0.6287 | 0.5391 | 0.6287 | 0.7929 |
| 0.3918 | 18.4118 | 626 | 0.6060 | 0.4935 | 0.6060 | 0.7784 |
| 0.3918 | 18.4706 | 628 | 0.5976 | 0.5013 | 0.5976 | 0.7731 |
| 0.3918 | 18.5294 | 630 | 0.6017 | 0.5160 | 0.6017 | 0.7757 |
| 0.3918 | 18.5882 | 632 | 0.6361 | 0.4799 | 0.6361 | 0.7975 |
| 0.3918 | 18.6471 | 634 | 0.6765 | 0.5188 | 0.6765 | 0.8225 |
| 0.3918 | 18.7059 | 636 | 0.6872 | 0.5158 | 0.6872 | 0.8289 |
| 0.3918 | 18.7647 | 638 | 0.6661 | 0.5466 | 0.6661 | 0.8161 |
| 0.3918 | 18.8235 | 640 | 0.6293 | 0.5415 | 0.6293 | 0.7933 |
| 0.3918 | 18.8824 | 642 | 0.6420 | 0.4364 | 0.6420 | 0.8012 |
| 0.3918 | 18.9412 | 644 | 0.6843 | 0.4755 | 0.6843 | 0.8272 |
| 0.3918 | 19.0 | 646 | 0.6889 | 0.4628 | 0.6889 | 0.8300 |
| 0.3918 | 19.0588 | 648 | 0.6765 | 0.4892 | 0.6765 | 0.8225 |
| 0.3918 | 19.1176 | 650 | 0.6667 | 0.5031 | 0.6667 | 0.8165 |
| 0.3918 | 19.1765 | 652 | 0.6394 | 0.4384 | 0.6394 | 0.7997 |
| 0.3918 | 19.2353 | 654 | 0.6206 | 0.4224 | 0.6206 | 0.7878 |
| 0.3918 | 19.2941 | 656 | 0.6130 | 0.4224 | 0.6130 | 0.7829 |
| 0.3918 | 19.3529 | 658 | 0.6222 | 0.4735 | 0.6222 | 0.7888 |
| 0.3918 | 19.4118 | 660 | 0.6539 | 0.4996 | 0.6539 | 0.8086 |
| 0.3918 | 19.4706 | 662 | 0.6528 | 0.4996 | 0.6528 | 0.8079 |
| 0.3918 | 19.5294 | 664 | 0.6141 | 0.4714 | 0.6141 | 0.7836 |
| 0.3918 | 19.5882 | 666 | 0.5948 | 0.5088 | 0.5948 | 0.7712 |
| 0.3918 | 19.6471 | 668 | 0.5990 | 0.5146 | 0.5990 | 0.7740 |
| 0.3918 | 19.7059 | 670 | 0.5988 | 0.5146 | 0.5988 | 0.7738 |
| 0.3918 | 19.7647 | 672 | 0.5902 | 0.4878 | 0.5902 | 0.7683 |
| 0.3918 | 19.8235 | 674 | 0.5974 | 0.4841 | 0.5974 | 0.7729 |
| 0.3918 | 19.8824 | 676 | 0.6222 | 0.4859 | 0.6222 | 0.7888 |
| 0.3918 | 19.9412 | 678 | 0.6215 | 0.4777 | 0.6215 | 0.7884 |
| 0.3918 | 20.0 | 680 | 0.5905 | 0.4258 | 0.5905 | 0.7684 |
| 0.3918 | 20.0588 | 682 | 0.5802 | 0.4862 | 0.5802 | 0.7617 |
| 0.3918 | 20.1176 | 684 | 0.5996 | 0.5132 | 0.5996 | 0.7743 |
| 0.3918 | 20.1765 | 686 | 0.6009 | 0.5132 | 0.6009 | 0.7751 |
| 0.3918 | 20.2353 | 688 | 0.5929 | 0.5203 | 0.5929 | 0.7700 |
| 0.3918 | 20.2941 | 690 | 0.5944 | 0.5409 | 0.5944 | 0.7710 |
| 0.3918 | 20.3529 | 692 | 0.6020 | 0.5545 | 0.6020 | 0.7759 |
| 0.3918 | 20.4118 | 694 | 0.6068 | 0.5618 | 0.6068 | 0.7790 |
| 0.3918 | 20.4706 | 696 | 0.6231 | 0.5865 | 0.6231 | 0.7894 |
| 0.3918 | 20.5294 | 698 | 0.6437 | 0.6175 | 0.6437 | 0.8023 |
| 0.3918 | 20.5882 | 700 | 0.6649 | 0.6175 | 0.6649 | 0.8154 |
| 0.3918 | 20.6471 | 702 | 0.6809 | 0.5997 | 0.6809 | 0.8252 |
| 0.3918 | 20.7059 | 704 | 0.6652 | 0.5888 | 0.6652 | 0.8156 |
| 0.3918 | 20.7647 | 706 | 0.6458 | 0.6170 | 0.6458 | 0.8036 |
| 0.3918 | 20.8235 | 708 | 0.6437 | 0.6106 | 0.6437 | 0.8023 |
| 0.3918 | 20.8824 | 710 | 0.6456 | 0.6043 | 0.6456 | 0.8035 |
| 0.3918 | 20.9412 | 712 | 0.6392 | 0.6161 | 0.6392 | 0.7995 |
| 0.3918 | 21.0 | 714 | 0.6180 | 0.5920 | 0.6180 | 0.7861 |
| 0.3918 | 21.0588 | 716 | 0.5833 | 0.5464 | 0.5833 | 0.7637 |
| 0.3918 | 21.1176 | 718 | 0.5713 | 0.5173 | 0.5713 | 0.7558 |
| 0.3918 | 21.1765 | 720 | 0.5694 | 0.5173 | 0.5694 | 0.7546 |
| 0.3918 | 21.2353 | 722 | 0.5676 | 0.5162 | 0.5676 | 0.7534 |
| 0.3918 | 21.2941 | 724 | 0.5711 | 0.5415 | 0.5711 | 0.7557 |
| 0.3918 | 21.3529 | 726 | 0.5801 | 0.5607 | 0.5801 | 0.7616 |
| 0.3918 | 21.4118 | 728 | 0.5782 | 0.5607 | 0.5782 | 0.7604 |
| 0.3918 | 21.4706 | 730 | 0.5819 | 0.5607 | 0.5819 | 0.7629 |
| 0.3918 | 21.5294 | 732 | 0.5789 | 0.5222 | 0.5789 | 0.7609 |
| 0.3918 | 21.5882 | 734 | 0.5782 | 0.5239 | 0.5782 | 0.7604 |
| 0.3918 | 21.6471 | 736 | 0.5770 | 0.4880 | 0.5770 | 0.7596 |
| 0.3918 | 21.7059 | 738 | 0.5823 | 0.4676 | 0.5823 | 0.7631 |
| 0.3918 | 21.7647 | 740 | 0.6076 | 0.4507 | 0.6076 | 0.7795 |
| 0.3918 | 21.8235 | 742 | 0.6266 | 0.4925 | 0.6266 | 0.7916 |
| 0.3918 | 21.8824 | 744 | 0.6113 | 0.4449 | 0.6113 | 0.7818 |
| 0.3918 | 21.9412 | 746 | 0.5919 | 0.4652 | 0.5919 | 0.7694 |
| 0.3918 | 22.0 | 748 | 0.5944 | 0.4666 | 0.5944 | 0.7710 |
| 0.3918 | 22.0588 | 750 | 0.6001 | 0.4642 | 0.6001 | 0.7747 |
| 0.3918 | 22.1176 | 752 | 0.5973 | 0.5018 | 0.5973 | 0.7729 |
| 0.3918 | 22.1765 | 754 | 0.5895 | 0.5011 | 0.5895 | 0.7678 |
| 0.3918 | 22.2353 | 756 | 0.5925 | 0.5367 | 0.5925 | 0.7697 |
| 0.3918 | 22.2941 | 758 | 0.5901 | 0.5595 | 0.5901 | 0.7682 |
| 0.3918 | 22.3529 | 760 | 0.5850 | 0.5079 | 0.5850 | 0.7649 |
| 0.3918 | 22.4118 | 762 | 0.5769 | 0.4900 | 0.5769 | 0.7596 |
| 0.3918 | 22.4706 | 764 | 0.5735 | 0.5093 | 0.5735 | 0.7573 |
| 0.3918 | 22.5294 | 766 | 0.5825 | 0.5158 | 0.5825 | 0.7632 |
| 0.3918 | 22.5882 | 768 | 0.5932 | 0.5129 | 0.5932 | 0.7702 |
| 0.3918 | 22.6471 | 770 | 0.5945 | 0.5129 | 0.5945 | 0.7710 |
| 0.3918 | 22.7059 | 772 | 0.6038 | 0.4726 | 0.6038 | 0.7770 |
| 0.3918 | 22.7647 | 774 | 0.6190 | 0.4927 | 0.6190 | 0.7867 |
| 0.3918 | 22.8235 | 776 | 0.6073 | 0.4735 | 0.6073 | 0.7793 |
| 0.3918 | 22.8824 | 778 | 0.5857 | 0.4909 | 0.5857 | 0.7653 |
| 0.3918 | 22.9412 | 780 | 0.5805 | 0.5124 | 0.5805 | 0.7619 |
| 0.3918 | 23.0 | 782 | 0.5867 | 0.5082 | 0.5867 | 0.7659 |
| 0.3918 | 23.0588 | 784 | 0.5916 | 0.4875 | 0.5916 | 0.7692 |
| 0.3918 | 23.1176 | 786 | 0.5859 | 0.5018 | 0.5859 | 0.7654 |
| 0.3918 | 23.1765 | 788 | 0.5807 | 0.5158 | 0.5807 | 0.7620 |
| 0.3918 | 23.2353 | 790 | 0.5749 | 0.5279 | 0.5749 | 0.7582 |
| 0.3918 | 23.2941 | 792 | 0.5766 | 0.5451 | 0.5766 | 0.7593 |
| 0.3918 | 23.3529 | 794 | 0.5789 | 0.5451 | 0.5789 | 0.7608 |
| 0.3918 | 23.4118 | 796 | 0.5855 | 0.5337 | 0.5855 | 0.7652 |
| 0.3918 | 23.4706 | 798 | 0.6022 | 0.5374 | 0.6022 | 0.7760 |
| 0.3918 | 23.5294 | 800 | 0.6309 | 0.5307 | 0.6309 | 0.7943 |
| 0.3918 | 23.5882 | 802 | 0.6382 | 0.5257 | 0.6382 | 0.7989 |
| 0.3918 | 23.6471 | 804 | 0.6009 | 0.5251 | 0.6009 | 0.7752 |
| 0.3918 | 23.7059 | 806 | 0.5728 | 0.5601 | 0.5728 | 0.7568 |
| 0.3918 | 23.7647 | 808 | 0.5721 | 0.6129 | 0.5721 | 0.7564 |
| 0.3918 | 23.8235 | 810 | 0.5689 | 0.6211 | 0.5689 | 0.7542 |
| 0.3918 | 23.8824 | 812 | 0.5595 | 0.6135 | 0.5595 | 0.7480 |
| 0.3918 | 23.9412 | 814 | 0.5570 | 0.5451 | 0.5570 | 0.7463 |
| 0.3918 | 24.0 | 816 | 0.5633 | 0.5228 | 0.5633 | 0.7505 |
| 0.3918 | 24.0588 | 818 | 0.5642 | 0.5415 | 0.5642 | 0.7512 |
| 0.3918 | 24.1176 | 820 | 0.5661 | 0.5427 | 0.5661 | 0.7524 |
| 0.3918 | 24.1765 | 822 | 0.5730 | 0.5524 | 0.5730 | 0.7570 |
| 0.3918 | 24.2353 | 824 | 0.5758 | 0.5729 | 0.5758 | 0.7588 |
| 0.3918 | 24.2941 | 826 | 0.5743 | 0.5625 | 0.5743 | 0.7578 |
| 0.3918 | 24.3529 | 828 | 0.5716 | 0.5517 | 0.5716 | 0.7560 |
| 0.3918 | 24.4118 | 830 | 0.5673 | 0.5199 | 0.5673 | 0.7532 |
| 0.3918 | 24.4706 | 832 | 0.5666 | 0.5178 | 0.5666 | 0.7527 |
| 0.3918 | 24.5294 | 834 | 0.5681 | 0.5402 | 0.5681 | 0.7537 |
| 0.3918 | 24.5882 | 836 | 0.5687 | 0.5493 | 0.5687 | 0.7541 |
| 0.3918 | 24.6471 | 838 | 0.5753 | 0.5812 | 0.5753 | 0.7585 |
| 0.3918 | 24.7059 | 840 | 0.5764 | 0.6486 | 0.5764 | 0.7592 |
| 0.3918 | 24.7647 | 842 | 0.5666 | 0.5256 | 0.5666 | 0.7527 |
| 0.3918 | 24.8235 | 844 | 0.5572 | 0.5549 | 0.5572 | 0.7465 |
| 0.3918 | 24.8824 | 846 | 0.5665 | 0.4823 | 0.5665 | 0.7527 |
| 0.3918 | 24.9412 | 848 | 0.5975 | 0.4975 | 0.5975 | 0.7730 |
| 0.3918 | 25.0 | 850 | 0.6125 | 0.4968 | 0.6125 | 0.7826 |
| 0.3918 | 25.0588 | 852 | 0.5985 | 0.4585 | 0.5985 | 0.7736 |
| 0.3918 | 25.1176 | 854 | 0.5667 | 0.4953 | 0.5667 | 0.7528 |
| 0.3918 | 25.1765 | 856 | 0.5642 | 0.5172 | 0.5642 | 0.7511 |
| 0.3918 | 25.2353 | 858 | 0.5741 | 0.5187 | 0.5741 | 0.7577 |
| 0.3918 | 25.2941 | 860 | 0.5808 | 0.5126 | 0.5808 | 0.7621 |
| 0.3918 | 25.3529 | 862 | 0.5812 | 0.5177 | 0.5812 | 0.7624 |
| 0.3918 | 25.4118 | 864 | 0.5927 | 0.5585 | 0.5927 | 0.7698 |
| 0.3918 | 25.4706 | 866 | 0.6099 | 0.5139 | 0.6099 | 0.7809 |
| 0.3918 | 25.5294 | 868 | 0.6299 | 0.4876 | 0.6299 | 0.7937 |
| 0.3918 | 25.5882 | 870 | 0.6153 | 0.4924 | 0.6153 | 0.7844 |
| 0.3918 | 25.6471 | 872 | 0.5982 | 0.5061 | 0.5982 | 0.7734 |
| 0.3918 | 25.7059 | 874 | 0.5903 | 0.5195 | 0.5903 | 0.7683 |
| 0.3918 | 25.7647 | 876 | 0.5891 | 0.4856 | 0.5891 | 0.7675 |
| 0.3918 | 25.8235 | 878 | 0.5887 | 0.4903 | 0.5887 | 0.7673 |
| 0.3918 | 25.8824 | 880 | 0.5904 | 0.4865 | 0.5904 | 0.7684 |
| 0.3918 | 25.9412 | 882 | 0.5950 | 0.4991 | 0.5950 | 0.7714 |
| 0.3918 | 26.0 | 884 | 0.6039 | 0.5532 | 0.6039 | 0.7771 |
| 0.3918 | 26.0588 | 886 | 0.6091 | 0.5369 | 0.6091 | 0.7805 |
| 0.3918 | 26.1176 | 888 | 0.6070 | 0.5536 | 0.6070 | 0.7791 |
| 0.3918 | 26.1765 | 890 | 0.6108 | 0.5658 | 0.6108 | 0.7816 |
| 0.3918 | 26.2353 | 892 | 0.6055 | 0.5658 | 0.6055 | 0.7781 |
| 0.3918 | 26.2941 | 894 | 0.5977 | 0.5658 | 0.5977 | 0.7731 |
| 0.3918 | 26.3529 | 896 | 0.5979 | 0.5539 | 0.5979 | 0.7732 |
| 0.3918 | 26.4118 | 898 | 0.5896 | 0.5329 | 0.5896 | 0.7678 |
| 0.3918 | 26.4706 | 900 | 0.5703 | 0.5209 | 0.5703 | 0.7552 |
| 0.3918 | 26.5294 | 902 | 0.5636 | 0.5271 | 0.5636 | 0.7508 |
| 0.3918 | 26.5882 | 904 | 0.5685 | 0.5119 | 0.5685 | 0.7540 |
| 0.3918 | 26.6471 | 906 | 0.5743 | 0.5228 | 0.5743 | 0.7578 |
| 0.3918 | 26.7059 | 908 | 0.5881 | 0.4840 | 0.5881 | 0.7669 |
| 0.3918 | 26.7647 | 910 | 0.5849 | 0.4853 | 0.5849 | 0.7648 |
| 0.3918 | 26.8235 | 912 | 0.5862 | 0.4788 | 0.5862 | 0.7656 |
| 0.3918 | 26.8824 | 914 | 0.5908 | 0.4895 | 0.5908 | 0.7686 |
| 0.3918 | 26.9412 | 916 | 0.5989 | 0.4594 | 0.5989 | 0.7739 |
| 0.3918 | 27.0 | 918 | 0.5888 | 0.4594 | 0.5888 | 0.7673 |
| 0.3918 | 27.0588 | 920 | 0.5724 | 0.4727 | 0.5724 | 0.7565 |
| 0.3918 | 27.1176 | 922 | 0.5651 | 0.5072 | 0.5651 | 0.7518 |
| 0.3918 | 27.1765 | 924 | 0.5699 | 0.4849 | 0.5699 | 0.7549 |
| 0.3918 | 27.2353 | 926 | 0.5909 | 0.5876 | 0.5909 | 0.7687 |
| 0.3918 | 27.2941 | 928 | 0.6105 | 0.6081 | 0.6105 | 0.7813 |
| 0.3918 | 27.3529 | 930 | 0.6054 | 0.6081 | 0.6054 | 0.7781 |
| 0.3918 | 27.4118 | 932 | 0.5801 | 0.6076 | 0.5801 | 0.7617 |
| 0.3918 | 27.4706 | 934 | 0.5679 | 0.5077 | 0.5679 | 0.7536 |
| 0.3918 | 27.5294 | 936 | 0.5841 | 0.5581 | 0.5841 | 0.7643 |
| 0.3918 | 27.5882 | 938 | 0.5862 | 0.5561 | 0.5862 | 0.7656 |
| 0.3918 | 27.6471 | 940 | 0.5726 | 0.5077 | 0.5726 | 0.7567 |
| 0.3918 | 27.7059 | 942 | 0.5775 | 0.6232 | 0.5775 | 0.7599 |
| 0.3918 | 27.7647 | 944 | 0.6216 | 0.5797 | 0.6216 | 0.7884 |
| 0.3918 | 27.8235 | 946 | 0.6529 | 0.5313 | 0.6529 | 0.8080 |
| 0.3918 | 27.8824 | 948 | 0.6343 | 0.5713 | 0.6343 | 0.7964 |
| 0.3918 | 27.9412 | 950 | 0.6166 | 0.5609 | 0.6166 | 0.7853 |
| 0.3918 | 28.0 | 952 | 0.6021 | 0.5544 | 0.6021 | 0.7760 |
| 0.3918 | 28.0588 | 954 | 0.6034 | 0.5544 | 0.6034 | 0.7768 |
| 0.3918 | 28.1176 | 956 | 0.5986 | 0.5633 | 0.5986 | 0.7737 |
| 0.3918 | 28.1765 | 958 | 0.5955 | 0.5227 | 0.5955 | 0.7717 |
| 0.3918 | 28.2353 | 960 | 0.6130 | 0.5510 | 0.6130 | 0.7830 |
| 0.3918 | 28.2941 | 962 | 0.6348 | 0.6071 | 0.6348 | 0.7967 |
| 0.3918 | 28.3529 | 964 | 0.6331 | 0.5617 | 0.6331 | 0.7956 |
| 0.3918 | 28.4118 | 966 | 0.6188 | 0.5164 | 0.6188 | 0.7867 |
| 0.3918 | 28.4706 | 968 | 0.6077 | 0.5027 | 0.6077 | 0.7796 |
| 0.3918 | 28.5294 | 970 | 0.6065 | 0.5287 | 0.6065 | 0.7788 |
| 0.3918 | 28.5882 | 972 | 0.6064 | 0.5373 | 0.6064 | 0.7787 |
| 0.3918 | 28.6471 | 974 | 0.6006 | 0.5259 | 0.6006 | 0.7750 |
| 0.3918 | 28.7059 | 976 | 0.5964 | 0.4998 | 0.5964 | 0.7722 |
| 0.3918 | 28.7647 | 978 | 0.5970 | 0.4998 | 0.5970 | 0.7726 |
| 0.3918 | 28.8235 | 980 | 0.5899 | 0.4998 | 0.5899 | 0.7681 |
| 0.3918 | 28.8824 | 982 | 0.5820 | 0.5042 | 0.5820 | 0.7629 |
| 0.3918 | 28.9412 | 984 | 0.5815 | 0.5042 | 0.5815 | 0.7626 |
| 0.3918 | 29.0 | 986 | 0.5882 | 0.5109 | 0.5882 | 0.7670 |
| 0.3918 | 29.0588 | 988 | 0.5910 | 0.4932 | 0.5910 | 0.7688 |
| 0.3918 | 29.1176 | 990 | 0.6070 | 0.4783 | 0.6070 | 0.7791 |
| 0.3918 | 29.1765 | 992 | 0.6357 | 0.5119 | 0.6357 | 0.7973 |
| 0.3918 | 29.2353 | 994 | 0.6589 | 0.5104 | 0.6589 | 0.8118 |
| 0.3918 | 29.2941 | 996 | 0.6479 | 0.5189 | 0.6479 | 0.8049 |
| 0.3918 | 29.3529 | 998 | 0.6206 | 0.5216 | 0.6206 | 0.7878 |
| 0.065 | 29.4118 | 1000 | 0.6013 | 0.5239 | 0.6013 | 0.7754 |
| 0.065 | 29.4706 | 1002 | 0.5965 | 0.5088 | 0.5965 | 0.7723 |
| 0.065 | 29.5294 | 1004 | 0.5949 | 0.5178 | 0.5949 | 0.7713 |
| 0.065 | 29.5882 | 1006 | 0.5877 | 0.4998 | 0.5877 | 0.7666 |
| 0.065 | 29.6471 | 1008 | 0.5784 | 0.5155 | 0.5784 | 0.7606 |
| 0.065 | 29.7059 | 1010 | 0.5779 | 0.5267 | 0.5779 | 0.7602 |
| 0.065 | 29.7647 | 1012 | 0.5902 | 0.5093 | 0.5902 | 0.7682 |
| 0.065 | 29.8235 | 1014 | 0.6111 | 0.4927 | 0.6111 | 0.7817 |
| 0.065 | 29.8824 | 1016 | 0.6095 | 0.4780 | 0.6095 | 0.7807 |
| 0.065 | 29.9412 | 1018 | 0.5898 | 0.4781 | 0.5898 | 0.7680 |
| 0.065 | 30.0 | 1020 | 0.5851 | 0.5238 | 0.5851 | 0.7649 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k6_task1_organization
|
MayBashendy
| 2025-01-15T12:32:04Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T12:14:36Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k6_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k6_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1023
- Qwk: 0.2302
- Mse: 2.1023
- Rmse: 1.4499
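
As a hedged usage sketch (the label scale and classification-head configuration are not documented in this card, so the raw logits are printed without further interpretation):

```python
# Minimal inference sketch for the published checkpoint; mapping the raw logits
# back to organization scores is an assumption not documented in this card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k6_task1_organization"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("نص مقال تجريبي", return_tensors="pt")  # placeholder Arabic text
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```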
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0769 | 2 | 6.9275 | -0.0056 | 6.9275 | 2.6320 |
| No log | 0.1538 | 4 | 4.4704 | 0.0702 | 4.4704 | 2.1143 |
| No log | 0.2308 | 6 | 3.7767 | -0.0208 | 3.7767 | 1.9434 |
| No log | 0.3077 | 8 | 4.9755 | -0.0855 | 4.9755 | 2.2306 |
| No log | 0.3846 | 10 | 4.8314 | -0.0889 | 4.8314 | 2.1980 |
| No log | 0.4615 | 12 | 3.2770 | -0.0686 | 3.2770 | 1.8102 |
| No log | 0.5385 | 14 | 2.3898 | 0.0584 | 2.3898 | 1.5459 |
| No log | 0.6154 | 16 | 2.4053 | 0.0 | 2.4053 | 1.5509 |
| No log | 0.6923 | 18 | 2.3197 | 0.0305 | 2.3197 | 1.5231 |
| No log | 0.7692 | 20 | 2.0640 | 0.1148 | 2.0640 | 1.4367 |
| No log | 0.8462 | 22 | 1.8753 | 0.1379 | 1.8753 | 1.3694 |
| No log | 0.9231 | 24 | 1.8365 | 0.1368 | 1.8365 | 1.3552 |
| No log | 1.0 | 26 | 1.7928 | 0.1930 | 1.7928 | 1.3389 |
| No log | 1.0769 | 28 | 1.7491 | 0.0784 | 1.7491 | 1.3225 |
| No log | 1.1538 | 30 | 1.7718 | 0.0971 | 1.7718 | 1.3311 |
| No log | 1.2308 | 32 | 1.8360 | 0.0769 | 1.8360 | 1.3550 |
| No log | 1.3077 | 34 | 1.9071 | 0.0748 | 1.9071 | 1.3810 |
| No log | 1.3846 | 36 | 2.0981 | 0.1322 | 2.0981 | 1.4485 |
| No log | 1.4615 | 38 | 1.9696 | 0.1186 | 1.9696 | 1.4034 |
| No log | 1.5385 | 40 | 1.7876 | 0.1053 | 1.7876 | 1.3370 |
| No log | 1.6154 | 42 | 1.7216 | 0.1818 | 1.7216 | 1.3121 |
| No log | 1.6923 | 44 | 1.7784 | 0.0962 | 1.7784 | 1.3336 |
| No log | 1.7692 | 46 | 1.8010 | 0.2679 | 1.8010 | 1.3420 |
| No log | 1.8462 | 48 | 1.7625 | 0.2957 | 1.7625 | 1.3276 |
| No log | 1.9231 | 50 | 1.6574 | 0.2951 | 1.6574 | 1.2874 |
| No log | 2.0 | 52 | 1.5691 | 0.3651 | 1.5691 | 1.2526 |
| No log | 2.0769 | 54 | 1.5583 | 0.2602 | 1.5583 | 1.2483 |
| No log | 2.1538 | 56 | 1.5564 | 0.3636 | 1.5564 | 1.2475 |
| No log | 2.2308 | 58 | 1.8582 | 0.3101 | 1.8582 | 1.3632 |
| No log | 2.3077 | 60 | 1.9456 | 0.2985 | 1.9456 | 1.3948 |
| No log | 2.3846 | 62 | 1.7541 | 0.3382 | 1.7541 | 1.3244 |
| No log | 2.4615 | 64 | 1.6197 | 0.2689 | 1.6197 | 1.2727 |
| No log | 2.5385 | 66 | 1.6497 | 0.1724 | 1.6497 | 1.2844 |
| No log | 2.6154 | 68 | 1.6558 | 0.1739 | 1.6558 | 1.2868 |
| No log | 2.6923 | 70 | 1.6264 | 0.1897 | 1.6264 | 1.2753 |
| No log | 2.7692 | 72 | 1.5811 | 0.3577 | 1.5811 | 1.2574 |
| No log | 2.8462 | 74 | 1.5280 | 0.3548 | 1.5280 | 1.2361 |
| No log | 2.9231 | 76 | 1.5087 | 0.3548 | 1.5087 | 1.2283 |
| No log | 3.0 | 78 | 1.5833 | 0.3939 | 1.5833 | 1.2583 |
| No log | 3.0769 | 80 | 1.8412 | 0.2609 | 1.8412 | 1.3569 |
| No log | 3.1538 | 82 | 1.8000 | 0.2920 | 1.8000 | 1.3416 |
| No log | 3.2308 | 84 | 1.6949 | 0.3676 | 1.6949 | 1.3019 |
| No log | 3.3077 | 86 | 1.7790 | 0.3358 | 1.7790 | 1.3338 |
| No log | 3.3846 | 88 | 1.8963 | 0.3333 | 1.8963 | 1.3771 |
| No log | 3.4615 | 90 | 1.9345 | 0.2857 | 1.9345 | 1.3908 |
| No log | 3.5385 | 92 | 1.8385 | 0.3478 | 1.8385 | 1.3559 |
| No log | 3.6154 | 94 | 1.7298 | 0.3259 | 1.7298 | 1.3152 |
| No log | 3.6923 | 96 | 1.7603 | 0.3382 | 1.7603 | 1.3268 |
| No log | 3.7692 | 98 | 1.8919 | 0.3000 | 1.8919 | 1.3755 |
| No log | 3.8462 | 100 | 1.8871 | 0.3650 | 1.8871 | 1.3737 |
| No log | 3.9231 | 102 | 1.7185 | 0.3636 | 1.7185 | 1.3109 |
| No log | 4.0 | 104 | 1.5748 | 0.2203 | 1.5748 | 1.2549 |
| No log | 4.0769 | 106 | 1.5630 | 0.2903 | 1.5630 | 1.2502 |
| No log | 4.1538 | 108 | 1.6675 | 0.3636 | 1.6675 | 1.2913 |
| No log | 4.2308 | 110 | 1.8266 | 0.3066 | 1.8266 | 1.3515 |
| No log | 4.3077 | 112 | 1.8425 | 0.3088 | 1.8425 | 1.3574 |
| No log | 4.3846 | 114 | 1.6335 | 0.2992 | 1.6335 | 1.2781 |
| No log | 4.4615 | 116 | 1.4517 | 0.2414 | 1.4517 | 1.2049 |
| No log | 4.5385 | 118 | 1.4063 | 0.3423 | 1.4063 | 1.1859 |
| No log | 4.6154 | 120 | 1.4478 | 0.3529 | 1.4478 | 1.2032 |
| No log | 4.6923 | 122 | 1.4713 | 0.3415 | 1.4713 | 1.2130 |
| No log | 4.7692 | 124 | 1.4835 | 0.4094 | 1.4835 | 1.2180 |
| No log | 4.8462 | 126 | 1.5031 | 0.4186 | 1.5031 | 1.2260 |
| No log | 4.9231 | 128 | 1.5503 | 0.4 | 1.5503 | 1.2451 |
| No log | 5.0 | 130 | 1.5101 | 0.4062 | 1.5101 | 1.2289 |
| No log | 5.0769 | 132 | 1.5969 | 0.4 | 1.5969 | 1.2637 |
| No log | 5.1538 | 134 | 1.7464 | 0.3459 | 1.7464 | 1.3215 |
| No log | 5.2308 | 136 | 1.6598 | 0.3485 | 1.6598 | 1.2883 |
| No log | 5.3077 | 138 | 1.4752 | 0.3871 | 1.4752 | 1.2146 |
| No log | 5.3846 | 140 | 1.3852 | 0.3140 | 1.3852 | 1.1769 |
| No log | 5.4615 | 142 | 1.4471 | 0.3692 | 1.4471 | 1.2030 |
| No log | 5.5385 | 144 | 1.9736 | 0.2657 | 1.9736 | 1.4049 |
| No log | 5.6154 | 146 | 2.2646 | 0.2192 | 2.2646 | 1.5049 |
| No log | 5.6923 | 148 | 2.0866 | 0.2657 | 2.0866 | 1.4445 |
| No log | 5.7692 | 150 | 1.8099 | 0.3165 | 1.8099 | 1.3453 |
| No log | 5.8462 | 152 | 1.6534 | 0.3852 | 1.6534 | 1.2859 |
| No log | 5.9231 | 154 | 1.6880 | 0.3529 | 1.6880 | 1.2992 |
| No log | 6.0 | 156 | 1.9886 | 0.2676 | 1.9886 | 1.4102 |
| No log | 6.0769 | 158 | 2.2375 | 0.1549 | 2.2375 | 1.4958 |
| No log | 6.1538 | 160 | 2.1110 | 0.1857 | 2.1110 | 1.4529 |
| No log | 6.2308 | 162 | 1.8239 | 0.3235 | 1.8239 | 1.3505 |
| No log | 6.3077 | 164 | 1.7347 | 0.3485 | 1.7347 | 1.3171 |
| No log | 6.3846 | 166 | 1.7939 | 0.3235 | 1.7939 | 1.3394 |
| No log | 6.4615 | 168 | 2.0819 | 0.2113 | 2.0819 | 1.4429 |
| No log | 6.5385 | 170 | 2.2585 | 0.1806 | 2.2585 | 1.5028 |
| No log | 6.6154 | 172 | 2.1435 | 0.2361 | 2.1435 | 1.4641 |
| No log | 6.6923 | 174 | 1.8139 | 0.2734 | 1.8139 | 1.3468 |
| No log | 6.7692 | 176 | 1.6765 | 0.3407 | 1.6765 | 1.2948 |
| No log | 6.8462 | 178 | 1.6456 | 0.4 | 1.6456 | 1.2828 |
| No log | 6.9231 | 180 | 1.6312 | 0.3740 | 1.6312 | 1.2772 |
| No log | 7.0 | 182 | 1.7430 | 0.3359 | 1.7430 | 1.3202 |
| No log | 7.0769 | 184 | 1.8327 | 0.2794 | 1.8327 | 1.3538 |
| No log | 7.1538 | 186 | 1.8168 | 0.2774 | 1.8168 | 1.3479 |
| No log | 7.2308 | 188 | 1.6506 | 0.3308 | 1.6506 | 1.2848 |
| No log | 7.3077 | 190 | 1.3571 | 0.4252 | 1.3571 | 1.1650 |
| No log | 7.3846 | 192 | 1.3120 | 0.4 | 1.3120 | 1.1454 |
| No log | 7.4615 | 194 | 1.4381 | 0.4275 | 1.4381 | 1.1992 |
| No log | 7.5385 | 196 | 1.7564 | 0.3000 | 1.7564 | 1.3253 |
| No log | 7.6154 | 198 | 2.2107 | 0.2069 | 2.2107 | 1.4868 |
| No log | 7.6923 | 200 | 2.5409 | -0.0292 | 2.5409 | 1.5940 |
| No log | 7.7692 | 202 | 2.5550 | -0.0438 | 2.5550 | 1.5985 |
| No log | 7.8462 | 204 | 2.4114 | 0.0699 | 2.4114 | 1.5529 |
| No log | 7.9231 | 206 | 2.0969 | 0.1549 | 2.0969 | 1.4481 |
| No log | 8.0 | 208 | 1.7141 | 0.3235 | 1.7141 | 1.3092 |
| No log | 8.0769 | 210 | 1.4594 | 0.4094 | 1.4594 | 1.2080 |
| No log | 8.1538 | 212 | 1.4004 | 0.4 | 1.4004 | 1.1834 |
| No log | 8.2308 | 214 | 1.5184 | 0.4 | 1.5184 | 1.2322 |
| No log | 8.3077 | 216 | 1.7736 | 0.3165 | 1.7736 | 1.3318 |
| No log | 8.3846 | 218 | 1.9138 | 0.2083 | 1.9138 | 1.3834 |
| No log | 8.4615 | 220 | 1.8375 | 0.2837 | 1.8375 | 1.3555 |
| No log | 8.5385 | 222 | 1.6468 | 0.3111 | 1.6468 | 1.2833 |
| No log | 8.6154 | 224 | 1.5440 | 0.3846 | 1.5440 | 1.2426 |
| No log | 8.6923 | 226 | 1.5017 | 0.4390 | 1.5017 | 1.2254 |
| No log | 8.7692 | 228 | 1.4905 | 0.4516 | 1.4905 | 1.2209 |
| No log | 8.8462 | 230 | 1.5342 | 0.375 | 1.5342 | 1.2386 |
| No log | 8.9231 | 232 | 1.5819 | 0.3692 | 1.5819 | 1.2577 |
| No log | 9.0 | 234 | 1.6323 | 0.3759 | 1.6323 | 1.2776 |
| No log | 9.0769 | 236 | 1.7617 | 0.3043 | 1.7617 | 1.3273 |
| No log | 9.1538 | 238 | 1.7586 | 0.2774 | 1.7586 | 1.3261 |
| No log | 9.2308 | 240 | 1.6349 | 0.3817 | 1.6349 | 1.2786 |
| No log | 9.3077 | 242 | 1.5181 | 0.375 | 1.5181 | 1.2321 |
| No log | 9.3846 | 244 | 1.4331 | 0.3492 | 1.4331 | 1.1971 |
| No log | 9.4615 | 246 | 1.4270 | 0.3780 | 1.4270 | 1.1946 |
| No log | 9.5385 | 248 | 1.5527 | 0.3788 | 1.5527 | 1.2461 |
| No log | 9.6154 | 250 | 1.8277 | 0.2878 | 1.8277 | 1.3519 |
| No log | 9.6923 | 252 | 1.8965 | 0.2714 | 1.8965 | 1.3771 |
| No log | 9.7692 | 254 | 1.8916 | 0.2878 | 1.8916 | 1.3754 |
| No log | 9.8462 | 256 | 1.7666 | 0.3382 | 1.7666 | 1.3291 |
| No log | 9.9231 | 258 | 1.6458 | 0.3910 | 1.6458 | 1.2829 |
| No log | 10.0 | 260 | 1.6844 | 0.3910 | 1.6844 | 1.2978 |
| No log | 10.0769 | 262 | 1.8749 | 0.2609 | 1.8749 | 1.3693 |
| No log | 10.1538 | 264 | 1.9623 | 0.2336 | 1.9623 | 1.4008 |
| No log | 10.2308 | 266 | 2.0164 | 0.2174 | 2.0164 | 1.4200 |
| No log | 10.3077 | 268 | 1.9008 | 0.2446 | 1.9008 | 1.3787 |
| No log | 10.3846 | 270 | 1.8635 | 0.2899 | 1.8635 | 1.3651 |
| No log | 10.4615 | 272 | 1.8457 | 0.2899 | 1.8457 | 1.3586 |
| No log | 10.5385 | 274 | 1.8511 | 0.2899 | 1.8511 | 1.3605 |
| No log | 10.6154 | 276 | 1.7922 | 0.2899 | 1.7922 | 1.3387 |
| No log | 10.6923 | 278 | 1.8583 | 0.2899 | 1.8583 | 1.3632 |
| No log | 10.7692 | 280 | 1.7983 | 0.3556 | 1.7983 | 1.3410 |
| No log | 10.8462 | 282 | 1.8143 | 0.3382 | 1.8143 | 1.3470 |
| No log | 10.9231 | 284 | 1.8287 | 0.3212 | 1.8287 | 1.3523 |
| No log | 11.0 | 286 | 1.7774 | 0.3382 | 1.7774 | 1.3332 |
| No log | 11.0769 | 288 | 1.7852 | 0.3212 | 1.7852 | 1.3361 |
| No log | 11.1538 | 290 | 1.9110 | 0.2590 | 1.9110 | 1.3824 |
| No log | 11.2308 | 292 | 2.0481 | 0.2411 | 2.0481 | 1.4311 |
| No log | 11.3077 | 294 | 2.0703 | 0.2411 | 2.0703 | 1.4389 |
| No log | 11.3846 | 296 | 1.8326 | 0.2590 | 1.8326 | 1.3537 |
| No log | 11.4615 | 298 | 1.5138 | 0.3566 | 1.5138 | 1.2304 |
| No log | 11.5385 | 300 | 1.4477 | 0.3780 | 1.4477 | 1.2032 |
| No log | 11.6154 | 302 | 1.5371 | 0.3692 | 1.5371 | 1.2398 |
| No log | 11.6923 | 304 | 1.7766 | 0.2590 | 1.7766 | 1.3329 |
| No log | 11.7692 | 306 | 2.0222 | 0.2143 | 2.0222 | 1.4220 |
| No log | 11.8462 | 308 | 2.1366 | 0.1871 | 2.1366 | 1.4617 |
| No log | 11.9231 | 310 | 2.0079 | 0.1871 | 2.0079 | 1.4170 |
| No log | 12.0 | 312 | 1.7520 | 0.3511 | 1.7520 | 1.3236 |
| No log | 12.0769 | 314 | 1.5289 | 0.3802 | 1.5289 | 1.2365 |
| No log | 12.1538 | 316 | 1.4179 | 0.3898 | 1.4179 | 1.1908 |
| No log | 12.2308 | 318 | 1.3929 | 0.4516 | 1.3929 | 1.1802 |
| No log | 12.3077 | 320 | 1.6046 | 0.2963 | 1.6046 | 1.2667 |
| No log | 12.3846 | 322 | 2.0543 | 0.2535 | 2.0543 | 1.4333 |
| No log | 12.4615 | 324 | 2.4063 | 0.2270 | 2.4063 | 1.5512 |
| No log | 12.5385 | 326 | 2.4551 | 0.2113 | 2.4551 | 1.5669 |
| No log | 12.6154 | 328 | 2.3467 | 0.2113 | 2.3467 | 1.5319 |
| No log | 12.6923 | 330 | 2.1283 | 0.2113 | 2.1283 | 1.4589 |
| No log | 12.7692 | 332 | 1.9246 | 0.2446 | 1.9246 | 1.3873 |
| No log | 12.8462 | 334 | 1.7449 | 0.2985 | 1.7449 | 1.3210 |
| No log | 12.9231 | 336 | 1.6519 | 0.4132 | 1.6519 | 1.2853 |
| No log | 13.0 | 338 | 1.6722 | 0.4098 | 1.6722 | 1.2931 |
| No log | 13.0769 | 340 | 1.7216 | 0.2857 | 1.7216 | 1.3121 |
| No log | 13.1538 | 342 | 1.7916 | 0.2628 | 1.7916 | 1.3385 |
| No log | 13.2308 | 344 | 2.0251 | 0.2270 | 2.0251 | 1.4231 |
| No log | 13.3077 | 346 | 2.1530 | 0.2621 | 2.1530 | 1.4673 |
| No log | 13.3846 | 348 | 2.1996 | 0.2535 | 2.1996 | 1.4831 |
| No log | 13.4615 | 350 | 2.0259 | 0.2535 | 2.0259 | 1.4233 |
| No log | 13.5385 | 352 | 1.8258 | 0.3066 | 1.8258 | 1.3512 |
| No log | 13.6154 | 354 | 1.6659 | 0.3511 | 1.6659 | 1.2907 |
| No log | 13.6923 | 356 | 1.6352 | 0.4032 | 1.6352 | 1.2787 |
| No log | 13.7692 | 358 | 1.6428 | 0.4098 | 1.6428 | 1.2817 |
| No log | 13.8462 | 360 | 1.6896 | 0.3465 | 1.6896 | 1.2998 |
| No log | 13.9231 | 362 | 1.7458 | 0.3158 | 1.7458 | 1.3213 |
| No log | 14.0 | 364 | 1.8685 | 0.2774 | 1.8685 | 1.3669 |
| No log | 14.0769 | 366 | 1.9395 | 0.2429 | 1.9395 | 1.3927 |
| No log | 14.1538 | 368 | 1.8290 | 0.3333 | 1.8290 | 1.3524 |
| No log | 14.2308 | 370 | 1.7674 | 0.3235 | 1.7674 | 1.3294 |
| No log | 14.3077 | 372 | 1.6785 | 0.3235 | 1.6785 | 1.2956 |
| No log | 14.3846 | 374 | 1.5667 | 0.4091 | 1.5667 | 1.2517 |
| No log | 14.4615 | 376 | 1.5535 | 0.3969 | 1.5535 | 1.2464 |
| No log | 14.5385 | 378 | 1.6077 | 0.4091 | 1.6077 | 1.2680 |
| No log | 14.6154 | 380 | 1.6691 | 0.3910 | 1.6691 | 1.2919 |
| No log | 14.6923 | 382 | 1.7313 | 0.3235 | 1.7313 | 1.3158 |
| No log | 14.7692 | 384 | 1.7110 | 0.3235 | 1.7110 | 1.3080 |
| No log | 14.8462 | 386 | 1.6179 | 0.3969 | 1.6179 | 1.2720 |
| No log | 14.9231 | 388 | 1.6247 | 0.3459 | 1.6247 | 1.2747 |
| No log | 15.0 | 390 | 1.6917 | 0.3212 | 1.6917 | 1.3006 |
| No log | 15.0769 | 392 | 1.7400 | 0.3043 | 1.7400 | 1.3191 |
| No log | 15.1538 | 394 | 1.8131 | 0.2857 | 1.8131 | 1.3465 |
| No log | 15.2308 | 396 | 1.7250 | 0.3382 | 1.7250 | 1.3134 |
| No log | 15.3077 | 398 | 1.6234 | 0.3731 | 1.6234 | 1.2741 |
| No log | 15.3846 | 400 | 1.6672 | 0.3731 | 1.6672 | 1.2912 |
| No log | 15.4615 | 402 | 1.6508 | 0.3731 | 1.6508 | 1.2848 |
| No log | 15.5385 | 404 | 1.6605 | 0.3212 | 1.6605 | 1.2886 |
| No log | 15.6154 | 406 | 1.8215 | 0.2857 | 1.8215 | 1.3496 |
| No log | 15.6923 | 408 | 1.9213 | 0.2695 | 1.9213 | 1.3861 |
| No log | 15.7692 | 410 | 1.8671 | 0.2628 | 1.8671 | 1.3664 |
| No log | 15.8462 | 412 | 1.6789 | 0.3407 | 1.6789 | 1.2957 |
| No log | 15.9231 | 414 | 1.5121 | 0.4286 | 1.5121 | 1.2297 |
| No log | 16.0 | 416 | 1.4929 | 0.4640 | 1.4929 | 1.2218 |
| No log | 16.0769 | 418 | 1.5600 | 0.4062 | 1.5600 | 1.2490 |
| No log | 16.1538 | 420 | 1.7432 | 0.2815 | 1.7432 | 1.3203 |
| No log | 16.2308 | 422 | 1.9998 | 0.1871 | 1.9998 | 1.4141 |
| No log | 16.3077 | 424 | 2.1962 | 0.2000 | 2.1962 | 1.4820 |
| No log | 16.3846 | 426 | 2.2117 | 0.2000 | 2.2117 | 1.4872 |
| No log | 16.4615 | 428 | 2.1421 | 0.2000 | 2.1421 | 1.4636 |
| No log | 16.5385 | 430 | 2.0120 | 0.2143 | 2.0120 | 1.4185 |
| No log | 16.6154 | 432 | 1.8857 | 0.25 | 1.8857 | 1.3732 |
| No log | 16.6923 | 434 | 1.8615 | 0.2628 | 1.8615 | 1.3644 |
| No log | 16.7692 | 436 | 1.9075 | 0.2609 | 1.9075 | 1.3811 |
| No log | 16.8462 | 438 | 1.9806 | 0.2174 | 1.9806 | 1.4073 |
| No log | 16.9231 | 440 | 2.0912 | 0.1871 | 2.0912 | 1.4461 |
| No log | 17.0 | 442 | 2.1281 | 0.1871 | 2.1281 | 1.4588 |
| No log | 17.0769 | 444 | 2.0521 | 0.1871 | 2.0521 | 1.4325 |
| No log | 17.1538 | 446 | 1.8757 | 0.2464 | 1.8757 | 1.3695 |
| No log | 17.2308 | 448 | 1.6782 | 0.3284 | 1.6782 | 1.2954 |
| No log | 17.3077 | 450 | 1.6164 | 0.3759 | 1.6164 | 1.2714 |
| No log | 17.3846 | 452 | 1.6502 | 0.3609 | 1.6502 | 1.2846 |
| No log | 17.4615 | 454 | 1.7342 | 0.2920 | 1.7342 | 1.3169 |
| No log | 17.5385 | 456 | 1.7119 | 0.2920 | 1.7119 | 1.3084 |
| No log | 17.6154 | 458 | 1.7253 | 0.2920 | 1.7253 | 1.3135 |
| No log | 17.6923 | 460 | 1.7881 | 0.2920 | 1.7881 | 1.3372 |
| No log | 17.7692 | 462 | 1.8963 | 0.2143 | 1.8963 | 1.3771 |
| No log | 17.8462 | 464 | 1.8805 | 0.2411 | 1.8805 | 1.3713 |
| No log | 17.9231 | 466 | 1.7983 | 0.2571 | 1.7983 | 1.3410 |
| No log | 18.0 | 468 | 1.7421 | 0.2941 | 1.7421 | 1.3199 |
| No log | 18.0769 | 470 | 1.7326 | 0.2941 | 1.7326 | 1.3163 |
| No log | 18.1538 | 472 | 1.7824 | 0.2774 | 1.7824 | 1.3351 |
| No log | 18.2308 | 474 | 1.8736 | 0.2302 | 1.8736 | 1.3688 |
| No log | 18.3077 | 476 | 1.9124 | 0.2029 | 1.9124 | 1.3829 |
| No log | 18.3846 | 478 | 1.8789 | 0.2190 | 1.8789 | 1.3707 |
| No log | 18.4615 | 480 | 1.7623 | 0.2388 | 1.7623 | 1.3275 |
| No log | 18.5385 | 482 | 1.5831 | 0.3538 | 1.5831 | 1.2582 |
| No log | 18.6154 | 484 | 1.4844 | 0.4000 | 1.4844 | 1.2184 |
| No log | 18.6923 | 486 | 1.4965 | 0.3307 | 1.4965 | 1.2233 |
| No log | 18.7692 | 488 | 1.6275 | 0.3359 | 1.6275 | 1.2757 |
| No log | 18.8462 | 490 | 1.8117 | 0.2647 | 1.8117 | 1.3460 |
| No log | 18.9231 | 492 | 1.8712 | 0.2647 | 1.8712 | 1.3679 |
| No log | 19.0 | 494 | 1.7784 | 0.2963 | 1.7784 | 1.3336 |
| No log | 19.0769 | 496 | 1.6606 | 0.3636 | 1.6606 | 1.2886 |
| No log | 19.1538 | 498 | 1.6373 | 0.3511 | 1.6373 | 1.2796 |
| 0.3621 | 19.2308 | 500 | 1.6432 | 0.3511 | 1.6432 | 1.2819 |
| 0.3621 | 19.3077 | 502 | 1.6127 | 0.3538 | 1.6127 | 1.2699 |
| 0.3621 | 19.3846 | 504 | 1.6164 | 0.3538 | 1.6164 | 1.2714 |
| 0.3621 | 19.4615 | 506 | 1.7241 | 0.3182 | 1.7241 | 1.3131 |
| 0.3621 | 19.5385 | 508 | 1.8258 | 0.2963 | 1.8258 | 1.3512 |
| 0.3621 | 19.6154 | 510 | 1.9375 | 0.2302 | 1.9375 | 1.3919 |
| 0.3621 | 19.6923 | 512 | 2.0132 | 0.2302 | 2.0132 | 1.4189 |
| 0.3621 | 19.7692 | 514 | 2.1023 | 0.2302 | 2.1023 | 1.4499 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
BeardedMonster/SabiYarn-125M-finetune
|
BeardedMonster
| 2025-01-15T12:31:44Z
| 22
| 0
|
transformers
|
[
"transformers",
"safetensors",
"nanogpt-j",
"text-generation",
"custom_code",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-07-31T16:47:53Z
|
---
library_name: transformers
tags: []
---
# SabiYarn
Test the model's full generation capabilities here: https://huggingface.co/spaces/BeardedMonster/SabiYarn_125M
A model pretrained on Nigerian languages (including English) with a causal language modeling (CLM) multi-task objective, and fine-tuned on the following downstream tasks: NER, sentiment analysis, topic classification, and translation.
## Model Details
### Model Description
SabiYarn-125M is the first of a series of transformer models (adapted from nanoGPT and inspired by GPT-J's architecture) pretrained on a large corpus of Nigerian language data in a self-supervised fashion. This means it was pretrained on raw text only,
with no human labelling (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences:
inputs are sequences of continuous text of a certain length, and the targets are the same sequences shifted one token (word or piece of word) to the right.
The model internally uses a masking mechanism to make sure the predictions for token i only use the inputs from 1 to i and not the future tokens. It also makes sure attention
is not calculated across documents.
This way, the model learns an inner representation of the languages that can then be used to extract features useful for downstream tasks. The model is, however, best at what
it was pretrained for: generating coherent text.
This is the smallest version, with 125M parameters.
- **Developed by:** Aletheia.ai Research Lab
- **Funded by [optional]:** Personal
- **Shared by [optional]:** Jeffreypaul
- **Model type:** GPTJX (Adapted from NanoGPT)
- **Language(s) (NLP):** Mainly English, Yoruba, Hausa, Igbo, and Nigerian Pidgin, plus some others: Fulah/Fulfulde, Efik, Urhobo.
### Model Sources [optional]
- **Demo:** https://huggingface.co/spaces/BeardedMonster/SabiYarn_125M
## Uses
You can use the raw model for text generation or fine-tune it for a downstream task.
## Bias, Risks, and Limitations
The training data used for this model is mostly an aggregation of datasets available on Hugging Face for Nigerian languages. We know it contains a lot of unfiltered content from the internet, which is far from neutral.
Because large-scale language models of this size do not distinguish fact from fiction, we do not support use cases that require the generated text to be true.
Additionally, language models often reflect the biases inherent in the data they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of the biases relevant to the intended use case.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import GenerationConfig
generation_config = GenerationConfig(
max_length=100, # Adjust this based on your translation requirements
max_new_tokens=50, # Ensure sufficient tokens for your translations
num_beams=5, # Moderate number of beams for a balance between speed and quality
do_sample=False, # Disable sampling to make output deterministic
temperature=1.0, # Neutral temperature since sampling is off
top_k=0, # Disable top-k sampling (since sampling is off)
top_p=0, # Disable top-p (nucleus) sampling (since sampling is off)
repetition_penalty=4.0, # Neutral repetition penalty for translation
    length_penalty=3.0,         # Values > 1.0 favor longer sequences (1.0 is neutral); adjust if your translations tend to be too short/long
early_stopping=True # Stop early when all beams finish to speed up generation
)
repo_name = "BeardedMonster/SabiYarn-125M-finetune"
tokenizer_name = "BeardedMonster/SabiYarn-125M"
model = AutoModelForCausalLM.from_pretrained(repo_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, trust_remote_code=True)
```
Use the following tags for each downstream task:
- Translation
```python
<translate> ... <yor>, <translate> ... <ibo>, <translate> ... <hau>, <translate> ... <efi>, <translate> ... <pcm>, <translate> ... <urh>
```
- Topic classification
```python
<classify> ...... <topic>
```
- Sentiment Analysis
```python
<classify> .... <sentiment>
```
- Named Entity Recognition
```python
<NER>.... <tag>
```
You should typically put the user's input between these two tags. Currently, the model also does not perform very well on NER, due to the scarcity of data for this task.
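Below is a minimal sketch of how a task tag can be combined with the model, tokenizer, and `generation_config` loaded above. It is illustrative only: the example sentence is made up, and it assumes the custom model class exposes the standard `generate`/`GenerationConfig` interface.
```python
# Illustrative sketch: wrap the user's input between the task tags before generating.
# The English sentence below is a made-up example for English -> Yoruba translation.
prompt = "<translate> How are you today? <yor>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, generation_config=generation_config)

# Decode only the tokens generated after the prompt (assumes a decoder-only model
# that returns the prompt followed by the completion).
generated_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(generated_tokens, skip_special_tokens=True))
```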
### Model Architecture and Objective
The architecture is very similar to GPT-J's.
|
aleegis09/6563dd2c-a986-4604-b48c-3a5848287abc
|
aleegis09
| 2025-01-15T12:29:33Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"region:us"
] | null | 2025-01-15T11:55:13Z
|
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6563dd2c-a986-4604-b48c-3a5848287abc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 3b3212a50f486d00_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3b3212a50f486d00_train_data.json
type:
field_instruction: query
field_output: ori_review
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis09/6563dd2c-a986-4604-b48c-3a5848287abc
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/3b3212a50f486d00_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b1835142-c2f8-424f-bb55-34bf09e30efe
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b1835142-c2f8-424f-bb55-34bf09e30efe
warmup_steps: 20
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6563dd2c-a986-4604-b48c-3a5848287abc
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.255 | 0.0016 | 1 | 2.5185 |
| 7.1064 | 0.0801 | 50 | 1.8420 |
| 6.9319 | 0.1602 | 100 | 1.8480 |
| 6.8434 | 0.2403 | 150 | 1.7523 |
| 7.2686 | 0.3204 | 200 | 1.6913 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nicholasKluge/TeenyTinyLlama-460m
|
nicholasKluge
| 2025-01-15T12:26:45Z
| 1,269
| 11
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"pt",
"dataset:nicholasKluge/Pt-Corpus-Instruct",
"arxiv:2401.16640",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-02T13:59:11Z
|
---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- text-generation-inference
datasets:
- nicholasKluge/Pt-Corpus-Instruct
metrics:
- perplexity
pipeline_tag: text-generation
widget:
- text: 'A PUCRS é uma universidade '
example_title: Exemplo
- text: A muitos anos atrás, em uma galáxia muito distante, vivia uma raça de
example_title: Exemplo
- text: Em meio a um escândalo, a frente parlamentar pediu ao Senador Silva para
example_title: Exemplo
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.1
top_k: 50
top_p: 1.0
max_new_tokens: 150
co2_eq_emissions:
emissions: 41100
source: CodeCarbon
training_type: pre-training
geographical_location: Germany
hardware_used: NVIDIA A100-SXM4-40GB
model-index:
- name: TeenyTinyLlama-460m
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 20.15
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 25.73
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 27.02
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 53.61
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 13.0
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 46.41
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 33.59
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 22.99
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia-temp/tweetsentbr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 17.28
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
---
# TeenyTinyLlama-460m
<img src="./logo.png" alt="A curious llama exploring a mushroom forest." height="200">
## Model Summary
Large language models (LLMs) have significantly advanced natural language processing, but their progress has yet to be equal across languages. While most LLMs are trained in high-resource languages like English, multilingual models generally underperform monolingual ones. Additionally, aspects of their multilingual foundation sometimes restrict the byproducts they produce, like computational demands and licensing regimes. Hence, we developed the _TeenyTinyLlama_ pair: two compact models for Brazilian Portuguese text generation.
Read our article [here](https://www.sciencedirect.com/science/article/pii/S2666827024000343).
## Details
- **Architecture:** a Transformer-based model pre-trained via causal language modeling
- **Size:** 468,239,360 parameters
- **Context length:** 2048 tokens
- **Dataset:** [Pt-Corpus Instruct](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct) (6.2B tokens)
- **Language:** Portuguese
- **Number of steps:** 1,200,000
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Training time**: ~ 280 hours
- **Emissions:** 41.1 KgCO2 (Germany)
- **Total energy consumption:** 115.69 kWh
This repository has the [source code](https://github.com/Nkluge-correa/TeenyTinyLlama) used to train this model. The main libraries used are:
- [Transformers](https://github.com/huggingface/transformers)
- [PyTorch](https://github.com/pytorch/pytorch)
- [Datasets](https://github.com/huggingface/datasets)
- [Tokenizers](https://github.com/huggingface/tokenizers)
- [Sentencepiece](https://github.com/google/sentencepiece)
- [Accelerate](https://github.com/huggingface/accelerate)
- [FlashAttention](https://github.com/Dao-AILab/flash-attention)
- [Codecarbon](https://github.com/mlco2/codecarbon)
## Intended Uses
The primary intended use of TeenyTinyLlama is to research the challenges related to developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt TeenyTinyLlama for deployment, as long as your use follows the Apache 2.0 license. If you decide to use pre-trained TeenyTinyLlama as a basis for your fine-tuned model, please conduct your own risk and bias assessment.
## Out-of-scope Use
TeenyTinyLlama is not intended for deployment. It is not a product and should not be used for human-facing interactions.
TeenyTinyLlama models are Brazilian Portuguese language only and are not suitable for translation or generating text in other languages.
TeenyTinyLlama has not been fine-tuned for downstream contexts in which language models are commonly deployed.
## Basic usage
Using the `pipeline`:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="nicholasKluge/TeenyTinyLlama-460m")
completions = generator("Astronomia é a ciência", num_return_sequences=2, max_new_tokens=100)
for comp in completions:
print(f"🤖 {comp['generated_text']}")
```
Using the `AutoTokenizer` and `AutoModelForCausalLM`:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load model and the tokenizer
tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/TeenyTinyLlama-460m", revision='main')
model = AutoModelForCausalLM.from_pretrained("nicholasKluge/TeenyTinyLlama-460m", revision='main')
# Pass the model to your device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.eval()
model.to(device)
# Tokenize the inputs and pass them to the device
inputs = tokenizer("Astronomia é a ciência", return_tensors="pt").to(device)
# Generate some text
completions = model.generate(**inputs, num_return_sequences=2, max_new_tokens=100)
# Print the generated text
for i, completion in enumerate(completions):
print(f'🤖 {tokenizer.decode(completion)}')
```
## Limitations
Like almost all other language models trained on large text datasets scraped from the web, the TTL pair exhibited behavior that does not make them an out-of-the-box solution to many real-world applications, especially those requiring factual, reliable, nontoxic text generation. Our models are all subject to the following:
- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.
- **Unreliable Code:** The model may produce incorrect code snippets and statements. These code generations should not be treated as suggestions or accurate solutions.
- **Language Limitations:** The model is primarily designed to understand standard Brazilian Portuguese. Other languages might challenge its comprehension, leading to potential misinterpretations or errors in response.
- **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generations is set to a meager value) or produce verbose responses unrelated to the prompt it was given.
Hence, even though our models are released with a permissive license, we urge users to perform their risk analysis on these models if intending to use them for real-world applications and also have humans moderating the outputs of these models in applications where they will interact with an audience, guaranteeing users are always aware they are interacting with a language model.
## Evaluations
During our training runs, both models showed consistent convergence. At no point did our evaluation curves show signs of overfitting or saturation. In the case of our 460m parameter model, we intentionally trained past the optimal point by approximately 75,000 steps to assess if there were any signs of saturation, but our evaluations consistently gave better results. We hypothesize that our models are under-trained but can improve if further trained to pass the Chinchilla optimal range.
| Processed Tokens | Perplexity | Energy Consumption (kWh) | Emissions (KgCO2eq) |
|------------------|------------|---------------------------|----------------------|
| 8.1M | 20.49 | 9.40 | 3.34 |
| 1.6B | 16.90 | 18.82 | 6.70 |
| 2.4B | 15.43 | 28.59 | 10.16 |
| 3.2B | 14.64 | 38.20 | 13.57 |
| 4.0B | 14.08 | 48.04 | 17.07 |
| 4.9B | 13.61 | 57.74 | 20.52 |
| 5.7B | 13.25 | 67.32 | 23.92 |
| 6.5B | 12.87 | 76.84 | 27.30 |
| 7.3B | 12.57 | 86.40 | 30.70 |
| 8.1B | 12.27 | 96.19 | 34.18 |
| 9.0B | 11.96 | 106.06 | 37.70 |
| 9.8B | 11.77 | 115.69 | 41.31 |
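Perplexity numbers like those in the table above can be reproduced, at least approximately, with a standard causal-LM evaluation. The snippet below is a minimal sketch (no sliding window, and a single made-up sentence in place of the held-out corpus); it is not the exact evaluation script used during training.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/TeenyTinyLlama-460m")
model = AutoModelForCausalLM.from_pretrained("nicholasKluge/TeenyTinyLlama-460m")
model.eval()

# Placeholder text; the numbers above were computed on a held-out evaluation set.
text = "Astronomia é a ciência que estuda os corpos celestes."
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # With labels equal to the inputs, the model returns the mean cross-entropy loss.
    loss = model(**enc, labels=enc["input_ids"]).loss

print(f"Perplexity: {torch.exp(loss).item():.2f}")
```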
## Benchmarks
Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). We used the task translations provided by [Laiviet](https://github.com/laiviet/lm-evaluation-harness). The results of models marked with an "*" were extracted from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** | **Average** |
|------------------|-----------|---------------|-----------|----------------|-------------|
| Pythia-410m | 24.83* | 41.29* | 25.99* | 40.95* | 33.26 |
| **TTL-460m** | 29.40 | 33.00 | 28.55 | 41.10 | 33.01 |
| Bloom-560m | 24.74* | 37.15* | 24.22* | 42.44* | 32.13 |
| Xglm-564M | 25.56 | 34.64* | 25.18* | 42.53 | 31.97 |
| OPT-350m | 23.55* | 36.73* | 26.02* | 40.83* | 31.78 |
| **TTL-160m** | 26.15 | 29.29 | 28.11 | 41.12 | 31.16 |
| Pythia-160m | 24.06* | 31.39* | 24.86* | 44.34* | 31.16 |
| OPT-125m | 22.87* | 31.47* | 26.02* | 42.87* | 30.80 |
| GPorTuguese-2 | 22.48 | 29.62 | 27.36 | 41.44 | 30.22 |
| Gpt2-small | 21.48* | 31.60* | 25.79* | 40.65* | 29.97 |
| Multilingual GPT | 23.81 | 26.37* | 25.17* | 39.62 | 28.73 |
Evaluations on Brazilian Portuguese benchmarks were performed using a [Portuguese implementation of the EleutherAI LM Evaluation Harness](https://github.com/eduagarcia/lm-evaluation-harness-pt) (created by [Eduardo Garcia](https://github.com/eduagarcia/lm-evaluation-harness-pt)).
| | **ASSIN2 RTE** | **ASSIN2 STS** | **BLUEX** | **ENEM** | **FAQUAD NLI** | **HateBR** | **OAB Exams** | **Average** |
|----------------|----------------|----------------|-----------|----------|----------------|------------|---------------|-------------|
| Qwen-1.8B | 64.83 | 19.53 | 26.15 | 30.23 | 43.97 | 33.33 | 27.20 | 35.03 |
| TinyLlama-1.1B | 58.93 | 13.57 | 22.81 | 22.25 | 43.97 | 36.92 | 23.64 | 31.72 |
| **TTL-460m** | 53.93 | 12.66 | 22.81 | 19.87 | 49.01 | 33.59 | 27.06 | 31.27 |
| XGLM-564m | 49.61 | 22.91 | 19.61 | 19.38 | 43.97 | 33.99 | 23.42 | 30.41 |
| Bloom-1b7 | 53.60 | 4.81 | 21.42 | 18.96 | 43.97 | 34.89 | 23.05 | 28.67 |
| **TTL-160m** | 53.36 | 2.58 | 21.84 | 18.75 | 43.97 | 36.88 | 22.60 | 28.56 |
| OPT-125m | 39.77 | 2.00 | 21.84 | 17.42 | 43.97 | 47.04 | 22.78 | 27.83 |
| Pythia-160 | 33.33 | 12.81 | 16.13 | 16.66 | 50.36 | 41.09 | 22.82 | 27.60 |
| OLMo-1b | 34.12 | 9.28 | 18.92 | 20.29 | 43.97 | 41.33 | 22.96 | 27.26 |
| Bloom-560m | 33.33 | 8.48 | 18.92 | 19.03 | 43.97 | 37.07 | 23.05 | 26.26 |
| Pythia-410m | 33.33 | 4.80 | 19.47 | 19.45 | 43.97 | 33.33 | 23.01 | 25.33 |
| OPT-350m | 33.33 | 3.65 | 20.72 | 17.35 | 44.71 | 33.33 | 23.01 | 25.15 |
| GPT-2 small | 33.26 | 0.00 | 10.43 | 11.20 | 43.52 | 33.68 | 13.12 | 20.74 |
| GPorTuguese | 33.33 | 3.85 | 14.74 | 3.01 | 28.81 | 33.33 | 21.23 | 19.75 |
| Samba-1.1B | 33.33 | 1.30 | 8.07 | 10.22 | 17.72 | 35.79 | 15.03 | 17.35 |
## Fine-Tuning Comparisons
To further evaluate the downstream capabilities of our models, we employed a basic fine-tuning procedure for our TTL pair on a subset of tasks from the Poeta benchmark. For comparison, we applied the same procedure to both [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) models, given that they are also LLMs trained from scratch in Brazilian Portuguese and have a similar size range to our models. We used these comparisons to assess whether our pre-training runs produced LLMs capable of good results ("good" here means "close to BERTimbau") when utilized for downstream applications.
| Models | IMDB | FaQuAD-NLI | HateBr | Assin2 | AgNews | Average |
|-----------------|-----------|------------|-----------|-----------|-----------|---------|
| BERTimbau-large | **93.58** | 92.26 | 91.57 | **88.97** | 94.11 | 92.10 |
| BERTimbau-small | 92.22 | **93.07** | 91.28 | 87.45 | 94.19 | 91.64 |
| **TTL-460m** | 91.64 | 91.18 | **92.28** | 86.43 | **94.42** | 91.19 |
| **TTL-160m** | 91.14 | 90.00 | 90.71 | 85.78 | 94.05 | 90.34 |
All results shown are the highest accuracy scores achieved on the respective task test sets after fine-tuning the models on the training sets. All fine-tuning runs used the same hyperparameters, and the code implementation can be found in the [model cards](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m-HateBR) of our fine-tuned models.
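As a rough illustration of what such a fine-tuning run looks like, the sketch below sets up a sequence-classification fine-tune of TTL-460m with the 🤗 `Trainer`. It is not the exact script used here: the dataset name and its `text`/`label` columns are placeholders, and the hyperparameters are only indicative; the real implementations are in the fine-tuned model cards linked above.
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder dataset: assumed to expose "text" and "label" columns and a train/test split.
dataset = load_dataset("your-namespace/your-ptbr-classification-task")

tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/TeenyTinyLlama-460m")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # needed for padded classification batches

model = AutoModelForSequenceClassification.from_pretrained(
    "nicholasKluge/TeenyTinyLlama-460m", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="ttl-460m-finetuned",
        per_device_train_batch_size=8,
        num_train_epochs=3,
        learning_rate=4e-5,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```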
## Cite as 🤗
```latex
@misc{correa24ttllama,
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={arXiv preprint arXiv:2401.16640},
year={2024}
}
@misc{correa24ttllama,
doi = {10.1016/j.mlwa.2024.100558},
url = {https://www.sciencedirect.com/science/article/pii/S2666827024000343},
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={Machine Learning With Applications},
publisher = {Springer},
year={2024}
}
```
## Funding
This repository was built as part of the RAIES ([Rede de Inteligência Artificial Ética e Segura](https://www.raies.org/)) initiative, a project supported by FAPERGS - ([Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul](https://fapergs.rs.gov.br/inicial)), Brazil.
## License
TeenyTinyLlama-460m is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
|
nicholasKluge/TeenyTinyLlama-460m-awq
|
nicholasKluge
| 2025-01-15T12:26:02Z
| 94
| 1
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"pt",
"dataset:nicholasKluge/Pt-Corpus-Instruct",
"arxiv:2401.16640",
"base_model:nicholasKluge/TeenyTinyLlama-460m",
"base_model:quantized:nicholasKluge/TeenyTinyLlama-460m",
"license:apache-2.0",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-01-21T22:52:00Z
|
---
license: apache-2.0
datasets:
- nicholasKluge/Pt-Corpus-Instruct
language:
- pt
metrics:
- perplexity
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
widget:
- text: 'A PUCRS é uma universidade '
example_title: Exemplo
- text: A muitos anos atrás, em uma galáxia muito distante, vivia uma raça de
example_title: Exemplo
- text: Em meio a um escândalo, a frente parlamentar pediu ao Senador Silva para
example_title: Exemplo
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.1
top_k: 50
top_p: 1.
max_new_tokens: 150
co2_eq_emissions:
emissions: 41100
source: CodeCarbon
training_type: pre-training
geographical_location: Germany
hardware_used: NVIDIA A100-SXM4-40GB
base_model:
- nicholasKluge/TeenyTinyLlama-460m
---
# TeenyTinyLlama-460m-awq
<img src="460m-llama.png" alt="A curious llama exploring a mushroom forest." height="200">
## Model Summary
**Note: This model is a quantized version of [TeenyTinyLlama-460m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m). Quantization was performed using [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), making this version 80% lighter and 20% faster, with almost no performance loss. A GPU is required to run the AWQ-quantized models.**
Large language models (LLMs) have significantly advanced natural language processing, but their progress has yet to be equal across languages. While most LLMs are trained in high-resource languages like English, multilingual models generally underperform monolingual ones. Additionally, aspects of their multilingual foundation sometimes restrict the byproducts they produce, like computational demands and licensing regimes. Hence, we developed the _TeenyTinyLlama_ pair: two compact models for Brazilian Portuguese text generation.
Read our article [here](https://www.sciencedirect.com/science/article/pii/S2666827024000343).
## Details
- **Architecture:** a Transformer-based model pre-trained via causal language modeling
- **Size:** 468,239,360 parameters
- **Context length:** 2048 tokens
- **Dataset:** [Pt-Corpus Instruct](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct) (6.2B tokens)
- **Language:** Portuguese
- **Number of steps:** 1,200,000
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Training time**: ~ 280 hours
- **Emissions:** 41.1 KgCO2 (Germany)
- **Total energy consumption:** 115.69 kWh
- **Quantization Configuration:**
- `bits`: 4
- `group_size`: 128
- `quant_method`: "awq"
- `version`: "gemm"
- `zero_point`: True
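As a rough sketch of how this configuration maps onto AutoAWQ, the snippet below quantizes the base checkpoint with the settings listed above. It is illustrative only and assumes the AutoAWQ API of the release used at the time (`autoawq==0.1.7`); a CUDA GPU is required.
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

base_model = "nicholasKluge/TeenyTinyLlama-460m"
quant_path = "TeenyTinyLlama-460m-awq"

# Mirrors the quantization configuration listed above.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the FP16 checkpoint, run AWQ calibration, and save the quantized weights.
model = AutoAWQForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```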
This repository has the [source code](https://github.com/Nkluge-correa/TeenyTinyLlama) used to train this model. The main libraries used are:
- [Transformers](https://github.com/huggingface/transformers)
- [PyTorch](https://github.com/pytorch/pytorch)
- [Datasets](https://github.com/huggingface/datasets)
- [Tokenizers](https://github.com/huggingface/tokenizers)
- [Sentencepiece](https://github.com/google/sentencepiece)
- [Accelerate](https://github.com/huggingface/accelerate)
- [FlashAttention](https://github.com/Dao-AILab/flash-attention)
- [Codecarbon](https://github.com/mlco2/codecarbon)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
## Intended Uses
The primary intended use of TeenyTinyLlama is to research the challenges related to developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt TeenyTinyLlama for deployment, as long as your use follows the Apache 2.0 license. If you decide to use pre-trained TeenyTinyLlama as a basis for your fine-tuned model, please conduct your own risk and bias assessment.
## Out-of-scope Use
TeenyTinyLlama is not intended for deployment. It is not a product and should not be used for human-facing interactions.
TeenyTinyLlama models are Brazilian Portuguese language only and are not suitable for translation or generating text in other languages.
TeenyTinyLlama has not been fine-tuned for downstream contexts in which language models are commonly deployed.
## Basic usage
**Note: Using quantized models requires the installation of `autoawq==0.1.7`. A GPU is required to run the AWQ-quantized models.**
Using the `pipeline`:
```python
!pip install autoawq==0.1.7 -q
from transformers import pipeline
generator = pipeline("text-generation", model="nicholasKluge/TeenyTinyLlama-460m-awq")
completions = generator("Astronomia é a ciência", num_return_sequences=2, max_new_tokens=100)
for comp in completions:
print(f"🤖 {comp['generated_text']}")
```
Using the `AutoTokenizer` and `AutoModelForCausalLM`:
```python
!pip install autoawq==0.1.7 -q
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load model and the tokenizer
tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/TeenyTinyLlama-460m-awq", revision='main')
model = AutoModelForCausalLM.from_pretrained("nicholasKluge/TeenyTinyLlama-460m-awq", revision='main')
# Pass the model to your device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.eval()
model.to(device)
# Tokenize the inputs and pass them to the device
inputs = tokenizer("Astronomia é a ciência", return_tensors="pt").to(device)
# Generate some text
completions = model.generate(**inputs, num_return_sequences=2, max_new_tokens=100)
# Print the generated text
for i, completion in enumerate(completions):
print(f'🤖 {tokenizer.decode(completion)}')
```
## Limitations
Like almost all other language models trained on large text datasets scraped from the web, the TTL pair exhibited behavior that does not make them an out-of-the-box solution to many real-world applications, especially those requiring factual, reliable, nontoxic text generation. Our models are all subject to the following:
- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.
- **Unreliable Code:** The model may produce incorrect code snippets and statements. These code generations should not be treated as suggestions or accurate solutions.
- **Language Limitations:** The model is primarily designed to understand standard Brazilian Portuguese. Other languages might challenge its comprehension, leading to potential misinterpretations or errors in response.
- **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generations is set to a meager value) or produce verbose responses unrelated to the prompt it was given.
Hence, even though our models are released with a permissive license, we urge users to perform their risk analysis on these models if intending to use them for real-world applications and also have humans moderating the outputs of these models in applications where they will interact with an audience, guaranteeing users are always aware they are interacting with a language model.
## Evaluations
During our training runs, both models showed consistent convergence. At no point did our evaluation curves show signs of overfitting or saturation. In the case of our 460m parameter model, we intentionally trained past the optimal point by approximately 75,000 steps to assess if there were any signs of saturation, but our evaluations consistently gave better results. We hypothesize that our models are under-trained but can improve if further trained to pass the Chinchilla optimal range.
| Processed Tokens | Perplexity | Energy Consumption (kWh) | Emissions (KgCO2eq) |
|------------------|------------|---------------------------|----------------------|
| 8.1M | 20.49 | 9.40 | 3.34 |
| 1.6B | 16.90 | 18.82 | 6.70 |
| 2.4B | 15.43 | 28.59 | 10.16 |
| 3.2B | 14.64 | 38.20 | 13.57 |
| 4.0B | 14.08 | 48.04 | 17.07 |
| 4.9B | 13.61 | 57.74 | 20.52 |
| 5.7B | 13.25 | 67.32 | 23.92 |
| 6.5B | 12.87 | 76.84 | 27.30 |
| 7.3B | 12.57 | 86.40 | 30.70 |
| 8.1B | 12.27 | 96.19 | 34.18 |
| 9.0B | 11.96 | 106.06 | 37.70 |
| 9.8B | 11.77 | 115.69 | 41.31 |
## Benchmarks
Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). We used the task translations provided by [Laiviet](https://github.com/laiviet/lm-evaluation-harness). The results of models marked with an "*" were extracted from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** | **Average** |
|------------------|-----------|---------------|-----------|----------------|-------------|
| Pythia-410m | 24.83* | 41.29* | 25.99* | 40.95* | 33.26 |
| **TTL-460m** | 29.40 | 33.00 | 28.55 | 41.10 | 33.01 |
| Bloom-560m | 24.74* | 37.15* | 24.22* | 42.44* | 32.13 |
| Xglm-564M | 25.56 | 34.64* | 25.18* | 42.53 | 31.97 |
| OPT-350m | 23.55* | 36.73* | 26.02* | 40.83* | 31.78 |
| **TTL-160m** | 26.15 | 29.29 | 28.11 | 41.12 | 31.16 |
| Pythia-160m | 24.06* | 31.39* | 24.86* | 44.34* | 31.16 |
| OPT-125m | 22.87* | 31.47* | 26.02* | 42.87* | 30.80 |
| GPorTuguese-2 | 22.48 | 29.62 | 27.36 | 41.44 | 30.22 |
| Gpt2-small | 21.48* | 31.60* | 25.79* | 40.65* | 29.97 |
| Multilingual GPT | 23.81 | 26.37* | 25.17* | 39.62 | 28.73 |
Evaluations on Brazilian Portuguese benchmarks were performed using a [Portuguese implementation of the EleutherAI LM Evaluation Harness](https://github.com/eduagarcia/lm-evaluation-harness-pt) (created by [Eduardo Garcia](https://github.com/eduagarcia/lm-evaluation-harness-pt)).
| | **ASSIN2 RTE** | **ASSIN2 STS** | **BLUEX** | **ENEM** | **FAQUAD NLI** | **HateBR** | **OAB Exams** | **Average** |
|----------------|----------------|----------------|-----------|----------|----------------|------------|---------------|-------------|
| Qwen-1.8B | 64.83 | 19.53 | 26.15 | 30.23 | 43.97 | 33.33 | 27.20 | 35.03 |
| TinyLlama-1.1B | 58.93 | 13.57 | 22.81 | 22.25 | 43.97 | 36.92 | 23.64 | 31.72 |
| **TTL-460m** | 53.93 | 12.66 | 22.81 | 19.87 | 49.01 | 33.59 | 27.06 | 31.27 |
| XGLM-564m | 49.61 | 22.91 | 19.61 | 19.38 | 43.97 | 33.99 | 23.42 | 30.41 |
| Bloom-1b7 | 53.60 | 4.81 | 21.42 | 18.96 | 43.97 | 34.89 | 23.05 | 28.67 |
| **TTL-160m** | 53.36 | 2.58 | 21.84 | 18.75 | 43.97 | 36.88 | 22.60 | 28.56 |
| OPT-125m | 39.77 | 2.00 | 21.84 | 17.42 | 43.97 | 47.04 | 22.78 | 27.83 |
| Pythia-160 | 33.33 | 12.81 | 16.13 | 16.66 | 50.36 | 41.09 | 22.82 | 27.60 |
| OLMo-1b | 34.12 | 9.28 | 18.92 | 20.29 | 43.97 | 41.33 | 22.96 | 27.26 |
| Bloom-560m | 33.33 | 8.48 | 18.92 | 19.03 | 43.97 | 37.07 | 23.05 | 26.26 |
| Pythia-410m | 33.33 | 4.80 | 19.47 | 19.45 | 43.97 | 33.33 | 23.01 | 25.33 |
| OPT-350m | 33.33 | 3.65 | 20.72 | 17.35 | 44.71 | 33.33 | 23.01 | 25.15 |
| GPT-2 small | 33.26 | 0.00 | 10.43 | 11.20 | 43.52 | 33.68 | 13.12 | 20.74 |
| GPorTuguese | 33.33 | 3.85 | 14.74 | 3.01 | 28.81 | 33.33 | 21.23 | 19.75 |
| Samba-1.1B | 33.33 | 1.30 | 8.07 | 10.22 | 17.72 | 35.79 | 15.03 | 17.35 |
## Fine-Tuning Comparisons
To further evaluate the downstream capabilities of our models, we employed a basic fine-tuning procedure for our TTL pair on a subset of tasks from the Poeta benchmark. For comparison, we applied the same procedure to both [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) models, given that they are also LLMs trained from scratch in Brazilian Portuguese and have a similar size range to our models. We used these comparisons to assess whether our pre-training runs produced LLMs capable of good results ("good" here means "close to BERTimbau") when utilized for downstream applications.
| Models | IMDB | FaQuAD-NLI | HateBr | Assin2 | AgNews | Average |
|-----------------|-----------|------------|-----------|-----------|-----------|---------|
| BERTimbau-large | **93.58** | 92.26 | 91.57 | **88.97** | 94.11 | 92.10 |
| BERTimbau-small | 92.22 | **93.07** | 91.28 | 87.45 | 94.19 | 91.64 |
| **TTL-460m** | 91.64 | 91.18 | **92.28** | 86.43 | **94.42** | 91.19 |
| **TTL-160m** | 91.14 | 90.00 | 90.71 | 85.78 | 94.05 | 90.34 |
All results shown are the highest accuracy scores achieved on the respective task test sets after fine-tuning the models on the training sets. All fine-tuning runs used the same hyperparameters, and the code implementation can be found in the [model cards](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m-HateBR) of our fine-tuned models.
## Cite as 🤗
```latex
@misc{correa24ttllama,
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={arXiv preprint arXiv:2401.16640},
year={2024}
}
@misc{correa24ttllama,
doi = {10.1016/j.mlwa.2024.100558},
url = {https://www.sciencedirect.com/science/article/pii/S2666827024000343},
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={Machine Learning With Applications},
publisher = {Springer},
year={2024}
}
```
## Funding
This repository was built as part of the RAIES ([Rede de Inteligência Artificial Ética e Segura](https://www.raies.org/)) initiative, a project supported by FAPERGS - ([Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul](https://fapergs.rs.gov.br/inicial)), Brazil.
## License
TeenyTinyLlama-460m is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
|
thakkkkkk/9b151d29-c5d3-4386-a929-c561ff09d717
|
thakkkkkk
| 2025-01-15T12:23:49Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T12:10:33Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9b151d29-c5d3-4386-a929-c561ff09d717
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2f52f5d4dd7c3b59_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2f52f5d4dd7c3b59_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thakkkkkk/9b151d29-c5d3-4386-a929-c561ff09d717
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/2f52f5d4dd7c3b59_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0ad070a1-afeb-4188-a303-62e6e389155d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0ad070a1-afeb-4188-a303-62e6e389155d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9b151d29-c5d3-4386-a929-c561ff09d717
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0093 | 0.3751 | 200 | 1.9968 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nbninh/172c7694-a669-4533-8464-b01b6d31a288
|
nbninh
| 2025-01-15T12:23:24Z
| 5
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B",
"base_model:adapter:unsloth/Qwen2-0.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T12:13:14Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 172c7694-a669-4533-8464-b01b6d31a288
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a4928fa9f76b3319_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a4928fa9f76b3319_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/172c7694-a669-4533-8464-b01b6d31a288
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a4928fa9f76b3319_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: da2c4866-8529-49e5-8b28-ddb4da385107
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: da2c4866-8529-49e5-8b28-ddb4da385107
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 172c7694-a669-4533-8464-b01b6d31a288
This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8131 | 0.2159 | 200 | 0.6855 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Keltezaa/avril-lavigne-2000s-flux-lora
|
Keltezaa
| 2025-01-15T12:22:19Z
| 702
| 2
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"model",
"woman",
"1990s",
"celebrity",
"realistic",
"hot",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T12:22:17Z
|
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- model
- woman
- 1990s
- celebrity
- realistic
- hot
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: avrilflx
widget:
- text: 'portrait inspired by Peter Lindbergh''s photographic style of avrilflx, a blonde woman with black eyeliner looking directly into the camera with an intimate and deep expression.'
output:
url: >-
51623099.jpeg
- text: 'Create a hyper-realistic portrait of a regal woman inspired by the "Game of Thrones" aesthetic, but dressed in battle-ready attire. She stands in a windswept mountain pass, her armor gleaming under the muted sunlight. Her outfit combines elegance with practicality: a fitted leather cuirass adorned with intricate carvings of wolves and stars, layered over chainmail. A dark cloak with a fur-trimmed collar billows behind her, hinting at her noble lineage. Her blonde hair is tied back in a loose braid, with small strands framing her strong, determined face. She carries a finely crafted longsword at her side, its hilt encrusted with subtle gemstones. Her piercing gaze reflects both beauty and unyielding strength. The scene is detailed with rugged terrain, distant snow-capped peaks, and the faint howl of the wind, evoking an air of mystery and power'
output:
url: >-
51623100.jpeg
- text: 'High quality realistic beauty shot of avrilflx. A close-up shot of a woman. She is wearing a white blouse. Her messy blonde hair is styled like an emo girl. The backdrop is a light gray. She is giving a beautiful smile showing her white teeth. She is wearing black eyeliner.'
output:
url: >-
51625044.jpeg
- text: 'An ethereal depiction of an ancient goddess or princess, inspired by classical mythology. The woman has delicate, noble features and is adorned in a flowing, white Grecian-style gown with gold accents along the neckline. Her long, golden hair is styled in loose waves and intricate braids, cascading over her shoulder. A sheer, patterned veil drapes over her head, framing her serene expression and bright, piercing blue eyes. She stands in soft, natural light, with blurred, sandy stone structures in the background, evoking the grandeur of an ancient palace or temple. The atmosphere radiates grace, poise, and a sense of timeless beauty.'
output:
url: >-
51623101.jpeg
- text: 'An intimate close-up of a woman captures her face in detail, highlighting every feature with soft, enveloping light. The camera, a Sony A7III with a 50mm f/1.2 lens, focuses on her seductive and daring gaze, full of intensity and mystery. The black eyeliner creates a perfect line that enhances her captivating look.
Her lips are slightly parted, with a subtle shine that contrasts with her pale, smooth skin. Her platinum hair falls in a messy but stylized manner over her forehead, framing her face as if it were a canvas of contained energy. Her background is almost non-existent, blurred into soft shadows that allow all the focus to be on her.
The image conveys a sense of strength, mystery and a subtle but powerful sensuality, reflecting her bold character and enigmatic presence.,Midjourney v6'
output:
url: >-
51623107.jpeg
- text: 'High quality realistic beauty shot of avrilflx. A close-up shot of a woman. She is wearing a white blouse. Her messy blonde hair is styled like an emo girl. The backdrop is a light gray. She is giving a beautiful smile. She is wearing black eyeliner.'
output:
url: >-
51624712.jpeg
- text: 'A cyborg blonde woman, in the center of a futuristic background, bathed in soft, even light. Her proportional and detailed face, with piercing eyes, gazes directly at the viewer, exuding an air of mastery. the day time lighting accentuates her striking features. The overall effect is a true masterpiece, boasting best quality visuals that draw the viewer in.'
output:
url: >-
51623102.jpeg
- text: 'black and white portrait inspired by Peter Lindbergh''s photographic style of avrilflx, a blonde woman with black eyeliner looking directly into the camera with an intimate and deep expression.'
output:
url: >-
51623103.jpeg
- text: 'An intimate close-up of a woman captures her face in detail, highlighting every feature with soft, enveloping light. The camera, a Sony A7III with a 50mm f/1.2 lens, focuses on her seductive and daring gaze, full of intensity and mystery. The black eyeliner creates a perfect line that enhances her captivating look.
Her lips are slightly parted, with a subtle shine that contrasts with her pale, smooth skin. Her platinum hair falls in a messy but stylized manner over her forehead, framing her face as if it were a canvas of contained energy. Her background is almost non-existent, blurred into soft shadows that allow all the focus to be on her.
The image conveys a sense of strength, mystery and a subtle but powerful sensuality, reflecting her bold character and enigmatic presence.,Midjourney v6'
output:
url: >-
51623105.jpeg
- text: 'A modern portrait of a woman with blonde emo hair styled to frame her face. She has a serene expression, with piercing eyes with black eyeliner and a flawless, natural makeup look that enhances her features. She wears a simple white sweater, exuding elegance and sophistication. The background is a clean, gradient blue, providing a crisp contrast to her light complexion and understated gold earrings. The lighting is soft and diffused, creating a warm and inviting atmosphere while highlighting her skin''s natural texture'
output:
url: >-
51624437.jpeg
---
# Avril Lavigne - 2000s (Flux LoRa)
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
Helps create HD images of singer Avril Lavigne as she looked in the 2000s. Use "Blonde hair" and "black eyeliner" in the prompt to replicate her 2000s style.
## Trigger words
You should use `avrilflx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/avril-lavigne-2000s-flux-lora/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Use the GPU when available; FLUX.1-dev is loaded in bfloat16 to keep memory use manageable.
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)

# Attach this LoRA on top of the base FLUX.1-dev pipeline.
pipeline.load_lora_weights('Keltezaa/avril-lavigne-2000s-flux-lora', weight_name='avrilflx_flux-lora-000002.safetensors')
image = pipeline('A modern portrait of a woman with blonde emo hair styled to frame her face. She has a serene expression, with piercing eyes with black eyeliner and a flawless, natural makeup look that enhances her features. She wears a simple white sweater, exuding elegance and sophistication. The background is a clean, gradient blue, providing a crisp contrast to her light complexion and understated gold earrings. The lighting is soft and diffused, creating a warm and inviting atmosphere while highlighting her skin's natural texture').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
adammandic87/b80b8909-704b-42ca-a65f-3ee8d294eb46
|
adammandic87
| 2025-01-15T12:21:38Z
| 5
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-n-ox-test-v1",
"base_model:adapter:oopsung/llama2-7b-n-ox-test-v1",
"region:us"
] | null | 2025-01-15T12:20:54Z
|
---
library_name: peft
base_model: oopsung/llama2-7b-n-ox-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b80b8909-704b-42ca-a65f-3ee8d294eb46
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: oopsung/llama2-7b-n-ox-test-v1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c1fdfb3f4e449665_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c1fdfb3f4e449665_train_data.json
type:
field_instruction: question
field_output: best_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/b80b8909-704b-42ca-a65f-3ee8d294eb46
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/c1fdfb3f4e449665_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 212c681b-11df-434d-a645-a52b9b33936f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 212c681b-11df-434d-a645-a52b9b33936f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b80b8909-704b-42ca-a65f-3ee8d294eb46
This model is a fine-tuned version of [oopsung/llama2-7b-n-ox-test-v1](https://huggingface.co/oopsung/llama2-7b-n-ox-test-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit bitsandbytes AdamW) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0132 | 1 | nan |
| 0.0 | 0.0397 | 3 | nan |
| 0.0 | 0.0795 | 6 | nan |
| 0.0 | 0.1192 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
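No usage code is included; a minimal sketch, assuming the standard 🤗 PEFT API, for attaching and optionally merging the adapter could be:
```py
# Minimal sketch, not part of the original card.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("oopsung/llama2-7b-n-ox-test-v1")
model = PeftModel.from_pretrained(base, "adammandic87/b80b8909-704b-42ca-a65f-3ee8d294eb46")

# Optionally fold the LoRA weights into the base model for adapter-free deployment.
merged = model.merge_and_unload()
merged.save_pretrained("llama2-7b-b80b8909-merged")
```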
|
Keltezaa/kristen-bell
|
Keltezaa
| 2025-01-15T12:20:17Z
| 99
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"woman",
"actor",
"celebrity",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T12:20:15Z
|
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- woman
- actor
- celebrity
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: kristen
widget:
- text: ' '
output:
url: >-
50568466.jpeg
- text: ' '
output:
url: >-
50568509.jpeg
- text: ' '
output:
url: >-
50568521.jpeg
- text: ' '
output:
url: >-
50568561.jpeg
- text: ' '
output:
url: >-
50568661.jpeg
- text: ' '
output:
url: >-
50568692.jpeg
- text: ' '
output:
url: >-
50568716.jpeg
- text: ' '
output:
url: >-
50568740.jpeg
- text: ' '
output:
url: >-
50568807.jpeg
---
# Kristen Bell
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
Kristen Anne Bell (born July 18, 1980) is an American actress.
## Trigger words
You should use `kristen` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/kristen-bell/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/kristen-bell', weight_name='kristen_bell_dev_f1_k.safetensors')
image = pipeline('kristen').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Xinging/llama2-7b_sft_0.5_ratio_alpaca_gpt4_proj_by_mmlu_ntrain_1531_template
|
Xinging
| 2025-01-15T12:20:15Z
| 14
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-15T09:56:59Z
|
---
library_name: transformers
license: other
base_model: meta-llama/Llama-2-7b-hf
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama2-7b_sft_0.5_ratio_alpaca_gpt4_proj_by_mmlu_ntrain_1531
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b_sft_0.5_ratio_alpaca_gpt4_proj_by_mmlu_ntrain_1531
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the 0.5_ratio_alpaca_gpt4_proj_by_mmlu_ntrain_1531 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: adamw_torch (AdamW) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.20.3
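The card lists training details but no inference example; a minimal sketch, assuming the standard transformers text-generation API and this entry's repository id, would be:
```py
# Minimal sketch, not part of the original card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Xinging/llama2-7b_sft_0.5_ratio_alpaca_gpt4_proj_by_mmlu_ntrain_1531_template"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Explain what supervised fine-tuning is.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```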
|
nhung01/6ffe8f65-c228-4a61-8aea-c925877d5d33
|
nhung01
| 2025-01-15T12:20:11Z
| 6
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T12:10:24Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6ffe8f65-c228-4a61-8aea-c925877d5d33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2f52f5d4dd7c3b59_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2f52f5d4dd7c3b59_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/6ffe8f65-c228-4a61-8aea-c925877d5d33
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2f52f5d4dd7c3b59_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0ad070a1-afeb-4188-a303-62e6e389155d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0ad070a1-afeb-4188-a303-62e6e389155d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6ffe8f65-c228-4a61-8aea-c925877d5d33
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit bitsandbytes AdamW) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.953 | 0.1875 | 200 | 2.0691 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
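No inference snippet is provided; a minimal sketch, assuming 🤗 PEFT plus bitsandbytes and mirroring the `load_in_8bit: true` training setting, could be:
```py
# Minimal sketch, not part of the original card.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/SmolLM2-1.7B",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # matches the 8-bit training setup
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-1.7B")
model = PeftModel.from_pretrained(base, "nhung01/6ffe8f65-c228-4a61-8aea-c925877d5d33")

inputs = tokenizer("Write one sentence about the weather.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```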
|
sudo7/aragpt2-fine-tuned-lyrics
|
sudo7
| 2025-01-15T12:20:08Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-15T10:22:33Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
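Until the author fills this in, a minimal sketch, assuming standard GPT-2-style causal-LM usage based on the repository's `gpt2`/text-generation tags, is:
```py
# Minimal sketch, not supplied by the model author.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "sudo7/aragpt2-fine-tuned-lyrics"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Illustrative Arabic prompt; replace with your own lyric seed.
inputs = tokenizer("يا ليل يا عين", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```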
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Keltezaa/rachael-leigh-cook-2000s-flux
|
Keltezaa
| 2025-01-15T12:19:59Z
| 37
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"photorealistic",
"woman",
"actress",
"celebrity",
"girls",
"realistic",
"celebrity,",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T12:19:56Z
|
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=False&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- photorealistic
- woman
- actress
- celebrity
- girls
- realistic
- celebrity,
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ohwx
widget:
- text: 'Breathtaking medium shot photography of ohwx, (:1.3) , Space station,style by Eileen Gray,soft lighting, upper body shot, collarbone, smile, (upper body framing:1.3), bare shoulder, detailed texture quality wool top, simple black choker, sensual lips, eyelashes, (entirely visible beautiful hairstyle:1.2), fine hair detail, perfect eyes, iris pattern, eyes makeup, (perfectly sharp:1.3), (head to shoulders composition:1.4), background is a blurry flower field, realistic textures, (deep focus:1.1), negative space around subject, 8k uhd, dslr, ultra high quality image, film grain, Fujifilm XT3'
parameters:
negative_prompt: RachaelLeighCook_flux_lora_v1_Weight-1.1
output:
url: >-
50456442.jpeg
- text: 'Breathtaking over the shoulder shot photography of ohwx looking at viewer, imperfections, necklace, looking over shoulders, eyelashes, fine hair detail, entire hairstyle visible, perfect eyes with iris pattern, sensual lips, nose, (perfectly sharp:1.3), realistic textures, (deep focus, focus on background:1.5), 8k uhd, dslr, ultra high quality image, film grain, Fujifilm XT3'
parameters:
negative_prompt: RachaelLeighCook_flux_lora_v1_Weight-1.1
output:
url: >-
50456498.jpeg
- text: 'Breathtaking over the shoulder shot photography of ohwx looking at viewer, high collar white blouse, imperfections, necklace with ornament falling down her back, looking over shoulders, eyelashes, fine hair detail, entire hairstyle visible, perfect eyes with iris pattern, sensual lips, nose, (perfectly sharp:1.3), realistic textures, (deep focus on subject, blurred background:1.4), 8k uhd, dslr, ultra high quality image, film grain, Fujifilm XT3'
parameters:
negative_prompt: RachaelLeighCook_flux_lora_v1_Weight-1.1
output:
url: >-
50456550.jpeg
- text: 'Breathtaking medium shot photography of ohwx, collarbone, smile, (upper body framing:1.3), bare shoulder, beautiful fashion top, necklace, sensual lips, eyelashes, (entirely visible beautiful hairstyle:1.2), fine hair detail, perfect eyes, iris pattern, eyes makeup, (perfectly sharp:1.3), (head to shoulders composition:1.4), background is a blurry flower field, realistic textures, (deep focus:1.1), negative space around subject, 8k uhd, dslr, ultra high quality image, film grain, Fujifilm XT3'
parameters:
negative_prompt: RachaelLeighCook_flux_lora_v1_Weight-1.1
output:
url: >-
50456580.jpeg
- text: 'Breathtaking medium shot photography of ohwx, (Mesh long-sleeve top with rhinestone embellishments and cuffs:1.3) , niflheim icy realm in norse mythology,style by John Piper,soft lighting, upper body shot, collarbone, smile, (upper body framing:1.3), bare shoulder, detailed texture quality wool top, simple black choker, sensual lips, eyelashes, (entirely visible beautiful hairstyle:1.2), fine hair detail, perfect eyes, iris pattern, eyes makeup, (perfectly sharp:1.3), (head to shoulders composition:1.4), realistic textures, (deep focus:1.1), negative space around subject, 8k uhd, dslr, ultra high quality image, film grain, Fujifilm XT3'
parameters:
negative_prompt: RachaelLeighCook_flux_lora_v1_Weight-1.1
output:
url: >-
50456751.jpeg
- text: 'Breathtaking medium shot photography of ohwx, (Halter neck top with open back and floral embroidery:1.3) , mystical Vale of Avalon,style by Miwa Komatsu,soft lighting, upper body shot, collarbone, smile, (upper body framing:1.3), bare shoulder, detailed texture quality wool top, simple black choker, sensual lips, eyelashes, (entirely visible beautiful hairstyle:1.2), fine hair detail, perfect eyes, iris pattern, eyes makeup, (perfectly sharp:1.3), (head to shoulders composition:1.4), realistic textures, (deep focus:1.1), negative space around subject, 8k uhd, dslr, ultra high quality image, film grain, Fujifilm XT3'
parameters:
negative_prompt: RachaelLeighCook_flux_lora_v1_Weight-1.1
output:
url: >-
50456773.jpeg
---
# 👑 Rachael Leigh Cook (2000s)(Flux) 🎬
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
<h1 id="***-if-you-love-it-like-it!-***-j86xw6qjv">👍 <span style="color:rgb(64, 192, 87)">***</span> <strong><em><span style="color:rgb(34, 139, 230)">If you love it, like it!</span></em></strong> <span style="color:rgb(64, 192, 87)">***</span>👍</h1><p><em>workflow: </em><a target="_blank" rel="ugc" href="https://civitai.com/models/1088678"><em>https://civitai.com/models/1088678</em></a></p><h1 id="rachael-leigh-cook-(2000s)-rk431afk5">👑 <span style="color:#7950f2">Rachael Leigh Cook</span><span style="color:rgb(121, 80, 242)"> (</span><span style="color:#228be6">2000s</span><span style="color:rgb(121, 80, 242)">)</span>🎬</h1><p><strong>About my celebrities loras</strong><br />90% of the dataset used to build my loras only use head images. That really help the blend with other lora or model as there is no hands, feet, that may or will interfere in the final image render. When you get distorted hands with a person lora, it's because there is info on hands in the dataset used to train the lora, but that will not happen with my loras.</p><p>I've trained on Flux.1 Dev so other merged or trained checkpoint may not work well with my loras.</p><p>The drawback side of that is that the body may not be reflecting the reality. It may not be a drawback tho.</p><p></p><p>This is a lora for Flux.1 Dev. Work with other model but you must drop some simple bloc (good start 19-32).</p><p>Trained with ai-toolkit, so merging it is not easy.</p><p></p><p><strong>To get the best result</strong></p><p><em>Guidance: </em><strong><em>2.2-3</em></strong></p><p><em>Steps (dev): </em><strong><em>30-40</em></strong></p><p><em>daemon detailer (lying sigma sampler): factor: -0.02, start 0.06, end 0.75</em></p><p><em>Resolution: </em><strong><em>Upscale the latent by 1.25 or 1.5</em></strong><em> you'll get awsome result. (take longer time but worth it)</em></p><p><em>Trigger word is (may work better in certain context): </em><strong><em>ohwx</em></strong></p><p></p><p>Enjoy!</p>
## Trigger words
You should use `ohwx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/rachael-leigh-cook-2000s-flux/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/rachael-leigh-cook-2000s-flux', weight_name='RachaelLeighCook_flux_lora_v1.safetensors')
image = pipeline('Breathtaking medium shot photography of ohwx, (Halter neck top with open back and floral embroidery:1.3) , mystical Vale of Avalon,style by Miwa Komatsu,soft lighting, upper body shot, collarbone, smile, (upper body framing:1.3), bare shoulder, detailed texture quality wool top, simple black choker, sensual lips, eyelashes, (entirely visible beautiful hairstyle:1.2), fine hair detail, perfect eyes, iris pattern, eyes makeup, (perfectly sharp:1.3), (head to shoulders composition:1.4), realistic textures, (deep focus:1.1), negative space around subject, 8k uhd, dslr, ultra high quality image, film grain, Fujifilm XT3').images[0]
```
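The description above recommends a guidance of 2.2–3 and 30–40 steps on the dev model. With diffusers these would map, as an assumption rather than anything stated in the card, to the `guidance_scale` and `num_inference_steps` arguments of the pipeline call, e.g. `image = pipeline(prompt, guidance_scale=2.5, num_inference_steps=35).images[0]`.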
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
glif-loradex-trainer/scrillax_Sheesh_Bot
|
glif-loradex-trainer
| 2025-01-15T12:19:51Z
| 312
| 1
|
diffusers
|
[
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] |
text-to-image
| 2025-01-15T12:19:30Z
|
---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1736943509591__000001500_0.jpg
text: wounded centaur, mythical creature Sheesh
- output:
url: samples/1736943534454__000001500_1.jpg
text: ruins of athens, snake Sheesh
- output:
url: samples/1736943559336__000001500_2.jpg
text: Yeti Sheesh
base_model: black-forest-labs/FLUX.1-dev
trigger: "Sheesh"
instance_prompt: "Sheesh"
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Sheesh_Bot
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `scrillax`.
<Gallery />
## Trigger words
You should use `Sheesh` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/scrillax_Sheesh_Bot/tree/main) them in the Files & versions tab.
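## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
Other LoRA cards in this collection ship a diffusers snippet; a comparable minimal sketch for this adapter, assuming FLUX.1-dev as the base (per the tags) and leaving the exact weight filename to be read off the Files & versions tab, would be:
```py
# Minimal sketch, not part of the original card.
from diffusers import AutoPipelineForText2Image
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)

# If the repository contains more than one .safetensors file, pass weight_name='<file>.safetensors' explicitly.
pipeline.load_lora_weights('glif-loradex-trainer/scrillax_Sheesh_Bot')
image = pipeline('Yeti Sheesh').images[0]  # prompt taken from the sample widgets above
```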
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
Keltezaa/olivia-wilde-flux
|
Keltezaa
| 2025-01-15T12:19:48Z
| 67
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"photorealistic",
"woman",
"actress",
"celebrity",
"girls",
"realistic",
"celebrity,",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T12:19:45Z
|
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=False&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- photorealistic
- woman
- actress
- celebrity
- girls
- realistic
- celebrity,
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ohwx
widget:
- text: 'Breathtaking over the shoulder shot photography of ohwx looking at viewer, imperfections, necklace, looking over shoulders, eyelashes, fine hair detail, entire hairstyle visible, perfect eyes with iris pattern, sensual lips, nose, (perfectly sharp:1.3), realistic textures, (deep focus, focus on background:1.5), 8k uhd, dslr, ultra high quality image, film grain, Fujifilm XT3'
parameters:
negative_prompt: OliviaWilde_flux_lora_v1_Weight-1.1
output:
url: >-
50265848.jpeg
- text: 'Breathtaking medium shot photography of ohwx, (Off-shoulder silky blouse with ruched bust and flared sleeves:1.3) , Outhouse,style by Eve Arnold,soft lighting, upper body shot, collarbone, smile, (upper body framing:1.3), bare shoulder, detailed texture quality wool top, simple black choker, sensual lips, eyelashes, (entirely visible beautiful hairstyle:1.2), fine hair detail, perfect eyes, iris pattern, eyes makeup, (perfectly sharp:1.3), (head to shoulders composition:1.4), background is a blurry flower field, realistic textures, (deep focus:1.1), negative space around subject, 8k uhd, dslr, ultra high quality image, film grain, Fujifilm XT3'
parameters:
negative_prompt: OliviaWilde_flux_lora_v1_Weight-1.1
output:
url: >-
50265911.jpeg
- text: 'Breathtaking medium shot photography of ohwx, (Asymmetric one-shoulder top with draped fabric and shimmer:1.3) , elusive Land of Shadows,A towering sand dune offers panoramic views of the desert below,Tropical storm clouds with heavy rain and strong winds,soft lighting, upper body shot, collarbone, smile, (upper body framing:1.3), sensual lips, eyelashes, (entirely visible beautiful hairstyle:1.2), fine hair detail, perfect eyes, iris pattern, eyes makeup, (perfectly sharp:1.3), (head to shoulders composition:1.4), realistic textures, (deep focus:1.1), negative space around subject, 8k uhd, dslr, ultra high quality image, film grain, Fujifilm XT3'
parameters:
negative_prompt: OliviaWilde_flux_lora_v1_Weight-1.1
output:
url: >-
50266093.jpeg
---
# 👑 Olivia Wilde (Flux) 🎬
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
<h1 id="***-if-you-love-it-like-it!-***-41en9ow9n">👍 <span style="color:rgb(64, 192, 87)">***</span> <strong><em><span style="color:rgb(34, 139, 230)">If you love it, like it!</span></em></strong> <span style="color:rgb(64, 192, 87)">***</span>👍</h1><p><em>workflow: </em><a target="_blank" rel="ugc" href="https://civitai.com/models/1088678"><em>https://civitai.com/models/1088678</em></a></p><h1 id="olivia-wilde-7l1tu9rzf">👑 <span style="color:rgb(121, 80, 242)">Olivia Wilde </span>🎬</h1><p><strong>About my celebrities loras</strong><br />90% of the dataset used to build my loras only use head images. That really help the blend with other lora or model as there is no hands, feet, that may or will interfere in the final image render. When you get distorted hands with a person lora, it's because there is info on hands in the dataset used to train the lora, but that will not happen with my loras.</p><p>I've trained on Flux.1 Dev so other merged or trained checkpoint may not work well with my loras.</p><p>The drawback side of that is that the body may not be reflecting the reality. It may not be a drawback tho.</p><p></p><p>This is a lora for Flux.1 Dev. Work with other model but you must drop some simple bloc (good start 19-32).</p><p>Trained with ai-toolkit, so merging it is not easy.</p><p></p><p><strong>To get the best result</strong></p><p><em>Guidance: </em><strong><em>2.2-3</em></strong></p><p><em>Steps (dev): </em><strong><em>30-40</em></strong></p><p><em>daemon detailer (lying sigma sampler): factor: -0.02, start 0.06, end 0.75</em></p><p><em>Resolution: </em><strong><em>Upscale the latent by 1.25 or 1.5</em></strong><em> you'll get awsome result. (take longer time but worth it)</em></p><p><em>Trigger word is (may work better in certain context): </em><strong><em>ohwx</em></strong></p><p></p><p>Enjoy!</p>
## Trigger words
You should use `ohwx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/olivia-wilde-flux/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/olivia-wilde-flux', weight_name='OliviaWilde_flux_lora_v1.safetensors')
image = pipeline('Breathtaking medium shot photography of ohwx, (Asymmetric one-shoulder top with draped fabric and shimmer:1.3) , elusive Land of Shadows,A towering sand dune offers panoramic views of the desert below,Tropical storm clouds with heavy rain and strong winds,soft lighting, upper body shot, collarbone, smile, (upper body framing:1.3), sensual lips, eyelashes, (entirely visible beautiful hairstyle:1.2), fine hair detail, perfect eyes, iris pattern, eyes makeup, (perfectly sharp:1.3), (head to shoulders composition:1.4), realistic textures, (deep focus:1.1), negative space around subject, 8k uhd, dslr, ultra high quality image, film grain, Fujifilm XT3').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
jhuisman/jaschahuisman-flux-lora
|
jhuisman
| 2025-01-15T12:19:32Z
| 10
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T11:35:19Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: NKVDM
---
# Niekvandam Flux Lora
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `NKVDM` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jhuisman/jaschahuisman-flux-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
AbdulRehman-hf/my_face_model
|
AbdulRehman-hf
| 2025-01-15T12:17:39Z
| 34
| 1
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T11:30:51Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: abdul
---
# My_Face_Model
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `abdul` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('AbdulRehman-hf/my_face_model', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Keltezaa/katherine-heigl-90s-flux
|
Keltezaa
| 2025-01-15T12:16:47Z
| 276
| 2
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"photorealistic",
"woman",
"actress",
"celebrity",
"girls",
"realistic",
"celebrity,",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T12:16:44Z
|
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=False&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- photorealistic
- woman
- actress
- celebrity
- girls
- realistic
- celebrity,
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ohwx
widget:
- text: 'Breathtaking over the shoulder shot photography of ohwx looking at viewerOff-shoulder silky blouse with ruched bust and flared sleeves , necklace, eyelashes, fine hair detail, entire hairstyle visible, perfect eyes with iris pattern, sensual lips, nose, (perfectly sharp:1.3), realistic textures, (deep focus on subject, blurred background:1.4), 8k uhd, dslr, ultra high quality image, film grain, Fujifilm XT3'
parameters:
negative_prompt: KatherineHeigl_flux_lora_v1_Weight-1.0
output:
url: >-
49761287.jpeg
- text: 'Breathtaking medium shot photography of ohwx, (Corset top with boning and satin ribbon lace-up:1.3) , Townhouse,style by René Laloux,soft lighting, upper body shot, collarbone, smile, (upper body framing:1.3), bare shoulder, detailed texture quality wool top, simple black choker, sensual lips, eyelashes, (entirely visible beautiful hairstyle:1.2), fine hair detail, perfect eyes, iris pattern, eyes makeup, (perfectly sharp:1.3), (head to shoulders composition:1.4), background is a blurry flower field, realistic textures, (deep focus:1.1), negative space around subject, 8k uhd, dslr, ultra high quality image, film grain, Fujifilm XT3'
parameters:
negative_prompt: KatherineHeigl_flux_lora_v1_Weight-1.1
output:
url: >-
49761301.jpeg
- text: 'Breathtaking over the shoulder shot photography of ohwx looking at viewer, high collar white blouse, imperfections, necklace with ornament falling down her back, looking over shoulders, eyelashes, fine hair detail, entire hairstyle visible, perfect eyes with iris pattern, sensual lips, nose, (perfectly sharp:1.3), realistic textures, (deep focus on subject, blurred background:1.4), 8k uhd, dslr, ultra high quality image, film grain, Fujifilm XT3'
parameters:
negative_prompt: KatherineHeigl_flux_lora_v1_Weight-1.0
output:
url: >-
49761313.jpeg
---
# 👑 Katherine Heigl (90s)(Flux) 🎬
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
<h1 id="***-if-you-love-it-like-it!-***-vva150sqr">👍 <span style="color:rgb(64, 192, 87)">***</span> <strong><em><span style="color:rgb(34, 139, 230)">If you love it, like it!</span></em></strong> <span style="color:rgb(64, 192, 87)">***</span>👍</h1><p><em>workflow: </em><a target="_blank" rel="ugc" href="https://civitai.com/models/1088678"><em>https://civitai.com/models/1088678</em></a></p><h1 id="katherine-heigl-e23hl0afj">👑 <span style="color:rgb(121, 80, 242)">Katherine Heigl </span>🎬</h1><p><strong>About my celebrities loras</strong><br />90% of the dataset used to build my loras only use head images. That really help the blend with other lora or model as there is no hands, feet, that may or will interfere in the final image render. When you get distorted hands with a person lora, it's because there is info on hands in the dataset used to train the lora, but that will not happen with my loras.</p><p>I've trained on Flux.1 Dev so other merged or trained checkpoint may not work well with my loras.</p><p>The drawback side of that is that the body may not be reflecting the reality. It may not be a drawback tho.</p><p></p><p>This is a lora for Flux.1 Dev. Work with other model but you must drop some simple bloc (good start 19-32).</p><p>Trained with ai-toolkit, so merging it is not easy.</p><p></p><p><strong>To get the best result</strong></p><p><em>Guidance: </em><strong><em>2.2-3</em></strong></p><p><em>Steps (dev): </em><strong><em>30-40</em></strong></p><p><em>daemon detailer (lying sigma sampler): factor: -0.02, start 0.06, end 0.75</em></p><p><em>Resolution: </em><strong><em>Upscale the latent by 1.25 or 1.5</em></strong><em> you'll get awsome result. (take longer time but worth it)</em></p><p><em>Trigger word is (may work better in certain context): </em><strong><em>ohwx</em></strong></p><p></p><p>Enjoy!</p>
## Trigger words
You should use `ohwx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/katherine-heigl-90s-flux/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/katherine-heigl-90s-flux', weight_name='KatherineHeigl_flux_lora_v1.safetensors')
image = pipeline('Breathtaking over the shoulder shot photography of ohwx looking at viewer, high collar white blouse, imperfections, necklace with ornament falling down her back, looking over shoulders, eyelashes, fine hair detail, entire hairstyle visible, perfect eyes with iris pattern, sensual lips, nose, (perfectly sharp:1.3), realistic textures, (deep focus on subject, blurred background:1.4), 8k uhd, dslr, ultra high quality image, film grain, Fujifilm XT3').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
aloosh19/bert-SSLT2-mrpc
|
aloosh19
| 2025-01-15T12:16:40Z
| 7
| 0
|
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-14T12:10:24Z
|
---
base_model:
- google-bert/bert-base-uncased
library_name: transformers
---
|
Keltezaa/olivia-hussey-1970s-flux-lora
|
Keltezaa
| 2025-01-15T12:16:30Z
| 25
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"young",
"1960s",
"woman",
"actress",
"celebrity",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T12:16:26Z
|
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Sell&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- young
- 1960s
- woman
- actress
- celebrity
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
widget:
- text: ' '
output:
url: >-
48441212.jpeg
- text: 'Create a close-up, high-fashion portrait of a woman with striking eyes and a calm, confident expression. She wears a sophisticated black outfit that exudes elegance and minimalism. The background is a smooth gradient of dark gray, enhancing the overall refined and contemporary aesthetic. The lighting is soft yet directional, highlighting her flawless complexion and subtle makeup, with a focus on her eyes and cheekbones. The mood of the portrait is polished, modern, and artistic.,ashgflx'
output:
url: >-
48441262.jpeg
- text: 'close up face portrait of oliviahflx, a brunette woman in martincstyle look, plain white studio backdrop, framed centered'
output:
url: >-
48441261.jpeg
- text: '(photo of a woman), editorial photoshoot, close up, in a floral garden, shot on film camera, kodak colors, elegant pose, perfect lighting, beautiful sky showing through the leaves, she is wearing a top with floral prints, beautiful smile, 90mm lens, extremely detailed, in an exotic forest location'
output:
url: >-
48441258.jpeg
- text: 'black and white portrait inspired by Peter Lindbergh''s photographic style of oliviahflx, a woman looking directly into the camera. ** Lighting ** must be soft and natural,avoiding exaggerated perfection,and focusing on capturing the human essence and the natural beauty of women.'
output:
url: >-
48441253.jpeg
- text: 'Create a hyper-realistic portrait of a regal woman inspired by the "Game of Thrones" aesthetic, but dressed in battle-ready attire. She stands in a windswept mountain pass, her armor gleaming under the muted sunlight. Her outfit combines elegance with practicality: a fitted leather cuirass adorned with intricate carvings of wolves and stars, layered over chainmail. A dark cloak with a fur-trimmed collar billows behind her, hinting at her noble lineage. Her hair is tied back in a loose braid, with small strands framing her strong, determined face. She carries a finely crafted longsword at her side, its hilt encrusted with subtle gemstones. Her piercing gaze reflects both beauty and unyielding strength. The scene is detailed with rugged terrain, distant snow-capped peaks, and the faint howl of the wind, evoking an air of mystery and power,oliviahflx'
output:
url: >-
48441271.jpeg
- text: 'Create a close-up, high-fashion portrait of a woman with striking blue eyes and a calm, confident expression. She wears a sophisticated black outfit that exudes elegance and minimalism. The background is a smooth gradient of dark gray, enhancing the overall refined and contemporary aesthetic. The lighting is soft yet directional, highlighting her flawless complexion and subtle makeup, with a focus on her eyes and cheekbones. The mood of the portrait is polished, modern, and artistic.,ashgflx'
output:
url: >-
48441204.jpeg
- text: 'A close-up portrait of an elegant woman in a softly lit studio setting. She is wearing a light pastel blue wool jacket with intricate white trim and decorative pearl-like buttons, exuding a refined vintage style. Her blonde hair is styled in loose waves and tied back with a delicate white cotton headscarf featuring subtle floral patterns. She has a serene, contemplative expression, with her warm-toned makeup emphasizing natural beauty, including soft brown lipstick and winged eyeliner. Her look is complemented by luxurious accessories, such as drop earrings and a brooch adorned with intricate details, including emerald and diamond accents. The background is minimal and softly lit, casting gentle shadows that highlight her timeless sophistication and poise. The atmosphere is chic and polished, reminiscent of a high-fashion editorial shoot.'
output:
url: >-
48441257.jpeg
- text: 'Create a close-up, high-fashion portrait of a woman with striking eyes and a calm, confident expression. She wears a sophisticated black outfit that exudes elegance and minimalism. The background is a smooth gradient of dark gray, enhancing the overall refined and contemporary aesthetic. The lighting is soft yet directional, highlighting her flawless complexion and subtle makeup, with a focus on her eyes and cheekbones. The mood of the portrait is polished, modern, and artistic.,ashgflx'
output:
url: >-
48441215.jpeg
- text: 'A woman poses elegantly on the sun-kissed beach, wearing a flowy dress. Framed by the warm tones of Kodak film, her serene features are bathed in soft, golden light. The camera captures a close-up shot and there is a breathtaking sky behind her, a canvas of puffy clouds and brilliant blue.,brookesflx'
output:
url: >-
48441255.jpeg
- text: '"A clean, minimalist portrait of oliviahflx, a confident brunette woman with striking blue eyes and a calm expression. She is wearing a light-colored sweater and gold hoop earrings that add a subtle touch of elegance. The soft, diffused lighting highlights her flawless skin and natural beauty, while the neutral background emphasizes her serene and sophisticated demeanor. She has a blac mole on top of her lips on the left side"'
output:
url: >-
48446372.jpeg
- text: '(photo of a woman), editorial photoshoot, close up, in a floral garden, shot on film camera, kodak colors, elegant pose, perfect lighting, beautiful sky showing through the leaves, she is wearing a top with floral prints, beautiful smile, 90mm lens, extremely detailed, in an exotic forest location'
output:
url: >-
48441256.jpeg
- text: 'An intimate close-up of a woman captures her face in detail, highlighting every feature with soft, enveloping light. The camera, a Sony A7III with a 50mm f/1.2 lens, focuses on her seductive and daring gaze, full of intensity and mystery. The black eyeliner creates a perfect line that enhances her captivating look.
Her lips are slightly parted, with a subtle shine that contrasts with her pale, smooth skin. Her blonde hair falls in a messy but stylized manner over her forehead, framing her face as if it were a canvas of contained energy. Her background is almost non-existent, blurred into soft shadows that allow all the focus to be on her.
The image conveys a sense of strength, mystery and a subtle but powerful sensuality, reflecting her bold character and enigmatic presence.,Midjourney v6'
output:
url: >-
48445672.jpeg
- text: 'High quality realistic beauty shot of oliviahflx. A close-up shot of a blue eyed woman. She is wearing a camisole blouse. Her lips are a light red color. The backdrop is a light gray. She is giving a beautiful smile.'
output:
url: >-
48441250.jpeg
- text: '"A clean, minimalist portrait of oliviahflx, a confident woman with striking blue eyes and a calm expression. She is wearing a light-colored sweater and gold hoop earrings that add a subtle touch of elegance. The soft, diffused lighting highlights her flawless skin and natural beauty, while the neutral background emphasizes her serene and sophisticated demeanor. She has a blac mole on top of her lips on the left side"'
output:
url: >-
48445674.jpeg
- text: 'Create a hyper-realistic portrait of a regal woman inspired by the "Game of Thrones" aesthetic, but dressed in battle-ready attire. She stands in a windswept mountain pass, her armor gleaming under the muted sunlight. Her outfit combines elegance with practicality: a fitted leather cuirass adorned with intricate carvings of wolves and stars, layered over chainmail. A dark cloak with a fur-trimmed collar billows behind her, hinting at her noble lineage. Her hair is tied back in a loose braid, with small strands framing her strong, determined face. Her piercing gaze reflects both beauty and unyielding strength. The scene is detailed with rugged terrain, distant snow-capped peaks, and the faint howl of the wind, evoking an air of mystery and power'
output:
url: >-
48441242.jpeg
- text: '"A clean, minimalist portrait of oliviahflx, a confident woman with striking blue eyes and a calm expression. She is wearing a light-colored sweater and gold hoop earrings that add a subtle touch of elegance. The soft, diffused lighting highlights her flawless skin and natural beauty, while the neutral background emphasizes her serene and sophisticated demeanor. "'
output:
url: >-
48446371.jpeg
---
# Olivia Hussey - 1970s (Flux LoRA)
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
<p>Helps create HD images of Olivia Hussey as she appeared in the 1970s. She is best known for playing Juliet in the 1968 film Romeo and Juliet.</p>
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/olivia-hussey-1970s-flux-lora/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Run on the GPU when available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the FLUX.1-dev base pipeline and attach this LoRA adapter
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/olivia-hussey-1970s-flux-lora', weight_name='Olivia_Hussey.safetensors')

# Include the trigger token "oliviahflx" in the prompt
image = pipeline('A clean, minimalist portrait of oliviahflx, a confident woman with striking blue eyes and a calm expression. She is wearing a light-colored sweater and gold hoop earrings that add a subtle touch of elegance. The soft, diffused lighting highlights her flawless skin and natural beauty, while the neutral background emphasizes her serene and sophisticated demeanor.').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
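As a brief illustration of the weighting and fusing mentioned above, here is a hedged sketch that scales this LoRA's influence before generation. It assumes the `pipeline` from the snippet above and a diffusers version with the PEFT backend; the adapter name `olivia_hussey` and the 0.8 weight are illustrative choices, not values documented for this model.
```py
# Hedged sketch: (re)load the adapter under an explicit name, then scale or fuse it.
pipeline.load_lora_weights(
    'Keltezaa/olivia-hussey-1970s-flux-lora',
    weight_name='Olivia_Hussey.safetensors',
    adapter_name='olivia_hussey',  # illustrative adapter name
)
pipeline.set_adapters(['olivia_hussey'], adapter_weights=[0.8])  # 0.0 = off, 1.0 = full strength

# Optionally bake the scaled adapter into the base weights before generating
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('portrait of oliviahflx, soft studio lighting').images[0]
```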
|
samoline/61e48999-1e47-47b6-a8a8-03ede983f822
|
samoline
| 2025-01-15T12:16:12Z
| 14
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Henrychur/MMed-Llama-3-8B-EnIns",
"base_model:adapter:Henrychur/MMed-Llama-3-8B-EnIns",
"license:llama3",
"region:us"
] | null | 2025-01-15T11:37:17Z
|
---
library_name: peft
license: llama3
base_model: Henrychur/MMed-Llama-3-8B-EnIns
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 61e48999-1e47-47b6-a8a8-03ede983f822
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Henrychur/MMed-Llama-3-8B-EnIns
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3caf93e309c0d240_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3caf93e309c0d240_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/61e48999-1e47-47b6-a8a8-03ede983f822
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/3caf93e309c0d240_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: 1ed6696b-303b-45a3-b84a-f185a55757de
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: 1ed6696b-303b-45a3-b84a-f185a55757de
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 61e48999-1e47-47b6-a8a8-03ede983f822
This model is a fine-tuned version of [Henrychur/MMed-Llama-3-8B-EnIns](https://huggingface.co/Henrychur/MMed-Llama-3-8B-EnIns) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 13.6815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 13.3123 | 0.0000 | 1 | 13.6835 |
| 11.326 | 0.0000 | 2 | 13.6815 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
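For reference, a minimal inference sketch for loading a LoRA adapter such as this one on top of its base model, assuming the PEFT and Transformers versions listed above; the prompt and generation settings are illustrative only.
```py
# Hedged sketch: attach the LoRA adapter to the base model and generate a short answer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Henrychur/MMed-Llama-3-8B-EnIns"
adapter_id = "samoline/61e48999-1e47-47b6-a8a8-03ede983f822"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # load the adapter weights on top

inputs = tokenizer("Does aspirin reduce the risk of stroke?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```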
|
lesso04/3afb6b85-6a0d-474c-99ca-c50d688d4d10
|
lesso04
| 2025-01-15T12:14:52Z
| 12
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T12:10:18Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3afb6b85-6a0d-474c-99ca-c50d688d4d10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: true
chat_template: llama3
datasets:
- data_files:
- 2f52f5d4dd7c3b59_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2f52f5d4dd7c3b59_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso04/3afb6b85-6a0d-474c-99ca-c50d688d4d10
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/2f52f5d4dd7c3b59_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0ad070a1-afeb-4188-a303-62e6e389155d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0ad070a1-afeb-4188-a303-62e6e389155d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3afb6b85-6a0d-474c-99ca-c50d688d4d10
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0009 | 1 | nan |
| 0.0 | 0.0047 | 5 | nan |
| 0.0 | 0.0094 | 10 | nan |
| 0.0 | 0.0141 | 15 | nan |
| 0.0 | 0.0188 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhungphammmmm/df195273-1c6c-4882-853a-c079519ba77e
|
nhungphammmmm
| 2025-01-15T12:14:44Z
| 7
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-n-ox-test-v1",
"base_model:adapter:oopsung/llama2-7b-n-ox-test-v1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T12:09:48Z
|
---
library_name: peft
base_model: oopsung/llama2-7b-n-ox-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: df195273-1c6c-4882-853a-c079519ba77e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: oopsung/llama2-7b-n-ox-test-v1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c1fdfb3f4e449665_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c1fdfb3f4e449665_train_data.json
type:
field_instruction: question
field_output: best_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhungphammmmm/df195273-1c6c-4882-853a-c079519ba77e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c1fdfb3f4e449665_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 212c681b-11df-434d-a645-a52b9b33936f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 212c681b-11df-434d-a645-a52b9b33936f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# df195273-1c6c-4882-853a-c079519ba77e
This model is a fine-tuned version of [oopsung/llama2-7b-n-ox-test-v1](https://huggingface.co/oopsung/llama2-7b-n-ox-test-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 76
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2474 | 0.9934 | 75 | 0.9551 |
| 1.6941 | 1.0066 | 76 | 0.9566 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k5_task1_organization
|
MayBashendy
| 2025-01-15T12:14:03Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-15T11:56:18Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k5_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k5_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5840
- Qwk: 0.3459
- Mse: 1.5840
- Rmse: 1.2586
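For context, Qwk here is presumably quadratic weighted kappa and Rmse is the square root of Mse (sqrt(1.5840) ≈ 1.2586). Below is a hedged sketch of how such metrics are conventionally computed with scikit-learn; the label arrays are illustrative, not this model's evaluation data.
```py
# Hedged sketch: conventional computation of QWK, MSE and RMSE for ordinal labels.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([0, 1, 2, 3, 2, 1])  # illustrative gold labels
y_pred = np.array([0, 2, 2, 3, 1, 1])  # illustrative predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # quadratic weighted kappa
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)                                           # Rmse = sqrt(Mse)
print(qwk, mse, rmse)
```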
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0909 | 2 | 6.8706 | 0.0242 | 6.8706 | 2.6212 |
| No log | 0.1818 | 4 | 4.5537 | 0.12 | 4.5537 | 2.1339 |
| No log | 0.2727 | 6 | 4.4912 | -0.0648 | 4.4912 | 2.1192 |
| No log | 0.3636 | 8 | 5.3606 | -0.0803 | 5.3606 | 2.3153 |
| No log | 0.4545 | 10 | 3.8817 | -0.0804 | 3.8817 | 1.9702 |
| No log | 0.5455 | 12 | 2.3578 | 0.1194 | 2.3578 | 1.5355 |
| No log | 0.6364 | 14 | 1.8213 | 0.2342 | 1.8213 | 1.3495 |
| No log | 0.7273 | 16 | 1.8044 | 0.1714 | 1.8044 | 1.3433 |
| No log | 0.8182 | 18 | 2.0548 | 0.1579 | 2.0548 | 1.4334 |
| No log | 0.9091 | 20 | 2.2141 | 0.0 | 2.2141 | 1.4880 |
| No log | 1.0 | 22 | 2.3652 | 0.0292 | 2.3652 | 1.5379 |
| No log | 1.0909 | 24 | 2.2941 | 0.0146 | 2.2941 | 1.5146 |
| No log | 1.1818 | 26 | 2.2398 | 0.0435 | 2.2398 | 1.4966 |
| No log | 1.2727 | 28 | 1.8855 | 0.2759 | 1.8855 | 1.3731 |
| No log | 1.3636 | 30 | 1.6558 | 0.2056 | 1.6558 | 1.2868 |
| No log | 1.4545 | 32 | 1.5973 | 0.2056 | 1.5973 | 1.2639 |
| No log | 1.5455 | 34 | 1.5514 | 0.1869 | 1.5514 | 1.2455 |
| No log | 1.6364 | 36 | 1.5700 | 0.2342 | 1.5700 | 1.2530 |
| No log | 1.7273 | 38 | 1.7613 | 0.3607 | 1.7613 | 1.3272 |
| No log | 1.8182 | 40 | 1.9252 | 0.2923 | 1.9252 | 1.3875 |
| No log | 1.9091 | 42 | 1.8372 | 0.2923 | 1.8372 | 1.3554 |
| No log | 2.0 | 44 | 1.7235 | 0.3548 | 1.7235 | 1.3128 |
| No log | 2.0909 | 46 | 1.5153 | 0.2143 | 1.5153 | 1.2310 |
| No log | 2.1818 | 48 | 1.5024 | 0.0971 | 1.5024 | 1.2257 |
| No log | 2.2727 | 50 | 1.5660 | 0.1698 | 1.5660 | 1.2514 |
| No log | 2.3636 | 52 | 1.4505 | 0.3063 | 1.4505 | 1.2044 |
| No log | 2.4545 | 54 | 1.3445 | 0.3866 | 1.3445 | 1.1595 |
| No log | 2.5455 | 56 | 1.5586 | 0.4394 | 1.5586 | 1.2484 |
| No log | 2.6364 | 58 | 1.5318 | 0.4930 | 1.5318 | 1.2376 |
| No log | 2.7273 | 60 | 1.2694 | 0.496 | 1.2694 | 1.1267 |
| No log | 2.8182 | 62 | 1.3984 | 0.3103 | 1.3984 | 1.1826 |
| No log | 2.9091 | 64 | 1.8260 | 0.2261 | 1.8260 | 1.3513 |
| No log | 3.0 | 66 | 1.7139 | 0.2982 | 1.7139 | 1.3092 |
| No log | 3.0909 | 68 | 1.3967 | 0.3932 | 1.3967 | 1.1818 |
| No log | 3.1818 | 70 | 1.1238 | 0.4667 | 1.1238 | 1.0601 |
| No log | 3.2727 | 72 | 1.0528 | 0.4706 | 1.0528 | 1.0261 |
| No log | 3.3636 | 74 | 1.0872 | 0.4706 | 1.0872 | 1.0427 |
| No log | 3.4545 | 76 | 1.1764 | 0.4724 | 1.1764 | 1.0846 |
| No log | 3.5455 | 78 | 1.3521 | 0.4444 | 1.3521 | 1.1628 |
| No log | 3.6364 | 80 | 1.8338 | 0.1571 | 1.8338 | 1.3542 |
| No log | 3.7273 | 82 | 2.1613 | 0.1022 | 2.1613 | 1.4701 |
| No log | 3.8182 | 84 | 2.2225 | 0.0 | 2.2225 | 1.4908 |
| No log | 3.9091 | 86 | 2.1272 | 0.0763 | 2.1272 | 1.4585 |
| No log | 4.0 | 88 | 1.9312 | 0.2482 | 1.9312 | 1.3897 |
| No log | 4.0909 | 90 | 1.9328 | 0.2482 | 1.9328 | 1.3902 |
| No log | 4.1818 | 92 | 1.9758 | 0.2590 | 1.9758 | 1.4056 |
| No log | 4.2727 | 94 | 1.8551 | 0.3022 | 1.8551 | 1.3620 |
| No log | 4.3636 | 96 | 1.6201 | 0.2836 | 1.6201 | 1.2728 |
| No log | 4.4545 | 98 | 1.5038 | 0.3259 | 1.5038 | 1.2263 |
| No log | 4.5455 | 100 | 1.5352 | 0.4030 | 1.5352 | 1.2390 |
| No log | 4.6364 | 102 | 1.7073 | 0.3212 | 1.7073 | 1.3066 |
| No log | 4.7273 | 104 | 1.6467 | 0.3759 | 1.6467 | 1.2832 |
| No log | 4.8182 | 106 | 1.6483 | 0.3594 | 1.6483 | 1.2838 |
| No log | 4.9091 | 108 | 1.6225 | 0.3407 | 1.6225 | 1.2738 |
| No log | 5.0 | 110 | 1.6973 | 0.3212 | 1.6973 | 1.3028 |
| No log | 5.0909 | 112 | 1.6400 | 0.3212 | 1.6400 | 1.2806 |
| No log | 5.1818 | 114 | 1.5024 | 0.2985 | 1.5024 | 1.2257 |
| No log | 5.2727 | 116 | 1.4355 | 0.3939 | 1.4355 | 1.1981 |
| No log | 5.3636 | 118 | 1.4879 | 0.3158 | 1.4879 | 1.2198 |
| No log | 5.4545 | 120 | 1.4161 | 0.4211 | 1.4161 | 1.1900 |
| No log | 5.5455 | 122 | 1.3863 | 0.4211 | 1.3863 | 1.1774 |
| No log | 5.6364 | 124 | 1.4558 | 0.3704 | 1.4558 | 1.2066 |
| No log | 5.7273 | 126 | 1.4664 | 0.3942 | 1.4664 | 1.2109 |
| No log | 5.8182 | 128 | 1.3583 | 0.4818 | 1.3583 | 1.1655 |
| No log | 5.9091 | 130 | 1.4895 | 0.4234 | 1.4895 | 1.2204 |
| No log | 6.0 | 132 | 1.7681 | 0.3000 | 1.7681 | 1.3297 |
| No log | 6.0909 | 134 | 1.7664 | 0.2695 | 1.7664 | 1.3291 |
| No log | 6.1818 | 136 | 1.5607 | 0.4088 | 1.5607 | 1.2493 |
| No log | 6.2727 | 138 | 1.3862 | 0.4651 | 1.3862 | 1.1774 |
| No log | 6.3636 | 140 | 1.4199 | 0.4186 | 1.4199 | 1.1916 |
| No log | 6.4545 | 142 | 1.5884 | 0.3308 | 1.5884 | 1.2603 |
| No log | 6.5455 | 144 | 1.6536 | 0.3913 | 1.6536 | 1.2859 |
| No log | 6.6364 | 146 | 1.7013 | 0.3597 | 1.7013 | 1.3044 |
| No log | 6.7273 | 148 | 1.7384 | 0.3453 | 1.7384 | 1.3185 |
| No log | 6.8182 | 150 | 1.7644 | 0.3453 | 1.7644 | 1.3283 |
| No log | 6.9091 | 152 | 1.8095 | 0.2878 | 1.8095 | 1.3452 |
| No log | 7.0 | 154 | 1.8180 | 0.2878 | 1.8180 | 1.3483 |
| No log | 7.0909 | 156 | 1.5243 | 0.3704 | 1.5243 | 1.2346 |
| No log | 7.1818 | 158 | 1.4358 | 0.3969 | 1.4358 | 1.1983 |
| No log | 7.2727 | 160 | 1.4661 | 0.3407 | 1.4661 | 1.2108 |
| No log | 7.3636 | 162 | 1.7613 | 0.2571 | 1.7613 | 1.3271 |
| No log | 7.4545 | 164 | 2.0139 | 0.2270 | 2.0139 | 1.4191 |
| No log | 7.5455 | 166 | 2.1845 | 0.1418 | 2.1845 | 1.4780 |
| No log | 7.6364 | 168 | 1.9770 | 0.2174 | 1.9770 | 1.4061 |
| No log | 7.7273 | 170 | 1.6052 | 0.3731 | 1.6052 | 1.2670 |
| No log | 7.8182 | 172 | 1.4131 | 0.4275 | 1.4131 | 1.1888 |
| No log | 7.9091 | 174 | 1.4421 | 0.3881 | 1.4421 | 1.2009 |
| No log | 8.0 | 176 | 1.6931 | 0.3407 | 1.6931 | 1.3012 |
| No log | 8.0909 | 178 | 1.7649 | 0.2899 | 1.7649 | 1.3285 |
| No log | 8.1818 | 180 | 1.5767 | 0.3407 | 1.5767 | 1.2557 |
| No log | 8.2727 | 182 | 1.4026 | 0.4394 | 1.4026 | 1.1843 |
| No log | 8.3636 | 184 | 1.4598 | 0.4296 | 1.4598 | 1.2082 |
| No log | 8.4545 | 186 | 1.4396 | 0.4559 | 1.4396 | 1.1998 |
| No log | 8.5455 | 188 | 1.6514 | 0.3212 | 1.6514 | 1.2851 |
| No log | 8.6364 | 190 | 1.7398 | 0.3212 | 1.7398 | 1.3190 |
| No log | 8.7273 | 192 | 1.5474 | 0.3942 | 1.5474 | 1.2440 |
| No log | 8.8182 | 194 | 1.4336 | 0.4179 | 1.4336 | 1.1973 |
| No log | 8.9091 | 196 | 1.4346 | 0.4091 | 1.4346 | 1.1978 |
| No log | 9.0 | 198 | 1.5731 | 0.3582 | 1.5731 | 1.2542 |
| No log | 9.0909 | 200 | 1.4968 | 0.3582 | 1.4968 | 1.2234 |
| No log | 9.1818 | 202 | 1.3101 | 0.5077 | 1.3101 | 1.1446 |
| No log | 9.2727 | 204 | 1.2901 | 0.5116 | 1.2901 | 1.1358 |
| No log | 9.3636 | 206 | 1.2367 | 0.5000 | 1.2367 | 1.1121 |
| No log | 9.4545 | 208 | 1.1997 | 0.5041 | 1.1997 | 1.0953 |
| No log | 9.5455 | 210 | 1.2436 | 0.528 | 1.2436 | 1.1152 |
| No log | 9.6364 | 212 | 1.4268 | 0.4308 | 1.4268 | 1.1945 |
| No log | 9.7273 | 214 | 1.5813 | 0.3582 | 1.5813 | 1.2575 |
| No log | 9.8182 | 216 | 1.5420 | 0.3969 | 1.5420 | 1.2418 |
| No log | 9.9091 | 218 | 1.4934 | 0.4531 | 1.4934 | 1.2221 |
| No log | 10.0 | 220 | 1.4864 | 0.4341 | 1.4864 | 1.2192 |
| No log | 10.0909 | 222 | 1.5623 | 0.3881 | 1.5623 | 1.2499 |
| No log | 10.1818 | 224 | 1.5122 | 0.4394 | 1.5122 | 1.2297 |
| No log | 10.2727 | 226 | 1.4307 | 0.5039 | 1.4307 | 1.1961 |
| No log | 10.3636 | 228 | 1.4624 | 0.4806 | 1.4624 | 1.2093 |
| No log | 10.4545 | 230 | 1.5415 | 0.3939 | 1.5415 | 1.2416 |
| No log | 10.5455 | 232 | 1.6682 | 0.3556 | 1.6682 | 1.2916 |
| No log | 10.6364 | 234 | 1.6007 | 0.3939 | 1.6007 | 1.2652 |
| No log | 10.7273 | 236 | 1.4173 | 0.4375 | 1.4173 | 1.1905 |
| No log | 10.8182 | 238 | 1.2418 | 0.4628 | 1.2418 | 1.1144 |
| No log | 10.9091 | 240 | 1.1790 | 0.4833 | 1.1790 | 1.0858 |
| No log | 11.0 | 242 | 1.2143 | 0.5156 | 1.2143 | 1.1020 |
| No log | 11.0909 | 244 | 1.3290 | 0.4962 | 1.3290 | 1.1528 |
| No log | 11.1818 | 246 | 1.4388 | 0.4296 | 1.4388 | 1.1995 |
| No log | 11.2727 | 248 | 1.5572 | 0.3212 | 1.5572 | 1.2479 |
| No log | 11.3636 | 250 | 1.4754 | 0.3759 | 1.4754 | 1.2147 |
| No log | 11.4545 | 252 | 1.3411 | 0.5426 | 1.3411 | 1.1581 |
| No log | 11.5455 | 254 | 1.3390 | 0.5625 | 1.3390 | 1.1571 |
| No log | 11.6364 | 256 | 1.4012 | 0.4885 | 1.4012 | 1.1837 |
| No log | 11.7273 | 258 | 1.4428 | 0.4478 | 1.4428 | 1.2012 |
| No log | 11.8182 | 260 | 1.3370 | 0.5152 | 1.3370 | 1.1563 |
| No log | 11.9091 | 262 | 1.2423 | 0.5625 | 1.2423 | 1.1146 |
| No log | 12.0 | 264 | 1.2537 | 0.5649 | 1.2537 | 1.1197 |
| No log | 12.0909 | 266 | 1.3821 | 0.5075 | 1.3821 | 1.1756 |
| No log | 12.1818 | 268 | 1.6418 | 0.3022 | 1.6418 | 1.2813 |
| No log | 12.2727 | 270 | 1.7039 | 0.3022 | 1.7039 | 1.3053 |
| No log | 12.3636 | 272 | 1.5771 | 0.3676 | 1.5771 | 1.2558 |
| No log | 12.4545 | 274 | 1.5311 | 0.4060 | 1.5311 | 1.2374 |
| No log | 12.5455 | 276 | 1.4889 | 0.4580 | 1.4889 | 1.2202 |
| No log | 12.6364 | 278 | 1.5068 | 0.4275 | 1.5068 | 1.2275 |
| No log | 12.7273 | 280 | 1.5146 | 0.4328 | 1.5146 | 1.2307 |
| No log | 12.8182 | 282 | 1.5921 | 0.3623 | 1.5921 | 1.2618 |
| No log | 12.9091 | 284 | 1.5620 | 0.3885 | 1.5620 | 1.2498 |
| No log | 13.0 | 286 | 1.4116 | 0.4593 | 1.4116 | 1.1881 |
| No log | 13.0909 | 288 | 1.3806 | 0.4962 | 1.3806 | 1.1750 |
| No log | 13.1818 | 290 | 1.3804 | 0.4615 | 1.3804 | 1.1749 |
| No log | 13.2727 | 292 | 1.3286 | 0.528 | 1.3286 | 1.1527 |
| No log | 13.3636 | 294 | 1.3524 | 0.5 | 1.3524 | 1.1629 |
| No log | 13.4545 | 296 | 1.4617 | 0.4462 | 1.4617 | 1.2090 |
| No log | 13.5455 | 298 | 1.6156 | 0.3433 | 1.6156 | 1.2711 |
| No log | 13.6364 | 300 | 1.5963 | 0.3433 | 1.5963 | 1.2635 |
| No log | 13.7273 | 302 | 1.6456 | 0.3407 | 1.6456 | 1.2828 |
| No log | 13.8182 | 304 | 1.5998 | 0.3433 | 1.5998 | 1.2648 |
| No log | 13.9091 | 306 | 1.6562 | 0.3407 | 1.6562 | 1.2870 |
| No log | 14.0 | 308 | 1.6529 | 0.3433 | 1.6529 | 1.2857 |
| No log | 14.0909 | 310 | 1.5907 | 0.3759 | 1.5907 | 1.2612 |
| No log | 14.1818 | 312 | 1.5256 | 0.3910 | 1.5256 | 1.2352 |
| No log | 14.2727 | 314 | 1.5206 | 0.4030 | 1.5206 | 1.2331 |
| No log | 14.3636 | 316 | 1.5911 | 0.3796 | 1.5911 | 1.2614 |
| No log | 14.4545 | 318 | 1.6345 | 0.3121 | 1.6345 | 1.2785 |
| No log | 14.5455 | 320 | 1.7117 | 0.3121 | 1.7117 | 1.3083 |
| No log | 14.6364 | 322 | 1.6836 | 0.3121 | 1.6836 | 1.2975 |
| No log | 14.7273 | 324 | 1.5810 | 0.3121 | 1.5810 | 1.2574 |
| No log | 14.8182 | 326 | 1.4354 | 0.4148 | 1.4354 | 1.1981 |
| No log | 14.9091 | 328 | 1.3537 | 0.5079 | 1.3537 | 1.1635 |
| No log | 15.0 | 330 | 1.4004 | 0.4885 | 1.4004 | 1.1834 |
| No log | 15.0909 | 332 | 1.5020 | 0.4060 | 1.5020 | 1.2256 |
| No log | 15.1818 | 334 | 1.5804 | 0.3650 | 1.5804 | 1.2572 |
| No log | 15.2727 | 336 | 1.6491 | 0.3235 | 1.6491 | 1.2842 |
| No log | 15.3636 | 338 | 1.6443 | 0.3504 | 1.6443 | 1.2823 |
| No log | 15.4545 | 340 | 1.5557 | 0.3676 | 1.5557 | 1.2473 |
| No log | 15.5455 | 342 | 1.3950 | 0.4394 | 1.3950 | 1.1811 |
| No log | 15.6364 | 344 | 1.3218 | 0.5038 | 1.3218 | 1.1497 |
| No log | 15.7273 | 346 | 1.4066 | 0.4394 | 1.4066 | 1.1860 |
| No log | 15.8182 | 348 | 1.5501 | 0.3676 | 1.5501 | 1.2450 |
| No log | 15.9091 | 350 | 1.4747 | 0.3582 | 1.4747 | 1.2144 |
| No log | 16.0 | 352 | 1.5083 | 0.3582 | 1.5083 | 1.2281 |
| No log | 16.0909 | 354 | 1.6487 | 0.3165 | 1.6487 | 1.2840 |
| No log | 16.1818 | 356 | 1.6896 | 0.3143 | 1.6896 | 1.2999 |
| No log | 16.2727 | 358 | 1.7706 | 0.2676 | 1.7706 | 1.3306 |
| No log | 16.3636 | 360 | 1.6590 | 0.3453 | 1.6590 | 1.2880 |
| No log | 16.4545 | 362 | 1.5596 | 0.3478 | 1.5596 | 1.2488 |
| No log | 16.5455 | 364 | 1.5449 | 0.3796 | 1.5449 | 1.2429 |
| No log | 16.6364 | 366 | 1.6290 | 0.3676 | 1.6290 | 1.2763 |
| No log | 16.7273 | 368 | 1.7740 | 0.2837 | 1.7740 | 1.3319 |
| No log | 16.8182 | 370 | 1.9521 | 0.2411 | 1.9521 | 1.3972 |
| No log | 16.9091 | 372 | 1.9071 | 0.2571 | 1.9071 | 1.3810 |
| No log | 17.0 | 374 | 1.6937 | 0.3478 | 1.6937 | 1.3014 |
| No log | 17.0909 | 376 | 1.3730 | 0.4496 | 1.3730 | 1.1717 |
| No log | 17.1818 | 378 | 1.1612 | 0.5645 | 1.1612 | 1.0776 |
| No log | 17.2727 | 380 | 1.1002 | 0.5806 | 1.1002 | 1.0489 |
| No log | 17.3636 | 382 | 1.1357 | 0.5397 | 1.1357 | 1.0657 |
| No log | 17.4545 | 384 | 1.3203 | 0.4697 | 1.3203 | 1.1490 |
| No log | 17.5455 | 386 | 1.6017 | 0.3478 | 1.6017 | 1.2656 |
| No log | 17.6364 | 388 | 1.7109 | 0.3188 | 1.7109 | 1.3080 |
| No log | 17.7273 | 390 | 1.6573 | 0.3478 | 1.6573 | 1.2874 |
| No log | 17.8182 | 392 | 1.4678 | 0.3676 | 1.4678 | 1.2115 |
| No log | 17.9091 | 394 | 1.3405 | 0.4741 | 1.3405 | 1.1578 |
| No log | 18.0 | 396 | 1.3196 | 0.4741 | 1.3196 | 1.1487 |
| No log | 18.0909 | 398 | 1.4438 | 0.4088 | 1.4438 | 1.2016 |
| No log | 18.1818 | 400 | 1.5150 | 0.3676 | 1.5150 | 1.2309 |
| No log | 18.2727 | 402 | 1.6033 | 0.3504 | 1.6033 | 1.2662 |
| No log | 18.3636 | 404 | 1.6203 | 0.3504 | 1.6203 | 1.2729 |
| No log | 18.4545 | 406 | 1.6050 | 0.3676 | 1.6050 | 1.2669 |
| No log | 18.5455 | 408 | 1.6324 | 0.3504 | 1.6324 | 1.2777 |
| No log | 18.6364 | 410 | 1.5730 | 0.3676 | 1.5730 | 1.2542 |
| No log | 18.7273 | 412 | 1.4614 | 0.3759 | 1.4614 | 1.2089 |
| No log | 18.8182 | 414 | 1.3911 | 0.4688 | 1.3911 | 1.1794 |
| No log | 18.9091 | 416 | 1.4258 | 0.4122 | 1.4258 | 1.1941 |
| No log | 19.0 | 418 | 1.4709 | 0.4 | 1.4709 | 1.2128 |
| No log | 19.0909 | 420 | 1.6050 | 0.3382 | 1.6050 | 1.2669 |
| No log | 19.1818 | 422 | 1.6628 | 0.3235 | 1.6628 | 1.2895 |
| No log | 19.2727 | 424 | 1.6224 | 0.3459 | 1.6224 | 1.2737 |
| No log | 19.3636 | 426 | 1.5484 | 0.3846 | 1.5484 | 1.2443 |
| No log | 19.4545 | 428 | 1.4464 | 0.4839 | 1.4464 | 1.2027 |
| No log | 19.5455 | 430 | 1.3737 | 0.4959 | 1.3737 | 1.1721 |
| No log | 19.6364 | 432 | 1.3675 | 0.496 | 1.3675 | 1.1694 |
| No log | 19.7273 | 434 | 1.4818 | 0.3676 | 1.4818 | 1.2173 |
| No log | 19.8182 | 436 | 1.7647 | 0.3212 | 1.7647 | 1.3284 |
| No log | 19.9091 | 438 | 1.9277 | 0.2878 | 1.9277 | 1.3884 |
| No log | 20.0 | 440 | 1.8568 | 0.2941 | 1.8568 | 1.3626 |
| No log | 20.0909 | 442 | 1.7175 | 0.2941 | 1.7175 | 1.3105 |
| No log | 20.1818 | 444 | 1.6060 | 0.3759 | 1.6060 | 1.2673 |
| No log | 20.2727 | 446 | 1.5320 | 0.3969 | 1.5320 | 1.2378 |
| No log | 20.3636 | 448 | 1.4896 | 0.3969 | 1.4896 | 1.2205 |
| No log | 20.4545 | 450 | 1.4353 | 0.4154 | 1.4353 | 1.1980 |
| No log | 20.5455 | 452 | 1.3613 | 0.4286 | 1.3613 | 1.1667 |
| No log | 20.6364 | 454 | 1.3880 | 0.4252 | 1.3880 | 1.1781 |
| No log | 20.7273 | 456 | 1.4123 | 0.4496 | 1.4123 | 1.1884 |
| No log | 20.8182 | 458 | 1.4129 | 0.4275 | 1.4129 | 1.1886 |
| No log | 20.9091 | 460 | 1.3080 | 0.48 | 1.3080 | 1.1437 |
| No log | 21.0 | 462 | 1.2757 | 0.5 | 1.2757 | 1.1295 |
| No log | 21.0909 | 464 | 1.2879 | 0.5 | 1.2879 | 1.1349 |
| No log | 21.1818 | 466 | 1.3556 | 0.4762 | 1.3556 | 1.1643 |
| No log | 21.2727 | 468 | 1.5139 | 0.3731 | 1.5139 | 1.2304 |
| No log | 21.3636 | 470 | 1.6971 | 0.2941 | 1.6971 | 1.3027 |
| No log | 21.4545 | 472 | 1.7255 | 0.2941 | 1.7255 | 1.3136 |
| No log | 21.5455 | 474 | 1.6266 | 0.3459 | 1.6266 | 1.2754 |
| No log | 21.6364 | 476 | 1.5223 | 0.3759 | 1.5223 | 1.2338 |
| No log | 21.7273 | 478 | 1.3618 | 0.4496 | 1.3618 | 1.1670 |
| No log | 21.8182 | 480 | 1.2994 | 0.5312 | 1.2994 | 1.1399 |
| No log | 21.9091 | 482 | 1.2503 | 0.528 | 1.2503 | 1.1182 |
| No log | 22.0 | 484 | 1.2992 | 0.5312 | 1.2992 | 1.1398 |
| No log | 22.0909 | 486 | 1.4324 | 0.4296 | 1.4324 | 1.1968 |
| No log | 22.1818 | 488 | 1.7009 | 0.2899 | 1.7009 | 1.3042 |
| No log | 22.2727 | 490 | 1.8672 | 0.2571 | 1.8672 | 1.3664 |
| No log | 22.3636 | 492 | 1.8131 | 0.2899 | 1.8131 | 1.3465 |
| No log | 22.4545 | 494 | 1.6237 | 0.3478 | 1.6237 | 1.2742 |
| No log | 22.5455 | 496 | 1.4523 | 0.4296 | 1.4523 | 1.2051 |
| No log | 22.6364 | 498 | 1.4190 | 0.4697 | 1.4190 | 1.1912 |
| 0.3783 | 22.7273 | 500 | 1.5080 | 0.4 | 1.5080 | 1.2280 |
| 0.3783 | 22.8182 | 502 | 1.6746 | 0.3212 | 1.6746 | 1.2941 |
| 0.3783 | 22.9091 | 504 | 1.7347 | 0.3188 | 1.7347 | 1.3171 |
| 0.3783 | 23.0 | 506 | 1.6808 | 0.3212 | 1.6808 | 1.2965 |
| 0.3783 | 23.0909 | 508 | 1.5253 | 0.3731 | 1.5253 | 1.2350 |
| 0.3783 | 23.1818 | 510 | 1.3566 | 0.4697 | 1.3566 | 1.1647 |
| 0.3783 | 23.2727 | 512 | 1.3154 | 0.4882 | 1.3154 | 1.1469 |
| 0.3783 | 23.3636 | 514 | 1.3595 | 0.4320 | 1.3595 | 1.1660 |
| 0.3783 | 23.4545 | 516 | 1.4158 | 0.4091 | 1.4158 | 1.1899 |
| 0.3783 | 23.5455 | 518 | 1.4526 | 0.4030 | 1.4526 | 1.2052 |
| 0.3783 | 23.6364 | 520 | 1.5010 | 0.4030 | 1.5010 | 1.2252 |
| 0.3783 | 23.7273 | 522 | 1.5754 | 0.4030 | 1.5754 | 1.2552 |
| 0.3783 | 23.8182 | 524 | 1.6426 | 0.3676 | 1.6426 | 1.2816 |
| 0.3783 | 23.9091 | 526 | 1.6226 | 0.3731 | 1.6226 | 1.2738 |
| 0.3783 | 24.0 | 528 | 1.6062 | 0.3459 | 1.6062 | 1.2674 |
| 0.3783 | 24.0909 | 530 | 1.6226 | 0.3459 | 1.6226 | 1.2738 |
| 0.3783 | 24.1818 | 532 | 1.5840 | 0.3459 | 1.5840 | 1.2586 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
nadejdatarabukina/f5056bd4-89ca-4201-bf8a-a0f0f5b88862
|
nadejdatarabukina
| 2025-01-15T12:12:42Z
| 14
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-15T12:10:17Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f5056bd4-89ca-4201-bf8a-a0f0f5b88862
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2f52f5d4dd7c3b59_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2f52f5d4dd7c3b59_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 6
gradient_checkpointing: false
group_by_length: false
hub_model_id: nadejdatarabukina/f5056bd4-89ca-4201-bf8a-a0f0f5b88862
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/2f52f5d4dd7c3b59_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0ad070a1-afeb-4188-a303-62e6e389155d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0ad070a1-afeb-4188-a303-62e6e389155d
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f5056bd4-89ca-4201-bf8a-a0f0f5b88862
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 12
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0014 | 1 | nan |
| 0.0 | 0.0113 | 8 | nan |
| 0.0 | 0.0225 | 16 | nan |
| 0.0 | 0.0338 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
CodyNeo/glass_fine_tuned_deepfake_detection
|
CodyNeo
| 2025-01-15T12:11:02Z
| 42
| 0
| null |
[
"safetensors",
"vit",
"image-classification",
"dataset:glassona/Deepfake-190kf",
"base_model:dima806/deepfake_vs_real_image_detection",
"base_model:finetune:dima806/deepfake_vs_real_image_detection",
"region:us"
] |
image-classification
| 2025-01-15T02:43:54Z
|
---
datasets:
- glassona/Deepfake-190kf
base_model:
- dima806/deepfake_vs_real_image_detection
pipeline_tag: image-classification
---
|
thaffggg/c463b8c1-7d59-4435-9925-4fc39c79827d
|
thaffggg
| 2025-01-15T12:09:16Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T10:33:11Z
|
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c463b8c1-7d59-4435-9925-4fc39c79827d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d38a75ba6d61a259_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d38a75ba6d61a259_train_data.json
type:
field_input: GSHARE
field_instruction: PC
field_output: GA table
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thaffggg/c463b8c1-7d59-4435-9925-4fc39c79827d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d38a75ba6d61a259_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 53d88cf6-c13c-4533-b8c3-4073bbb47744
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 53d88cf6-c13c-4533-b8c3-4073bbb47744
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c463b8c1-7d59-4435-9925-4fc39c79827d
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5365 | 0.0057 | 200 | 0.3325 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
jhuisman/niekvandam-flux-lora
|
jhuisman
| 2025-01-15T12:06:24Z
| 16
| 1
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T11:44:26Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: NKVDM
---
# Niekvandam Flux LoRA
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `NKVDM` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline on the GPU and attach this LoRA adapter
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jhuisman/niekvandam-flux-lora', weight_name='lora.safetensors')

# Include the trigger word NKVDM in your prompt so the LoRA subject is applied
image = pipeline('a portrait photo of NKVDM').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
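As a follow-up to the snippet above, here is a hedged sketch of seeding the generation for reproducible outputs; the step count, guidance scale and seed are illustrative values, not recommendations from this card.
```py
# Hedged sketch: reproducible generation with a fixed seed, reusing the pipeline above.
generator = torch.Generator(device='cuda').manual_seed(42)
image = pipeline(
    'a portrait photo of NKVDM in a sunlit office',  # keep the NKVDM trigger word
    num_inference_steps=28,   # illustrative value
    guidance_scale=3.5,       # illustrative value
    generator=generator,
).images[0]
image.save('nkvdm.png')
```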
|
prxy5607/d7a4464c-fc85-4fdf-a030-4e8bee2963d5
|
prxy5607
| 2025-01-15T12:06:23Z
| 7
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:adapter:Qwen/Qwen1.5-0.5B-Chat",
"license:other",
"region:us"
] | null | 2025-01-15T12:00:37Z
|
---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d7a4464c-fc85-4fdf-a030-4e8bee2963d5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B-Chat
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 311f161d394c0f20_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/311f161d394c0f20_train_data.json
type:
field_input: answer_1
field_instruction: question
field_output: answer_2
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5607/d7a4464c-fc85-4fdf-a030-4e8bee2963d5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/311f161d394c0f20_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d87d5d3f-f10b-4275-a4e4-bfe0eaa3d151
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d87d5d3f-f10b-4275-a4e4-bfe0eaa3d151
warmup_steps: 20
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d7a4464c-fc85-4fdf-a030-4e8bee2963d5
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 173
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1748 | 0.0173 | 1 | 2.7163 |
| 2.1854 | 0.8658 | 50 | 2.3032 |
| 1.327 | 1.7359 | 100 | 2.2705 |
| 1.8221 | 2.6061 | 150 | 2.2442 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
chauhoang/c3c0829d-9f68-466a-be5c-f566d07829a1
|
chauhoang
| 2025-01-15T12:06:21Z
| 9
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"region:us"
] | null | 2025-01-15T10:27:36Z
|
---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c3c0829d-9f68-466a-be5c-f566d07829a1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f7246e3e926d3451_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f7246e3e926d3451_train_data.json
type:
field_input: post
field_instruction: prompt
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: chauhoang/c3c0829d-9f68-466a-be5c-f566d07829a1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/f7246e3e926d3451_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3239f24a-4b85-4fb3-ad6f-d0e423363394
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3239f24a-4b85-4fb3-ad6f-d0e423363394
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c3c0829d-9f68-466a-be5c-f566d07829a1
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.3396 |
| 2.204 | 0.0007 | 10 | 2.0032 |
| 1.8704 | 0.0013 | 20 | 1.7884 |
| 1.7061 | 0.0020 | 30 | 1.7476 |
| 1.6744 | 0.0026 | 40 | 1.7331 |
| 1.8317 | 0.0033 | 50 | 1.7308 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
thalllsssss/6a0cb56a-3b86-4f99-a379-223018ca7038
|
thalllsssss
| 2025-01-15T12:03:31Z
| 6
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T11:43:34Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6a0cb56a-3b86-4f99-a379-223018ca7038
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1efd5fc4ddb558a1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1efd5fc4ddb558a1_train_data.json
type:
field_input: article_title
field_instruction: passage
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thalllsssss/6a0cb56a-3b86-4f99-a379-223018ca7038
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/1efd5fc4ddb558a1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d481345d-f6f1-4ade-b101-28925f9ecb10
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d481345d-f6f1-4ade-b101-28925f9ecb10
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6a0cb56a-3b86-4f99-a379-223018ca7038
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3717 | 0.1631 | 200 | 1.5251 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
fongios/modernbert-base-conll2012_ontonotesv5-english_v4-ner
|
fongios
| 2025-01-15T12:02:35Z
| 17
| 0
|
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"token-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-01-08T08:52:50Z
|
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: modernbert-base-conll2012_ontonotesv5-english_v4-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modernbert-base-conll2012_ontonotesv5-english_v4-ner
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0679
- Precision: 0.8636
- Recall: 0.8704
- F1: 0.8670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (fused, PyTorch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.0698 | 1.0 | 2350 | 0.0795 | 0.8121 | 0.8344 | 0.8231 |
| 0.0356 | 2.0 | 4700 | 0.0707 | 0.8438 | 0.8575 | 0.8506 |
| 0.0184 | 3.0 | 7050 | 0.0795 | 0.8461 | 0.8567 | 0.8513 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.0+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
|
khaoulael43/MR_ocr
|
khaoulael43
| 2025-01-15T12:02:34Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-01-15T12:01:30Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
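Since the authors have not provided a snippet yet, the following is a minimal sketch (assuming the checkpoint is compatible with the `image-to-text` pipeline; the image path is illustrative):
```python
from transformers import pipeline

# Load the vision-encoder-decoder checkpoint from the Hub (repo id taken from this card).
ocr = pipeline("image-to-text", model="khaoulael43/MR_ocr")

# Illustrative input: any local image file (or PIL.Image) containing text to transcribe.
print(ocr("sample_document.png"))
```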
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cvoffer/5595a1f2-87e9-40ba-8f79-710df4df370c
|
cvoffer
| 2025-01-15T12:02:29Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:jhflow/mistral7b-lora-multi-turn-v2",
"base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2",
"region:us"
] | null | 2025-01-15T10:33:32Z
|
---
library_name: peft
base_model: jhflow/mistral7b-lora-multi-turn-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5595a1f2-87e9-40ba-8f79-710df4df370c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: jhflow/mistral7b-lora-multi-turn-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d0a2c031ff985a80_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d0a2c031ff985a80_train_data.json
type:
field_input: cifstr
field_instruction: description
field_output: description_w_bondlengths
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: cvoffer/5595a1f2-87e9-40ba-8f79-710df4df370c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 80GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/d0a2c031ff985a80_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2e5b11ba-eec1-46bc-aa83-a0403970b08e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2e5b11ba-eec1-46bc-aa83-a0403970b08e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5595a1f2-87e9-40ba-8f79-710df4df370c
This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 5 | nan |
| 0.0 | 0.0007 | 10 | nan |
| 0.0 | 0.0011 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
cunghoctienganh/75006cde-38ad-447b-9160-7ccef36c0eaa
|
cunghoctienganh
| 2025-01-15T12:02:27Z
| 6
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T11:43:31Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 75006cde-38ad-447b-9160-7ccef36c0eaa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1efd5fc4ddb558a1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1efd5fc4ddb558a1_train_data.json
type:
field_input: article_title
field_instruction: passage
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/75006cde-38ad-447b-9160-7ccef36c0eaa
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/1efd5fc4ddb558a1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d481345d-f6f1-4ade-b101-28925f9ecb10
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d481345d-f6f1-4ade-b101-28925f9ecb10
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 75006cde-38ad-447b-9160-7ccef36c0eaa
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5237
## Model description
More information needed
## Intended uses & limitations
More information needed
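Pending details from the author, here is a minimal inference sketch (assuming the adapter in this repo is applied on top of the base model with `peft`; the passage and title below are illustrative, formatted as `'{instruction} {input}'` per the config above):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2-7B-Instruct"
adapter_id = "cunghoctienganh/75006cde-38ad-447b-9160-7ccef36c0eaa"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

# The config maps `passage` to the instruction slot and `article_title` to the input slot.
passage = "The Eiffel Tower was completed in 1889 as the entrance arch to the World's Fair."
article_title = "Eiffel Tower"
prompt = f"{passage} {article_title}"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```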
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3509 | 0.1631 | 200 | 1.5237 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Kort/Cm11
|
Kort
| 2025-01-15T12:00:53Z
| 1,218
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-15T11:05:12Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
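In the absence of an official snippet, a minimal sketch (assuming the checkpoint loads with `AutoModelForCausalLM` and that the tokenizer ships a chat template; the prompt is illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kort/Cm11"  # repo id taken from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Illustrative conversational prompt; adjust once the intended use is documented.
messages = [{"role": "user", "content": "Summarize what a LoRA adapter is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[1]:], skip_special_tokens=True))
```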
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
standobrev/ppo-Huggy
|
standobrev
| 2025-01-15T12:00:09Z
| 24
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-01-15T12:00:05Z
|
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
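To fetch the trained policy files (checkpoints, TensorBoard logs, and the `.onnx` policy) locally before resuming or inspecting them, a short sketch using `huggingface_hub` (the target directory is illustrative):
```python
from huggingface_hub import snapshot_download

# Download all files from this repo into a local folder.
local_dir = snapshot_download(repo_id="standobrev/ppo-Huggy", local_dir="./Huggy")
print(f"Model files downloaded to {local_dir}")
```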
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: standobrev/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
lesso09/96bd4178-c6cd-40ea-b73f-1b953d68d293
|
lesso09
| 2025-01-15T11:59:28Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T11:29:18Z
|
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 96bd4178-c6cd-40ea-b73f-1b953d68d293
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B
bf16: true
chat_template: llama3
datasets:
- data_files:
- 82602250e0cc45ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/82602250e0cc45ea_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso09/96bd4178-c6cd-40ea-b73f-1b953d68d293
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/82602250e0cc45ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 11bcdc0d-bc69-4f77-a28c-0c48d7e59612
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 11bcdc0d-bc69-4f77-a28c-0c48d7e59612
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 96bd4178-c6cd-40ea-b73f-1b953d68d293
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6571
## Model description
More information needed
## Intended uses & limitations
More information needed
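Pending details from the author, here is a sketch of merging this LoRA adapter into the base model for standalone use (assuming the adapter was trained against the base model listed above; the output path is illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2-0.5B"
adapter_id = "lesso09/96bd4178-c6cd-40ea-b73f-1b953d68d293"

base = AutoModelForCausalLM.from_pretrained(base_id)
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()  # fold LoRA weights into the base

tokenizer = AutoTokenizer.from_pretrained(base_id)
merged.save_pretrained("./qwen2-0.5b-merged")     # standalone checkpoint, no peft needed at load time
tokenizer.save_pretrained("./qwen2-0.5b-merged")
```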
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8361 | 0.0002 | 1 | 0.7384 |
| 0.6478 | 0.0008 | 5 | 0.7304 |
| 0.6054 | 0.0017 | 10 | 0.6887 |
| 0.6566 | 0.0025 | 15 | 0.6769 |
| 0.6887 | 0.0034 | 20 | 0.6606 |
| 0.6363 | 0.0042 | 25 | 0.6571 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k9_task1_organization
|
MayBashendy
| 2025-01-15T11:59:27Z
| 184
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-12-31T15:22:00Z
|
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k9_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k9_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2582
- Qwk: 0.4444
- Mse: 1.2582
- Rmse: 1.1217
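For reference, Qwk here presumably denotes quadratic weighted kappa; a small sketch of reproducing these three metrics from ordinal predictions (label values are illustrative):
```python
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Illustrative ordinal gold labels and model predictions (e.g., organization scores).
y_true = [0, 1, 2, 3, 2, 1, 0, 3]
y_pred = [0, 2, 2, 3, 1, 1, 1, 2]

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # quadratic weighted kappa
mse = mean_squared_error(y_true, y_pred)
rmse = mse ** 0.5

print(f"Qwk: {qwk:.4f}  Mse: {mse:.4f}  Rmse: {rmse:.4f}")
```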
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0476 | 2 | 6.6472 | 0.0308 | 6.6472 | 2.5782 |
| No log | 0.0952 | 4 | 4.2066 | 0.0684 | 4.2066 | 2.0510 |
| No log | 0.1429 | 6 | 3.0801 | 0.0121 | 3.0801 | 1.7550 |
| No log | 0.1905 | 8 | 2.6526 | 0.1111 | 2.6526 | 1.6287 |
| No log | 0.2381 | 10 | 2.2310 | 0.2090 | 2.2310 | 1.4936 |
| No log | 0.2857 | 12 | 1.8827 | 0.1587 | 1.8827 | 1.3721 |
| No log | 0.3333 | 14 | 1.7119 | 0.1947 | 1.7119 | 1.3084 |
| No log | 0.3810 | 16 | 1.6687 | 0.1468 | 1.6687 | 1.2918 |
| No log | 0.4286 | 18 | 1.7077 | 0.2143 | 1.7077 | 1.3068 |
| No log | 0.4762 | 20 | 1.9203 | 0.3200 | 1.9203 | 1.3858 |
| No log | 0.5238 | 22 | 2.2319 | 0.1119 | 2.2319 | 1.4940 |
| No log | 0.5714 | 24 | 2.2575 | 0.1119 | 2.2575 | 1.5025 |
| No log | 0.6190 | 26 | 2.0160 | 0.2901 | 2.0160 | 1.4198 |
| No log | 0.6667 | 28 | 1.8143 | 0.3175 | 1.8143 | 1.3470 |
| No log | 0.7143 | 30 | 1.8343 | 0.2812 | 1.8343 | 1.3543 |
| No log | 0.7619 | 32 | 1.8075 | 0.2748 | 1.8075 | 1.3444 |
| No log | 0.8095 | 34 | 1.8152 | 0.2941 | 1.8152 | 1.3473 |
| No log | 0.8571 | 36 | 1.8236 | 0.2958 | 1.8236 | 1.3504 |
| No log | 0.9048 | 38 | 1.9339 | 0.3613 | 1.9339 | 1.3906 |
| No log | 0.9524 | 40 | 2.0391 | 0.3797 | 2.0391 | 1.4280 |
| No log | 1.0 | 42 | 1.8790 | 0.3425 | 1.8790 | 1.3708 |
| No log | 1.0476 | 44 | 1.9106 | 0.3660 | 1.9106 | 1.3822 |
| No log | 1.0952 | 46 | 1.7269 | 0.3121 | 1.7269 | 1.3141 |
| No log | 1.1429 | 48 | 1.6129 | 0.2537 | 1.6129 | 1.2700 |
| No log | 1.1905 | 50 | 1.7054 | 0.2479 | 1.7054 | 1.3059 |
| No log | 1.2381 | 52 | 1.8428 | 0.2131 | 1.8428 | 1.3575 |
| No log | 1.2857 | 54 | 1.6470 | 0.3387 | 1.6470 | 1.2833 |
| No log | 1.3333 | 56 | 1.7371 | 0.2290 | 1.7371 | 1.3180 |
| No log | 1.3810 | 58 | 1.9184 | 0.3014 | 1.9184 | 1.3851 |
| No log | 1.4286 | 60 | 1.9393 | 0.3046 | 1.9393 | 1.3926 |
| No log | 1.4762 | 62 | 1.6272 | 0.3759 | 1.6272 | 1.2756 |
| No log | 1.5238 | 64 | 1.4414 | 0.4242 | 1.4414 | 1.2006 |
| No log | 1.5714 | 66 | 1.3736 | 0.4361 | 1.3736 | 1.1720 |
| No log | 1.6190 | 68 | 1.5664 | 0.3433 | 1.5664 | 1.2516 |
| No log | 1.6667 | 70 | 1.6920 | 0.3571 | 1.6920 | 1.3008 |
| No log | 1.7143 | 72 | 1.8273 | 0.3776 | 1.8273 | 1.3518 |
| No log | 1.7619 | 74 | 1.3764 | 0.4593 | 1.3764 | 1.1732 |
| No log | 1.8095 | 76 | 1.2880 | 0.4672 | 1.2880 | 1.1349 |
| No log | 1.8571 | 78 | 1.5029 | 0.4354 | 1.5029 | 1.2259 |
| No log | 1.9048 | 80 | 1.7368 | 0.4286 | 1.7368 | 1.3179 |
| No log | 1.9524 | 82 | 1.5037 | 0.4805 | 1.5037 | 1.2262 |
| No log | 2.0 | 84 | 1.2445 | 0.5429 | 1.2445 | 1.1156 |
| No log | 2.0476 | 86 | 1.1250 | 0.4839 | 1.1250 | 1.0607 |
| No log | 2.0952 | 88 | 1.0446 | 0.5366 | 1.0446 | 1.0221 |
| No log | 2.1429 | 90 | 1.0996 | 0.5455 | 1.0996 | 1.0486 |
| No log | 2.1905 | 92 | 1.7179 | 0.4908 | 1.7179 | 1.3107 |
| No log | 2.2381 | 94 | 1.9387 | 0.3721 | 1.9387 | 1.3924 |
| No log | 2.2857 | 96 | 1.6140 | 0.4720 | 1.6140 | 1.2704 |
| No log | 2.3333 | 98 | 1.2756 | 0.5306 | 1.2756 | 1.1294 |
| No log | 2.3810 | 100 | 1.3475 | 0.5139 | 1.3475 | 1.1608 |
| No log | 2.4286 | 102 | 1.5214 | 0.4755 | 1.5214 | 1.2334 |
| No log | 2.4762 | 104 | 1.5458 | 0.4755 | 1.5458 | 1.2433 |
| No log | 2.5238 | 106 | 1.4883 | 0.4667 | 1.4883 | 1.2200 |
| No log | 2.5714 | 108 | 1.3098 | 0.5503 | 1.3098 | 1.1445 |
| No log | 2.6190 | 110 | 1.1070 | 0.5455 | 1.1070 | 1.0521 |
| No log | 2.6667 | 112 | 1.0150 | 0.5507 | 1.0150 | 1.0075 |
| No log | 2.7143 | 114 | 0.9982 | 0.5649 | 0.9982 | 0.9991 |
| No log | 2.7619 | 116 | 1.0078 | 0.5354 | 1.0078 | 1.0039 |
| No log | 2.8095 | 118 | 1.0581 | 0.4878 | 1.0581 | 1.0286 |
| No log | 2.8571 | 120 | 1.1103 | 0.4878 | 1.1103 | 1.0537 |
| No log | 2.9048 | 122 | 1.0968 | 0.5 | 1.0968 | 1.0473 |
| No log | 2.9524 | 124 | 1.0680 | 0.4677 | 1.0680 | 1.0334 |
| No log | 3.0 | 126 | 1.0890 | 0.5271 | 1.0890 | 1.0436 |
| No log | 3.0476 | 128 | 1.0813 | 0.5758 | 1.0813 | 1.0399 |
| No log | 3.0952 | 130 | 1.0751 | 0.6061 | 1.0751 | 1.0369 |
| No log | 3.1429 | 132 | 1.0860 | 0.5564 | 1.0860 | 1.0421 |
| No log | 3.1905 | 134 | 1.0899 | 0.5821 | 1.0899 | 1.0440 |
| No log | 3.2381 | 136 | 1.0943 | 0.5564 | 1.0943 | 1.0461 |
| No log | 3.2857 | 138 | 1.1219 | 0.5038 | 1.1219 | 1.0592 |
| No log | 3.3333 | 140 | 1.1773 | 0.4252 | 1.1773 | 1.0850 |
| No log | 3.3810 | 142 | 1.2472 | 0.4480 | 1.2472 | 1.1168 |
| No log | 3.4286 | 144 | 1.2752 | 0.4480 | 1.2752 | 1.1293 |
| No log | 3.4762 | 146 | 1.2123 | 0.4923 | 1.2123 | 1.1011 |
| No log | 3.5238 | 148 | 1.1778 | 0.5248 | 1.1778 | 1.0853 |
| No log | 3.5714 | 150 | 1.1573 | 0.5674 | 1.1573 | 1.0758 |
| No log | 3.6190 | 152 | 1.1160 | 0.5874 | 1.1160 | 1.0564 |
| No log | 3.6667 | 154 | 1.1368 | 0.6014 | 1.1368 | 1.0662 |
| No log | 3.7143 | 156 | 1.2000 | 0.6111 | 1.2000 | 1.0954 |
| No log | 3.7619 | 158 | 1.1680 | 0.5874 | 1.1680 | 1.0807 |
| No log | 3.8095 | 160 | 1.1147 | 0.5781 | 1.1147 | 1.0558 |
| No log | 3.8571 | 162 | 1.1758 | 0.528 | 1.1758 | 1.0844 |
| No log | 3.9048 | 164 | 1.2083 | 0.5354 | 1.2083 | 1.0992 |
| No log | 3.9524 | 166 | 1.1355 | 0.5954 | 1.1355 | 1.0656 |
| No log | 4.0 | 168 | 1.0779 | 0.5496 | 1.0779 | 1.0382 |
| No log | 4.0476 | 170 | 1.4606 | 0.5229 | 1.4606 | 1.2085 |
| No log | 4.0952 | 172 | 1.9892 | 0.3934 | 1.9892 | 1.4104 |
| No log | 4.1429 | 174 | 1.8532 | 0.4372 | 1.8532 | 1.3613 |
| No log | 4.1905 | 176 | 1.5890 | 0.5486 | 1.5890 | 1.2606 |
| No log | 4.2381 | 178 | 1.3341 | 0.5422 | 1.3341 | 1.1550 |
| No log | 4.2857 | 180 | 1.2200 | 0.6135 | 1.2200 | 1.1045 |
| No log | 4.3333 | 182 | 1.1306 | 0.6115 | 1.1306 | 1.0633 |
| No log | 4.3810 | 184 | 1.0350 | 0.5906 | 1.0350 | 1.0174 |
| No log | 4.4286 | 186 | 1.1259 | 0.5752 | 1.1259 | 1.0611 |
| No log | 4.4762 | 188 | 1.2317 | 0.56 | 1.2317 | 1.1098 |
| No log | 4.5238 | 190 | 1.2799 | 0.52 | 1.2799 | 1.1313 |
| No log | 4.5714 | 192 | 1.2145 | 0.5315 | 1.2145 | 1.1021 |
| No log | 4.6190 | 194 | 1.1996 | 0.5315 | 1.1996 | 1.0953 |
| No log | 4.6667 | 196 | 1.1966 | 0.5248 | 1.1966 | 1.0939 |
| No log | 4.7143 | 198 | 1.3458 | 0.4636 | 1.3458 | 1.1601 |
| No log | 4.7619 | 200 | 1.6028 | 0.4684 | 1.6028 | 1.2660 |
| No log | 4.8095 | 202 | 1.2595 | 0.5298 | 1.2595 | 1.1223 |
| No log | 4.8571 | 204 | 0.9573 | 0.6014 | 0.9573 | 0.9784 |
| No log | 4.9048 | 206 | 0.9309 | 0.5797 | 0.9309 | 0.9648 |
| No log | 4.9524 | 208 | 0.9859 | 0.5630 | 0.9859 | 0.9929 |
| No log | 5.0 | 210 | 1.0402 | 0.5612 | 1.0402 | 1.0199 |
| No log | 5.0476 | 212 | 1.0670 | 0.5694 | 1.0670 | 1.0330 |
| No log | 5.0952 | 214 | 1.0413 | 0.5906 | 1.0413 | 1.0204 |
| No log | 5.1429 | 216 | 0.9632 | 0.6216 | 0.9632 | 0.9814 |
| No log | 5.1905 | 218 | 0.9811 | 0.6667 | 0.9811 | 0.9905 |
| No log | 5.2381 | 220 | 1.2863 | 0.5618 | 1.2863 | 1.1342 |
| No log | 5.2857 | 222 | 1.6420 | 0.53 | 1.6420 | 1.2814 |
| No log | 5.3333 | 224 | 1.5117 | 0.5503 | 1.5117 | 1.2295 |
| No log | 5.3810 | 226 | 1.3211 | 0.6 | 1.3211 | 1.1494 |
| No log | 5.4286 | 228 | 1.3365 | 0.5875 | 1.3365 | 1.1561 |
| No log | 5.4762 | 230 | 1.4392 | 0.5290 | 1.4392 | 1.1997 |
| No log | 5.5238 | 232 | 1.7117 | 0.4634 | 1.7117 | 1.3083 |
| No log | 5.5714 | 234 | 1.9319 | 0.3799 | 1.9319 | 1.3899 |
| No log | 5.6190 | 236 | 1.5185 | 0.4868 | 1.5185 | 1.2323 |
| No log | 5.6667 | 238 | 1.0712 | 0.5857 | 1.0712 | 1.0350 |
| No log | 5.7143 | 240 | 0.9442 | 0.6047 | 0.9442 | 0.9717 |
| No log | 5.7619 | 242 | 0.9660 | 0.5954 | 0.9660 | 0.9828 |
| No log | 5.8095 | 244 | 1.0431 | 0.6299 | 1.0431 | 1.0213 |
| No log | 5.8571 | 246 | 1.0299 | 0.625 | 1.0299 | 1.0148 |
| No log | 5.9048 | 248 | 0.9538 | 0.5984 | 0.9538 | 0.9766 |
| No log | 5.9524 | 250 | 0.9605 | 0.5857 | 0.9605 | 0.9801 |
| No log | 6.0 | 252 | 1.1218 | 0.5867 | 1.1218 | 1.0592 |
| No log | 6.0476 | 254 | 1.3765 | 0.5629 | 1.3765 | 1.1733 |
| No log | 6.0952 | 256 | 1.5652 | 0.5810 | 1.5652 | 1.2511 |
| No log | 6.1429 | 258 | 1.4702 | 0.5862 | 1.4702 | 1.2125 |
| No log | 6.1905 | 260 | 1.1371 | 0.6184 | 1.1371 | 1.0664 |
| No log | 6.2381 | 262 | 0.9405 | 0.5778 | 0.9405 | 0.9698 |
| No log | 6.2857 | 264 | 0.9160 | 0.6260 | 0.9160 | 0.9571 |
| No log | 6.3333 | 266 | 0.9142 | 0.6165 | 0.9142 | 0.9561 |
| No log | 6.3810 | 268 | 0.9365 | 0.6212 | 0.9365 | 0.9677 |
| No log | 6.4286 | 270 | 0.9625 | 0.6260 | 0.9625 | 0.9811 |
| No log | 6.4762 | 272 | 0.9824 | 0.6457 | 0.9824 | 0.9911 |
| No log | 6.5238 | 274 | 0.9827 | 0.5354 | 0.9827 | 0.9913 |
| No log | 6.5714 | 276 | 1.0377 | 0.5303 | 1.0377 | 1.0187 |
| No log | 6.6190 | 278 | 1.2391 | 0.5828 | 1.2391 | 1.1131 |
| No log | 6.6667 | 280 | 1.8004 | 0.5055 | 1.8004 | 1.3418 |
| No log | 6.7143 | 282 | 2.2646 | 0.4039 | 2.2646 | 1.5049 |
| No log | 6.7619 | 284 | 1.9862 | 0.4330 | 1.9862 | 1.4093 |
| No log | 6.8095 | 286 | 1.5252 | 0.5318 | 1.5252 | 1.2350 |
| No log | 6.8571 | 288 | 1.2645 | 0.4935 | 1.2645 | 1.1245 |
| No log | 6.9048 | 290 | 1.1673 | 0.5714 | 1.1673 | 1.0804 |
| No log | 6.9524 | 292 | 1.0807 | 0.5874 | 1.0807 | 1.0396 |
| No log | 7.0 | 294 | 0.9874 | 0.5692 | 0.9874 | 0.9937 |
| No log | 7.0476 | 296 | 0.9655 | 0.5692 | 0.9655 | 0.9826 |
| No log | 7.0952 | 298 | 1.0016 | 0.5920 | 1.0016 | 1.0008 |
| No log | 7.1429 | 300 | 1.0313 | 0.5455 | 1.0313 | 1.0155 |
| No log | 7.1905 | 302 | 1.0149 | 0.5528 | 1.0149 | 1.0074 |
| No log | 7.2381 | 304 | 1.0110 | 0.5397 | 1.0110 | 1.0055 |
| No log | 7.2857 | 306 | 1.0593 | 0.4848 | 1.0593 | 1.0292 |
| No log | 7.3333 | 308 | 1.1083 | 0.5294 | 1.1083 | 1.0527 |
| No log | 7.3810 | 310 | 1.0839 | 0.5441 | 1.0839 | 1.0411 |
| No log | 7.4286 | 312 | 1.1438 | 0.5578 | 1.1438 | 1.0695 |
| No log | 7.4762 | 314 | 1.2661 | 0.5844 | 1.2661 | 1.1252 |
| No log | 7.5238 | 316 | 1.3836 | 0.4968 | 1.3836 | 1.1763 |
| No log | 7.5714 | 318 | 1.3845 | 0.5256 | 1.3845 | 1.1766 |
| No log | 7.6190 | 320 | 1.3110 | 0.5584 | 1.3110 | 1.1450 |
| No log | 7.6667 | 322 | 1.2582 | 0.5563 | 1.2582 | 1.1217 |
| No log | 7.7143 | 324 | 1.2498 | 0.5430 | 1.2498 | 1.1179 |
| No log | 7.7619 | 326 | 1.3593 | 0.5605 | 1.3593 | 1.1659 |
| No log | 7.8095 | 328 | 1.4905 | 0.5509 | 1.4905 | 1.2208 |
| No log | 7.8571 | 330 | 1.5618 | 0.4941 | 1.5618 | 1.2497 |
| No log | 7.9048 | 332 | 1.7452 | 0.4667 | 1.7452 | 1.3211 |
| No log | 7.9524 | 334 | 2.0877 | 0.3731 | 2.0877 | 1.4449 |
| No log | 8.0 | 336 | 2.4492 | 0.3417 | 2.4492 | 1.5650 |
| No log | 8.0476 | 338 | 2.1721 | 0.4020 | 2.1721 | 1.4738 |
| No log | 8.0952 | 340 | 1.7283 | 0.4615 | 1.7283 | 1.3146 |
| No log | 8.1429 | 342 | 1.4234 | 0.5359 | 1.4234 | 1.1931 |
| No log | 8.1905 | 344 | 1.3259 | 0.5526 | 1.3259 | 1.1515 |
| No log | 8.2381 | 346 | 1.2695 | 0.5526 | 1.2695 | 1.1267 |
| No log | 8.2857 | 348 | 1.2073 | 0.5828 | 1.2073 | 1.0988 |
| No log | 8.3333 | 350 | 1.1797 | 0.5897 | 1.1797 | 1.0861 |
| No log | 8.3810 | 352 | 1.2101 | 0.5987 | 1.2101 | 1.1000 |
| No log | 8.4286 | 354 | 1.2036 | 0.5987 | 1.2036 | 1.0971 |
| No log | 8.4762 | 356 | 1.1513 | 0.5676 | 1.1513 | 1.0730 |
| No log | 8.5238 | 358 | 1.1727 | 0.5676 | 1.1727 | 1.0829 |
| No log | 8.5714 | 360 | 1.2555 | 0.6104 | 1.2555 | 1.1205 |
| No log | 8.6190 | 362 | 1.2919 | 0.6065 | 1.2919 | 1.1366 |
| No log | 8.6667 | 364 | 1.2634 | 0.6065 | 1.2634 | 1.1240 |
| No log | 8.7143 | 366 | 1.2480 | 0.5772 | 1.2480 | 1.1171 |
| No log | 8.7619 | 368 | 1.1525 | 0.5556 | 1.1525 | 1.0735 |
| No log | 8.8095 | 370 | 1.0760 | 0.5612 | 1.0760 | 1.0373 |
| No log | 8.8571 | 372 | 1.0583 | 0.5414 | 1.0583 | 1.0287 |
| No log | 8.9048 | 374 | 1.1234 | 0.5255 | 1.1234 | 1.0599 |
| No log | 8.9524 | 376 | 1.2110 | 0.5037 | 1.2110 | 1.1004 |
| No log | 9.0 | 378 | 1.1975 | 0.5113 | 1.1975 | 1.0943 |
| No log | 9.0476 | 380 | 1.1380 | 0.4844 | 1.1380 | 1.0668 |
| No log | 9.0952 | 382 | 1.0735 | 0.4640 | 1.0735 | 1.0361 |
| No log | 9.1429 | 384 | 1.0548 | 0.5231 | 1.0548 | 1.0270 |
| No log | 9.1905 | 386 | 1.1861 | 0.5733 | 1.1861 | 1.0891 |
| No log | 9.2381 | 388 | 1.3574 | 0.5732 | 1.3574 | 1.1651 |
| No log | 9.2857 | 390 | 1.4468 | 0.5537 | 1.4468 | 1.2028 |
| No log | 9.3333 | 392 | 1.4303 | 0.5363 | 1.4303 | 1.1960 |
| No log | 9.3810 | 394 | 1.3496 | 0.5542 | 1.3496 | 1.1617 |
| No log | 9.4286 | 396 | 1.3178 | 0.5660 | 1.3178 | 1.1480 |
| No log | 9.4762 | 398 | 1.2733 | 0.5621 | 1.2733 | 1.1284 |
| No log | 9.5238 | 400 | 1.1571 | 0.5906 | 1.1571 | 1.0757 |
| No log | 9.5714 | 402 | 1.0931 | 0.6225 | 1.0931 | 1.0455 |
| No log | 9.6190 | 404 | 1.0566 | 0.6040 | 1.0566 | 1.0279 |
| No log | 9.6667 | 406 | 1.0302 | 0.5547 | 1.0302 | 1.0150 |
| No log | 9.7143 | 408 | 1.0111 | 0.5926 | 1.0111 | 1.0055 |
| No log | 9.7619 | 410 | 1.0276 | 0.5564 | 1.0276 | 1.0137 |
| No log | 9.8095 | 412 | 1.0673 | 0.5038 | 1.0673 | 1.0331 |
| No log | 9.8571 | 414 | 1.1402 | 0.5333 | 1.1402 | 1.0678 |
| No log | 9.9048 | 416 | 1.2083 | 0.5401 | 1.2083 | 1.0992 |
| No log | 9.9524 | 418 | 1.1879 | 0.5522 | 1.1879 | 1.0899 |
| No log | 10.0 | 420 | 1.1910 | 0.5333 | 1.1910 | 1.0913 |
| No log | 10.0476 | 422 | 1.2275 | 0.5263 | 1.2275 | 1.1079 |
| No log | 10.0952 | 424 | 1.3173 | 0.5 | 1.3173 | 1.1477 |
| No log | 10.1429 | 426 | 1.3448 | 0.5255 | 1.3448 | 1.1596 |
| No log | 10.1905 | 428 | 1.3363 | 0.5255 | 1.3363 | 1.1560 |
| No log | 10.2381 | 430 | 1.2974 | 0.5 | 1.2974 | 1.1390 |
| No log | 10.2857 | 432 | 1.2605 | 0.4823 | 1.2605 | 1.1227 |
| No log | 10.3333 | 434 | 1.2343 | 0.5315 | 1.2343 | 1.1110 |
| No log | 10.3810 | 436 | 1.2371 | 0.5211 | 1.2371 | 1.1122 |
| No log | 10.4286 | 438 | 1.2290 | 0.4965 | 1.2290 | 1.1086 |
| No log | 10.4762 | 440 | 1.2682 | 0.4604 | 1.2682 | 1.1261 |
| No log | 10.5238 | 442 | 1.2583 | 0.4493 | 1.2583 | 1.1217 |
| No log | 10.5714 | 444 | 1.3011 | 0.4648 | 1.3011 | 1.1407 |
| No log | 10.6190 | 446 | 1.3544 | 0.5342 | 1.3544 | 1.1638 |
| No log | 10.6667 | 448 | 1.3958 | 0.5170 | 1.3958 | 1.1814 |
| No log | 10.7143 | 450 | 1.5708 | 0.5359 | 1.5708 | 1.2533 |
| No log | 10.7619 | 452 | 1.6767 | 0.4815 | 1.6767 | 1.2949 |
| No log | 10.8095 | 454 | 1.6674 | 0.5098 | 1.6674 | 1.2913 |
| No log | 10.8571 | 456 | 1.4777 | 0.5235 | 1.4777 | 1.2156 |
| No log | 10.9048 | 458 | 1.3322 | 0.4755 | 1.3322 | 1.1542 |
| No log | 10.9524 | 460 | 1.2564 | 0.4789 | 1.2564 | 1.1209 |
| No log | 11.0 | 462 | 1.2475 | 0.4930 | 1.2475 | 1.1169 |
| No log | 11.0476 | 464 | 1.2884 | 0.5175 | 1.2884 | 1.1351 |
| No log | 11.0952 | 466 | 1.3710 | 0.5 | 1.3710 | 1.1709 |
| No log | 11.1429 | 468 | 1.4118 | 0.5034 | 1.4118 | 1.1882 |
| No log | 11.1905 | 470 | 1.4078 | 0.4892 | 1.4078 | 1.1865 |
| No log | 11.2381 | 472 | 1.3778 | 0.4818 | 1.3778 | 1.1738 |
| No log | 11.2857 | 474 | 1.3127 | 0.4706 | 1.3127 | 1.1457 |
| No log | 11.3333 | 476 | 1.2595 | 0.4733 | 1.2595 | 1.1223 |
| No log | 11.3810 | 478 | 1.2553 | 0.4627 | 1.2553 | 1.1204 |
| No log | 11.4286 | 480 | 1.3100 | 0.4748 | 1.3100 | 1.1445 |
| No log | 11.4762 | 482 | 1.3904 | 0.4966 | 1.3904 | 1.1792 |
| No log | 11.5238 | 484 | 1.4769 | 0.4966 | 1.4769 | 1.2153 |
| No log | 11.5714 | 486 | 1.4272 | 0.4966 | 1.4272 | 1.1947 |
| No log | 11.6190 | 488 | 1.3014 | 0.4828 | 1.3014 | 1.1408 |
| No log | 11.6667 | 490 | 1.2514 | 0.4681 | 1.2514 | 1.1186 |
| No log | 11.7143 | 492 | 1.2352 | 0.4828 | 1.2352 | 1.1114 |
| No log | 11.7619 | 494 | 1.1912 | 0.4895 | 1.1912 | 1.0914 |
| No log | 11.8095 | 496 | 1.1886 | 0.5106 | 1.1886 | 1.0902 |
| No log | 11.8571 | 498 | 1.2480 | 0.4722 | 1.2480 | 1.1171 |
| 0.3993 | 11.9048 | 500 | 1.2697 | 0.4898 | 1.2697 | 1.1268 |
| 0.3993 | 11.9524 | 502 | 1.2444 | 0.5 | 1.2444 | 1.1155 |
| 0.3993 | 12.0 | 504 | 1.1984 | 0.5224 | 1.1984 | 1.0947 |
| 0.3993 | 12.0476 | 506 | 1.1418 | 0.5152 | 1.1418 | 1.0685 |
| 0.3993 | 12.0952 | 508 | 1.0992 | 0.5231 | 1.0992 | 1.0484 |
| 0.3993 | 12.1429 | 510 | 1.0826 | 0.5397 | 1.0826 | 1.0405 |
| 0.3993 | 12.1905 | 512 | 1.1141 | 0.5469 | 1.1141 | 1.0555 |
| 0.3993 | 12.2381 | 514 | 1.1409 | 0.5512 | 1.1409 | 1.0681 |
| 0.3993 | 12.2857 | 516 | 1.1616 | 0.5323 | 1.1616 | 1.0778 |
| 0.3993 | 12.3333 | 518 | 1.1678 | 0.4553 | 1.1678 | 1.0806 |
| 0.3993 | 12.3810 | 520 | 1.2257 | 0.4516 | 1.2257 | 1.1071 |
| 0.3993 | 12.4286 | 522 | 1.2640 | 0.4480 | 1.2640 | 1.1243 |
| 0.3993 | 12.4762 | 524 | 1.2590 | 0.4355 | 1.2590 | 1.1220 |
| 0.3993 | 12.5238 | 526 | 1.2582 | 0.4444 | 1.2582 | 1.1217 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
lesso06/230d9c9d-bd23-40e4-8b5a-5df40ecf6645
|
lesso06
| 2025-01-15T11:59:06Z
| 6
| 0
|
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"base_model:adapter:EleutherAI/pythia-1b",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T11:37:49Z
|
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 230d9c9d-bd23-40e4-8b5a-5df40ecf6645
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-1b
bf16: true
chat_template: llama3
datasets:
- data_files:
- faf590c77bf66bb4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/faf590c77bf66bb4_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso06/230d9c9d-bd23-40e4-8b5a-5df40ecf6645
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/faf590c77bf66bb4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22f0c3d7-b3a9-4b99-8fa8-01838f3708c6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22f0c3d7-b3a9-4b99-8fa8-01838f3708c6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 230d9c9d-bd23-40e4-8b5a-5df40ecf6645
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.6906 | 0.0001 | 1 | 2.7888 |
| 10.9114 | 0.0005 | 5 | 2.5138 |
| 6.8535 | 0.0010 | 10 | 1.5266 |
| 3.3467 | 0.0016 | 15 | 0.6703 |
| 1.464 | 0.0021 | 20 | 0.4148 |
| 1.5469 | 0.0026 | 25 | 0.3721 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
great0001/e855a9f0-9dac-4a2a-a918-ce25bde13dde
|
great0001
| 2025-01-15T11:58:35Z
| 16
| 0
|
peft
|
[
"peft",
"safetensors",
"gpt_neo",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:adapter:EleutherAI/gpt-neo-125m",
"license:mit",
"region:us"
] | null | 2025-01-15T11:38:13Z
|
---
library_name: peft
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e855a9f0-9dac-4a2a-a918-ce25bde13dde
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/gpt-neo-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d3a3e0021951f4ab_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d3a3e0021951f4ab_train_data.json
type:
field_input: ''
field_instruction: desciption
field_output: caption
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/e855a9f0-9dac-4a2a-a918-ce25bde13dde
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/d3a3e0021951f4ab_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e4703f52-35c0-4efe-9d06-eb10e804627c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e4703f52-35c0-4efe-9d06-eb10e804627c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e855a9f0-9dac-4a2a-a918-ce25bde13dde
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 12.8567 | 0.0000 | 1 | 3.4390 |
| 14.4906 | 0.0000 | 3 | 3.4397 |
| 13.3743 | 0.0001 | 6 | 3.4318 |
| 14.1981 | 0.0001 | 9 | 3.4004 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nbninh/37127357-5fa3-4fbd-a97b-566ef4554c76
|
nbninh
| 2025-01-15T11:57:31Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM-1.7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T11:38:20Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 37127357-5fa3-4fbd-a97b-566ef4554c76
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0c0385643e82895c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0c0385643e82895c_train_data.json
type:
field_input: choices
field_instruction: task
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/37127357-5fa3-4fbd-a97b-566ef4554c76
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/0c0385643e82895c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7c57bd3b-4993-496e-8eee-7f49fda1578b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7c57bd3b-4993-496e-8eee-7f49fda1578b
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 37127357-5fa3-4fbd-a97b-566ef4554c76
This model is a fine-tuned version of [unsloth/SmolLM-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM-1.7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5868 | 0.0169 | 200 | 2.5827 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
thakkkkkk/8871f48b-ff9a-4822-93e5-7fe975a23972
|
thakkkkkk
| 2025-01-15T11:56:40Z
| 6
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM-1.7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T11:38:29Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8871f48b-ff9a-4822-93e5-7fe975a23972
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0c0385643e82895c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0c0385643e82895c_train_data.json
type:
field_input: choices
field_instruction: task
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thakkkkkk/8871f48b-ff9a-4822-93e5-7fe975a23972
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/0c0385643e82895c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7c57bd3b-4993-496e-8eee-7f49fda1578b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7c57bd3b-4993-496e-8eee-7f49fda1578b
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8871f48b-ff9a-4822-93e5-7fe975a23972
This model is a fine-tuned version of [unsloth/SmolLM-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM-1.7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5763 | 0.0338 | 200 | 2.5718 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k1_task1_organization | MayBashendy | 2025-01-15T11:56:20Z | 6 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-01-15T11:39:58Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k1_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k1_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set (a short computation sketch follows this list):
- Loss: 1.0580
- Qwk: 0.6131
- Mse: 1.0580
- Rmse: 1.0286
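A minimal sketch of how metrics of this shape could be computed with scikit-learn, assuming Qwk denotes quadratic weighted Cohen's kappa over ordinal scores (the labels below are made-up placeholders, not data from this run):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Made-up ordinal gold labels and predictions; placeholders only
y_true = np.array([3, 2, 4, 1, 3, 2])
y_pred = np.array([3, 3, 4, 2, 2, 2])

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # quadratic weighted kappa
mse = mean_squared_error(y_true, y_pred)
rmse = float(np.sqrt(mse))
print(f"Qwk={qwk:.4f}  Mse={mse:.4f}  Rmse={rmse:.4f}")
```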
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.25 | 2 | 6.0434 | 0.0261 | 6.0434 | 2.4583 |
| No log | 0.5 | 4 | 3.9893 | 0.1429 | 3.9893 | 1.9973 |
| No log | 0.75 | 6 | 2.6914 | 0.0500 | 2.6914 | 1.6406 |
| No log | 1.0 | 8 | 1.9588 | 0.0984 | 1.9588 | 1.3996 |
| No log | 1.25 | 10 | 1.7472 | 0.1111 | 1.7472 | 1.3218 |
| No log | 1.5 | 12 | 1.6663 | 0.0917 | 1.6663 | 1.2908 |
| No log | 1.75 | 14 | 1.6471 | 0.1416 | 1.6471 | 1.2834 |
| No log | 2.0 | 16 | 1.5418 | 0.1333 | 1.5418 | 1.2417 |
| No log | 2.25 | 18 | 1.4940 | 0.1714 | 1.4940 | 1.2223 |
| No log | 2.5 | 20 | 1.4679 | 0.1346 | 1.4679 | 1.2116 |
| No log | 2.75 | 22 | 1.3960 | 0.1905 | 1.3960 | 1.1815 |
| No log | 3.0 | 24 | 1.3620 | 0.2963 | 1.3620 | 1.1671 |
| No log | 3.25 | 26 | 1.3401 | 0.4655 | 1.3401 | 1.1576 |
| No log | 3.5 | 28 | 1.1229 | 0.5854 | 1.1229 | 1.0597 |
| No log | 3.75 | 30 | 1.1577 | 0.56 | 1.1577 | 1.0760 |
| No log | 4.0 | 32 | 1.3164 | 0.4762 | 1.3164 | 1.1473 |
| No log | 4.25 | 34 | 1.0964 | 0.5781 | 1.0964 | 1.0471 |
| No log | 4.5 | 36 | 0.9775 | 0.6316 | 0.9775 | 0.9887 |
| No log | 4.75 | 38 | 1.1159 | 0.5846 | 1.1159 | 1.0564 |
| No log | 5.0 | 40 | 1.1260 | 0.5581 | 1.1260 | 1.0611 |
| No log | 5.25 | 42 | 1.1567 | 0.5271 | 1.1567 | 1.0755 |
| No log | 5.5 | 44 | 1.3267 | 0.4580 | 1.3267 | 1.1518 |
| No log | 5.75 | 46 | 1.1694 | 0.5512 | 1.1694 | 1.0814 |
| No log | 6.0 | 48 | 0.8457 | 0.6571 | 0.8457 | 0.9196 |
| No log | 6.25 | 50 | 0.8401 | 0.6475 | 0.8401 | 0.9166 |
| No log | 6.5 | 52 | 0.8524 | 0.6569 | 0.8524 | 0.9233 |
| No log | 6.75 | 54 | 0.9563 | 0.6619 | 0.9563 | 0.9779 |
| No log | 7.0 | 56 | 1.6587 | 0.3741 | 1.6587 | 1.2879 |
| No log | 7.25 | 58 | 1.7114 | 0.3546 | 1.7114 | 1.3082 |
| No log | 7.5 | 60 | 1.1264 | 0.5379 | 1.1264 | 1.0613 |
| No log | 7.75 | 62 | 0.7990 | 0.7273 | 0.7990 | 0.8938 |
| No log | 8.0 | 64 | 0.7380 | 0.6993 | 0.7380 | 0.8591 |
| No log | 8.25 | 66 | 0.8104 | 0.7234 | 0.8104 | 0.9002 |
| No log | 8.5 | 68 | 1.1166 | 0.5793 | 1.1166 | 1.0567 |
| No log | 8.75 | 70 | 1.2192 | 0.5286 | 1.2192 | 1.1042 |
| No log | 9.0 | 72 | 1.0648 | 0.5846 | 1.0648 | 1.0319 |
| No log | 9.25 | 74 | 1.1797 | 0.5271 | 1.1797 | 1.0862 |
| No log | 9.5 | 76 | 1.1183 | 0.5271 | 1.1183 | 1.0575 |
| No log | 9.75 | 78 | 1.0274 | 0.6462 | 1.0274 | 1.0136 |
| No log | 10.0 | 80 | 0.9583 | 0.6562 | 0.9583 | 0.9789 |
| No log | 10.25 | 82 | 0.8749 | 0.6512 | 0.8749 | 0.9354 |
| No log | 10.5 | 84 | 0.8730 | 0.6716 | 0.8730 | 0.9344 |
| No log | 10.75 | 86 | 1.0355 | 0.5694 | 1.0355 | 1.0176 |
| No log | 11.0 | 88 | 1.3525 | 0.5698 | 1.3525 | 1.1630 |
| No log | 11.25 | 90 | 1.2799 | 0.5195 | 1.2799 | 1.1313 |
| No log | 11.5 | 92 | 1.2196 | 0.5034 | 1.2196 | 1.1043 |
| No log | 11.75 | 94 | 1.0260 | 0.6119 | 1.0260 | 1.0129 |
| No log | 12.0 | 96 | 1.0012 | 0.5556 | 1.0012 | 1.0006 |
| No log | 12.25 | 98 | 1.0046 | 0.6308 | 1.0046 | 1.0023 |
| No log | 12.5 | 100 | 1.1427 | 0.4964 | 1.1427 | 1.0689 |
| No log | 12.75 | 102 | 1.2576 | 0.4755 | 1.2576 | 1.1214 |
| No log | 13.0 | 104 | 1.2733 | 0.4966 | 1.2733 | 1.1284 |
| No log | 13.25 | 106 | 1.0226 | 0.6618 | 1.0226 | 1.0112 |
| No log | 13.5 | 108 | 0.9201 | 0.7194 | 0.9201 | 0.9592 |
| No log | 13.75 | 110 | 0.9727 | 0.6259 | 0.9727 | 0.9862 |
| No log | 14.0 | 112 | 0.9704 | 0.6216 | 0.9704 | 0.9851 |
| No log | 14.25 | 114 | 0.8100 | 0.7034 | 0.8100 | 0.9000 |
| No log | 14.5 | 116 | 0.8216 | 0.6620 | 0.8216 | 0.9064 |
| No log | 14.75 | 118 | 1.0284 | 0.5926 | 1.0284 | 1.0141 |
| No log | 15.0 | 120 | 1.1467 | 0.5850 | 1.1467 | 1.0708 |
| No log | 15.25 | 122 | 1.0504 | 0.6111 | 1.0504 | 1.0249 |
| No log | 15.5 | 124 | 0.8948 | 0.6618 | 0.8948 | 0.9460 |
| No log | 15.75 | 126 | 0.9033 | 0.6508 | 0.9033 | 0.9504 |
| No log | 16.0 | 128 | 0.9267 | 0.6032 | 0.9267 | 0.9627 |
| No log | 16.25 | 130 | 1.0021 | 0.6107 | 1.0021 | 1.0011 |
| No log | 16.5 | 132 | 1.0232 | 0.5938 | 1.0232 | 1.0116 |
| No log | 16.75 | 134 | 1.0181 | 0.6429 | 1.0181 | 1.0090 |
| No log | 17.0 | 136 | 1.0595 | 0.6093 | 1.0595 | 1.0293 |
| No log | 17.25 | 138 | 1.0922 | 0.6104 | 1.0922 | 1.0451 |
| No log | 17.5 | 140 | 0.9201 | 0.6667 | 0.9201 | 0.9592 |
| No log | 17.75 | 142 | 0.8927 | 0.7361 | 0.8927 | 0.9448 |
| No log | 18.0 | 144 | 1.0840 | 0.6216 | 1.0840 | 1.0411 |
| No log | 18.25 | 146 | 1.0815 | 0.6111 | 1.0815 | 1.0400 |
| No log | 18.5 | 148 | 0.8133 | 0.7068 | 0.8133 | 0.9018 |
| No log | 18.75 | 150 | 0.7630 | 0.6767 | 0.7630 | 0.8735 |
| No log | 19.0 | 152 | 0.7418 | 0.7 | 0.7418 | 0.8613 |
| No log | 19.25 | 154 | 0.7183 | 0.7518 | 0.7183 | 0.8475 |
| No log | 19.5 | 156 | 0.9407 | 0.6351 | 0.9407 | 0.9699 |
| No log | 19.75 | 158 | 1.1308 | 0.6194 | 1.1308 | 1.0634 |
| No log | 20.0 | 160 | 1.0770 | 0.6 | 1.0770 | 1.0378 |
| No log | 20.25 | 162 | 1.2141 | 0.5828 | 1.2141 | 1.1019 |
| No log | 20.5 | 164 | 1.0783 | 0.6143 | 1.0783 | 1.0384 |
| No log | 20.75 | 166 | 0.9227 | 0.6466 | 0.9227 | 0.9605 |
| No log | 21.0 | 168 | 0.8481 | 0.6950 | 0.8481 | 0.9209 |
| No log | 21.25 | 170 | 0.9505 | 0.6710 | 0.9505 | 0.9750 |
| No log | 21.5 | 172 | 1.1145 | 0.625 | 1.1145 | 1.0557 |
| No log | 21.75 | 174 | 1.3307 | 0.5714 | 1.3307 | 1.1536 |
| No log | 22.0 | 176 | 1.4338 | 0.5093 | 1.4338 | 1.1974 |
| No log | 22.25 | 178 | 1.3464 | 0.5590 | 1.3464 | 1.1603 |
| No log | 22.5 | 180 | 1.0743 | 0.5816 | 1.0743 | 1.0365 |
| No log | 22.75 | 182 | 0.8602 | 0.6963 | 0.8602 | 0.9275 |
| No log | 23.0 | 184 | 0.8335 | 0.6822 | 0.8335 | 0.9130 |
| No log | 23.25 | 186 | 0.8319 | 0.6614 | 0.8319 | 0.9121 |
| No log | 23.5 | 188 | 0.8881 | 0.6667 | 0.8881 | 0.9424 |
| No log | 23.75 | 190 | 1.0436 | 0.5778 | 1.0436 | 1.0216 |
| No log | 24.0 | 192 | 1.1918 | 0.5816 | 1.1918 | 1.0917 |
| No log | 24.25 | 194 | 1.0548 | 0.6099 | 1.0548 | 1.0270 |
| No log | 24.5 | 196 | 0.8004 | 0.6944 | 0.8004 | 0.8946 |
| No log | 24.75 | 198 | 0.6809 | 0.75 | 0.6809 | 0.8252 |
| No log | 25.0 | 200 | 0.6927 | 0.7413 | 0.6927 | 0.8323 |
| No log | 25.25 | 202 | 0.7597 | 0.7376 | 0.7597 | 0.8716 |
| No log | 25.5 | 204 | 0.9863 | 0.5714 | 0.9863 | 0.9931 |
| No log | 25.75 | 206 | 1.5548 | 0.5363 | 1.5548 | 1.2469 |
| No log | 26.0 | 208 | 1.9109 | 0.4848 | 1.9109 | 1.3823 |
| No log | 26.25 | 210 | 1.8090 | 0.4516 | 1.8090 | 1.3450 |
| No log | 26.5 | 212 | 1.4038 | 0.4744 | 1.4038 | 1.1848 |
| No log | 26.75 | 214 | 1.0982 | 0.6015 | 1.0982 | 1.0479 |
| No log | 27.0 | 216 | 0.9441 | 0.64 | 0.9441 | 0.9716 |
| No log | 27.25 | 218 | 0.8690 | 0.6508 | 0.8690 | 0.9322 |
| No log | 27.5 | 220 | 0.8980 | 0.6522 | 0.8980 | 0.9476 |
| No log | 27.75 | 222 | 0.9651 | 0.6232 | 0.9651 | 0.9824 |
| No log | 28.0 | 224 | 1.0026 | 0.6029 | 1.0026 | 1.0013 |
| No log | 28.25 | 226 | 0.9311 | 0.6377 | 0.9311 | 0.9649 |
| No log | 28.5 | 228 | 0.7594 | 0.6901 | 0.7594 | 0.8714 |
| No log | 28.75 | 230 | 0.6972 | 0.7211 | 0.6972 | 0.8350 |
| No log | 29.0 | 232 | 0.8108 | 0.6667 | 0.8108 | 0.9005 |
| No log | 29.25 | 234 | 0.8886 | 0.6667 | 0.8886 | 0.9426 |
| No log | 29.5 | 236 | 0.8329 | 0.6846 | 0.8329 | 0.9126 |
| No log | 29.75 | 238 | 0.9348 | 0.6133 | 0.9348 | 0.9669 |
| No log | 30.0 | 240 | 1.0132 | 0.6065 | 1.0132 | 1.0066 |
| No log | 30.25 | 242 | 0.9766 | 0.6040 | 0.9766 | 0.9883 |
| No log | 30.5 | 244 | 0.8360 | 0.6714 | 0.8360 | 0.9143 |
| No log | 30.75 | 246 | 0.7462 | 0.7101 | 0.7462 | 0.8638 |
| No log | 31.0 | 248 | 0.7229 | 0.7059 | 0.7229 | 0.8502 |
| No log | 31.25 | 250 | 0.7320 | 0.7194 | 0.7320 | 0.8556 |
| No log | 31.5 | 252 | 0.8151 | 0.6853 | 0.8151 | 0.9028 |
| No log | 31.75 | 254 | 0.9163 | 0.6207 | 0.9163 | 0.9572 |
| No log | 32.0 | 256 | 1.0681 | 0.5811 | 1.0681 | 1.0335 |
| No log | 32.25 | 258 | 1.2252 | 0.5342 | 1.2252 | 1.1069 |
| No log | 32.5 | 260 | 1.0575 | 0.6197 | 1.0575 | 1.0283 |
| No log | 32.75 | 262 | 0.8359 | 0.7101 | 0.8359 | 0.9143 |
| No log | 33.0 | 264 | 0.7626 | 0.7023 | 0.7626 | 0.8732 |
| No log | 33.25 | 266 | 0.7741 | 0.7068 | 0.7741 | 0.8798 |
| No log | 33.5 | 268 | 0.8100 | 0.7068 | 0.8100 | 0.9000 |
| No log | 33.75 | 270 | 0.8567 | 0.6963 | 0.8567 | 0.9256 |
| No log | 34.0 | 272 | 0.9446 | 0.6269 | 0.9446 | 0.9719 |
| No log | 34.25 | 274 | 0.9539 | 0.6475 | 0.9539 | 0.9767 |
| No log | 34.5 | 276 | 0.8931 | 0.6714 | 0.8931 | 0.9450 |
| No log | 34.75 | 278 | 0.8309 | 0.7023 | 0.8309 | 0.9115 |
| No log | 35.0 | 280 | 0.8487 | 0.7023 | 0.8487 | 0.9212 |
| No log | 35.25 | 282 | 0.9441 | 0.6619 | 0.9441 | 0.9717 |
| No log | 35.5 | 284 | 1.0647 | 0.6471 | 1.0647 | 1.0319 |
| No log | 35.75 | 286 | 1.0482 | 0.6308 | 1.0482 | 1.0238 |
| No log | 36.0 | 288 | 1.0268 | 0.624 | 1.0268 | 1.0133 |
| No log | 36.25 | 290 | 1.0085 | 0.6129 | 1.0085 | 1.0042 |
| No log | 36.5 | 292 | 1.0605 | 0.6308 | 1.0605 | 1.0298 |
| No log | 36.75 | 294 | 1.1918 | 0.5674 | 1.1918 | 1.0917 |
| No log | 37.0 | 296 | 1.3213 | 0.5 | 1.3213 | 1.1495 |
| No log | 37.25 | 298 | 1.2890 | 0.5161 | 1.2890 | 1.1354 |
| No log | 37.5 | 300 | 1.1634 | 0.5676 | 1.1634 | 1.0786 |
| No log | 37.75 | 302 | 0.9560 | 0.6716 | 0.9560 | 0.9777 |
| No log | 38.0 | 304 | 0.8738 | 0.6815 | 0.8738 | 0.9348 |
| No log | 38.25 | 306 | 0.8495 | 0.6715 | 0.8495 | 0.9217 |
| No log | 38.5 | 308 | 0.8580 | 0.6809 | 0.8580 | 0.9263 |
| No log | 38.75 | 310 | 0.9014 | 0.6294 | 0.9014 | 0.9494 |
| No log | 39.0 | 312 | 0.8705 | 0.6809 | 0.8705 | 0.9330 |
| No log | 39.25 | 314 | 0.7634 | 0.6957 | 0.7634 | 0.8737 |
| No log | 39.5 | 316 | 0.7352 | 0.7153 | 0.7352 | 0.8574 |
| No log | 39.75 | 318 | 0.7603 | 0.6957 | 0.7603 | 0.8719 |
| No log | 40.0 | 320 | 0.8175 | 0.6950 | 0.8175 | 0.9041 |
| No log | 40.25 | 322 | 0.9782 | 0.6503 | 0.9782 | 0.9890 |
| No log | 40.5 | 324 | 1.0370 | 0.6514 | 1.0370 | 1.0183 |
| No log | 40.75 | 326 | 0.8475 | 0.6708 | 0.8475 | 0.9206 |
| No log | 41.0 | 328 | 0.7110 | 0.7075 | 0.7110 | 0.8432 |
| No log | 41.25 | 330 | 0.7380 | 0.6842 | 0.7380 | 0.8591 |
| No log | 41.5 | 332 | 0.8417 | 0.6957 | 0.8417 | 0.9175 |
| No log | 41.75 | 334 | 0.7915 | 0.6755 | 0.7915 | 0.8897 |
| No log | 42.0 | 336 | 0.8554 | 0.6383 | 0.8554 | 0.9249 |
| No log | 42.25 | 338 | 0.8519 | 0.6765 | 0.8519 | 0.9230 |
| No log | 42.5 | 340 | 0.8469 | 0.6812 | 0.8469 | 0.9203 |
| No log | 42.75 | 342 | 0.8243 | 0.6912 | 0.8243 | 0.9079 |
| No log | 43.0 | 344 | 0.8631 | 0.6912 | 0.8631 | 0.9290 |
| No log | 43.25 | 346 | 0.9410 | 0.6715 | 0.9410 | 0.9700 |
| No log | 43.5 | 348 | 1.0106 | 0.6277 | 1.0106 | 1.0053 |
| No log | 43.75 | 350 | 0.9939 | 0.6471 | 0.9939 | 0.9969 |
| No log | 44.0 | 352 | 0.9523 | 0.6618 | 0.9523 | 0.9759 |
| No log | 44.25 | 354 | 0.9637 | 0.6618 | 0.9637 | 0.9817 |
| No log | 44.5 | 356 | 1.0042 | 0.6471 | 1.0042 | 1.0021 |
| No log | 44.75 | 358 | 1.0852 | 0.6232 | 1.0852 | 1.0417 |
| No log | 45.0 | 360 | 1.2444 | 0.525 | 1.2444 | 1.1155 |
| No log | 45.25 | 362 | 1.3566 | 0.5424 | 1.3566 | 1.1647 |
| No log | 45.5 | 364 | 1.3405 | 0.5424 | 1.3405 | 1.1578 |
| No log | 45.75 | 366 | 1.2306 | 0.5342 | 1.2306 | 1.1093 |
| No log | 46.0 | 368 | 0.9911 | 0.6383 | 0.9911 | 0.9955 |
| No log | 46.25 | 370 | 0.8483 | 0.6716 | 0.8483 | 0.9210 |
| No log | 46.5 | 372 | 0.8155 | 0.6716 | 0.8155 | 0.9031 |
| No log | 46.75 | 374 | 0.8317 | 0.6716 | 0.8317 | 0.9120 |
| No log | 47.0 | 376 | 0.9380 | 0.6525 | 0.9380 | 0.9685 |
| No log | 47.25 | 378 | 1.0442 | 0.6667 | 1.0442 | 1.0219 |
| No log | 47.5 | 380 | 0.9907 | 0.6533 | 0.9907 | 0.9953 |
| No log | 47.75 | 382 | 0.8545 | 0.6957 | 0.8545 | 0.9244 |
| No log | 48.0 | 384 | 0.7984 | 0.7153 | 0.7984 | 0.8935 |
| No log | 48.25 | 386 | 0.8040 | 0.7164 | 0.8040 | 0.8967 |
| No log | 48.5 | 388 | 0.8079 | 0.7164 | 0.8079 | 0.8988 |
| No log | 48.75 | 390 | 0.7967 | 0.7164 | 0.7967 | 0.8926 |
| No log | 49.0 | 392 | 0.8132 | 0.7153 | 0.8132 | 0.9018 |
| No log | 49.25 | 394 | 0.8581 | 0.7153 | 0.8581 | 0.9263 |
| No log | 49.5 | 396 | 0.9294 | 0.6429 | 0.9294 | 0.9640 |
| No log | 49.75 | 398 | 0.9568 | 0.6277 | 0.9568 | 0.9781 |
| No log | 50.0 | 400 | 0.9593 | 0.6277 | 0.9593 | 0.9795 |
| No log | 50.25 | 402 | 0.9249 | 0.6667 | 0.9249 | 0.9617 |
| No log | 50.5 | 404 | 0.8882 | 0.7121 | 0.8882 | 0.9424 |
| No log | 50.75 | 406 | 0.8933 | 0.7194 | 0.8933 | 0.9451 |
| No log | 51.0 | 408 | 0.9321 | 0.6475 | 0.9321 | 0.9655 |
| No log | 51.25 | 410 | 0.9890 | 0.5942 | 0.9890 | 0.9945 |
| No log | 51.5 | 412 | 1.0307 | 0.5972 | 1.0307 | 1.0152 |
| No log | 51.75 | 414 | 0.9892 | 0.5926 | 0.9892 | 0.9946 |
| No log | 52.0 | 416 | 0.9504 | 0.6222 | 0.9504 | 0.9749 |
| No log | 52.25 | 418 | 0.9264 | 0.6567 | 0.9264 | 0.9625 |
| No log | 52.5 | 420 | 0.8731 | 0.6716 | 0.8731 | 0.9344 |
| No log | 52.75 | 422 | 0.8332 | 0.6963 | 0.8332 | 0.9128 |
| No log | 53.0 | 424 | 0.7815 | 0.7299 | 0.7815 | 0.8840 |
| No log | 53.25 | 426 | 0.7639 | 0.7391 | 0.7639 | 0.8740 |
| No log | 53.5 | 428 | 0.7904 | 0.7299 | 0.7904 | 0.8891 |
| No log | 53.75 | 430 | 0.8083 | 0.7083 | 0.8083 | 0.8990 |
| No log | 54.0 | 432 | 0.8071 | 0.7092 | 0.8071 | 0.8984 |
| No log | 54.25 | 434 | 0.8213 | 0.7101 | 0.8213 | 0.9062 |
| No log | 54.5 | 436 | 0.8386 | 0.7101 | 0.8386 | 0.9158 |
| No log | 54.75 | 438 | 0.8172 | 0.7391 | 0.8172 | 0.9040 |
| No log | 55.0 | 440 | 0.7935 | 0.7259 | 0.7935 | 0.8908 |
| No log | 55.25 | 442 | 0.8106 | 0.6963 | 0.8106 | 0.9004 |
| No log | 55.5 | 444 | 0.8529 | 0.6716 | 0.8529 | 0.9235 |
| No log | 55.75 | 446 | 0.9613 | 0.6423 | 0.9613 | 0.9804 |
| No log | 56.0 | 448 | 1.1032 | 0.5526 | 1.1032 | 1.0503 |
| No log | 56.25 | 450 | 1.1277 | 0.5676 | 1.1277 | 1.0619 |
| No log | 56.5 | 452 | 1.0341 | 0.6187 | 1.0341 | 1.0169 |
| No log | 56.75 | 454 | 0.9163 | 0.6370 | 0.9163 | 0.9572 |
| No log | 57.0 | 456 | 0.8363 | 0.6917 | 0.8363 | 0.9145 |
| No log | 57.25 | 458 | 0.8126 | 0.7299 | 0.8126 | 0.9014 |
| No log | 57.5 | 460 | 0.8046 | 0.7299 | 0.8046 | 0.8970 |
| No log | 57.75 | 462 | 0.8150 | 0.7286 | 0.8150 | 0.9028 |
| No log | 58.0 | 464 | 0.8306 | 0.6715 | 0.8306 | 0.9114 |
| No log | 58.25 | 466 | 0.8860 | 0.6528 | 0.8860 | 0.9413 |
| No log | 58.5 | 468 | 0.9279 | 0.6621 | 0.9279 | 0.9633 |
| No log | 58.75 | 470 | 0.9801 | 0.6294 | 0.9801 | 0.9900 |
| No log | 59.0 | 472 | 0.9580 | 0.6286 | 0.9580 | 0.9788 |
| No log | 59.25 | 474 | 0.9449 | 0.6471 | 0.9449 | 0.9721 |
| No log | 59.5 | 476 | 0.9457 | 0.6466 | 0.9457 | 0.9725 |
| No log | 59.75 | 478 | 0.9064 | 0.7015 | 0.9064 | 0.9520 |
| No log | 60.0 | 480 | 0.9013 | 0.7015 | 0.9013 | 0.9494 |
| No log | 60.25 | 482 | 0.9293 | 0.6131 | 0.9293 | 0.9640 |
| No log | 60.5 | 484 | 1.0466 | 0.5931 | 1.0466 | 1.0231 |
| No log | 60.75 | 486 | 1.2370 | 0.5509 | 1.2370 | 1.1122 |
| No log | 61.0 | 488 | 1.3091 | 0.5325 | 1.3091 | 1.1442 |
| No log | 61.25 | 490 | 1.2488 | 0.5325 | 1.2488 | 1.1175 |
| No log | 61.5 | 492 | 1.1442 | 0.5455 | 1.1442 | 1.0697 |
| No log | 61.75 | 494 | 1.0592 | 0.5714 | 1.0592 | 1.0292 |
| No log | 62.0 | 496 | 0.9891 | 0.6259 | 0.9891 | 0.9945 |
| No log | 62.25 | 498 | 1.0005 | 0.6301 | 1.0005 | 1.0002 |
| 0.3118 | 62.5 | 500 | 0.9967 | 0.6301 | 0.9967 | 0.9983 |
| 0.3118 | 62.75 | 502 | 1.0340 | 0.6301 | 1.0340 | 1.0169 |
| 0.3118 | 63.0 | 504 | 1.1171 | 0.5931 | 1.1171 | 1.0569 |
| 0.3118 | 63.25 | 506 | 1.1605 | 0.5931 | 1.1605 | 1.0773 |
| 0.3118 | 63.5 | 508 | 1.1307 | 0.5899 | 1.1307 | 1.0633 |
| 0.3118 | 63.75 | 510 | 1.0580 | 0.6131 | 1.0580 | 1.0286 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
lesso01/3be9e922-837e-4ef5-aace-d2dbcca9f8f4 | lesso01 | 2025-01-15T11:55:54Z | 6 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/llama-3-8b-Instruct", "base_model:adapter:unsloth/llama-3-8b-Instruct", "license:llama3", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-15T11:00:38Z |
---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3be9e922-837e-4ef5-aace-d2dbcca9f8f4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b-Instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
- 22898ffc4e984aab_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22898ffc4e984aab_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso01/3be9e922-837e-4ef5-aace-d2dbcca9f8f4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/22898ffc4e984aab_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 537ea366-0d00-4215-a8bf-12dd9968edce
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 537ea366-0d00-4215-a8bf-12dd9968edce
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3be9e922-837e-4ef5-aace-d2dbcca9f8f4
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0003 | 5 | nan |
| 0.0 | 0.0006 | 10 | nan |
| 0.0 | 0.0010 | 15 | nan |
| 0.0 | 0.0013 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
duyphu/5ecbe2a6-d426-4f6e-b19f-c3076afd5015 | duyphu | 2025-01-15T11:55:44Z | 6 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-3B", "base_model:adapter:unsloth/Qwen2.5-3B", "license:other", "region:us"] | null | 2025-01-15T06:39:32Z |
---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5ecbe2a6-d426-4f6e-b19f-c3076afd5015
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 18ad6587de8e6631_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/18ad6587de8e6631_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/5ecbe2a6-d426-4f6e-b19f-c3076afd5015
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/18ad6587de8e6631_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8ac1862d-3420-432c-b8d4-590e546120df
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8ac1862d-3420-432c-b8d4-590e546120df
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5ecbe2a6-d426-4f6e-b19f-c3076afd5015
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 10 | nan |
| 0.0 | 0.0002 | 20 | nan |
| 0.0 | 0.0003 | 30 | nan |
| 0.0 | 0.0004 | 40 | nan |
| 0.0 | 0.0005 | 50 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ClarenceDan/a166b702-8f54-4693-9afe-a1b79246a3df | ClarenceDan | 2025-01-15T11:54:55Z | 12 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:oopsung/llama2-7b-n-ox-test-v1", "base_model:adapter:oopsung/llama2-7b-n-ox-test-v1", "region:us"] | null | 2025-01-15T11:54:10Z |
---
library_name: peft
base_model: oopsung/llama2-7b-n-ox-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a166b702-8f54-4693-9afe-a1b79246a3df
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: oopsung/llama2-7b-n-ox-test-v1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c1fdfb3f4e449665_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c1fdfb3f4e449665_train_data.json
type:
field_instruction: question
field_output: best_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/a166b702-8f54-4693-9afe-a1b79246a3df
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/c1fdfb3f4e449665_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 212c681b-11df-434d-a645-a52b9b33936f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 212c681b-11df-434d-a645-a52b9b33936f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a166b702-8f54-4693-9afe-a1b79246a3df
This model is a fine-tuned version of [oopsung/llama2-7b-n-ox-test-v1](https://huggingface.co/oopsung/llama2-7b-n-ox-test-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0132 | 1 | nan |
| 0.0 | 0.0397 | 3 | nan |
| 0.0 | 0.0795 | 6 | nan |
| 0.0 | 0.1192 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k2_task2_organization | MayBashendy | 2025-01-15T11:52:24Z | 6 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-01-15T11:29:36Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k2_task2_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k2_task2_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5740
- Qwk: 0.5582
- Mse: 0.5740
- Rmse: 0.7577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.1538 | 2 | 4.3166 | -0.0182 | 4.3166 | 2.0776 |
| No log | 0.3077 | 4 | 2.0843 | 0.0215 | 2.0843 | 1.4437 |
| No log | 0.4615 | 6 | 1.1484 | -0.0111 | 1.1484 | 1.0716 |
| No log | 0.6154 | 8 | 0.8337 | 0.1201 | 0.8337 | 0.9131 |
| No log | 0.7692 | 10 | 0.8068 | 0.2451 | 0.8068 | 0.8982 |
| No log | 0.9231 | 12 | 1.0199 | -0.0335 | 1.0199 | 1.0099 |
| No log | 1.0769 | 14 | 1.5561 | 0.1011 | 1.5561 | 1.2474 |
| No log | 1.2308 | 16 | 1.2655 | 0.0262 | 1.2655 | 1.1249 |
| No log | 1.3846 | 18 | 0.9307 | 0.1798 | 0.9307 | 0.9647 |
| No log | 1.5385 | 20 | 0.9386 | 0.1789 | 0.9386 | 0.9688 |
| No log | 1.6923 | 22 | 0.9229 | 0.2181 | 0.9229 | 0.9607 |
| No log | 1.8462 | 24 | 0.7977 | 0.2641 | 0.7977 | 0.8931 |
| No log | 2.0 | 26 | 0.6855 | 0.3686 | 0.6855 | 0.8279 |
| No log | 2.1538 | 28 | 0.6591 | 0.3407 | 0.6591 | 0.8119 |
| No log | 2.3077 | 30 | 0.7682 | 0.2751 | 0.7682 | 0.8765 |
| No log | 2.4615 | 32 | 1.0704 | 0.1400 | 1.0704 | 1.0346 |
| No log | 2.6154 | 34 | 1.0220 | 0.2000 | 1.0220 | 1.0110 |
| No log | 2.7692 | 36 | 0.7401 | 0.2859 | 0.7401 | 0.8603 |
| No log | 2.9231 | 38 | 0.6803 | 0.3668 | 0.6803 | 0.8248 |
| No log | 3.0769 | 40 | 0.6391 | 0.3655 | 0.6391 | 0.7994 |
| No log | 3.2308 | 42 | 0.6932 | 0.3668 | 0.6932 | 0.8326 |
| No log | 3.3846 | 44 | 0.8723 | 0.3535 | 0.8723 | 0.9340 |
| No log | 3.5385 | 46 | 0.8235 | 0.4066 | 0.8235 | 0.9075 |
| No log | 3.6923 | 48 | 0.7124 | 0.4085 | 0.7124 | 0.8440 |
| No log | 3.8462 | 50 | 0.8337 | 0.3710 | 0.8337 | 0.9131 |
| No log | 4.0 | 52 | 1.2769 | 0.3200 | 1.2769 | 1.1300 |
| No log | 4.1538 | 54 | 1.3296 | 0.2698 | 1.3296 | 1.1531 |
| No log | 4.3077 | 56 | 0.8080 | 0.4030 | 0.8080 | 0.8989 |
| No log | 4.4615 | 58 | 0.5909 | 0.5832 | 0.5909 | 0.7687 |
| No log | 4.6154 | 60 | 0.6067 | 0.5297 | 0.6067 | 0.7789 |
| No log | 4.7692 | 62 | 0.7153 | 0.4580 | 0.7153 | 0.8458 |
| No log | 4.9231 | 64 | 1.0223 | 0.2693 | 1.0223 | 1.0111 |
| No log | 5.0769 | 66 | 1.0004 | 0.2956 | 1.0004 | 1.0002 |
| No log | 5.2308 | 68 | 0.8143 | 0.4005 | 0.8143 | 0.9024 |
| No log | 5.3846 | 70 | 0.6096 | 0.4791 | 0.6096 | 0.7808 |
| No log | 5.5385 | 72 | 0.5470 | 0.5367 | 0.5470 | 0.7396 |
| No log | 5.6923 | 74 | 0.5499 | 0.5348 | 0.5499 | 0.7416 |
| No log | 5.8462 | 76 | 0.5558 | 0.5932 | 0.5558 | 0.7455 |
| No log | 6.0 | 78 | 0.5468 | 0.6171 | 0.5468 | 0.7395 |
| No log | 6.1538 | 80 | 0.5554 | 0.5755 | 0.5554 | 0.7453 |
| No log | 6.3077 | 82 | 0.6464 | 0.5197 | 0.6464 | 0.8040 |
| No log | 6.4615 | 84 | 0.6717 | 0.5045 | 0.6717 | 0.8196 |
| No log | 6.6154 | 86 | 0.6752 | 0.5394 | 0.6752 | 0.8217 |
| No log | 6.7692 | 88 | 0.6191 | 0.5860 | 0.6191 | 0.7869 |
| No log | 6.9231 | 90 | 0.7234 | 0.5735 | 0.7234 | 0.8505 |
| No log | 7.0769 | 92 | 0.8108 | 0.5761 | 0.8108 | 0.9004 |
| No log | 7.2308 | 94 | 0.8017 | 0.5777 | 0.8017 | 0.8954 |
| No log | 7.3846 | 96 | 0.7130 | 0.5655 | 0.7130 | 0.8444 |
| No log | 7.5385 | 98 | 0.7170 | 0.5353 | 0.7170 | 0.8468 |
| No log | 7.6923 | 100 | 0.6341 | 0.5730 | 0.6341 | 0.7963 |
| No log | 7.8462 | 102 | 0.5881 | 0.6089 | 0.5881 | 0.7669 |
| No log | 8.0 | 104 | 0.6092 | 0.5540 | 0.6092 | 0.7805 |
| No log | 8.1538 | 106 | 0.5621 | 0.5972 | 0.5621 | 0.7498 |
| No log | 8.3077 | 108 | 0.8307 | 0.4650 | 0.8307 | 0.9114 |
| No log | 8.4615 | 110 | 1.3354 | 0.3244 | 1.3354 | 1.1556 |
| No log | 8.6154 | 112 | 1.2251 | 0.3359 | 1.2251 | 1.1068 |
| No log | 8.7692 | 114 | 0.8733 | 0.4380 | 0.8733 | 0.9345 |
| No log | 8.9231 | 116 | 0.6120 | 0.5583 | 0.6120 | 0.7823 |
| No log | 9.0769 | 118 | 0.5930 | 0.5968 | 0.5930 | 0.7701 |
| No log | 9.2308 | 120 | 0.5794 | 0.5719 | 0.5794 | 0.7612 |
| No log | 9.3846 | 122 | 0.6149 | 0.5502 | 0.6149 | 0.7841 |
| No log | 9.5385 | 124 | 0.6253 | 0.5717 | 0.6253 | 0.7908 |
| No log | 9.6923 | 126 | 0.5867 | 0.5679 | 0.5867 | 0.7660 |
| No log | 9.8462 | 128 | 0.5932 | 0.5733 | 0.5932 | 0.7702 |
| No log | 10.0 | 130 | 0.7043 | 0.4465 | 0.7043 | 0.8393 |
| No log | 10.1538 | 132 | 0.6316 | 0.5452 | 0.6316 | 0.7947 |
| No log | 10.3077 | 134 | 0.5982 | 0.5152 | 0.5982 | 0.7734 |
| No log | 10.4615 | 136 | 0.5870 | 0.5662 | 0.5870 | 0.7661 |
| No log | 10.6154 | 138 | 0.6379 | 0.4895 | 0.6379 | 0.7987 |
| No log | 10.7692 | 140 | 0.6773 | 0.4741 | 0.6773 | 0.8230 |
| No log | 10.9231 | 142 | 0.6862 | 0.5269 | 0.6862 | 0.8284 |
| No log | 11.0769 | 144 | 0.6109 | 0.5259 | 0.6109 | 0.7816 |
| No log | 11.2308 | 146 | 0.5930 | 0.5385 | 0.5930 | 0.7700 |
| No log | 11.3846 | 148 | 0.6006 | 0.5508 | 0.6006 | 0.7750 |
| No log | 11.5385 | 150 | 0.5931 | 0.5467 | 0.5931 | 0.7702 |
| No log | 11.6923 | 152 | 0.6110 | 0.5613 | 0.6110 | 0.7816 |
| No log | 11.8462 | 154 | 0.6326 | 0.5724 | 0.6326 | 0.7954 |
| No log | 12.0 | 156 | 0.6190 | 0.5190 | 0.6190 | 0.7868 |
| No log | 12.1538 | 158 | 0.6447 | 0.5518 | 0.6447 | 0.8029 |
| No log | 12.3077 | 160 | 0.6761 | 0.5431 | 0.6761 | 0.8223 |
| No log | 12.4615 | 162 | 0.7183 | 0.5674 | 0.7183 | 0.8475 |
| No log | 12.6154 | 164 | 0.7084 | 0.5660 | 0.7084 | 0.8417 |
| No log | 12.7692 | 166 | 0.7161 | 0.5152 | 0.7161 | 0.8462 |
| No log | 12.9231 | 168 | 0.7082 | 0.5103 | 0.7082 | 0.8415 |
| No log | 13.0769 | 170 | 0.7004 | 0.5039 | 0.7004 | 0.8369 |
| No log | 13.2308 | 172 | 0.6535 | 0.5201 | 0.6535 | 0.8084 |
| No log | 13.3846 | 174 | 0.6000 | 0.5540 | 0.6000 | 0.7746 |
| No log | 13.5385 | 176 | 0.5964 | 0.5259 | 0.5964 | 0.7723 |
| No log | 13.6923 | 178 | 0.6162 | 0.5313 | 0.6162 | 0.7850 |
| No log | 13.8462 | 180 | 0.6061 | 0.5447 | 0.6061 | 0.7785 |
| No log | 14.0 | 182 | 0.6111 | 0.5808 | 0.6111 | 0.7817 |
| No log | 14.1538 | 184 | 0.6116 | 0.6021 | 0.6116 | 0.7821 |
| No log | 14.3077 | 186 | 0.6198 | 0.5753 | 0.6198 | 0.7873 |
| No log | 14.4615 | 188 | 0.6357 | 0.5907 | 0.6357 | 0.7973 |
| No log | 14.6154 | 190 | 0.6446 | 0.5628 | 0.6446 | 0.8029 |
| No log | 14.7692 | 192 | 0.6561 | 0.5915 | 0.6561 | 0.8100 |
| No log | 14.9231 | 194 | 0.6590 | 0.5754 | 0.6590 | 0.8118 |
| No log | 15.0769 | 196 | 0.6613 | 0.5851 | 0.6613 | 0.8132 |
| No log | 15.2308 | 198 | 0.6526 | 0.5872 | 0.6526 | 0.8078 |
| No log | 15.3846 | 200 | 0.6648 | 0.4991 | 0.6648 | 0.8153 |
| No log | 15.5385 | 202 | 0.6105 | 0.5499 | 0.6105 | 0.7814 |
| No log | 15.6923 | 204 | 0.5950 | 0.5757 | 0.5950 | 0.7713 |
| No log | 15.8462 | 206 | 0.5680 | 0.5560 | 0.5680 | 0.7537 |
| No log | 16.0 | 208 | 0.5635 | 0.5767 | 0.5635 | 0.7507 |
| No log | 16.1538 | 210 | 0.5725 | 0.5560 | 0.5725 | 0.7566 |
| No log | 16.3077 | 212 | 0.5931 | 0.5540 | 0.5931 | 0.7701 |
| No log | 16.4615 | 214 | 0.6073 | 0.5488 | 0.6073 | 0.7793 |
| No log | 16.6154 | 216 | 0.6170 | 0.6094 | 0.6170 | 0.7855 |
| No log | 16.7692 | 218 | 0.6059 | 0.5442 | 0.6059 | 0.7784 |
| No log | 16.9231 | 220 | 0.5896 | 0.5476 | 0.5896 | 0.7679 |
| No log | 17.0769 | 222 | 0.5934 | 0.5965 | 0.5934 | 0.7703 |
| No log | 17.2308 | 224 | 0.6235 | 0.5323 | 0.6235 | 0.7896 |
| No log | 17.3846 | 226 | 0.5803 | 0.5832 | 0.5803 | 0.7618 |
| No log | 17.5385 | 228 | 0.5612 | 0.6000 | 0.5612 | 0.7491 |
| No log | 17.6923 | 230 | 0.5804 | 0.5609 | 0.5804 | 0.7619 |
| No log | 17.8462 | 232 | 0.5921 | 0.6033 | 0.5921 | 0.7695 |
| No log | 18.0 | 234 | 0.6493 | 0.5634 | 0.6493 | 0.8058 |
| No log | 18.1538 | 236 | 0.7160 | 0.5595 | 0.7160 | 0.8462 |
| No log | 18.3077 | 238 | 0.6602 | 0.5634 | 0.6602 | 0.8125 |
| No log | 18.4615 | 240 | 0.6085 | 0.5958 | 0.6085 | 0.7801 |
| No log | 18.6154 | 242 | 0.6087 | 0.5420 | 0.6087 | 0.7802 |
| No log | 18.7692 | 244 | 0.6068 | 0.5570 | 0.6068 | 0.7790 |
| No log | 18.9231 | 246 | 0.5978 | 0.6000 | 0.5978 | 0.7732 |
| No log | 19.0769 | 248 | 0.6424 | 0.6101 | 0.6424 | 0.8015 |
| No log | 19.2308 | 250 | 0.6373 | 0.6100 | 0.6373 | 0.7983 |
| No log | 19.3846 | 252 | 0.6111 | 0.5689 | 0.6111 | 0.7817 |
| No log | 19.5385 | 254 | 0.5998 | 0.5420 | 0.5998 | 0.7745 |
| No log | 19.6923 | 256 | 0.5785 | 0.5875 | 0.5785 | 0.7606 |
| No log | 19.8462 | 258 | 0.5633 | 0.5816 | 0.5633 | 0.7505 |
| No log | 20.0 | 260 | 0.5699 | 0.5528 | 0.5699 | 0.7549 |
| No log | 20.1538 | 262 | 0.5742 | 0.5588 | 0.5742 | 0.7577 |
| No log | 20.3077 | 264 | 0.5676 | 0.5675 | 0.5676 | 0.7534 |
| No log | 20.4615 | 266 | 0.5787 | 0.5845 | 0.5787 | 0.7607 |
| No log | 20.6154 | 268 | 0.5854 | 0.5794 | 0.5854 | 0.7651 |
| No log | 20.7692 | 270 | 0.6018 | 0.5732 | 0.6018 | 0.7758 |
| No log | 20.9231 | 272 | 0.6101 | 0.5748 | 0.6101 | 0.7811 |
| No log | 21.0769 | 274 | 0.5999 | 0.5634 | 0.5999 | 0.7745 |
| No log | 21.2308 | 276 | 0.5755 | 0.6004 | 0.5755 | 0.7586 |
| No log | 21.3846 | 278 | 0.5764 | 0.5927 | 0.5764 | 0.7592 |
| No log | 21.5385 | 280 | 0.5695 | 0.5991 | 0.5695 | 0.7547 |
| No log | 21.6923 | 282 | 0.6902 | 0.4982 | 0.6902 | 0.8308 |
| No log | 21.8462 | 284 | 0.7838 | 0.4876 | 0.7838 | 0.8853 |
| No log | 22.0 | 286 | 0.7335 | 0.4734 | 0.7335 | 0.8565 |
| No log | 22.1538 | 288 | 0.6043 | 0.5569 | 0.6043 | 0.7774 |
| No log | 22.3077 | 290 | 0.6099 | 0.5291 | 0.6099 | 0.7809 |
| No log | 22.4615 | 292 | 0.6723 | 0.4971 | 0.6723 | 0.8199 |
| No log | 22.6154 | 294 | 0.6206 | 0.5386 | 0.6206 | 0.7878 |
| No log | 22.7692 | 296 | 0.5784 | 0.6049 | 0.5784 | 0.7605 |
| No log | 22.9231 | 298 | 0.5803 | 0.5932 | 0.5803 | 0.7618 |
| No log | 23.0769 | 300 | 0.6113 | 0.5722 | 0.6113 | 0.7819 |
| No log | 23.2308 | 302 | 0.5996 | 0.5705 | 0.5996 | 0.7743 |
| No log | 23.3846 | 304 | 0.5707 | 0.5658 | 0.5707 | 0.7554 |
| No log | 23.5385 | 306 | 0.5790 | 0.5681 | 0.5790 | 0.7609 |
| No log | 23.6923 | 308 | 0.6174 | 0.5710 | 0.6174 | 0.7857 |
| No log | 23.8462 | 310 | 0.6609 | 0.5311 | 0.6609 | 0.8130 |
| No log | 24.0 | 312 | 0.6601 | 0.5451 | 0.6601 | 0.8125 |
| No log | 24.1538 | 314 | 0.6304 | 0.5961 | 0.6304 | 0.7940 |
| No log | 24.3077 | 316 | 0.6300 | 0.5864 | 0.6300 | 0.7937 |
| No log | 24.4615 | 318 | 0.6495 | 0.5864 | 0.6495 | 0.8059 |
| No log | 24.6154 | 320 | 0.6751 | 0.5954 | 0.6751 | 0.8216 |
| No log | 24.7692 | 322 | 0.6698 | 0.5630 | 0.6698 | 0.8184 |
| No log | 24.9231 | 324 | 0.6769 | 0.5499 | 0.6769 | 0.8227 |
| No log | 25.0769 | 326 | 0.6889 | 0.5903 | 0.6889 | 0.8300 |
| No log | 25.2308 | 328 | 0.8172 | 0.4687 | 0.8172 | 0.9040 |
| No log | 25.3846 | 330 | 0.9111 | 0.4532 | 0.9111 | 0.9545 |
| No log | 25.5385 | 332 | 0.8508 | 0.4682 | 0.8508 | 0.9224 |
| No log | 25.6923 | 334 | 0.7333 | 0.5726 | 0.7333 | 0.8563 |
| No log | 25.8462 | 336 | 0.7399 | 0.5431 | 0.7399 | 0.8602 |
| No log | 26.0 | 338 | 0.7518 | 0.5606 | 0.7518 | 0.8671 |
| No log | 26.1538 | 340 | 0.7097 | 0.5436 | 0.7097 | 0.8424 |
| No log | 26.3077 | 342 | 0.6625 | 0.5542 | 0.6625 | 0.8139 |
| No log | 26.4615 | 344 | 0.6316 | 0.5701 | 0.6316 | 0.7947 |
| No log | 26.6154 | 346 | 0.6173 | 0.5787 | 0.6173 | 0.7857 |
| No log | 26.7692 | 348 | 0.6029 | 0.5901 | 0.6029 | 0.7765 |
| No log | 26.9231 | 350 | 0.6030 | 0.5907 | 0.6030 | 0.7765 |
| No log | 27.0769 | 352 | 0.6197 | 0.5912 | 0.6197 | 0.7872 |
| No log | 27.2308 | 354 | 0.6146 | 0.6004 | 0.6146 | 0.7839 |
| No log | 27.3846 | 356 | 0.6062 | 0.5829 | 0.6062 | 0.7786 |
| No log | 27.5385 | 358 | 0.6183 | 0.5179 | 0.6183 | 0.7863 |
| No log | 27.6923 | 360 | 0.6465 | 0.5779 | 0.6465 | 0.8040 |
| No log | 27.8462 | 362 | 0.6023 | 0.5725 | 0.6023 | 0.7761 |
| No log | 28.0 | 364 | 0.5663 | 0.6216 | 0.5663 | 0.7525 |
| No log | 28.1538 | 366 | 0.5673 | 0.5921 | 0.5673 | 0.7532 |
| No log | 28.3077 | 368 | 0.5891 | 0.6027 | 0.5891 | 0.7675 |
| No log | 28.4615 | 370 | 0.6038 | 0.5982 | 0.6038 | 0.7770 |
| No log | 28.6154 | 372 | 0.6444 | 0.5727 | 0.6444 | 0.8027 |
| No log | 28.7692 | 374 | 0.6937 | 0.5485 | 0.6937 | 0.8329 |
| No log | 28.9231 | 376 | 0.7086 | 0.5382 | 0.7086 | 0.8418 |
| No log | 29.0769 | 378 | 0.6423 | 0.5759 | 0.6423 | 0.8014 |
| No log | 29.2308 | 380 | 0.5755 | 0.6227 | 0.5755 | 0.7586 |
| No log | 29.3846 | 382 | 0.5606 | 0.5805 | 0.5606 | 0.7487 |
| No log | 29.5385 | 384 | 0.5545 | 0.6075 | 0.5545 | 0.7447 |
| No log | 29.6923 | 386 | 0.5713 | 0.6557 | 0.5713 | 0.7558 |
| No log | 29.8462 | 388 | 0.5733 | 0.6257 | 0.5733 | 0.7572 |
| No log | 30.0 | 390 | 0.5605 | 0.6478 | 0.5605 | 0.7487 |
| No log | 30.1538 | 392 | 0.5521 | 0.6128 | 0.5521 | 0.7430 |
| No log | 30.3077 | 394 | 0.5593 | 0.6313 | 0.5593 | 0.7479 |
| No log | 30.4615 | 396 | 0.5647 | 0.6143 | 0.5647 | 0.7514 |
| No log | 30.6154 | 398 | 0.5598 | 0.6031 | 0.5598 | 0.7482 |
| No log | 30.7692 | 400 | 0.5529 | 0.6134 | 0.5529 | 0.7436 |
| No log | 30.9231 | 402 | 0.5541 | 0.5863 | 0.5541 | 0.7444 |
| No log | 31.0769 | 404 | 0.5766 | 0.5595 | 0.5766 | 0.7594 |
| No log | 31.2308 | 406 | 0.5851 | 0.5576 | 0.5851 | 0.7649 |
| No log | 31.3846 | 408 | 0.5802 | 0.5989 | 0.5802 | 0.7617 |
| No log | 31.5385 | 410 | 0.5939 | 0.5988 | 0.5939 | 0.7706 |
| No log | 31.6923 | 412 | 0.6295 | 0.5816 | 0.6295 | 0.7934 |
| No log | 31.8462 | 414 | 0.6348 | 0.5318 | 0.6348 | 0.7968 |
| No log | 32.0 | 416 | 0.5894 | 0.6025 | 0.5894 | 0.7677 |
| No log | 32.1538 | 418 | 0.5873 | 0.5742 | 0.5873 | 0.7663 |
| No log | 32.3077 | 420 | 0.6661 | 0.5471 | 0.6661 | 0.8161 |
| No log | 32.4615 | 422 | 0.6914 | 0.5492 | 0.6914 | 0.8315 |
| No log | 32.6154 | 424 | 0.6486 | 0.5256 | 0.6486 | 0.8053 |
| No log | 32.7692 | 426 | 0.5781 | 0.5804 | 0.5781 | 0.7604 |
| No log | 32.9231 | 428 | 0.5705 | 0.6007 | 0.5705 | 0.7553 |
| No log | 33.0769 | 430 | 0.5963 | 0.5581 | 0.5963 | 0.7722 |
| No log | 33.2308 | 432 | 0.5858 | 0.6057 | 0.5858 | 0.7654 |
| No log | 33.3846 | 434 | 0.5581 | 0.6023 | 0.5581 | 0.7470 |
| No log | 33.5385 | 436 | 0.5552 | 0.5746 | 0.5552 | 0.7451 |
| No log | 33.6923 | 438 | 0.5499 | 0.5767 | 0.5499 | 0.7416 |
| No log | 33.8462 | 440 | 0.5448 | 0.5921 | 0.5448 | 0.7381 |
| No log | 34.0 | 442 | 0.5549 | 0.6065 | 0.5549 | 0.7449 |
| No log | 34.1538 | 444 | 0.5462 | 0.5799 | 0.5462 | 0.7391 |
| No log | 34.3077 | 446 | 0.5406 | 0.5827 | 0.5406 | 0.7353 |
| No log | 34.4615 | 448 | 0.5485 | 0.5789 | 0.5485 | 0.7406 |
| No log | 34.6154 | 450 | 0.5459 | 0.5887 | 0.5459 | 0.7388 |
| No log | 34.7692 | 452 | 0.5542 | 0.6020 | 0.5542 | 0.7444 |
| No log | 34.9231 | 454 | 0.5702 | 0.6018 | 0.5702 | 0.7551 |
| No log | 35.0769 | 456 | 0.5847 | 0.5843 | 0.5847 | 0.7647 |
| No log | 35.2308 | 458 | 0.5804 | 0.6128 | 0.5804 | 0.7619 |
| No log | 35.3846 | 460 | 0.5919 | 0.5877 | 0.5919 | 0.7694 |
| No log | 35.5385 | 462 | 0.5943 | 0.6016 | 0.5943 | 0.7709 |
| No log | 35.6923 | 464 | 0.5944 | 0.5794 | 0.5944 | 0.7710 |
| No log | 35.8462 | 466 | 0.5890 | 0.5872 | 0.5890 | 0.7675 |
| No log | 36.0 | 468 | 0.5800 | 0.5901 | 0.5800 | 0.7616 |
| No log | 36.1538 | 470 | 0.5976 | 0.5390 | 0.5976 | 0.7730 |
| No log | 36.3077 | 472 | 0.5806 | 0.5308 | 0.5806 | 0.7620 |
| No log | 36.4615 | 474 | 0.5507 | 0.5704 | 0.5507 | 0.7421 |
| No log | 36.6154 | 476 | 0.5499 | 0.5535 | 0.5499 | 0.7416 |
| No log | 36.7692 | 478 | 0.5503 | 0.5569 | 0.5503 | 0.7418 |
| No log | 36.9231 | 480 | 0.5526 | 0.5535 | 0.5526 | 0.7434 |
| No log | 37.0769 | 482 | 0.5537 | 0.5845 | 0.5537 | 0.7441 |
| No log | 37.2308 | 484 | 0.5617 | 0.5705 | 0.5617 | 0.7494 |
| No log | 37.3846 | 486 | 0.5783 | 0.5481 | 0.5783 | 0.7605 |
| No log | 37.5385 | 488 | 0.5681 | 0.5745 | 0.5681 | 0.7537 |
| No log | 37.6923 | 490 | 0.5650 | 0.5783 | 0.5650 | 0.7517 |
| No log | 37.8462 | 492 | 0.5636 | 0.5901 | 0.5636 | 0.7507 |
| No log | 38.0 | 494 | 0.5697 | 0.6065 | 0.5697 | 0.7548 |
| No log | 38.1538 | 496 | 0.5675 | 0.5784 | 0.5675 | 0.7533 |
| No log | 38.3077 | 498 | 0.5759 | 0.5868 | 0.5759 | 0.7589 |
| 0.2756 | 38.4615 | 500 | 0.5910 | 0.5815 | 0.5910 | 0.7688 |
| 0.2756 | 38.6154 | 502 | 0.5843 | 0.5693 | 0.5843 | 0.7644 |
| 0.2756 | 38.7692 | 504 | 0.5624 | 0.5901 | 0.5624 | 0.7499 |
| 0.2756 | 38.9231 | 506 | 0.5804 | 0.5784 | 0.5804 | 0.7618 |
| 0.2756 | 39.0769 | 508 | 0.6115 | 0.5738 | 0.6115 | 0.7820 |
| 0.2756 | 39.2308 | 510 | 0.6113 | 0.5659 | 0.6113 | 0.7819 |
| 0.2756 | 39.3846 | 512 | 0.5769 | 0.5668 | 0.5769 | 0.7595 |
| 0.2756 | 39.5385 | 514 | 0.5634 | 0.5957 | 0.5634 | 0.7506 |
| 0.2756 | 39.6923 | 516 | 0.5693 | 0.5827 | 0.5693 | 0.7545 |
| 0.2756 | 39.8462 | 518 | 0.5748 | 0.5827 | 0.5748 | 0.7581 |
| 0.2756 | 40.0 | 520 | 0.5905 | 0.5508 | 0.5905 | 0.7684 |
| 0.2756 | 40.1538 | 522 | 0.5973 | 0.5384 | 0.5973 | 0.7728 |
| 0.2756 | 40.3077 | 524 | 0.5959 | 0.5567 | 0.5959 | 0.7719 |
| 0.2756 | 40.4615 | 526 | 0.5811 | 0.5887 | 0.5811 | 0.7623 |
| 0.2756 | 40.6154 | 528 | 0.5801 | 0.5894 | 0.5801 | 0.7616 |
| 0.2756 | 40.7692 | 530 | 0.5846 | 0.5478 | 0.5846 | 0.7646 |
| 0.2756 | 40.9231 | 532 | 0.5938 | 0.5404 | 0.5938 | 0.7706 |
| 0.2756 | 41.0769 | 534 | 0.5872 | 0.5778 | 0.5872 | 0.7663 |
| 0.2756 | 41.2308 | 536 | 0.5907 | 0.5949 | 0.5907 | 0.7686 |
| 0.2756 | 41.3846 | 538 | 0.6027 | 0.6003 | 0.6027 | 0.7763 |
| 0.2756 | 41.5385 | 540 | 0.6283 | 0.5896 | 0.6283 | 0.7927 |
| 0.2756 | 41.6923 | 542 | 0.6599 | 0.5246 | 0.6599 | 0.8123 |
| 0.2756 | 41.8462 | 544 | 0.6607 | 0.5422 | 0.6607 | 0.8128 |
| 0.2756 | 42.0 | 546 | 0.6380 | 0.5348 | 0.6380 | 0.7988 |
| 0.2756 | 42.1538 | 548 | 0.6070 | 0.5264 | 0.6070 | 0.7791 |
| 0.2756 | 42.3077 | 550 | 0.5740 | 0.5582 | 0.5740 | 0.7577 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
nblinh/6a137c09-5bf4-4476-9c00-589796ea205e | nblinh | 2025-01-15T11:50:51Z | 6 | 0 | peft | ["peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "base_model:adapter:EleutherAI/pythia-1b", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-15T11:35:43Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6a137c09-5bf4-4476-9c00-589796ea205e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- faf590c77bf66bb4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/faf590c77bf66bb4_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh/6a137c09-5bf4-4476-9c00-589796ea205e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/faf590c77bf66bb4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22f0c3d7-b3a9-4b99-8fa8-01838f3708c6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22f0c3d7-b3a9-4b99-8fa8-01838f3708c6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6a137c09-5bf4-4476-9c00-589796ea205e
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0917
## Model description
More information needed
## Intended uses & limitations
More information needed
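While no intended-use details are given, a minimal, hedged sketch of loading this LoRA adapter on top of its base model with PEFT (standard PEFT usage, not taken from this card; the prompt is a placeholder) might look like:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load EleutherAI/pythia-1b with this repository's LoRA adapter applied
model = AutoPeftModelForCausalLM.from_pretrained("nblinh/6a137c09-5bf4-4476-9c00-589796ea205e")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b")

prompt = "Instruction: say hello"  # placeholder prompt, not from the training data
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```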
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4549 | 0.0209 | 200 | 0.0917 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
SergedeMaria/sergen | SergedeMaria | 2025-01-15T11:47:46Z | 24 | 1 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-01-15T11:07:35Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: serge
---
# Sergen
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `serge` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision and move it to the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')

# Attach the LoRA weights from this repository
pipeline.load_lora_weights('SergedeMaria/sergen', weight_name='lora.safetensors')

# Include the trigger word `serge` in your prompt to activate the LoRA
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
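Continuing from the `pipeline` object built above, a minimal follow-up sketch (the prompt text and output filename are placeholders, not taken from this card) shows the trigger word `serge` placed inside the prompt:

```py
# Placeholder prompt containing the trigger word `serge`; the filename is arbitrary
prompt = "a portrait photo of serge standing in a sunlit garden"
image = pipeline(prompt).images[0]
image.save("serge.png")
```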
|
nhoxinh/afd1a1ea-e930-4525-824d-fd50c0484a7c | nhoxinh | 2025-01-15T11:47:16Z | 9 | 0 | peft | ["peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "base_model:adapter:EleutherAI/pythia-1b", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-01-15T11:35:44Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: afd1a1ea-e930-4525-824d-fd50c0484a7c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- faf590c77bf66bb4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/faf590c77bf66bb4_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/afd1a1ea-e930-4525-824d-fd50c0484a7c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/faf590c77bf66bb4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22f0c3d7-b3a9-4b99-8fa8-01838f3708c6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22f0c3d7-b3a9-4b99-8fa8-01838f3708c6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# afd1a1ea-e930-4525-824d-fd50c0484a7c
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4807 | 0.0209 | 200 | 0.0919 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
dimasik87/f4772de0-fc9b-4362-958d-1191deca1a03
|
dimasik87
| 2025-01-15T11:46:40Z
| 12
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM-1.7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-15T11:38:03Z
|
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f4772de0-fc9b-4362-958d-1191deca1a03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0c0385643e82895c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0c0385643e82895c_train_data.json
type:
field_input: choices
field_instruction: task
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: dimasik87/f4772de0-fc9b-4362-958d-1191deca1a03
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/0c0385643e82895c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7c57bd3b-4993-496e-8eee-7f49fda1578b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7c57bd3b-4993-496e-8eee-7f49fda1578b
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f4772de0-fc9b-4362-958d-1191deca1a03
This model is a fine-tuned version of [unsloth/SmolLM-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM-1.7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 5 | nan |
| 0.0 | 0.0008 | 10 | nan |
| 0.0 | 0.0013 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
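Since the reported evaluation loss is NaN, the adapter weights may not be usable as trained; if you still want to inspect them, a hedged loading sketch with PEFT's Auto class (assuming a recent `peft` release) is:
```py
# Hedged sketch: load the adapter directly; the base model is resolved from the adapter config.
# Note: the card reports Loss: nan, so outputs may be degenerate.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("dimasik87/f4772de0-fc9b-4362-958d-1191deca1a03")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM-1.7B-Instruct")
```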
|
nhung01/f5f1fa2e-51be-4131-b134-568e8b19a0ec
|
nhung01
| 2025-01-15T11:46:20Z
| 7
| 0
|
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"base_model:adapter:EleutherAI/pythia-1b",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T11:35:43Z
|
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f5f1fa2e-51be-4131-b134-568e8b19a0ec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- faf590c77bf66bb4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/faf590c77bf66bb4_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/f5f1fa2e-51be-4131-b134-568e8b19a0ec
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/faf590c77bf66bb4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22f0c3d7-b3a9-4b99-8fa8-01838f3708c6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22f0c3d7-b3a9-4b99-8fa8-01838f3708c6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f5f1fa2e-51be-4131-b134-568e8b19a0ec
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4597 | 0.0209 | 200 | 0.0939 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
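A hedged loading sketch that mirrors the training configuration above (base model quantized to 8-bit with bitsandbytes before the adapter is attached); it assumes a CUDA GPU with bitsandbytes installed:
```py
# Hedged sketch: load pythia-1b in 8-bit (matching load_in_8bit: true above), then attach the adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "EleutherAI/pythia-1b"
adapter_id = "nhung01/f5f1fa2e-51be-4131-b134-568e8b19a0ec"

base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```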
|
lhong4759/0ef6a4e1-1cac-42bc-b531-b36bf5c1d9f0
|
lhong4759
| 2025-01-15T11:45:55Z
| 6
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T11:30:07Z
|
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0ef6a4e1-1cac-42bc-b531-b36bf5c1d9f0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 82602250e0cc45ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/82602250e0cc45ea_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lhong4759/0ef6a4e1-1cac-42bc-b531-b36bf5c1d9f0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/82602250e0cc45ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 11bcdc0d-bc69-4f77-a28c-0c48d7e59612
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 11bcdc0d-bc69-4f77-a28c-0c48d7e59612
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0ef6a4e1-1cac-42bc-b531-b36bf5c1d9f0
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.783 | 0.0338 | 200 | 0.6312 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
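A hedged generation sketch with the Qwen2-0.5B base plus this adapter; the prompt is illustrative, as the card does not document an expected format:
```py
# Hedged sketch: attach the LoRA adapter to Qwen2-0.5B and generate a short completion.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2-0.5B"
adapter_id = "lhong4759/0ef6a4e1-1cac-42bc-b531-b36bf5c1d9f0"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)

inputs = tokenizer("Explain LoRA fine-tuning in one sentence.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```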
|
ParitKansal/donut-docvqa-demo
|
ParitKansal
| 2025-01-15T11:45:32Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-01-15T11:43:42Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
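A hedged starting-point sketch, assuming this checkpoint follows the usual Donut/DocVQA setup (the processor class, task prompt, and question below are assumptions, not documented in this card):
```py
# Assumed usage: Donut-style document VQA. Adjust the processor class and prompt if the checkpoint differs.
from transformers import DonutProcessor, VisionEncoderDecoderModel
from PIL import Image

repo_id = "ParitKansal/donut-docvqa-demo"
processor = DonutProcessor.from_pretrained(repo_id)
model = VisionEncoderDecoderModel.from_pretrained(repo_id)

image = Image.open("document.png").convert("RGB")  # illustrative input file
task_prompt = "<s_docvqa><s_question>What is the invoice number?</s_question><s_answer>"

pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=128)
print(processor.batch_decode(outputs)[0])
```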
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
soyanagomez/raquel
|
soyanagomez
| 2025-01-15T11:44:43Z
| 24
| 1
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-15T10:39:01Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Raquel
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision on the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Attach this LoRA; remember to include the trigger word `TOK` in your prompt
pipeline.load_lora_weights('soyanagomez/raquel', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
thangla01/32ec91b8-2cc0-4d87-80b6-c1d5941469bd
|
thangla01
| 2025-01-15T11:44:22Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T11:29:23Z
|
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 32ec91b8-2cc0-4d87-80b6-c1d5941469bd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 82602250e0cc45ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/82602250e0cc45ea_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thangla01/32ec91b8-2cc0-4d87-80b6-c1d5941469bd
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/82602250e0cc45ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 11bcdc0d-bc69-4f77-a28c-0c48d7e59612
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 11bcdc0d-bc69-4f77-a28c-0c48d7e59612
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 32ec91b8-2cc0-4d87-80b6-c1d5941469bd
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7812 | 0.0338 | 200 | 0.6308 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
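A hedged sketch showing one common way to use such an adapter: merging the LoRA weights into the Qwen2-0.5B base for adapter-free inference (the output path is illustrative):
```py
# Hedged sketch: merge the LoRA deltas into the base weights, then save a standalone model.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B")
model = PeftModel.from_pretrained(base, "thangla01/32ec91b8-2cc0-4d87-80b6-c1d5941469bd")
merged = model.merge_and_unload()            # folds LoRA weights into the base model
merged.save_pretrained("qwen2-0.5b-merged")  # illustrative output directory
```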
|
satishpednekar/sbxcertchat
|
satishpednekar
| 2025-01-15T11:43:54Z
| 25
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-01-15T11:04:47Z
|
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** satishpednekar
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
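A hedged inference sketch using Unsloth's `FastLanguageModel` loader; whether this repository holds merged weights or an adapter is not stated, and the sequence length and prompt below are illustrative:
```py
# Hedged sketch: load the checkpoint with Unsloth in 4-bit and run a short generation.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="satishpednekar/sbxcertchat",
    max_seq_length=2048,   # illustrative
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Question: What is this model for?\nAnswer:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```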
|
tmpmodelsave/fixed_no_sft_type3_4k_step200
|
tmpmodelsave
| 2025-01-15T11:43:10Z
| 28
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-15T11:39:49Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
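A minimal sketch, assuming standard `transformers` causal-LM usage (the prompt and generation settings are illustrative; the card does not document an intended format):
```py
# Hedged sketch: generic text-generation loading for this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "tmpmodelsave/fixed_no_sft_type3_4k_step200"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```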
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nhung03/e8c89c1b-851a-4bb9-9a00-897c2e9116c3
|
nhung03
| 2025-01-15T11:42:20Z
| 8
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T11:29:10Z
|
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e8c89c1b-851a-4bb9-9a00-897c2e9116c3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 82602250e0cc45ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/82602250e0cc45ea_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/e8c89c1b-851a-4bb9-9a00-897c2e9116c3
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/82602250e0cc45ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 11bcdc0d-bc69-4f77-a28c-0c48d7e59612
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 11bcdc0d-bc69-4f77-a28c-0c48d7e59612
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e8c89c1b-851a-4bb9-9a00-897c2e9116c3
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6309
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7825 | 0.0338 | 200 | 0.6309 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF
|
mradermacher
| 2025-01-15T11:41:01Z
| 529
| 0
|
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"axolotl",
"it",
"en",
"dataset:ReDiX/everyday-conversations-ita",
"dataset:ReDiX/dataforge-cleaned",
"base_model:ReDiX/Qwen2.5-0.5B-Instruct-ITA",
"base_model:quantized:ReDiX/Qwen2.5-0.5B-Instruct-ITA",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-15T11:04:58Z
|
---
base_model: ReDiX/Qwen2.5-0.5B-Instruct-ITA
datasets:
- ReDiX/everyday-conversations-ita
- ReDiX/dataforge-cleaned
language:
- it
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
- axolotl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ReDiX/Qwen2.5-0.5B-Instruct-ITA
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
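As a hedged example, the Q4_K_M file from the table below can be run with llama-cpp-python (assuming `llama-cpp-python` and `huggingface_hub` are installed; the context size and chat settings are illustrative):
```py
# Hedged sketch: download and run the i1-Q4_K_M quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF",
    filename="Qwen2.5-0.5B-Instruct-ITA.i1-Q4_K_M.gguf",
    n_ctx=4096,  # illustrative context size
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Ciao! Presentati in una frase."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```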
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-IQ1_M.gguf) | i1-IQ1_M | 0.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-IQ2_S.gguf) | i1-IQ2_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-IQ2_M.gguf) | i1-IQ2_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-IQ3_S.gguf) | i1-IQ3_S | 0.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-Q2_K.gguf) | i1-Q2_K | 0.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-IQ3_M.gguf) | i1-IQ3_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-Q4_0.gguf) | i1-Q4_0 | 0.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-Q4_1.gguf) | i1-Q4_1 | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-ITA-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-ITA.i1-Q6_K.gguf) | i1-Q6_K | 0.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
demohong/ef2a481f-704a-497a-98fb-8599afd5e445
|
demohong
| 2025-01-15T11:40:11Z
| 10
| 0
|
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-15T10:32:07Z
|
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ef2a481f-704a-497a-98fb-8599afd5e445
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a54dfbc141335bb5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a54dfbc141335bb5_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: demohong/ef2a481f-704a-497a-98fb-8599afd5e445
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a54dfbc141335bb5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e302ed25-6563-4139-a251-885ddd901a8d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e302ed25-6563-4139-a251-885ddd901a8d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ef2a481f-704a-497a-98fb-8599afd5e445
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4013 | 0.0032 | 200 | 2.4017 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
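A hedged loading-and-generation sketch using PEFT's Auto class, which resolves the Qwen/Qwen2-0.5B base from the adapter config (the prompt is illustrative):
```py
# Hedged sketch: load the adapter and generate; the base model comes from the adapter config.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("demohong/ef2a481f-704a-497a-98fb-8599afd5e445")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")

inputs = tokenizer("Instruction: summarize LoRA in one line.\nResponse:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```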
|