Dataset columns (type and observed range):

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-21 06:31:18 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 567 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-21 06:30:37 |
| card | string | length 11 – 1.01M |

Each record below lists these fields in order, separated by `|`, with the full model card text in the `card` field.
alyzbane/2025-01-21-15-57-43-swin-base-patch4-window7-224
|
alyzbane
| 2025-01-21T16:06:32Z | 83 | 0 |
transformers
|
[
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-base-patch4-window7-224",
"base_model:finetune:microsoft/swin-base-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-01-21T16:06:16Z |
---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-base-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: 2025-01-21-15-57-43-swin-base-patch4-window7-224
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2025-01-21-15-57-43-swin-base-patch4-window7-224
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0384
- Precision: 0.9928
- Recall: 0.9926
- F1: 0.9926
- Accuracy: 0.992
- Top1 Accuracy: 0.9926
- Error Rate: 0.0080
## Model description
More information needed
## Intended uses & limitations
More information needed
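
Pending details from the author, a minimal inference sketch (assuming the checkpoint is used through the standard `transformers` image-classification pipeline; the repo id is taken from this card and the image path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned Swin checkpoint from the Hub (repo id from this card).
classifier = pipeline(
    "image-classification",
    model="alyzbane/2025-01-21-15-57-43-swin-base-patch4-window7-224",
)

# "example.jpg" is a placeholder path; pass any RGB image.
print(classifier("example.jpg", top_k=3))
```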
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3407
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|
| 0.732 | 1.0 | 34 | 0.3980 | 0.9165 | 0.8741 | 0.8590 | 0.8649 | 0.8741 | 0.1351 |
| 0.2462 | 2.0 | 68 | 0.1051 | 0.9538 | 0.9481 | 0.9484 | 0.9499 | 0.9481 | 0.0501 |
| 0.1991 | 3.0 | 102 | 0.0384 | 0.9928 | 0.9926 | 0.9926 | 0.992 | 0.9926 | 0.0080 |
| 0.1559 | 4.0 | 136 | 0.0890 | 0.9802 | 0.9778 | 0.9780 | 0.9777 | 0.9778 | 0.0223 |
| 0.1024 | 5.0 | 170 | 0.1092 | 0.9863 | 0.9852 | 0.9852 | 0.9846 | 0.9852 | 0.0154 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
kk-aivio/d4645d9f-57ef-4051-956e-6f2dce2b2f36
|
kk-aivio
| 2025-01-21T16:05:47Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:peft-internal-testing/tiny-dummy-qwen2",
"base_model:adapter:peft-internal-testing/tiny-dummy-qwen2",
"region:us"
] | null | 2025-01-21T16:05:22Z |
---
library_name: peft
base_model: peft-internal-testing/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d4645d9f-57ef-4051-956e-6f2dce2b2f36
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: peft-internal-testing/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e775cbf0abe327ae_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e775cbf0abe327ae_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/d4645d9f-57ef-4051-956e-6f2dce2b2f36
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/e775cbf0abe327ae_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b84adb33-79b3-4680-b053-92bfa0dc3c21
wandb_project: Birthday-SN56-11-Gradients-On-Demand
wandb_run: your_name
wandb_runid: b84adb33-79b3-4680-b053-92bfa0dc3c21
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d4645d9f-57ef-4051-956e-6f2dce2b2f36
This model is a fine-tuned version of [peft-internal-testing/tiny-dummy-qwen2](https://huggingface.co/peft-internal-testing/tiny-dummy-qwen2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9291
## Model description
More information needed
## Intended uses & limitations
More information needed
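
Pending details from the author, note that this repository contains only a LoRA adapter; a minimal sketch of attaching it to the base model with `peft` (repo ids from this card, prompt and generation settings are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "peft-internal-testing/tiny-dummy-qwen2"
adapter_id = "kk-aivio/d4645d9f-57ef-4051-956e-6f2dce2b2f36"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```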
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.9328 | 0.0012 | 1 | 11.9293 |
| 11.9321 | 0.0037 | 3 | 11.9293 |
| 11.9371 | 0.0074 | 6 | 11.9292 |
| 11.9277 | 0.0111 | 9 | 11.9291 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
GoodwinDad/msdovgal
|
GoodwinDad
| 2025-01-21T16:04:40Z | 37 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-21T12:47:16Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: msdovgal
---
# Msdovgal
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `msdovgal` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('GoodwinDad/msdovgal', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Best000/f771a33c-4cc6-4402-a8e2-0c95b1f98ad9
|
Best000
| 2025-01-21T16:04:36Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-01-21T16:04:08Z |
---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f771a33c-4cc6-4402-a8e2-0c95b1f98ad9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloom-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5c4eef0d51e921ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5c4eef0d51e921ea_train_data.json
type:
field_input: world_literals
field_instruction: logical_form_pretty
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/f771a33c-4cc6-4402-a8e2-0c95b1f98ad9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/5c4eef0d51e921ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 847dcfd1-dbaf-4b00-af61-47e0ea3d66d1
wandb_project: Birthday-SN56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 847dcfd1-dbaf-4b00-af61-47e0ea3d66d1
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f771a33c-4cc6-4402-a8e2-0c95b1f98ad9
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1095
## Model description
More information needed
## Intended uses & limitations
More information needed
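
If a standalone checkpoint is preferred over keeping the adapter separate, the LoRA weights can be merged into the base model; a sketch assuming the repo ids from this card (the output directory is a placeholder):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
model = PeftModel.from_pretrained(base, "Best000/f771a33c-4cc6-4402-a8e2-0c95b1f98ad9")

# Fold the LoRA deltas into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()

# Placeholder output directory.
merged.save_pretrained("bloom-560m-merged")
AutoTokenizer.from_pretrained("bigscience/bloom-560m").save_pretrained("bloom-560m-merged")
```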
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 14.3622 | 0.0033 | 1 | 3.6128 |
| 14.2254 | 0.0098 | 3 | 3.5964 |
| 14.6994 | 0.0197 | 6 | 3.4341 |
| 12.7551 | 0.0295 | 9 | 3.1095 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
leonel4rd/Asiranime
|
leonel4rd
| 2025-01-21T16:04:30Z | 58 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-01-21T16:04:18Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/27357863.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: AsirAnimeillustrationflux
---
# Asiranime
<Gallery />
## Trigger words
You should use `AsirAnimeillustrationflux` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/leonel4rd/Asiranime/tree/main) them in the Files & versions tab.
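## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
A sketch following the FLUX.1-dev LoRA-loading pattern used elsewhere on the Hub; the adapter filename inside the repo is an assumption, so pass `weight_name` explicitly if loading fails:
```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")

# If the adapter file is not picked up automatically, pass weight_name="<file>.safetensors".
pipeline.load_lora_weights("leonel4rd/Asiranime")

# The trigger word from this card goes in the prompt.
image = pipeline("AsirAnimeillustrationflux, a quiet village street at dusk").images[0]
image.save("asiranime.png")
```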
|
chchen/Llama-3.1-8B-Instruct-Reward-1000
|
chchen
| 2025-01-21T16:03:07Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-01-17T15:50:17Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
license: llama3.1
metrics:
- accuracy
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Llama-3.1-8B-Instruct-reward-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct-reward-1000
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the bct_non_cot_dpo_1000 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2493
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.2705 | 1.7778 | 50 | 0.2345 | 0.9 |
| 0.0559 | 3.5556 | 100 | 0.6941 | 0.88 |
| 0.0063 | 5.3333 | 150 | 0.9227 | 0.89 |
| 0.0115 | 7.1111 | 200 | 1.2608 | 0.88 |
| 0.0025 | 8.8889 | 250 | 1.2527 | 0.89 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.45.2
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.20.0
|
kk-aivio/df0cc05f-3e49-4926-952a-5499a46d59a7
|
kk-aivio
| 2025-01-21T16:02:49Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:adapter:sethuiyer/Medichat-Llama3-8B",
"license:other",
"region:us"
] | null | 2025-01-21T16:00:49Z |
---
library_name: peft
license: other
base_model: sethuiyer/Medichat-Llama3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: df0cc05f-3e49-4926-952a-5499a46d59a7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 609aff797e4a07c0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/609aff797e4a07c0_train_data.json
type:
field_instruction: prompt
field_output: generated_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/df0cc05f-3e49-4926-952a-5499a46d59a7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/609aff797e4a07c0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ad699451-0082-42e9-b976-cb46388d2b2c
wandb_project: Birthday-SN56-11-Gradients-On-Demand
wandb_run: your_name
wandb_runid: ad699451-0082-42e9-b976-cb46388d2b2c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# df0cc05f-3e49-4926-952a-5499a46d59a7
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4234 | 0.0007 | 1 | 0.6777 |
| 0.47 | 0.0021 | 3 | 0.6749 |
| 0.4296 | 0.0043 | 6 | 0.6138 |
| 0.8744 | 0.0064 | 9 | 0.4560 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
AquilaX-AI/DB-Summarizer
|
AquilaX-AI
| 2025-01-21T16:02:30Z | 63 | 0 | null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2024-10-08T15:00:02Z |
---
license: apache-2.0
---
## Inference code
```bash
pip install transformers[torch]
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
import torch
import time

tokenizer = AutoTokenizer.from_pretrained("AquilaX-AI/DB-Summarizer")
model = AutoModelForCausalLM.from_pretrained("AquilaX-AI/DB-Summarizer")

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)

# Example inputs: the user's question and the raw database result to summarize.
question = "How many Vulnerability found today"
db_result = "243"

# ChatML-style prompt expected by the model.
summ_inp = f"""<|im_start|>system
Generate a clear and accurate response based on the user's question and the database output.<|im_end|>
<|im_start|>user
user_question:
{question}
db_response:
{db_result}<|im_end|>
<|im_start|>assistant"""

start = time.time()

encodeds = tokenizer(summ_inp, return_tensors="pt", truncation=True).input_ids.to(device)
text_streamer = TextStreamer(tokenizer, skip_prompt=True)  # stream only the generated answer

response = model.generate(
    input_ids=encodeds,
    streamer=text_streamer,
    max_new_tokens=512,
    use_cache=True,
    pad_token_id=151645,
    eos_token_id=151645,
    num_return_sequences=1,
)

end = time.time()
print(f"Time taken: {end - start}")
```
|
nblinh/77245b95-430e-4200-b6ea-acdbedcc2997
|
nblinh
| 2025-01-21T16:02:12Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Maykeye/TinyLLama-v0",
"base_model:adapter:Maykeye/TinyLLama-v0",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T15:57:49Z |
---
library_name: peft
license: apache-2.0
base_model: Maykeye/TinyLLama-v0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 77245b95-430e-4200-b6ea-acdbedcc2997
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Maykeye/TinyLLama-v0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 90bd143c6078d776_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/90bd143c6078d776_train_data.json
type:
field_instruction: source
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh/77245b95-430e-4200-b6ea-acdbedcc2997
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/90bd143c6078d776_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 26eabe94-c6a5-4bc1-b266-ff2c83f293ea
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 26eabe94-c6a5-4bc1-b266-ff2c83f293ea
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 77245b95-430e-4200-b6ea-acdbedcc2997
This model is a fine-tuned version of [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 9.4738
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 9.1882 | 0.0207 | 200 | 9.4738 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nbninh/5c91b54c-e356-4f6d-8756-71674a2618f9
|
nbninh
| 2025-01-21T16:01:44Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Maykeye/TinyLLama-v0",
"base_model:adapter:Maykeye/TinyLLama-v0",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T15:57:40Z |
---
library_name: peft
license: apache-2.0
base_model: Maykeye/TinyLLama-v0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5c91b54c-e356-4f6d-8756-71674a2618f9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Maykeye/TinyLLama-v0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 90bd143c6078d776_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/90bd143c6078d776_train_data.json
type:
field_instruction: source
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/5c91b54c-e356-4f6d-8756-71674a2618f9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/90bd143c6078d776_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 26eabe94-c6a5-4bc1-b266-ff2c83f293ea
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 26eabe94-c6a5-4bc1-b266-ff2c83f293ea
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5c91b54c-e356-4f6d-8756-71674a2618f9
This model is a fine-tuned version of [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 9.4135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 9.1316 | 0.0207 | 200 | 9.4135 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k1_task7_organization
|
MayBashendy
| 2025-01-21T15:58:33Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-21T15:56:00Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k1_task7_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k1_task7_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9319
- Qwk: 0.0982
- Mse: 0.9319
- Rmse: 0.9653
## Model description
More information needed
## Intended uses & limitations
More information needed
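
Pending details from the author, a hedged inference sketch: the tags mark this as a text-classification head on AraBERT, and the Qwk/MSE metrics suggest an ordinal organization-scoring task, so the exact label semantics depend on the head the Trainer exported (the Arabic sentence below is a placeholder):

```python
from transformers import pipeline

scorer = pipeline(
    "text-classification",
    model="MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k1_task7_organization",
)

# Placeholder essay snippet; the returned label/score comes from the organization-scoring head.
print(scorer("هذا نص تجريبي لتقييم تنظيم المقال."))
```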
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.6667 | 2 | 4.2154 | -0.0060 | 4.2154 | 2.0531 |
| No log | 1.3333 | 4 | 2.1734 | -0.0316 | 2.1734 | 1.4742 |
| No log | 2.0 | 6 | 1.8642 | 0.0298 | 1.8642 | 1.3653 |
| No log | 2.6667 | 8 | 1.6716 | 0.0233 | 1.6716 | 1.2929 |
| No log | 3.3333 | 10 | 0.9513 | 0.0860 | 0.9513 | 0.9753 |
| No log | 4.0 | 12 | 1.3112 | 0.0390 | 1.3112 | 1.1451 |
| No log | 4.6667 | 14 | 1.7222 | 0.0049 | 1.7222 | 1.3123 |
| No log | 5.3333 | 16 | 1.2439 | 0.0686 | 1.2439 | 1.1153 |
| No log | 6.0 | 18 | 0.8313 | 0.2813 | 0.8313 | 0.9117 |
| No log | 6.6667 | 20 | 0.8361 | 0.2739 | 0.8361 | 0.9144 |
| No log | 7.3333 | 22 | 1.0473 | 0.1184 | 1.0473 | 1.0234 |
| No log | 8.0 | 24 | 1.5759 | 0.0898 | 1.5759 | 1.2554 |
| No log | 8.6667 | 26 | 1.2758 | 0.0312 | 1.2758 | 1.1295 |
| No log | 9.3333 | 28 | 0.9355 | 0.2339 | 0.9355 | 0.9672 |
| No log | 10.0 | 30 | 0.8571 | 0.2277 | 0.8571 | 0.9258 |
| No log | 10.6667 | 32 | 0.9790 | 0.1600 | 0.9790 | 0.9895 |
| No log | 11.3333 | 34 | 1.1933 | 0.1182 | 1.1933 | 1.0924 |
| No log | 12.0 | 36 | 1.0288 | 0.1641 | 1.0288 | 1.0143 |
| No log | 12.6667 | 38 | 0.9725 | 0.1531 | 0.9725 | 0.9862 |
| No log | 13.3333 | 40 | 1.0411 | 0.0752 | 1.0411 | 1.0203 |
| No log | 14.0 | 42 | 1.1477 | 0.1056 | 1.1477 | 1.0713 |
| No log | 14.6667 | 44 | 1.1977 | 0.0996 | 1.1977 | 1.0944 |
| No log | 15.3333 | 46 | 1.2576 | -0.0026 | 1.2576 | 1.1214 |
| No log | 16.0 | 48 | 1.4412 | 0.0858 | 1.4412 | 1.2005 |
| No log | 16.6667 | 50 | 1.3153 | 0.0020 | 1.3153 | 1.1468 |
| No log | 17.3333 | 52 | 1.1836 | 0.0972 | 1.1836 | 1.0879 |
| No log | 18.0 | 54 | 1.0634 | 0.1653 | 1.0634 | 1.0312 |
| No log | 18.6667 | 56 | 1.1575 | 0.0671 | 1.1575 | 1.0759 |
| No log | 19.3333 | 58 | 1.3467 | 0.0858 | 1.3467 | 1.1605 |
| No log | 20.0 | 60 | 1.2003 | 0.0999 | 1.2003 | 1.0956 |
| No log | 20.6667 | 62 | 1.0722 | 0.1820 | 1.0722 | 1.0355 |
| No log | 21.3333 | 64 | 1.0590 | 0.1449 | 1.0590 | 1.0291 |
| No log | 22.0 | 66 | 1.1925 | 0.0966 | 1.1925 | 1.0920 |
| No log | 22.6667 | 68 | 1.2008 | 0.1281 | 1.2008 | 1.0958 |
| No log | 23.3333 | 70 | 1.1068 | 0.1472 | 1.1068 | 1.0521 |
| No log | 24.0 | 72 | 1.0672 | 0.0834 | 1.0672 | 1.0331 |
| No log | 24.6667 | 74 | 1.0327 | 0.1917 | 1.0327 | 1.0162 |
| No log | 25.3333 | 76 | 0.9911 | 0.1490 | 0.9911 | 0.9956 |
| No log | 26.0 | 78 | 0.9620 | 0.1234 | 0.9620 | 0.9808 |
| No log | 26.6667 | 80 | 0.8954 | 0.2449 | 0.8954 | 0.9462 |
| No log | 27.3333 | 82 | 0.8566 | 0.2809 | 0.8566 | 0.9255 |
| No log | 28.0 | 84 | 0.8661 | 0.2479 | 0.8661 | 0.9306 |
| No log | 28.6667 | 86 | 0.9168 | 0.1547 | 0.9168 | 0.9575 |
| No log | 29.3333 | 88 | 0.9412 | 0.2036 | 0.9412 | 0.9702 |
| No log | 30.0 | 90 | 0.9896 | 0.2238 | 0.9896 | 0.9948 |
| No log | 30.6667 | 92 | 1.0643 | 0.1479 | 1.0643 | 1.0317 |
| No log | 31.3333 | 94 | 1.0855 | 0.1545 | 1.0855 | 1.0419 |
| No log | 32.0 | 96 | 1.0320 | 0.1225 | 1.0320 | 1.0159 |
| No log | 32.6667 | 98 | 0.9797 | 0.1196 | 0.9797 | 0.9898 |
| No log | 33.3333 | 100 | 0.9685 | 0.1578 | 0.9685 | 0.9841 |
| No log | 34.0 | 102 | 0.9433 | 0.1231 | 0.9433 | 0.9712 |
| No log | 34.6667 | 104 | 0.9862 | 0.2164 | 0.9862 | 0.9931 |
| No log | 35.3333 | 106 | 1.0853 | 0.0443 | 1.0853 | 1.0418 |
| No log | 36.0 | 108 | 1.0853 | 0.0741 | 1.0853 | 1.0418 |
| No log | 36.6667 | 110 | 1.0564 | 0.0995 | 1.0564 | 1.0278 |
| No log | 37.3333 | 112 | 1.0430 | 0.0991 | 1.0430 | 1.0213 |
| No log | 38.0 | 114 | 1.0199 | 0.1600 | 1.0199 | 1.0099 |
| No log | 38.6667 | 116 | 1.0008 | 0.1682 | 1.0008 | 1.0004 |
| No log | 39.3333 | 118 | 0.9581 | 0.1682 | 0.9581 | 0.9788 |
| No log | 40.0 | 120 | 0.9366 | 0.2273 | 0.9366 | 0.9678 |
| No log | 40.6667 | 122 | 0.9447 | 0.2303 | 0.9447 | 0.9720 |
| No log | 41.3333 | 124 | 0.9732 | 0.1682 | 0.9732 | 0.9865 |
| No log | 42.0 | 126 | 0.9248 | 0.2652 | 0.9248 | 0.9617 |
| No log | 42.6667 | 128 | 0.9043 | 0.2061 | 0.9043 | 0.9509 |
| No log | 43.3333 | 130 | 0.9117 | 0.2495 | 0.9117 | 0.9548 |
| No log | 44.0 | 132 | 0.8953 | 0.2495 | 0.8953 | 0.9462 |
| No log | 44.6667 | 134 | 0.8936 | 0.2681 | 0.8936 | 0.9453 |
| No log | 45.3333 | 136 | 0.9254 | 0.2043 | 0.9254 | 0.9620 |
| No log | 46.0 | 138 | 0.9858 | 0.1846 | 0.9858 | 0.9929 |
| No log | 46.6667 | 140 | 0.9915 | 0.1814 | 0.9915 | 0.9957 |
| No log | 47.3333 | 142 | 1.0114 | 0.1446 | 1.0114 | 1.0057 |
| No log | 48.0 | 144 | 1.0252 | 0.1725 | 1.0252 | 1.0125 |
| No log | 48.6667 | 146 | 1.0566 | 0.1401 | 1.0566 | 1.0279 |
| No log | 49.3333 | 148 | 1.0272 | 0.1725 | 1.0272 | 1.0135 |
| No log | 50.0 | 150 | 1.0131 | 0.1725 | 1.0131 | 1.0065 |
| No log | 50.6667 | 152 | 0.9593 | 0.0685 | 0.9593 | 0.9795 |
| No log | 51.3333 | 154 | 0.9186 | 0.1253 | 0.9186 | 0.9584 |
| No log | 52.0 | 156 | 0.9174 | 0.1219 | 0.9174 | 0.9578 |
| No log | 52.6667 | 158 | 0.9301 | 0.1253 | 0.9301 | 0.9644 |
| No log | 53.3333 | 160 | 0.9585 | 0.0325 | 0.9585 | 0.9790 |
| No log | 54.0 | 162 | 0.9754 | 0.0685 | 0.9754 | 0.9876 |
| No log | 54.6667 | 164 | 0.9639 | 0.0912 | 0.9639 | 0.9818 |
| No log | 55.3333 | 166 | 0.9458 | 0.1871 | 0.9458 | 0.9725 |
| No log | 56.0 | 168 | 0.9432 | 0.1273 | 0.9432 | 0.9712 |
| No log | 56.6667 | 170 | 0.9894 | 0.1682 | 0.9894 | 0.9947 |
| No log | 57.3333 | 172 | 1.0290 | 0.1395 | 1.0290 | 1.0144 |
| No log | 58.0 | 174 | 1.0209 | 0.1395 | 1.0209 | 1.0104 |
| No log | 58.6667 | 176 | 0.9581 | 0.1682 | 0.9581 | 0.9788 |
| No log | 59.3333 | 178 | 0.9120 | 0.2203 | 0.9120 | 0.9550 |
| No log | 60.0 | 180 | 0.9185 | 0.2109 | 0.9185 | 0.9584 |
| No log | 60.6667 | 182 | 0.9187 | 0.2077 | 0.9187 | 0.9585 |
| No log | 61.3333 | 184 | 0.9137 | 0.1820 | 0.9137 | 0.9559 |
| No log | 62.0 | 186 | 0.9230 | 0.1888 | 0.9230 | 0.9607 |
| No log | 62.6667 | 188 | 0.9289 | 0.2486 | 0.9289 | 0.9638 |
| No log | 63.3333 | 190 | 0.9298 | 0.2192 | 0.9298 | 0.9643 |
| No log | 64.0 | 192 | 0.9246 | 0.2192 | 0.9246 | 0.9616 |
| No log | 64.6667 | 194 | 0.9413 | 0.2192 | 0.9413 | 0.9702 |
| No log | 65.3333 | 196 | 0.9668 | 0.1600 | 0.9668 | 0.9833 |
| No log | 66.0 | 198 | 0.9989 | 0.1600 | 0.9989 | 0.9994 |
| No log | 66.6667 | 200 | 0.9965 | 0.1600 | 0.9965 | 0.9983 |
| No log | 67.3333 | 202 | 0.9830 | 0.1600 | 0.9830 | 0.9914 |
| No log | 68.0 | 204 | 0.9687 | 0.1785 | 0.9687 | 0.9842 |
| No log | 68.6667 | 206 | 0.9718 | 0.2102 | 0.9718 | 0.9858 |
| No log | 69.3333 | 208 | 0.9771 | 0.2053 | 0.9771 | 0.9885 |
| No log | 70.0 | 210 | 0.9827 | 0.1825 | 0.9827 | 0.9913 |
| No log | 70.6667 | 212 | 0.9785 | 0.1825 | 0.9785 | 0.9892 |
| No log | 71.3333 | 214 | 0.9779 | 0.0627 | 0.9779 | 0.9889 |
| No log | 72.0 | 216 | 0.9670 | 0.0635 | 0.9670 | 0.9834 |
| No log | 72.6667 | 218 | 0.9447 | 0.0627 | 0.9447 | 0.9719 |
| No log | 73.3333 | 220 | 0.9350 | 0.0275 | 0.9350 | 0.9670 |
| No log | 74.0 | 222 | 0.9266 | 0.0602 | 0.9266 | 0.9626 |
| No log | 74.6667 | 224 | 0.9231 | 0.1506 | 0.9231 | 0.9608 |
| No log | 75.3333 | 226 | 0.9273 | 0.1179 | 0.9273 | 0.9630 |
| No log | 76.0 | 228 | 0.9238 | 0.1506 | 0.9238 | 0.9611 |
| No log | 76.6667 | 230 | 0.9227 | 0.0602 | 0.9227 | 0.9606 |
| No log | 77.3333 | 232 | 0.9349 | 0.0640 | 0.9349 | 0.9669 |
| No log | 78.0 | 234 | 0.9516 | 0.0692 | 0.9516 | 0.9755 |
| No log | 78.6667 | 236 | 0.9618 | 0.0692 | 0.9618 | 0.9807 |
| No log | 79.3333 | 238 | 0.9593 | 0.0692 | 0.9593 | 0.9795 |
| No log | 80.0 | 240 | 0.9582 | 0.0339 | 0.9582 | 0.9789 |
| No log | 80.6667 | 242 | 0.9522 | 0.0640 | 0.9522 | 0.9758 |
| No log | 81.3333 | 244 | 0.9438 | 0.0640 | 0.9438 | 0.9715 |
| No log | 82.0 | 246 | 0.9396 | 0.0640 | 0.9396 | 0.9693 |
| No log | 82.6667 | 248 | 0.9340 | 0.0640 | 0.9340 | 0.9665 |
| No log | 83.3333 | 250 | 0.9324 | 0.0912 | 0.9324 | 0.9656 |
| No log | 84.0 | 252 | 0.9336 | 0.1246 | 0.9336 | 0.9662 |
| No log | 84.6667 | 254 | 0.9296 | 0.1825 | 0.9296 | 0.9642 |
| No log | 85.3333 | 256 | 0.9219 | 0.1820 | 0.9219 | 0.9602 |
| No log | 86.0 | 258 | 0.9108 | 0.1487 | 0.9108 | 0.9543 |
| No log | 86.6667 | 260 | 0.8961 | 0.1487 | 0.8961 | 0.9466 |
| No log | 87.3333 | 262 | 0.8845 | 0.1820 | 0.8845 | 0.9405 |
| No log | 88.0 | 264 | 0.8800 | 0.2838 | 0.8800 | 0.9381 |
| No log | 88.6667 | 266 | 0.8824 | 0.2203 | 0.8824 | 0.9394 |
| No log | 89.3333 | 268 | 0.8902 | 0.1661 | 0.8902 | 0.9435 |
| No log | 90.0 | 270 | 0.9027 | 0.1016 | 0.9027 | 0.9501 |
| No log | 90.6667 | 272 | 0.9134 | 0.0982 | 0.9134 | 0.9557 |
| No log | 91.3333 | 274 | 0.9268 | 0.1307 | 0.9268 | 0.9627 |
| No log | 92.0 | 276 | 0.9343 | 0.1307 | 0.9343 | 0.9666 |
| No log | 92.6667 | 278 | 0.9366 | 0.1307 | 0.9366 | 0.9678 |
| No log | 93.3333 | 280 | 0.9405 | 0.1307 | 0.9405 | 0.9698 |
| No log | 94.0 | 282 | 0.9455 | 0.1307 | 0.9455 | 0.9724 |
| No log | 94.6667 | 284 | 0.9463 | 0.1307 | 0.9463 | 0.9728 |
| No log | 95.3333 | 286 | 0.9443 | 0.1307 | 0.9443 | 0.9718 |
| No log | 96.0 | 288 | 0.9403 | 0.1307 | 0.9403 | 0.9697 |
| No log | 96.6667 | 290 | 0.9371 | 0.1307 | 0.9371 | 0.9681 |
| No log | 97.3333 | 292 | 0.9352 | 0.1307 | 0.9352 | 0.9670 |
| No log | 98.0 | 294 | 0.9332 | 0.1307 | 0.9332 | 0.9660 |
| No log | 98.6667 | 296 | 0.9321 | 0.1307 | 0.9321 | 0.9655 |
| No log | 99.3333 | 298 | 0.9318 | 0.0982 | 0.9318 | 0.9653 |
| No log | 100.0 | 300 | 0.9319 | 0.0982 | 0.9319 | 0.9653 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
denbeo/79bb9edb-7359-4a19-a6e0-623aa6e78627
|
denbeo
| 2025-01-21T15:56:46Z | 11 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b-it",
"base_model:adapter:unsloth/gemma-2-9b-it",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T14:41:20Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 79bb9edb-7359-4a19-a6e0-623aa6e78627
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2adafa906e5c58da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2adafa906e5c58da_train_data.json
type:
field_input: ''
field_instruction: anchor
field_output: positive
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: denbeo/79bb9edb-7359-4a19-a6e0-623aa6e78627
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2adafa906e5c58da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aa7d9493-c5b5-4e83-b6c1-5e33a1c2ac47
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: aa7d9493-c5b5-4e83-b6c1-5e33a1c2ac47
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 79bb9edb-7359-4a19-a6e0-623aa6e78627
This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4330
## Model description
More information needed
## Intended uses & limitations
More information needed
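
Pending details from the author, and matching the 8-bit loading used in the config above, an inference sketch with the adapter attached (repo ids from this card; quantization and generation settings are illustrative and require a GPU with `bitsandbytes` installed):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "unsloth/gemma-2-9b-it"
adapter_id = "denbeo/79bb9edb-7359-4a19-a6e0-623aa6e78627"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Write one sentence about tea.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```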
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4058 | 0.2224 | 200 | 1.4330 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
leonel4rd/Miyazakiv2
|
leonel4rd
| 2025-01-21T15:56:06Z | 86 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-01-21T15:55:25Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/ComfyUI_07043_.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Miyazaki anime art
---
# Miyazakiv2
<Gallery />
## Trigger words
You should use `Miyazaki anime art` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/leonel4rd/Miyazakiv2/tree/main) them in the Files & versions tab.
|
aleegis09/3312c8b3-b859-4115-9352-c90217fc439b
|
aleegis09
| 2025-01-21T15:55:04Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-3B",
"base_model:adapter:unsloth/Llama-3.2-3B",
"license:llama3.2",
"region:us"
] | null | 2025-01-21T15:34:25Z |
---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3312c8b3-b859-4115-9352-c90217fc439b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-3B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- c844dd0553d03cf0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c844dd0553d03cf0_train_data.json
type:
field_input: candidate_answers
field_instruction: story
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis09/3312c8b3-b859-4115-9352-c90217fc439b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/c844dd0553d03cf0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 436d9b9b-aa81-4a08-bc24-ec0c4c55947d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 436d9b9b-aa81-4a08-bc24-ec0c4c55947d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3312c8b3-b859-4115-9352-c90217fc439b
This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.563 | 0.0011 | 1 | 1.8274 |
| 0.1848 | 0.0567 | 50 | 0.1193 |
| 0.1106 | 0.1133 | 100 | 0.1022 |
| 0.1394 | 0.1700 | 150 | 0.0974 |
| 0.1041 | 0.2266 | 200 | 0.0921 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
demohong/217282dd-8bdb-41dc-9882-69a757d954d1
|
demohong
| 2025-01-21T15:54:39Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T15:37:33Z |
---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 217282dd-8bdb-41dc-9882-69a757d954d1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2bfda0aa0cefbd5c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2bfda0aa0cefbd5c_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: demohong/217282dd-8bdb-41dc-9882-69a757d954d1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2bfda0aa0cefbd5c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 20951530-b524-445b-8766-809a9e2fe145
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 20951530-b524-445b-8766-809a9e2fe145
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 217282dd-8bdb-41dc-9882-69a757d954d1
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3709
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.4645 | 0.1446 | 200 | 1.3709 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ClarenceDan/83062a9b-61b2-4f71-96c5-a2f228c79434
|
ClarenceDan
| 2025-01-21T15:54:05Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:adapter:EleutherAI/pythia-14m",
"region:us"
] | null | 2025-01-21T15:45:39Z |
---
library_name: peft
base_model: EleutherAI/pythia-14m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 83062a9b-61b2-4f71-96c5-a2f228c79434
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-14m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2396a570a1ea2a9e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2396a570a1ea2a9e_train_data.json
type:
field_instruction: question2
field_output: question1
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/83062a9b-61b2-4f71-96c5-a2f228c79434
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/2396a570a1ea2a9e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 640589b9-10fe-4a37-8ea0-cd558f874b67
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 640589b9-10fe-4a37-8ea0-cd558f874b67
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 83062a9b-61b2-4f71-96c5-a2f228c79434
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 24.8808 | 0.0000 | 1 | 6.5707 |
| 25.8993 | 0.0000 | 3 | 6.5570 |
| 24.0359 | 0.0001 | 6 | 6.4997 |
| 27.9566 | 0.0001 | 9 | 6.4283 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Best000/b728671c-8b8f-4068-a04a-e46003c29c8b
|
Best000
| 2025-01-21T15:53:31Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b-it",
"base_model:adapter:unsloth/gemma-2-9b-it",
"license:gemma",
"region:us"
] | null | 2025-01-21T15:49:38Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b728671c-8b8f-4068-a04a-e46003c29c8b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2adafa906e5c58da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2adafa906e5c58da_train_data.json
type:
field_input: ''
field_instruction: anchor
field_output: positive
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/b728671c-8b8f-4068-a04a-e46003c29c8b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/2adafa906e5c58da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aa7d9493-c5b5-4e83-b6c1-5e33a1c2ac47
wandb_project: Birthday-SN56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: aa7d9493-c5b5-4e83-b6c1-5e33a1c2ac47
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b728671c-8b8f-4068-a04a-e46003c29c8b
This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6570
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6761 | 0.0011 | 1 | 1.7808 |
| 1.6811 | 0.0033 | 3 | 1.7723 |
| 1.6479 | 0.0067 | 6 | 1.6822 |
| 1.5671 | 0.0100 | 9 | 1.6570 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ajtaltarabukin2022/8a4bb738-86f6-4be5-ac9c-d23e9000eb08
|
ajtaltarabukin2022
| 2025-01-21T15:53:09Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/zephyr-sft",
"base_model:adapter:unsloth/zephyr-sft",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:43:48Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/zephyr-sft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8a4bb738-86f6-4be5-ac9c-d23e9000eb08
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/zephyr-sft
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6bb273fb8d3c0253_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6bb273fb8d3c0253_train_data.json
type:
field_input: condition
field_instruction: drugName
field_output: review
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 256
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: ajtaltarabukin2022/8a4bb738-86f6-4be5-ac9c-d23e9000eb08
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 40
micro_batch_size: 2
mlflow_experiment_name: /tmp/6bb273fb8d3c0253_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f44a8599-bd2c-4b24-9468-fb17670debf8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f44a8599-bd2c-4b24-9468-fb17670debf8
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8a4bb738-86f6-4be5-ac9c-d23e9000eb08
This model is a fine-tuned version of [unsloth/zephyr-sft](https://huggingface.co/unsloth/zephyr-sft) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0004 | 10 | nan |
| 0.0 | 0.0006 | 15 | nan |
| 0.0 | 0.0008 | 20 | nan |
| 0.0 | 0.0010 | 25 | nan |
| 0.0 | 0.0012 | 30 | nan |
| 0.0 | 0.0014 | 35 | nan |
| 0.0 | 0.0016 | 40 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
profelyndoncarlson/chile_edu_estan_HyDE
|
profelyndoncarlson
| 2025-01-21T15:51:31Z | 19 | 0 | null |
[
"safetensors",
"t5",
"es",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"region:us"
] | null | 2024-09-25T19:07:22Z |
---
license: apache-2.0
language:
- es
base_model:
- google/byt5-small
---
# **Model Card: Spanish Text Generation with ByT5-Small**
This model is a **text generation** model fine-tuned from [**ByT5-Small**](https://huggingface.co/google/byt5-small), designed to generate coherent and contextually relevant Spanish text based on input prompts. It is optimized for generating content chunks, making it suitable for applications such as content creation, automated writing assistance, and more.
---
## **Model Details**
- **Model Name**: `trained-byt5-small`
- **Architecture**: [ByT5-Small](https://huggingface.co/google/byt5-small) (a byte-level variant of T5)
- **Language**: Spanish
- **Task**: Text Generation
- Given a **prompt**, the model generates a **textual response** that continues or complements the input.
---
## **Intended Use and Applications**
1. **Content Creation**: Assist writers by generating content based on given prompts, helping to overcome writer's block or to expand on ideas.
2. **Automated Writing Assistance**: Provide suggestions or continuations in writing applications, such as blogs, articles, or reports.
3. **Chatbots and Conversational Agents**: Enhance conversational AI systems by generating more natural and contextually appropriate responses in Spanish.
4. **Educational Tools**: Aid in language learning by generating example sentences, explanations, or extended content based on user inputs.
5. **Creative Writing**: Support creative processes by offering story continuations, character developments, or plot ideas.
---
## **How It Was Trained**
### **1. Data Source**
- **Database**: Data was sourced from an internal SQL Server database containing:
- **Prompts** (`input_text`): User queries or initial text snippets.
- **Content** (`output_text`): Corresponding generated or relevant text passages with a high relevance rank (`rank > 4`).
- **Data Selection**: The top 5,000 `(prompt, content)` pairs were selected, keeping only pairs where both `prompt` and `content` were non-empty and the relevance rank was greater than 4, to ensure high-quality training data.
### **2. Preprocessing**
- **Text Splitting**:
- Long `output_text` entries were split into chunks of up to 512 characters to manage model input size and to enhance training efficiency.
- **Tokenization**:
- Utilized the **ByT5Tokenizer** for byte-level tokenization, which is well-suited for handling diverse Spanish text without being constrained by specific token vocabularies.
- Configured with:
- `max_length = 512`
- `doc_stride = 256` (for handling long texts with overlapping contexts)
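The preprocessing script itself is not included here; the following is a minimal sketch of the chunking and tokenization described above. The `chunk_text` helper and the non-overlapping 512-character windows are illustrative assumptions (the overlapping `doc_stride` handling is omitted for brevity):
```python
from transformers import ByT5Tokenizer

tokenizer = ByT5Tokenizer.from_pretrained("google/byt5-small")

def chunk_text(text, max_chars=512):
    """Split a long output_text into consecutive chunks of at most max_chars characters."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def build_example(prompt, content_chunk):
    """Tokenize one (prompt, chunk) pair into encoder inputs and decoder labels."""
    inputs = tokenizer(prompt, max_length=512, truncation=True, return_tensors="pt")
    labels = tokenizer(content_chunk, max_length=512, truncation=True, return_tensors="pt")
    inputs["labels"] = labels["input_ids"]
    return inputs

example = build_example("¿Qué es el sistema solar?", "El sistema solar está formado por el Sol y ...")
```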
### **3. Training Setup**
- **Base Model**: `google/byt5-small`
- **Framework**: PyTorch with Hugging Face Transformers
- **Loss Function**: Cross Entropy Loss (`torch.nn.CrossEntropyLoss`) to train the model to predict the next tokens in the sequence.
- **Optimizer**: AdamW with a learning rate of `5e-5` and weight decay of `0.01`
- **Batch Size**:
- **Training**: 2 per device
- **Evaluation**: 4 per device
- **Epochs**: 3
- **Gradient Accumulation**: 1 (simplified for stable training)
- **Mixed Precision**: Disabled (`fp16 = False`) to prevent issues with NaNs during training.
- **Gradient Checkpointing**: Enabled to optimize memory usage.
- **Early Stopping**: Implemented with a patience of 2 epochs to prevent overfitting.
- **Hardware**: Trained on GPUs if available; otherwise, CPU.
### **4. Data Splits**
- **Training Set**: 80% of the data
- **Validation Set**: half of the remaining 20% (10% of the total)
- **Test Set**: the other half of the remaining 20% (10% of the total)
- Resulting split (see the sketch below):
  - **Training**: 80%
  - **Validation**: 10%
  - **Test**: 10%
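The split above can be reproduced with two calls to `train_test_split`; a minimal sketch with placeholder data and an assumed `random_state`:
```python
from sklearn.model_selection import train_test_split

# pairs: (prompt, content_chunk) tuples built from the database query (placeholder data here)
pairs = [(f"prompt {i}", f"chunk {i}") for i in range(100)]

# 80% train, then the remaining 20% split evenly into validation and test
train_pairs, holdout = train_test_split(pairs, test_size=0.2, random_state=42)
val_pairs, test_pairs = train_test_split(holdout, test_size=0.5, random_state=42)
print(len(train_pairs), len(val_pairs), len(test_pairs))  # 80 10 10
```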
---
## **Model Performance**
- **Training Metrics**:
- **Loss**: Monitored using Cross Entropy Loss on both training and validation sets.
- **Early Stopping**: Training halted if the validation loss did not improve for 2 consecutive evaluation steps.
- **Final Evaluation**:
- **Test Set Loss**: Logged as `test_loss` in the training logs.
- **Performance Notes**: Specific numerical results depend on the data distribution and the training process. Users are encouraged to evaluate the model on their own datasets to gauge performance in their specific applications.
---
## **Usage Example**
Below is a Python example demonstrating how to use the fine-tuned ByT5-Small model for text generation in Spanish. Ensure you have installed the necessary libraries (`transformers`, `torch`) and have the model saved in the `./trained-byt5-small` directory.
```python
import torch
from transformers import T5ForConditionalGeneration, ByT5Tokenizer
# Load the trained model and tokenizer
model_dir = "./trained-byt5-small"
tokenizer = ByT5Tokenizer.from_pretrained(model_dir)
model = T5ForConditionalGeneration.from_pretrained(model_dir)
# Move model to device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()
prompt = "¿Cómo implementar un sistema solar en una escuela primaria?"
# Tokenize the input text
inputs = tokenizer(
prompt,
return_tensors="pt",
max_length=512,
truncation=True
).to(device)
# Generate outputs
with torch.no_grad():
outputs = model.generate(
input_ids=inputs['input_ids'],
attention_mask=inputs['attention_mask'],
max_length=512,
num_return_sequences=1,
do_sample=True,
temperature=0.5,
top_k=2000,
top_p=0.95,
repetition_penalty=1.2,
early_stopping=True
)
# Decode and print the generated text
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Generated Text: {generated_text}")
```
**Output:**
```
Generated Text: Para implementar un sistema solar en una escuela primaria, se puede comenzar por educar a los estudiantes sobre los planetas y sus características. Luego, se pueden realizar actividades prácticas como construir maquetas del sistema solar, organizar excursiones a planetarios o utilizar software educativo interactivo. Además, es importante fomentar la curiosidad y el interés de los alumnos mediante proyectos de investigación y presentaciones sobre diferentes aspectos del espacio.
```
---
## **Limitations and Ethical Considerations**
1. **Bias and Fairness**:
- The model's outputs are influenced by the training data. If the data contains biases, the model may inadvertently reproduce them. Users should be cautious and review generated content for fairness and neutrality.
2. **Domain Specificity**:
- Trained on specific prompt-content pairs from an internal database, the model may perform best within similar contexts. Its performance might degrade when applied to highly specialized or unfamiliar domains.
3. **Quality and Reliability**:
- While the model aims to generate coherent and relevant text, it does not verify factual accuracy. Users should validate the generated content, especially in critical applications.
4. **Data Privacy**:
- Ensure that any data used with this model complies with relevant privacy laws and regulations. The training data should not contain sensitive or personal information unless appropriate consent has been obtained.
5. **Misuse Potential**:
- Like any generative model, it can be used to create misleading or harmful content. Implement safeguards to prevent and mitigate misuse.
---
## **Intended Users**
- **Developers** building Spanish-language content generation tools.
- **Content Creators** seeking automated assistance in generating written material.
- **Researchers** exploring text generation and natural language processing in Spanish.
- **Educators** developing tools for language learning and educational content creation.
- **Businesses** integrating conversational agents or chatbots that generate Spanish text.
---
|
nbninh/ee57924d-53e8-4e97-b20f-717fc6bbc7c1
|
nbninh
| 2025-01-21T15:49:09Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T15:37:28Z |
---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ee57924d-53e8-4e97-b20f-717fc6bbc7c1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2bfda0aa0cefbd5c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2bfda0aa0cefbd5c_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/ee57924d-53e8-4e97-b20f-717fc6bbc7c1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2bfda0aa0cefbd5c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 20951530-b524-445b-8766-809a9e2fe145
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 20951530-b524-445b-8766-809a9e2fe145
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ee57924d-53e8-4e97-b20f-717fc6bbc7c1
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.4684 | 0.1446 | 200 | 1.3703 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
thalllsssss/be59f7ff-a890-4a17-aaa1-dd4c2779c098
|
thalllsssss
| 2025-01-21T15:48:54Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T15:37:23Z |
---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: be59f7ff-a890-4a17-aaa1-dd4c2779c098
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2bfda0aa0cefbd5c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2bfda0aa0cefbd5c_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thalllsssss/be59f7ff-a890-4a17-aaa1-dd4c2779c098
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2bfda0aa0cefbd5c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 20951530-b524-445b-8766-809a9e2fe145
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 20951530-b524-445b-8766-809a9e2fe145
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# be59f7ff-a890-4a17-aaa1-dd4c2779c098
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.4785 | 0.1446 | 200 | 1.3704 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
dimasik1987/ad58dacd-0255-40c4-9c34-030bfda53fa8
|
dimasik1987
| 2025-01-21T15:48:10Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T15:41:51Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ad58dacd-0255-40c4-9c34-030bfda53fa8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e9b53bc832f74d9b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e9b53bc832f74d9b_train_data.json
type:
field_input: text
field_instruction: span
field_output: sentence
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik1987/ad58dacd-0255-40c4-9c34-030bfda53fa8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/e9b53bc832f74d9b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 49ded5b1-1b97-4538-9229-13c02ee293d0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 49ded5b1-1b97-4538-9229-13c02ee293d0
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# ad58dacd-0255-40c4-9c34-030bfda53fa8
This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0051 | 1 | 2.6773 |
| 2.5458 | 0.0255 | 5 | 2.6477 |
| 2.6627 | 0.0510 | 10 | 2.5229 |
| 2.5761 | 0.0764 | 15 | 2.4290 |
| 2.4725 | 0.1019 | 20 | 2.3817 |
| 2.5046 | 0.1274 | 25 | 2.3613 |
| 2.556 | 0.1529 | 30 | 2.3561 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso04/50fd75eb-e97b-4144-9631-1a889c67b5c3
|
lesso04
| 2025-01-21T15:47:53Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T15:41:56Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 50fd75eb-e97b-4144-9631-1a889c67b5c3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-7B-Instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
- e9b53bc832f74d9b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e9b53bc832f74d9b_train_data.json
type:
field_input: text
field_instruction: span
field_output: sentence
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso04/50fd75eb-e97b-4144-9631-1a889c67b5c3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/e9b53bc832f74d9b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 49ded5b1-1b97-4538-9229-13c02ee293d0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 49ded5b1-1b97-4538-9229-13c02ee293d0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 50fd75eb-e97b-4144-9631-1a889c67b5c3
This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7451 | 0.0025 | 1 | 1.5669 |
| 1.2279 | 0.0127 | 5 | 1.5302 |
| 0.6004 | 0.0255 | 10 | 0.9499 |
| 0.2402 | 0.0382 | 15 | 0.6046 |
| 0.2987 | 0.0510 | 20 | 0.5382 |
| 0.1516 | 0.0637 | 25 | 0.5271 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso05/3a6e6ee0-f453-4470-b151-b7a3d3dae622
|
lesso05
| 2025-01-21T15:46:35Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T15:41:52Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3a6e6ee0-f453-4470-b151-b7a3d3dae622
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-7B-Instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
- e9b53bc832f74d9b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e9b53bc832f74d9b_train_data.json
type:
field_input: text
field_instruction: span
field_output: sentence
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso05/3a6e6ee0-f453-4470-b151-b7a3d3dae622
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/e9b53bc832f74d9b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 49ded5b1-1b97-4538-9229-13c02ee293d0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 49ded5b1-1b97-4538-9229-13c02ee293d0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3a6e6ee0-f453-4470-b151-b7a3d3dae622
This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7451 | 0.0025 | 1 | 1.5669 |
| 1.2126 | 0.0127 | 5 | 1.5262 |
| 0.5911 | 0.0255 | 10 | 0.9433 |
| 0.2347 | 0.0382 | 15 | 0.6011 |
| 0.2829 | 0.0510 | 20 | 0.5357 |
| 0.1533 | 0.0637 | 25 | 0.5270 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
atansor/phi_01211
|
atansor
| 2025-01-21T15:45:34Z | 38 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T15:38:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
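No official usage example is provided. The following is an unverified sketch that assumes the checkpoint works with the standard 🤗 Transformers text-generation API; the prompt and generation settings are illustrative only:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "atansor/phi_01211"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Write a short poem about the sea."  # illustrative; the expected prompt format is undocumented
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```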
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ivangrapher/5c6d08ed-b5eb-4153-b48e-a4439eac2d9a
|
ivangrapher
| 2025-01-21T15:43:32Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | 2025-01-21T15:37:18Z |
---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5c6d08ed-b5eb-4153-b48e-a4439eac2d9a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2bfda0aa0cefbd5c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2bfda0aa0cefbd5c_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: ivangrapher/5c6d08ed-b5eb-4153-b48e-a4439eac2d9a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/2bfda0aa0cefbd5c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 15
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 20951530-b524-445b-8766-809a9e2fe145
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 20951530-b524-445b-8766-809a9e2fe145
warmup_steps: 15
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5c6d08ed-b5eb-4153-b48e-a4439eac2d9a
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 1.8159 |
| 7.5123 | 0.0036 | 5 | 1.7340 |
| 6.7056 | 0.0072 | 10 | 1.5800 |
| 6.2732 | 0.0108 | 15 | 1.4605 |
| 6.1126 | 0.0145 | 20 | 1.4481 |
| 5.4533 | 0.0181 | 25 | 1.4292 |
| 5.9138 | 0.0217 | 30 | 1.4265 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
datlaaaaaaa/8c8e22aa-ddbc-48b2-ba0e-3076f62eb257
|
datlaaaaaaa
| 2025-01-21T15:43:17Z | 11 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b-it",
"base_model:adapter:unsloth/gemma-2-9b-it",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T14:40:14Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8c8e22aa-ddbc-48b2-ba0e-3076f62eb257
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2adafa906e5c58da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2adafa906e5c58da_train_data.json
type:
field_input: ''
field_instruction: anchor
field_output: positive
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/8c8e22aa-ddbc-48b2-ba0e-3076f62eb257
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2adafa906e5c58da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aa7d9493-c5b5-4e83-b6c1-5e33a1c2ac47
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: aa7d9493-c5b5-4e83-b6c1-5e33a1c2ac47
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8c8e22aa-ddbc-48b2-ba0e-3076f62eb257
This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4085 | 0.2224 | 200 | 1.4331 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
aysekaya/tr-text-to-sql-model-12
|
aysekaya
| 2025-01-21T15:42:39Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-01-21T15:38:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
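No official usage example is provided. The following is an unverified sketch that assumes the checkpoint follows the standard T5 `text2text-generation` API; the expected input format (question phrasing, schema serialization) is not documented, so the example input is illustrative only:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "aysekaya/tr-text-to-sql-model-12"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative input only ("How many customers are there?"); the real question/schema format is undocumented.
question = "Kaç müşteri var?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```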
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
profelyndoncarlson/reranker
|
profelyndoncarlson
| 2025-01-21T15:41:44Z | 5,405 | 0 | null |
[
"safetensors",
"bert",
"es",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"license:apache-2.0",
"region:us"
] | null | 2024-09-30T21:38:06Z |
---
license: apache-2.0
language:
- es
base_model:
- dccuchile/bert-base-spanish-wwm-cased
---
# **Model Card: Spanish Text Reranker using BETO**
This model is a **reranker** for Spanish text passages, built on top of [**BETO**](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) (a BERT-based model pre-trained on Spanish). It was trained to **score the relevance** of text passages given a user prompt, enabling you to reorder search results or candidate answers by how closely they match the user’s query.
---
## **Model Details**
- **Model Name**: `reranker_beto_pytorch_optimized`
- **Architecture**: [BETO (BERT-base Spanish WWM)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased)
- **Language**: Spanish
- **Task**: Regression-based Reranking
- Given a `(prompt, content)` pair, the model outputs a **single numerical score** indicating predicted relevance.
---
## **Intended Use and Applications**
1. **Passage Reranking**: Use the model to rerank search results, QA passages, or any candidate text snippet according to how well they answer a Spanish query.
2. **Information Retrieval Pipelines**: Integrate the model as a final step after retrieving multiple candidate passages from a search engine. The model will reorder candidates by relevance.
3. **Question-Answering Systems**: Filter or sort passages that might contain the best answer to a user’s Spanish question.
---
## **How It Was Trained**
1. **Data Source**:
- Model training data came from an internal system that stores:
- **Prompts** (user queries or questions)
- **Content** (text chunks from documents)
- **Rank** (a manual or heuristic-based 1–5 relevance score)
- Additional generation steps (HyDE / T5) were used to create synthetic queries, but **this reranker model** specifically used the `(prompt, content, rank)` tuples from the database.
2. **Preprocessing**:
- The textual pairs (`prompt`, `content`) were tokenized using the BETO tokenizer (cased) with:
- `max_length = 512`
- `doc_stride = 256` (for lengthy passages)
   - The `rank` field was normalized and mapped to a continuous value (`relevance`) for regression (see the sketch after this list).
3. **Training Setup**:
- **Base model**: `dccuchile/bert-base-spanish-wwm-cased`
- **Loss**: Mean Squared Error (MSE) to predict the `relevance` score
- **Optimizer**: `AdamW` with a learning rate of `3e-5`
- **Epochs**: 3
- **Batch Size**: 8
- **Hardware**: CPU/GPU (CUDA if available)
4. **Splits**:
- Data was split into **train (80%)**, **validation (10%)**, and **test (10%)** sets using `sklearn.model_selection.train_test_split`.
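For illustration, here is a minimal sketch of this setup. The field names, the exact rank-to-relevance mapping, and the batching are assumptions and differ from the real training script; the original pipeline also used `doc_stride = 256` for long passages, which this sketch omits.
```python
# Minimal sketch (assumed data layout and normalization): tokenize
# (prompt, content) pairs and fine-tune BETO as a one-output regressor.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-cased")
model = BertForSequenceClassification.from_pretrained(
    "dccuchile/bert-base-spanish-wwm-cased",
    num_labels=1,  # single regression head -> MSE loss when labels are floats
)

# Hypothetical (prompt, content, rank) tuples, with rank in 1..5
examples = [
    ("¿Qué es el sistema solar?", "El sistema solar está formado por el Sol ...", 5),
    ("¿Qué es el sistema solar?", "La fotosíntesis es el proceso ...", 1),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
for prompt, content, rank in examples:
    batch = tokenizer(prompt, content, max_length=512,
                      truncation='only_second', padding='max_length',
                      return_tensors='pt')
    batch['labels'] = torch.tensor([(rank - 1) / 4.0])  # assumed mapping of rank to [0, 1]
    loss = model(**batch).loss  # MSE, since num_labels=1 and labels are floats
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```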
---
## **Model Performance**
- The code logs training and validation **loss** (MSE).
- Final test set MSE is logged as `test_loss`.
- Exact numbers depend on your data distribution; consult the training logs for the values from this run.
---
## **Usage Example**
Below is a quick example in Python using [Hugging Face Transformers](https://github.com/huggingface/transformers). After you’ve downloaded the model and tokenizer to `./reranker_beto_pytorch_optimized`, you can do:
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Load the fine-tuned model and tokenizer
model_dir = "./reranker_beto_pytorch_optimized"
tokenizer = BertTokenizer.from_pretrained(model_dir)
model = BertForSequenceClassification.from_pretrained(model_dir).to(device)
model.eval()
prompt = "¿Cómo implementar un sistema solar en una escuela primaria?"
passage = "Este documento describe las partes del sistema solar ..."
inputs = tokenizer(
prompt,
passage,
max_length=512,
truncation='only_second',
padding='max_length',
return_tensors='pt'
)
# Forward pass
with torch.no_grad():
outputs = model(
input_ids=inputs['input_ids'].to(device),
attention_mask=inputs['attention_mask'].to(device)
)
score = outputs.logits.squeeze().item()
print(f"Predicted relevance score: {score:.4f}")
```
You would compare **scores** across multiple passages for a single prompt, then **rank** or **sort** them from **highest** to **lowest** predicted relevance.
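For example, continuing from the snippet above (and using hypothetical candidate passages), a minimal way to rank several passages for one prompt is:
```python
# Score several hypothetical candidate passages for the same prompt and
# sort them from most to least relevant.
candidates = [
    "El sistema solar está formado por el Sol y ocho planetas ...",
    "La fotosíntesis es el proceso por el cual las plantas ...",
    "Guía para construir una maqueta del sistema solar en el aula ...",
]

batch = tokenizer(
    [prompt] * len(candidates),  # repeat the prompt for each candidate
    candidates,
    max_length=512,
    truncation='only_second',
    padding=True,
    return_tensors='pt'
)

with torch.no_grad():
    scores = model(
        input_ids=batch['input_ids'].to(device),
        attention_mask=batch['attention_mask'].to(device)
    ).logits.squeeze(-1)

ranked = sorted(zip(candidates, scores.tolist()), key=lambda pair: pair[1], reverse=True)
for passage, score in ranked:
    print(f"{score:.4f}  {passage[:60]}...")
```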
---
## **Limitations and Ethical Considerations**
1. **Bias and Fairness**:
- Model performance is influenced by training data’s content and labels. If the data distribution is skewed, the model might reflect those biases (e.g., domain-specific content, reading level bias).
2. **Domain Generalization**:
- Trained primarily on text from a specific database of Spanish prompts and passages. Performance may degrade in **highly specialized** or **different** domains, or with non-standard Spanish dialects.
3. **Possible Misinformation**:
- Reranking aims to find the “most relevant” snippet, not necessarily the “most correct” or “fact-checked.” Always verify final results for correctness or harmful misinformation.
4. **Data Confidentiality**:
- If your data contains personal or sensitive info, ensure you comply with relevant privacy and data handling regulations before using or distributing the model.
---
## **Intended Users**
- **Developers** building Spanish-based search and question-answering systems.
- **Researchers** experimenting with Spanish language reranking or IR tasks.
- **Content Managers** wanting to reorder Spanish text snippets by relevance.
---
|
Best000/81429cda-e9c1-4989-92a1-879a9e4a8cff
|
Best000
| 2025-01-21T15:41:20Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"olmo",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-olmo-hf",
"base_model:adapter:katuni4ka/tiny-random-olmo-hf",
"region:us"
] | null | 2025-01-21T15:40:54Z |
---
library_name: peft
base_model: katuni4ka/tiny-random-olmo-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 81429cda-e9c1-4989-92a1-879a9e4a8cff
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-olmo-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 372a9aad3c15aaa5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/372a9aad3c15aaa5_train_data.json
type:
field_instruction: file_path
field_output: content
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/81429cda-e9c1-4989-92a1-879a9e4a8cff
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/372a9aad3c15aaa5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2ab42417-e3dd-430c-9b06-cae6baa46502
wandb_project: Birthday-SN56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2ab42417-e3dd-430c-9b06-cae6baa46502
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 81429cda-e9c1-4989-92a1-879a9e4a8cff
This model is a fine-tuned version of [katuni4ka/tiny-random-olmo-hf](https://huggingface.co/katuni4ka/tiny-random-olmo-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 10.8300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.8344 | 0.0014 | 1 | 10.8340 |
| 10.8394 | 0.0043 | 3 | 10.8338 |
| 10.8414 | 0.0086 | 6 | 10.8323 |
| 10.8332 | 0.0129 | 9 | 10.8300 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nat-hunt/e4b2ca76-6f76-4559-b7aa-be22f9bddbf5
|
nat-hunt
| 2025-01-21T15:40:14Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"olmo",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-olmo-hf",
"base_model:adapter:katuni4ka/tiny-random-olmo-hf",
"region:us"
] | null | 2025-01-21T15:39:49Z |
---
library_name: peft
base_model: katuni4ka/tiny-random-olmo-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e4b2ca76-6f76-4559-b7aa-be22f9bddbf5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-olmo-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 372a9aad3c15aaa5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/372a9aad3c15aaa5_train_data.json
type:
field_instruction: file_path
field_output: content
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nat-hunt/e4b2ca76-6f76-4559-b7aa-be22f9bddbf5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/372a9aad3c15aaa5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2ab42417-e3dd-430c-9b06-cae6baa46502
wandb_project: Birthday-SN56-4-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2ab42417-e3dd-430c-9b06-cae6baa46502
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e4b2ca76-6f76-4559-b7aa-be22f9bddbf5
This model is a fine-tuned version of [katuni4ka/tiny-random-olmo-hf](https://huggingface.co/katuni4ka/tiny-random-olmo-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 10.8292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.8344 | 0.0014 | 1 | 10.8340 |
| 10.8394 | 0.0043 | 3 | 10.8337 |
| 10.8411 | 0.0086 | 6 | 10.8321 |
| 10.8326 | 0.0129 | 9 | 10.8292 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kostiantynk1205/953339e6-a3c2-4921-a7f8-e282bdd88afe
|
kostiantynk1205
| 2025-01-21T15:38:50Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"olmo",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-olmo-hf",
"base_model:adapter:katuni4ka/tiny-random-olmo-hf",
"region:us"
] | null | 2025-01-21T15:38:25Z |
---
library_name: peft
base_model: katuni4ka/tiny-random-olmo-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 953339e6-a3c2-4921-a7f8-e282bdd88afe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-olmo-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 372a9aad3c15aaa5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/372a9aad3c15aaa5_train_data.json
type:
field_instruction: file_path
field_output: content
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/953339e6-a3c2-4921-a7f8-e282bdd88afe
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/372a9aad3c15aaa5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2ab42417-e3dd-430c-9b06-cae6baa46502
wandb_project: Birthday-SN56-6-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2ab42417-e3dd-430c-9b06-cae6baa46502
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 953339e6-a3c2-4921-a7f8-e282bdd88afe
This model is a fine-tuned version of [katuni4ka/tiny-random-olmo-hf](https://huggingface.co/katuni4ka/tiny-random-olmo-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 10.8306
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.8344 | 0.0014 | 1 | 10.8340 |
| 10.8395 | 0.0043 | 3 | 10.8338 |
| 10.8415 | 0.0086 | 6 | 10.8326 |
| 10.8334 | 0.0129 | 9 | 10.8306 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso03/10d71a49-3a3d-4d06-9e5b-25eed994dde1
|
lesso03
| 2025-01-21T15:38:33Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T14:42:49Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 10d71a49-3a3d-4d06-9e5b-25eed994dde1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
- 9c65c4353c858f05_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9c65c4353c858f05_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso03/10d71a49-3a3d-4d06-9e5b-25eed994dde1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/9c65c4353c858f05_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4200e893-cfbb-40b3-bdb2-0eb0fdaa99b8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4200e893-cfbb-40b3-bdb2-0eb0fdaa99b8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 10d71a49-3a3d-4d06-9e5b-25eed994dde1
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4496
## Model description
More information needed
## Intended uses & limitations
More information needed
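No usage guidance is provided; the following is a minimal, unverified sketch of how this LoRA adapter might be loaded on top of the base model with PEFT (the prompt is a made-up example; per the training config, the adapter was tuned to produce a hypothesis from a premise-style instruction).
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-0.5B-Instruct"
adapter_id = "lesso03/10d71a49-3a3d-4d06-9e5b-25eed994dde1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
model.eval()

# Hypothetical premise-style prompt
prompt = "A man inspects the uniform of a figure in some East Asian country."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```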
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.8619 | 0.0001 | 1 | 4.0875 |
| 3.3029 | 0.0004 | 5 | 3.8703 |
| 2.8874 | 0.0008 | 10 | 2.6289 |
| 2.3871 | 0.0012 | 15 | 1.8896 |
| 1.3123 | 0.0016 | 20 | 1.4961 |
| 1.022 | 0.0021 | 25 | 1.4496 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
cvoffer/1f74b535-9022-4cf7-adc4-1d0b1247f4e6
|
cvoffer
| 2025-01-21T15:38:11Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:adapter:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T15:29:13Z |
---
library_name: peft
license: apache-2.0
base_model: berkeley-nest/Starling-LM-7B-alpha
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1f74b535-9022-4cf7-adc4-1d0b1247f4e6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: berkeley-nest/Starling-LM-7B-alpha
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b20fbfa08217066a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b20fbfa08217066a_train_data.json
type:
field_instruction: question
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: cvoffer/1f74b535-9022-4cf7-adc4-1d0b1247f4e6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/b20fbfa08217066a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bb16ccd3-964b-4fd6-975d-73e3b09a8079
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bb16ccd3-964b-4fd6-975d-73e3b09a8079
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 1f74b535-9022-4cf7-adc4-1d0b1247f4e6
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | nan |
| 0.0 | 0.0034 | 5 | nan |
| 0.0 | 0.0068 | 10 | nan |
| 0.0 | 0.0101 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k17_task7_organization
|
MayBashendy
| 2025-01-21T15:37:16Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-21T15:29:37Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k17_task7_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k17_task7_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0754
- Qwk: 0.0462
- Mse: 1.0754
- Rmse: 1.0370
## Model description
More information needed
## Intended uses & limitations
More information needed
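No usage guidance is provided; given the regression-style metrics reported above (MSE, RMSE, QWK), the following is a minimal, unverified sketch of how the checkpoint might be loaded to score an essay. The single-score output and label semantics are assumptions.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k17_task7_organization"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

essay = "..."  # an Arabic essay to be scored on the 'organization' dimension
inputs = tokenizer(essay, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.squeeze())  # assumed to be a single organization score
```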
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.05 | 2 | 2.7815 | -0.0481 | 2.7815 | 1.6678 |
| No log | 0.1 | 4 | 1.7927 | 0.0061 | 1.7927 | 1.3389 |
| No log | 0.15 | 6 | 2.0211 | -0.1653 | 2.0211 | 1.4217 |
| No log | 0.2 | 8 | 1.3569 | -0.1328 | 1.3569 | 1.1648 |
| No log | 0.25 | 10 | 1.0144 | 0.0054 | 1.0144 | 1.0072 |
| No log | 0.3 | 12 | 0.9010 | 0.1461 | 0.9010 | 0.9492 |
| No log | 0.35 | 14 | 0.9014 | 0.1534 | 0.9014 | 0.9494 |
| No log | 0.4 | 16 | 0.8922 | 0.1636 | 0.8922 | 0.9445 |
| No log | 0.45 | 18 | 0.8487 | 0.0679 | 0.8487 | 0.9212 |
| No log | 0.5 | 20 | 0.9147 | 0.1511 | 0.9147 | 0.9564 |
| No log | 0.55 | 22 | 1.0285 | 0.1259 | 1.0285 | 1.0141 |
| No log | 0.6 | 24 | 1.0813 | 0.0986 | 1.0813 | 1.0398 |
| No log | 0.65 | 26 | 0.8837 | 0.2132 | 0.8837 | 0.9401 |
| No log | 0.7 | 28 | 0.7961 | 0.0937 | 0.7961 | 0.8922 |
| No log | 0.75 | 30 | 0.7689 | 0.0481 | 0.7689 | 0.8769 |
| No log | 0.8 | 32 | 0.7462 | 0.0481 | 0.7462 | 0.8638 |
| No log | 0.85 | 34 | 0.7381 | 0.0884 | 0.7381 | 0.8591 |
| No log | 0.9 | 36 | 0.7520 | 0.0 | 0.7520 | 0.8672 |
| No log | 0.95 | 38 | 0.7783 | 0.0481 | 0.7783 | 0.8822 |
| No log | 1.0 | 40 | 0.7709 | 0.0 | 0.7709 | 0.8780 |
| No log | 1.05 | 42 | 0.7433 | 0.0 | 0.7433 | 0.8622 |
| No log | 1.1 | 44 | 0.7383 | 0.0 | 0.7383 | 0.8592 |
| No log | 1.15 | 46 | 0.7397 | 0.0 | 0.7397 | 0.8601 |
| No log | 1.2 | 48 | 0.7337 | 0.0884 | 0.7337 | 0.8566 |
| No log | 1.25 | 50 | 0.7301 | 0.1236 | 0.7301 | 0.8544 |
| No log | 1.3 | 52 | 0.7242 | 0.1456 | 0.7242 | 0.8510 |
| No log | 1.35 | 54 | 0.7434 | 0.1807 | 0.7434 | 0.8622 |
| No log | 1.4 | 56 | 0.7364 | 0.1508 | 0.7364 | 0.8581 |
| No log | 1.45 | 58 | 0.7273 | 0.1187 | 0.7273 | 0.8528 |
| No log | 1.5 | 60 | 0.7258 | 0.0840 | 0.7258 | 0.8520 |
| No log | 1.55 | 62 | 0.7335 | 0.0444 | 0.7335 | 0.8565 |
| No log | 1.6 | 64 | 0.7513 | 0.0937 | 0.7513 | 0.8668 |
| No log | 1.65 | 66 | 0.7398 | 0.0481 | 0.7398 | 0.8601 |
| No log | 1.7 | 68 | 0.7442 | 0.0 | 0.7442 | 0.8627 |
| No log | 1.75 | 70 | 0.7480 | 0.0 | 0.7480 | 0.8649 |
| No log | 1.8 | 72 | 0.7431 | -0.0027 | 0.7431 | 0.8620 |
| No log | 1.85 | 74 | 0.7483 | 0.0893 | 0.7483 | 0.8651 |
| No log | 1.9 | 76 | 0.7506 | 0.0026 | 0.7506 | 0.8664 |
| No log | 1.95 | 78 | 0.7455 | 0.0026 | 0.7455 | 0.8634 |
| No log | 2.0 | 80 | 0.7296 | 0.0764 | 0.7296 | 0.8542 |
| No log | 2.05 | 82 | 0.7244 | 0.0410 | 0.7244 | 0.8511 |
| No log | 2.1 | 84 | 0.7185 | 0.0481 | 0.7185 | 0.8476 |
| No log | 2.15 | 86 | 0.7201 | 0.0481 | 0.7201 | 0.8486 |
| No log | 2.2 | 88 | 0.7684 | 0.0688 | 0.7684 | 0.8766 |
| No log | 2.25 | 90 | 0.8472 | -0.0047 | 0.8472 | 0.9204 |
| No log | 2.3 | 92 | 0.9219 | 0.0336 | 0.9219 | 0.9601 |
| No log | 2.35 | 94 | 0.8592 | 0.0661 | 0.8592 | 0.9269 |
| No log | 2.4 | 96 | 0.7926 | 0.1448 | 0.7926 | 0.8903 |
| No log | 2.45 | 98 | 0.7396 | 0.2158 | 0.7396 | 0.8600 |
| No log | 2.5 | 100 | 0.7441 | 0.2158 | 0.7441 | 0.8626 |
| No log | 2.55 | 102 | 0.7275 | 0.1867 | 0.7275 | 0.8529 |
| No log | 2.6 | 104 | 0.7325 | 0.2509 | 0.7325 | 0.8559 |
| No log | 2.65 | 106 | 0.7702 | 0.2218 | 0.7702 | 0.8776 |
| No log | 2.7 | 108 | 0.7711 | 0.2158 | 0.7711 | 0.8781 |
| No log | 2.75 | 110 | 0.7584 | 0.2158 | 0.7584 | 0.8709 |
| No log | 2.8 | 112 | 0.7624 | 0.2158 | 0.7624 | 0.8732 |
| No log | 2.85 | 114 | 0.7761 | 0.2413 | 0.7761 | 0.8810 |
| No log | 2.9 | 116 | 0.7774 | 0.1901 | 0.7774 | 0.8817 |
| No log | 2.95 | 118 | 0.7895 | 0.2847 | 0.7895 | 0.8885 |
| No log | 3.0 | 120 | 0.7612 | 0.1624 | 0.7612 | 0.8724 |
| No log | 3.05 | 122 | 0.7446 | 0.2158 | 0.7446 | 0.8629 |
| No log | 3.1 | 124 | 0.7593 | 0.1010 | 0.7593 | 0.8714 |
| No log | 3.15 | 126 | 0.8075 | 0.0971 | 0.8075 | 0.8986 |
| No log | 3.2 | 128 | 0.7975 | 0.0971 | 0.7975 | 0.8930 |
| No log | 3.25 | 130 | 0.7766 | 0.0697 | 0.7766 | 0.8812 |
| No log | 3.3 | 132 | 0.7984 | 0.1051 | 0.7984 | 0.8936 |
| No log | 3.35 | 134 | 0.9101 | 0.2149 | 0.9101 | 0.9540 |
| No log | 3.4 | 136 | 1.0358 | 0.2521 | 1.0358 | 1.0177 |
| No log | 3.45 | 138 | 1.0471 | 0.2364 | 1.0471 | 1.0233 |
| No log | 3.5 | 140 | 0.9628 | 0.1995 | 0.9628 | 0.9812 |
| No log | 3.55 | 142 | 0.9067 | 0.0584 | 0.9067 | 0.9522 |
| No log | 3.6 | 144 | 0.8637 | 0.0697 | 0.8637 | 0.9293 |
| No log | 3.65 | 146 | 0.9336 | 0.0975 | 0.9336 | 0.9662 |
| No log | 3.7 | 148 | 0.9419 | 0.0856 | 0.9419 | 0.9705 |
| No log | 3.75 | 150 | 0.9849 | 0.2193 | 0.9849 | 0.9924 |
| No log | 3.8 | 152 | 0.9769 | 0.2892 | 0.9769 | 0.9884 |
| No log | 3.85 | 154 | 0.9240 | 0.2439 | 0.9240 | 0.9613 |
| No log | 3.9 | 156 | 0.8655 | 0.2943 | 0.8655 | 0.9303 |
| No log | 3.95 | 158 | 0.8632 | 0.3369 | 0.8632 | 0.9291 |
| No log | 4.0 | 160 | 0.9115 | 0.2912 | 0.9115 | 0.9547 |
| No log | 4.05 | 162 | 0.8749 | 0.3115 | 0.8749 | 0.9354 |
| No log | 4.1 | 164 | 0.8485 | 0.3157 | 0.8485 | 0.9212 |
| No log | 4.15 | 166 | 0.8877 | 0.2059 | 0.8877 | 0.9422 |
| No log | 4.2 | 168 | 0.8398 | 0.2662 | 0.8398 | 0.9164 |
| No log | 4.25 | 170 | 0.7146 | 0.3020 | 0.7146 | 0.8453 |
| No log | 4.3 | 172 | 0.6766 | 0.1829 | 0.6766 | 0.8225 |
| No log | 4.35 | 174 | 0.6850 | 0.2181 | 0.6850 | 0.8276 |
| No log | 4.4 | 176 | 0.7396 | 0.4052 | 0.7396 | 0.8600 |
| No log | 4.45 | 178 | 0.8550 | 0.4251 | 0.8550 | 0.9247 |
| No log | 4.5 | 180 | 0.8611 | 0.3754 | 0.8611 | 0.9280 |
| No log | 4.55 | 182 | 0.8287 | 0.4251 | 0.8287 | 0.9103 |
| No log | 4.6 | 184 | 0.7763 | 0.3167 | 0.7763 | 0.8811 |
| No log | 4.65 | 186 | 0.7740 | 0.3167 | 0.7740 | 0.8798 |
| No log | 4.7 | 188 | 0.7625 | 0.3622 | 0.7625 | 0.8732 |
| No log | 4.75 | 190 | 0.7500 | 0.3341 | 0.7500 | 0.8660 |
| No log | 4.8 | 192 | 0.7489 | 0.2950 | 0.7489 | 0.8654 |
| No log | 4.85 | 194 | 0.7782 | 0.4884 | 0.7782 | 0.8821 |
| No log | 4.9 | 196 | 0.7834 | 0.5120 | 0.7834 | 0.8851 |
| No log | 4.95 | 198 | 0.8070 | 0.3789 | 0.8070 | 0.8983 |
| No log | 5.0 | 200 | 0.8071 | 0.2950 | 0.8071 | 0.8984 |
| No log | 5.05 | 202 | 0.8578 | 0.3234 | 0.8578 | 0.9262 |
| No log | 5.1 | 204 | 0.8973 | 0.3918 | 0.8973 | 0.9472 |
| No log | 5.15 | 206 | 0.8158 | 0.3372 | 0.8158 | 0.9032 |
| No log | 5.2 | 208 | 0.7714 | 0.2847 | 0.7714 | 0.8783 |
| No log | 5.25 | 210 | 0.8357 | 0.4224 | 0.8357 | 0.9142 |
| No log | 5.3 | 212 | 0.9080 | 0.3727 | 0.9080 | 0.9529 |
| No log | 5.35 | 214 | 0.9664 | 0.3727 | 0.9664 | 0.9831 |
| No log | 5.4 | 216 | 0.9304 | 0.3012 | 0.9304 | 0.9646 |
| No log | 5.45 | 218 | 0.9390 | 0.2779 | 0.9390 | 0.9690 |
| No log | 5.5 | 220 | 0.8981 | 0.1029 | 0.8981 | 0.9477 |
| No log | 5.55 | 222 | 0.8721 | 0.1577 | 0.8721 | 0.9339 |
| No log | 5.6 | 224 | 0.8147 | 0.1624 | 0.8147 | 0.9026 |
| No log | 5.65 | 226 | 0.7475 | 0.3127 | 0.7475 | 0.8646 |
| No log | 5.7 | 228 | 0.7594 | 0.3399 | 0.7594 | 0.8714 |
| No log | 5.75 | 230 | 0.8364 | 0.3425 | 0.8364 | 0.9146 |
| No log | 5.8 | 232 | 0.9304 | 0.2886 | 0.9304 | 0.9645 |
| No log | 5.85 | 234 | 1.0254 | 0.2683 | 1.0254 | 1.0126 |
| No log | 5.9 | 236 | 1.0362 | 0.2683 | 1.0362 | 1.0179 |
| No log | 5.95 | 238 | 0.9750 | 0.2886 | 0.9750 | 0.9874 |
| No log | 6.0 | 240 | 0.9937 | 0.3395 | 0.9937 | 0.9968 |
| No log | 6.05 | 242 | 1.0004 | 0.3247 | 1.0004 | 1.0002 |
| No log | 6.1 | 244 | 0.8859 | 0.2923 | 0.8859 | 0.9412 |
| No log | 6.15 | 246 | 0.8550 | 0.2967 | 0.8550 | 0.9246 |
| No log | 6.2 | 248 | 0.8317 | 0.3545 | 0.8317 | 0.9120 |
| No log | 6.25 | 250 | 0.8385 | 0.3127 | 0.8385 | 0.9157 |
| No log | 6.3 | 252 | 0.8527 | 0.2692 | 0.8527 | 0.9234 |
| No log | 6.35 | 254 | 0.9042 | 0.3892 | 0.9042 | 0.9509 |
| No log | 6.4 | 256 | 0.9010 | 0.4113 | 0.9010 | 0.9492 |
| No log | 6.45 | 258 | 0.8617 | 0.4328 | 0.8617 | 0.9283 |
| No log | 6.5 | 260 | 0.8275 | 0.3169 | 0.8275 | 0.9097 |
| No log | 6.55 | 262 | 0.8960 | 0.2982 | 0.8960 | 0.9466 |
| No log | 6.6 | 264 | 1.0151 | 0.2977 | 1.0151 | 1.0075 |
| No log | 6.65 | 266 | 1.0324 | 0.2876 | 1.0324 | 1.0161 |
| No log | 6.7 | 268 | 0.9790 | 0.3082 | 0.9790 | 0.9894 |
| No log | 6.75 | 270 | 0.9558 | 0.3417 | 0.9558 | 0.9777 |
| No log | 6.8 | 272 | 0.9772 | 0.3082 | 0.9772 | 0.9885 |
| No log | 6.85 | 274 | 1.0048 | 0.2876 | 1.0048 | 1.0024 |
| No log | 6.9 | 276 | 1.0751 | 0.2264 | 1.0751 | 1.0369 |
| No log | 6.95 | 278 | 1.1302 | 0.1995 | 1.1302 | 1.0631 |
| No log | 7.0 | 280 | 1.0435 | 0.2439 | 1.0435 | 1.0215 |
| No log | 7.05 | 282 | 1.0062 | 0.2956 | 1.0062 | 1.0031 |
| No log | 7.1 | 284 | 1.0261 | 0.3253 | 1.0261 | 1.0130 |
| No log | 7.15 | 286 | 0.9877 | 0.3253 | 0.9877 | 0.9939 |
| No log | 7.2 | 288 | 0.8878 | 0.5402 | 0.8878 | 0.9422 |
| No log | 7.25 | 290 | 0.8381 | 0.3894 | 0.8381 | 0.9155 |
| No log | 7.3 | 292 | 0.8505 | 0.3894 | 0.8505 | 0.9222 |
| No log | 7.35 | 294 | 0.9370 | 0.4162 | 0.9370 | 0.9680 |
| No log | 7.4 | 296 | 1.0075 | 0.2659 | 1.0075 | 1.0037 |
| No log | 7.45 | 298 | 0.9639 | 0.3739 | 0.9639 | 0.9818 |
| No log | 7.5 | 300 | 0.8622 | 0.5029 | 0.8622 | 0.9286 |
| No log | 7.55 | 302 | 0.8358 | 0.3894 | 0.8358 | 0.9142 |
| No log | 7.6 | 304 | 0.8985 | 0.3869 | 0.8985 | 0.9479 |
| No log | 7.65 | 306 | 1.0091 | 0.3767 | 1.0091 | 1.0045 |
| No log | 7.7 | 308 | 1.0558 | 0.2824 | 1.0558 | 1.0275 |
| No log | 7.75 | 310 | 0.9737 | 0.3678 | 0.9737 | 0.9867 |
| No log | 7.8 | 312 | 0.8828 | 0.2899 | 0.8828 | 0.9396 |
| No log | 7.85 | 314 | 0.8188 | 0.2558 | 0.8188 | 0.9049 |
| No log | 7.9 | 316 | 0.8384 | 0.2077 | 0.8384 | 0.9157 |
| No log | 7.95 | 318 | 0.9457 | 0.2547 | 0.9457 | 0.9725 |
| No log | 8.0 | 320 | 1.0946 | 0.1030 | 1.0946 | 1.0462 |
| No log | 8.05 | 322 | 1.2364 | 0.1630 | 1.2364 | 1.1119 |
| No log | 8.1 | 324 | 1.2732 | 0.1445 | 1.2732 | 1.1283 |
| No log | 8.15 | 326 | 1.0747 | 0.1457 | 1.0747 | 1.0367 |
| No log | 8.2 | 328 | 0.8665 | 0.3167 | 0.8665 | 0.9308 |
| No log | 8.25 | 330 | 0.7977 | 0.3387 | 0.7977 | 0.8931 |
| No log | 8.3 | 332 | 0.8006 | 0.2883 | 0.8006 | 0.8947 |
| No log | 8.35 | 334 | 0.8646 | 0.3637 | 0.8646 | 0.9298 |
| No log | 8.4 | 336 | 0.9317 | 0.4230 | 0.9317 | 0.9653 |
| No log | 8.45 | 338 | 0.8908 | 0.3991 | 0.8908 | 0.9438 |
| No log | 8.5 | 340 | 0.7968 | 0.2817 | 0.7968 | 0.8926 |
| No log | 8.55 | 342 | 0.7467 | 0.3020 | 0.7467 | 0.8641 |
| No log | 8.6 | 344 | 0.7310 | 0.2471 | 0.7310 | 0.8550 |
| No log | 8.65 | 346 | 0.7545 | 0.2589 | 0.7545 | 0.8686 |
| No log | 8.7 | 348 | 0.7999 | 0.3099 | 0.7999 | 0.8944 |
| No log | 8.75 | 350 | 0.8967 | 0.3940 | 0.8967 | 0.9469 |
| No log | 8.8 | 352 | 1.0051 | 0.2568 | 1.0051 | 1.0026 |
| No log | 8.85 | 354 | 1.1098 | 0.2101 | 1.1098 | 1.0535 |
| No log | 8.9 | 356 | 1.0758 | 0.2421 | 1.0758 | 1.0372 |
| No log | 8.95 | 358 | 0.9590 | 0.2971 | 0.9590 | 0.9793 |
| No log | 9.0 | 360 | 0.8277 | 0.3372 | 0.8277 | 0.9098 |
| No log | 9.05 | 362 | 0.7652 | 0.2589 | 0.7652 | 0.8747 |
| No log | 9.1 | 364 | 0.7531 | 0.2883 | 0.7531 | 0.8678 |
| No log | 9.15 | 366 | 0.7831 | 0.2817 | 0.7831 | 0.8849 |
| No log | 9.2 | 368 | 0.8093 | 0.3737 | 0.8093 | 0.8996 |
| No log | 9.25 | 370 | 0.7960 | 0.3196 | 0.7960 | 0.8922 |
| No log | 9.3 | 372 | 0.7802 | 0.2527 | 0.7802 | 0.8833 |
| No log | 9.35 | 374 | 0.8058 | 0.3099 | 0.8058 | 0.8977 |
| No log | 9.4 | 376 | 0.8698 | 0.4154 | 0.8698 | 0.9327 |
| No log | 9.45 | 378 | 0.8967 | 0.2923 | 0.8967 | 0.9469 |
| No log | 9.5 | 380 | 0.8721 | 0.3042 | 0.8721 | 0.9339 |
| No log | 9.55 | 382 | 0.8281 | 0.3302 | 0.8281 | 0.9100 |
| No log | 9.6 | 384 | 0.7880 | 0.3789 | 0.7880 | 0.8877 |
| No log | 9.65 | 386 | 0.8122 | 0.3444 | 0.8122 | 0.9012 |
| No log | 9.7 | 388 | 0.8866 | 0.3586 | 0.8866 | 0.9416 |
| No log | 9.75 | 390 | 0.9376 | 0.2670 | 0.9376 | 0.9683 |
| No log | 9.8 | 392 | 0.9172 | 0.2253 | 0.9172 | 0.9577 |
| No log | 9.85 | 394 | 0.8681 | 0.2471 | 0.8681 | 0.9317 |
| No log | 9.9 | 396 | 0.8026 | 0.1884 | 0.8026 | 0.8959 |
| No log | 9.95 | 398 | 0.7398 | 0.2063 | 0.7398 | 0.8601 |
| No log | 10.0 | 400 | 0.7391 | 0.2718 | 0.7391 | 0.8597 |
| No log | 10.05 | 402 | 0.7676 | 0.3099 | 0.7676 | 0.8761 |
| No log | 10.1 | 404 | 0.8502 | 0.3564 | 0.8502 | 0.9220 |
| No log | 10.15 | 406 | 1.0245 | 0.3010 | 1.0245 | 1.0122 |
| No log | 10.2 | 408 | 1.1584 | 0.1805 | 1.1584 | 1.0763 |
| No log | 10.25 | 410 | 1.1766 | 0.1961 | 1.1766 | 1.0847 |
| No log | 10.3 | 412 | 1.0601 | 0.2358 | 1.0601 | 1.0296 |
| No log | 10.35 | 414 | 0.8856 | 0.3913 | 0.8856 | 0.9411 |
| No log | 10.4 | 416 | 0.7855 | 0.2847 | 0.7855 | 0.8863 |
| No log | 10.45 | 418 | 0.7554 | 0.2913 | 0.7554 | 0.8691 |
| No log | 10.5 | 420 | 0.7340 | 0.2158 | 0.7340 | 0.8567 |
| No log | 10.55 | 422 | 0.7284 | 0.2158 | 0.7284 | 0.8534 |
| No log | 10.6 | 424 | 0.7346 | 0.1407 | 0.7346 | 0.8571 |
| No log | 10.65 | 426 | 0.7697 | 0.2589 | 0.7697 | 0.8773 |
| No log | 10.7 | 428 | 0.8409 | 0.3099 | 0.8409 | 0.9170 |
| No log | 10.75 | 430 | 0.8929 | 0.3234 | 0.8929 | 0.9450 |
| No log | 10.8 | 432 | 0.9182 | 0.3042 | 0.9182 | 0.9582 |
| No log | 10.85 | 434 | 0.9127 | 0.2784 | 0.9127 | 0.9554 |
| No log | 10.9 | 436 | 0.9932 | 0.2316 | 0.9932 | 0.9966 |
| No log | 10.95 | 438 | 1.0541 | 0.2412 | 1.0541 | 1.0267 |
| No log | 11.0 | 440 | 1.0786 | 0.2412 | 1.0786 | 1.0386 |
| No log | 11.05 | 442 | 1.0343 | 0.2227 | 1.0343 | 1.0170 |
| No log | 11.1 | 444 | 0.9311 | 0.2784 | 0.9311 | 0.9649 |
| No log | 11.15 | 446 | 0.9003 | 0.2574 | 0.9003 | 0.9489 |
| No log | 11.2 | 448 | 0.9355 | 0.2843 | 0.9355 | 0.9672 |
| No log | 11.25 | 450 | 0.9614 | 0.2574 | 0.9614 | 0.9805 |
| No log | 11.3 | 452 | 1.0205 | 0.1955 | 1.0205 | 1.0102 |
| No log | 11.35 | 454 | 1.0828 | 0.1827 | 1.0828 | 1.0406 |
| No log | 11.4 | 456 | 1.1495 | 0.1293 | 1.1495 | 1.0721 |
| No log | 11.45 | 458 | 1.1364 | 0.1909 | 1.1364 | 1.0660 |
| No log | 11.5 | 460 | 1.0658 | 0.2075 | 1.0658 | 1.0324 |
| No log | 11.55 | 462 | 1.0013 | 0.1692 | 1.0013 | 1.0007 |
| No log | 11.6 | 464 | 0.9741 | 0.1501 | 0.9741 | 0.9869 |
| No log | 11.65 | 466 | 0.9484 | 0.1867 | 0.9484 | 0.9738 |
| No log | 11.7 | 468 | 0.9782 | 0.1734 | 0.9782 | 0.9890 |
| No log | 11.75 | 470 | 1.0343 | 0.1573 | 1.0343 | 1.0170 |
| No log | 11.8 | 472 | 1.0619 | 0.2227 | 1.0619 | 1.0305 |
| No log | 11.85 | 474 | 1.0965 | 0.2100 | 1.0965 | 1.0471 |
| No log | 11.9 | 476 | 1.0422 | 0.2703 | 1.0422 | 1.0209 |
| No log | 11.95 | 478 | 0.9812 | 0.1501 | 0.9812 | 0.9906 |
| No log | 12.0 | 480 | 0.9728 | 0.1501 | 0.9728 | 0.9863 |
| No log | 12.05 | 482 | 0.9867 | 0.1422 | 0.9867 | 0.9933 |
| No log | 12.1 | 484 | 0.9741 | 0.1692 | 0.9741 | 0.9869 |
| No log | 12.15 | 486 | 0.9928 | 0.1692 | 0.9928 | 0.9964 |
| No log | 12.2 | 488 | 0.9996 | 0.1651 | 0.9996 | 0.9998 |
| No log | 12.25 | 490 | 1.0038 | 0.1692 | 1.0038 | 1.0019 |
| No log | 12.3 | 492 | 0.9871 | 0.2046 | 0.9871 | 0.9935 |
| No log | 12.35 | 494 | 0.9290 | 0.2632 | 0.9290 | 0.9638 |
| No log | 12.4 | 496 | 0.9141 | 0.2632 | 0.9141 | 0.9561 |
| No log | 12.45 | 498 | 0.9721 | 0.2843 | 0.9721 | 0.9859 |
| 0.3559 | 12.5 | 500 | 1.0361 | 0.1692 | 1.0361 | 1.0179 |
| 0.3559 | 12.55 | 502 | 1.0333 | 0.1822 | 1.0333 | 1.0165 |
| 0.3559 | 12.6 | 504 | 1.0326 | 0.1584 | 1.0326 | 1.0162 |
| 0.3559 | 12.65 | 506 | 1.0702 | 0.0803 | 1.0702 | 1.0345 |
| 0.3559 | 12.7 | 508 | 1.0406 | 0.1584 | 1.0406 | 1.0201 |
| 0.3559 | 12.75 | 510 | 0.9997 | 0.2352 | 0.9997 | 0.9999 |
| 0.3559 | 12.8 | 512 | 0.9725 | 0.1867 | 0.9725 | 0.9862 |
| 0.3559 | 12.85 | 514 | 0.9107 | 0.2817 | 0.9107 | 0.9543 |
| 0.3559 | 12.9 | 516 | 0.8876 | 0.2754 | 0.8876 | 0.9422 |
| 0.3559 | 12.95 | 518 | 0.8646 | 0.2754 | 0.8646 | 0.9298 |
| 0.3559 | 13.0 | 520 | 0.8035 | 0.3238 | 0.8035 | 0.8964 |
| 0.3559 | 13.05 | 522 | 0.7985 | 0.2950 | 0.7985 | 0.8936 |
| 0.3559 | 13.1 | 524 | 0.8662 | 0.2409 | 0.8662 | 0.9307 |
| 0.3559 | 13.15 | 526 | 0.9824 | 0.1385 | 0.9824 | 0.9912 |
| 0.3559 | 13.2 | 528 | 1.0687 | 0.2075 | 1.0687 | 1.0338 |
| 0.3559 | 13.25 | 530 | 1.0561 | 0.2075 | 1.0561 | 1.0277 |
| 0.3559 | 13.3 | 532 | 0.9803 | 0.1651 | 0.9803 | 0.9901 |
| 0.3559 | 13.35 | 534 | 0.9083 | 0.1914 | 0.9083 | 0.9531 |
| 0.3559 | 13.4 | 536 | 0.8907 | 0.2297 | 0.8907 | 0.9438 |
| 0.3559 | 13.45 | 538 | 0.9183 | 0.1180 | 0.9183 | 0.9583 |
| 0.3559 | 13.5 | 540 | 0.9664 | 0.1911 | 0.9664 | 0.9830 |
| 0.3559 | 13.55 | 542 | 1.0440 | 0.1787 | 1.0440 | 1.0217 |
| 0.3559 | 13.6 | 544 | 1.0957 | 0.1787 | 1.0957 | 1.0468 |
| 0.3559 | 13.65 | 546 | 1.0785 | 0.1787 | 1.0785 | 1.0385 |
| 0.3559 | 13.7 | 548 | 1.0023 | 0.1787 | 1.0023 | 1.0012 |
| 0.3559 | 13.75 | 550 | 0.9235 | 0.1777 | 0.9235 | 0.9610 |
| 0.3559 | 13.8 | 552 | 0.8722 | 0.2297 | 0.8722 | 0.9339 |
| 0.3559 | 13.85 | 554 | 0.9108 | 0.2297 | 0.9108 | 0.9543 |
| 0.3559 | 13.9 | 556 | 0.9007 | 0.1914 | 0.9007 | 0.9491 |
| 0.3559 | 13.95 | 558 | 0.9032 | 0.1914 | 0.9032 | 0.9504 |
| 0.3559 | 14.0 | 560 | 0.9296 | 0.2410 | 0.9296 | 0.9642 |
| 0.3559 | 14.05 | 562 | 1.0355 | 0.1869 | 1.0355 | 1.0176 |
| 0.3559 | 14.1 | 564 | 1.1113 | 0.1428 | 1.1113 | 1.0542 |
| 0.3559 | 14.15 | 566 | 1.1008 | 0.2141 | 1.1008 | 1.0492 |
| 0.3559 | 14.2 | 568 | 1.0136 | 0.2411 | 1.0136 | 1.0068 |
| 0.3559 | 14.25 | 570 | 0.9384 | 0.3169 | 0.9384 | 0.9687 |
| 0.3559 | 14.3 | 572 | 0.8950 | 0.2297 | 0.8950 | 0.9460 |
| 0.3559 | 14.35 | 574 | 0.9188 | 0.2574 | 0.9188 | 0.9585 |
| 0.3559 | 14.4 | 576 | 0.9862 | 0.2358 | 0.9862 | 0.9931 |
| 0.3559 | 14.45 | 578 | 1.0167 | 0.1692 | 1.0167 | 1.0083 |
| 0.3559 | 14.5 | 580 | 1.0321 | 0.1180 | 1.0321 | 1.0159 |
| 0.3559 | 14.55 | 582 | 1.0206 | 0.1180 | 1.0206 | 1.0103 |
| 0.3559 | 14.6 | 584 | 1.0164 | 0.2437 | 1.0164 | 1.0082 |
| 0.3559 | 14.65 | 586 | 0.9980 | 0.1911 | 0.9980 | 0.9990 |
| 0.3559 | 14.7 | 588 | 0.9958 | 0.2164 | 0.9958 | 0.9979 |
| 0.3559 | 14.75 | 590 | 0.9724 | 0.2164 | 0.9724 | 0.9861 |
| 0.3559 | 14.8 | 592 | 0.9617 | 0.2164 | 0.9617 | 0.9807 |
| 0.3559 | 14.85 | 594 | 0.9434 | 0.2784 | 0.9434 | 0.9713 |
| 0.3559 | 14.9 | 596 | 0.9472 | 0.3606 | 0.9472 | 0.9732 |
| 0.3559 | 14.95 | 598 | 0.9542 | 0.3169 | 0.9542 | 0.9768 |
| 0.3559 | 15.0 | 600 | 0.9227 | 0.3169 | 0.9227 | 0.9606 |
| 0.3559 | 15.05 | 602 | 0.8584 | 0.2784 | 0.8584 | 0.9265 |
| 0.3559 | 15.1 | 604 | 0.8454 | 0.2784 | 0.8454 | 0.9194 |
| 0.3559 | 15.15 | 606 | 0.8763 | 0.3042 | 0.8763 | 0.9361 |
| 0.3559 | 15.2 | 608 | 0.9143 | 0.2810 | 0.9143 | 0.9562 |
| 0.3559 | 15.25 | 610 | 0.9432 | 0.2703 | 0.9432 | 0.9712 |
| 0.3559 | 15.3 | 612 | 0.9527 | 0.2164 | 0.9527 | 0.9761 |
| 0.3559 | 15.35 | 614 | 0.9149 | 0.2000 | 0.9149 | 0.9565 |
| 0.3559 | 15.4 | 616 | 0.9573 | 0.1499 | 0.9573 | 0.9784 |
| 0.3559 | 15.45 | 618 | 1.0605 | 0.1870 | 1.0605 | 1.0298 |
| 0.3559 | 15.5 | 620 | 1.0966 | 0.1870 | 1.0966 | 1.0472 |
| 0.3559 | 15.55 | 622 | 1.1433 | 0.1679 | 1.1433 | 1.0692 |
| 0.3559 | 15.6 | 624 | 1.1018 | 0.1490 | 1.1018 | 1.0497 |
| 0.3559 | 15.65 | 626 | 1.0046 | 0.2032 | 1.0046 | 1.0023 |
| 0.3559 | 15.7 | 628 | 0.9653 | 0.2211 | 0.9653 | 0.9825 |
| 0.3559 | 15.75 | 630 | 0.9585 | 0.2211 | 0.9585 | 0.9790 |
| 0.3559 | 15.8 | 632 | 1.0727 | 0.1265 | 1.0727 | 1.0357 |
| 0.3559 | 15.85 | 634 | 1.1293 | 0.1758 | 1.1293 | 1.0627 |
| 0.3559 | 15.9 | 636 | 1.1570 | 0.1679 | 1.1570 | 1.0756 |
| 0.3559 | 15.95 | 638 | 1.2015 | 0.1679 | 1.2015 | 1.0961 |
| 0.3559 | 16.0 | 640 | 1.1414 | 0.1679 | 1.1414 | 1.0684 |
| 0.3559 | 16.05 | 642 | 1.0472 | 0.1635 | 1.0472 | 1.0233 |
| 0.3559 | 16.1 | 644 | 0.9685 | 0.2756 | 0.9685 | 0.9841 |
| 0.3559 | 16.15 | 646 | 0.9257 | 0.2810 | 0.9257 | 0.9622 |
| 0.3559 | 16.2 | 648 | 0.9181 | 0.3169 | 0.9181 | 0.9582 |
| 0.3559 | 16.25 | 650 | 0.9439 | 0.3347 | 0.9439 | 0.9715 |
| 0.3559 | 16.3 | 652 | 1.0069 | 0.2939 | 1.0069 | 1.0034 |
| 0.3559 | 16.35 | 654 | 1.0898 | 0.2264 | 1.0898 | 1.0439 |
| 0.3559 | 16.4 | 656 | 1.1320 | 0.1895 | 1.1320 | 1.0640 |
| 0.3559 | 16.45 | 658 | 1.0995 | 0.1968 | 1.0995 | 1.0486 |
| 0.3559 | 16.5 | 660 | 0.9695 | 0.2939 | 0.9695 | 0.9846 |
| 0.3559 | 16.55 | 662 | 0.8808 | 0.2615 | 0.8808 | 0.9385 |
| 0.3559 | 16.6 | 664 | 0.8782 | 0.2259 | 0.8782 | 0.9371 |
| 0.3559 | 16.65 | 666 | 0.9080 | 0.1911 | 0.9080 | 0.9529 |
| 0.3559 | 16.7 | 668 | 0.9489 | 0.1827 | 0.9489 | 0.9741 |
| 0.3559 | 16.75 | 670 | 1.0408 | 0.2392 | 1.0408 | 1.0202 |
| 0.3559 | 16.8 | 672 | 1.0373 | 0.2227 | 1.0373 | 1.0185 |
| 0.3559 | 16.85 | 674 | 0.9711 | 0.1747 | 0.9711 | 0.9855 |
| 0.3559 | 16.9 | 676 | 0.9008 | 0.2982 | 0.9008 | 0.9491 |
| 0.3559 | 16.95 | 678 | 0.8984 | 0.2982 | 0.8984 | 0.9479 |
| 0.3559 | 17.0 | 680 | 0.9232 | 0.2982 | 0.9232 | 0.9608 |
| 0.3559 | 17.05 | 682 | 0.9509 | 0.3110 | 0.9509 | 0.9752 |
| 0.3559 | 17.1 | 684 | 0.9527 | 0.3110 | 0.9527 | 0.9761 |
| 0.3559 | 17.15 | 686 | 0.9146 | 0.2923 | 0.9146 | 0.9563 |
| 0.3559 | 17.2 | 688 | 0.9187 | 0.2866 | 0.9187 | 0.9585 |
| 0.3559 | 17.25 | 690 | 0.9385 | 0.1827 | 0.9385 | 0.9687 |
| 0.3559 | 17.3 | 692 | 0.9435 | 0.1827 | 0.9435 | 0.9714 |
| 0.3559 | 17.35 | 694 | 0.9518 | 0.1499 | 0.9518 | 0.9756 |
| 0.3559 | 17.4 | 696 | 0.9639 | 0.1499 | 0.9639 | 0.9818 |
| 0.3559 | 17.45 | 698 | 0.9292 | 0.1499 | 0.9292 | 0.9640 |
| 0.3559 | 17.5 | 700 | 0.9305 | 0.1499 | 0.9305 | 0.9646 |
| 0.3559 | 17.55 | 702 | 0.8995 | 0.2562 | 0.8995 | 0.9484 |
| 0.3559 | 17.6 | 704 | 0.8398 | 0.3169 | 0.8398 | 0.9164 |
| 0.3559 | 17.65 | 706 | 0.8160 | 0.3372 | 0.8160 | 0.9034 |
| 0.3559 | 17.7 | 708 | 0.8594 | 0.3042 | 0.8594 | 0.9270 |
| 0.3559 | 17.75 | 710 | 0.9187 | 0.2259 | 0.9187 | 0.9585 |
| 0.3559 | 17.8 | 712 | 0.9937 | 0.2363 | 0.9937 | 0.9969 |
| 0.3559 | 17.85 | 714 | 1.0874 | 0.2271 | 1.0874 | 1.0428 |
| 0.3559 | 17.9 | 716 | 1.0767 | 0.2412 | 1.0767 | 1.0377 |
| 0.3559 | 17.95 | 718 | 1.0119 | 0.2552 | 1.0119 | 1.0059 |
| 0.3559 | 18.0 | 720 | 0.9545 | 0.1911 | 0.9545 | 0.9770 |
| 0.3559 | 18.05 | 722 | 0.9176 | 0.2982 | 0.9176 | 0.9579 |
| 0.3559 | 18.1 | 724 | 0.9192 | 0.2615 | 0.9192 | 0.9588 |
| 0.3559 | 18.15 | 726 | 0.9566 | 0.1911 | 0.9566 | 0.9781 |
| 0.3559 | 18.2 | 728 | 1.0233 | 0.2227 | 1.0233 | 1.0116 |
| 0.3559 | 18.25 | 730 | 1.1003 | 0.1564 | 1.1003 | 1.0490 |
| 0.3559 | 18.3 | 732 | 1.1947 | 0.1613 | 1.1947 | 1.0930 |
| 0.3559 | 18.35 | 734 | 1.2081 | 0.1613 | 1.2081 | 1.0991 |
| 0.3559 | 18.4 | 736 | 1.1517 | 0.1784 | 1.1517 | 1.0732 |
| 0.3559 | 18.45 | 738 | 1.1007 | 0.2183 | 1.1007 | 1.0491 |
| 0.3559 | 18.5 | 740 | 1.0787 | 0.2552 | 1.0787 | 1.0386 |
| 0.3559 | 18.55 | 742 | 1.0840 | 0.2504 | 1.0840 | 1.0412 |
| 0.3559 | 18.6 | 744 | 1.1201 | 0.2392 | 1.1201 | 1.0583 |
| 0.3559 | 18.65 | 746 | 1.1672 | 0.1713 | 1.1672 | 1.0804 |
| 0.3559 | 18.7 | 748 | 1.1556 | 0.1713 | 1.1556 | 1.0750 |
| 0.3559 | 18.75 | 750 | 1.0890 | 0.2437 | 1.0890 | 1.0436 |
| 0.3559 | 18.8 | 752 | 1.0070 | 0.2119 | 1.0070 | 1.0035 |
| 0.3559 | 18.85 | 754 | 0.9427 | 0.2726 | 0.9427 | 0.9709 |
| 0.3559 | 18.9 | 756 | 0.9104 | 0.2518 | 0.9104 | 0.9541 |
| 0.3559 | 18.95 | 758 | 0.8891 | 0.3302 | 0.8891 | 0.9429 |
| 0.3559 | 19.0 | 760 | 0.8679 | 0.3302 | 0.8679 | 0.9316 |
| 0.3559 | 19.05 | 762 | 0.8489 | 0.3167 | 0.8489 | 0.9214 |
| 0.3559 | 19.1 | 764 | 0.8329 | 0.3167 | 0.8329 | 0.9127 |
| 0.3559 | 19.15 | 766 | 0.8495 | 0.3444 | 0.8495 | 0.9217 |
| 0.3559 | 19.2 | 768 | 0.9087 | 0.4067 | 0.9087 | 0.9532 |
| 0.3559 | 19.25 | 770 | 0.9642 | 0.3051 | 0.9642 | 0.9819 |
| 0.3559 | 19.3 | 772 | 0.9685 | 0.2995 | 0.9685 | 0.9841 |
| 0.3559 | 19.35 | 774 | 0.9435 | 0.3473 | 0.9435 | 0.9713 |
| 0.3559 | 19.4 | 776 | 0.9080 | 0.3819 | 0.9080 | 0.9529 |
| 0.3559 | 19.45 | 778 | 0.9286 | 0.3433 | 0.9286 | 0.9636 |
| 0.3559 | 19.5 | 780 | 0.9535 | 0.3194 | 0.9535 | 0.9765 |
| 0.3559 | 19.55 | 782 | 0.9274 | 0.2223 | 0.9274 | 0.9630 |
| 0.3559 | 19.6 | 784 | 0.8843 | 0.2383 | 0.8843 | 0.9404 |
| 0.3559 | 19.65 | 786 | 0.8223 | 0.1598 | 0.8223 | 0.9068 |
| 0.3559 | 19.7 | 788 | 0.7812 | 0.2471 | 0.7812 | 0.8838 |
| 0.3559 | 19.75 | 790 | 0.7880 | 0.2027 | 0.7880 | 0.8877 |
| 0.3559 | 19.8 | 792 | 0.8455 | 0.2285 | 0.8455 | 0.9195 |
| 0.3559 | 19.85 | 794 | 0.9330 | 0.2632 | 0.9330 | 0.9659 |
| 0.3559 | 19.9 | 796 | 1.0173 | 0.1911 | 1.0173 | 1.0086 |
| 0.3559 | 19.95 | 798 | 1.0708 | 0.1422 | 1.0708 | 1.0348 |
| 0.3559 | 20.0 | 800 | 1.1168 | 0.1821 | 1.1168 | 1.0568 |
| 0.3559 | 20.05 | 802 | 1.1116 | 0.1909 | 1.1116 | 1.0543 |
| 0.3559 | 20.1 | 804 | 1.1245 | 0.1909 | 1.1245 | 1.0604 |
| 0.3559 | 20.15 | 806 | 1.1137 | 0.1328 | 1.1137 | 1.0553 |
| 0.3559 | 20.2 | 808 | 1.1157 | 0.1328 | 1.1157 | 1.0563 |
| 0.3559 | 20.25 | 810 | 1.1120 | 0.1635 | 1.1120 | 1.0545 |
| 0.3559 | 20.3 | 812 | 1.0757 | 0.1949 | 1.0757 | 1.0372 |
| 0.3559 | 20.35 | 814 | 1.0232 | 0.2075 | 1.0232 | 1.0116 |
| 0.3559 | 20.4 | 816 | 1.0141 | 0.2032 | 1.0141 | 1.0070 |
| 0.3559 | 20.45 | 818 | 0.9764 | 0.2810 | 0.9764 | 0.9881 |
| 0.3559 | 20.5 | 820 | 0.9691 | 0.2615 | 0.9691 | 0.9844 |
| 0.3559 | 20.55 | 822 | 0.9805 | 0.2562 | 0.9805 | 0.9902 |
| 0.3559 | 20.6 | 824 | 1.0114 | 0.2939 | 1.0114 | 1.0057 |
| 0.3559 | 20.65 | 826 | 1.0639 | 0.1990 | 1.0639 | 1.0315 |
| 0.3559 | 20.7 | 828 | 1.0771 | 0.2032 | 1.0771 | 1.0378 |
| 0.3559 | 20.75 | 830 | 1.0387 | 0.2032 | 1.0387 | 1.0192 |
| 0.3559 | 20.8 | 832 | 0.9869 | 0.2982 | 0.9869 | 0.9934 |
| 0.3559 | 20.85 | 834 | 0.9296 | 0.3105 | 0.9296 | 0.9642 |
| 0.3559 | 20.9 | 836 | 0.8970 | 0.3637 | 0.8970 | 0.9471 |
| 0.3559 | 20.95 | 838 | 0.8725 | 0.4067 | 0.8725 | 0.9341 |
| 0.3559 | 21.0 | 840 | 0.8015 | 0.3444 | 0.8015 | 0.8953 |
| 0.3559 | 21.05 | 842 | 0.7495 | 0.3868 | 0.7495 | 0.8657 |
| 0.3559 | 21.1 | 844 | 0.7492 | 0.3868 | 0.7492 | 0.8656 |
| 0.3559 | 21.15 | 846 | 0.7933 | 0.3444 | 0.7933 | 0.8907 |
| 0.3559 | 21.2 | 848 | 0.8807 | 0.4067 | 0.8807 | 0.9385 |
| 0.3559 | 21.25 | 850 | 0.9366 | 0.3169 | 0.9366 | 0.9678 |
| 0.3559 | 21.3 | 852 | 0.9918 | 0.1709 | 0.9918 | 0.9959 |
| 0.3559 | 21.35 | 854 | 1.0128 | 0.1671 | 1.0128 | 1.0064 |
| 0.3559 | 21.4 | 856 | 0.9443 | 0.3110 | 0.9443 | 0.9718 |
| 0.3559 | 21.45 | 858 | 0.8430 | 0.3819 | 0.8430 | 0.9182 |
| 0.3559 | 21.5 | 860 | 0.7615 | 0.3594 | 0.7615 | 0.8727 |
| 0.3559 | 21.55 | 862 | 0.7363 | 0.3312 | 0.7363 | 0.8581 |
| 0.3559 | 21.6 | 864 | 0.7536 | 0.3312 | 0.7536 | 0.8681 |
| 0.3559 | 21.65 | 866 | 0.8307 | 0.3372 | 0.8307 | 0.9114 |
| 0.3559 | 21.7 | 868 | 0.8943 | 0.3675 | 0.8943 | 0.9457 |
| 0.3559 | 21.75 | 870 | 0.9708 | 0.3287 | 0.9708 | 0.9853 |
| 0.3559 | 21.8 | 872 | 1.0251 | 0.2227 | 1.0251 | 1.0125 |
| 0.3559 | 21.85 | 874 | 1.0566 | 0.2227 | 1.0566 | 1.0279 |
| 0.3559 | 21.9 | 876 | 1.0179 | 0.2411 | 1.0179 | 1.0089 |
| 0.3559 | 21.95 | 878 | 0.9323 | 0.2562 | 0.9323 | 0.9655 |
| 0.3559 | 22.0 | 880 | 0.9044 | 0.2923 | 0.9044 | 0.9510 |
| 0.3559 | 22.05 | 882 | 0.9260 | 0.2562 | 0.9260 | 0.9623 |
| 0.3559 | 22.1 | 884 | 0.9632 | 0.2756 | 0.9632 | 0.9814 |
| 0.3559 | 22.15 | 886 | 1.0387 | 0.1709 | 1.0387 | 1.0192 |
| 0.3559 | 22.2 | 888 | 1.1398 | 0.1870 | 1.1398 | 1.0676 |
| 0.3559 | 22.25 | 890 | 1.1581 | 0.1870 | 1.1581 | 1.0762 |
| 0.3559 | 22.3 | 892 | 1.1031 | 0.1909 | 1.1031 | 1.0503 |
| 0.3559 | 22.35 | 894 | 0.9989 | 0.2075 | 0.9989 | 0.9994 |
| 0.3559 | 22.4 | 896 | 0.8689 | 0.2982 | 0.8689 | 0.9321 |
| 0.3559 | 22.45 | 898 | 0.8160 | 0.2463 | 0.8160 | 0.9033 |
| 0.3559 | 22.5 | 900 | 0.8254 | 0.2463 | 0.8254 | 0.9085 |
| 0.3559 | 22.55 | 902 | 0.8750 | 0.2982 | 0.8750 | 0.9354 |
| 0.3559 | 22.6 | 904 | 0.9017 | 0.2562 | 0.9017 | 0.9496 |
| 0.3559 | 22.65 | 906 | 0.9243 | 0.2211 | 0.9243 | 0.9614 |
| 0.3559 | 22.7 | 908 | 0.9374 | 0.2211 | 0.9374 | 0.9682 |
| 0.3559 | 22.75 | 910 | 0.8787 | 0.2982 | 0.8787 | 0.9374 |
| 0.3559 | 22.8 | 912 | 0.8496 | 0.2982 | 0.8496 | 0.9218 |
| 0.3559 | 22.85 | 914 | 0.8688 | 0.2923 | 0.8688 | 0.9321 |
| 0.3559 | 22.9 | 916 | 0.9236 | 0.3051 | 0.9236 | 0.9610 |
| 0.3559 | 22.95 | 918 | 0.9257 | 0.3051 | 0.9257 | 0.9621 |
| 0.3559 | 23.0 | 920 | 0.9021 | 0.3110 | 0.9021 | 0.9498 |
| 0.3559 | 23.05 | 922 | 0.8337 | 0.2982 | 0.8337 | 0.9131 |
| 0.3559 | 23.1 | 924 | 0.7679 | 0.4134 | 0.7679 | 0.8763 |
| 0.3559 | 23.15 | 926 | 0.7128 | 0.3868 | 0.7128 | 0.8443 |
| 0.3559 | 23.2 | 928 | 0.7236 | 0.3868 | 0.7236 | 0.8506 |
| 0.3559 | 23.25 | 930 | 0.7707 | 0.3712 | 0.7707 | 0.8779 |
| 0.3559 | 23.3 | 932 | 0.8099 | 0.2904 | 0.8099 | 0.9000 |
| 0.3559 | 23.35 | 934 | 0.8576 | 0.2000 | 0.8576 | 0.9261 |
| 0.3559 | 23.4 | 936 | 0.8635 | 0.1734 | 0.8635 | 0.9293 |
| 0.3559 | 23.45 | 938 | 0.8811 | 0.2000 | 0.8811 | 0.9387 |
| 0.3559 | 23.5 | 940 | 0.9003 | 0.2142 | 0.9003 | 0.9489 |
| 0.3559 | 23.55 | 942 | 0.9292 | 0.2923 | 0.9292 | 0.9640 |
| 0.3559 | 23.6 | 944 | 0.9417 | 0.2923 | 0.9417 | 0.9704 |
| 0.3559 | 23.65 | 946 | 0.9565 | 0.2463 | 0.9565 | 0.9780 |
| 0.3559 | 23.7 | 948 | 1.0134 | 0.2562 | 1.0134 | 1.0067 |
| 0.3559 | 23.75 | 950 | 1.0518 | 0.1651 | 1.0518 | 1.0256 |
| 0.3559 | 23.8 | 952 | 1.0677 | 0.1651 | 1.0677 | 1.0333 |
| 0.3559 | 23.85 | 954 | 1.0938 | 0.0134 | 1.0938 | 1.0459 |
| 0.3559 | 23.9 | 956 | 1.0998 | 0.0114 | 1.0998 | 1.0487 |
| 0.3559 | 23.95 | 958 | 1.0754 | 0.0462 | 1.0754 | 1.0370 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
nathanialhunt/a4673c11-c12b-4d99-99b1-c849d5e98ca5
|
nathanialhunt
| 2025-01-21T15:37:10Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/e3f77680-ac2a-4c6f-afed-0b2386f29ee7",
"base_model:adapter:rayonlabs/e3f77680-ac2a-4c6f-afed-0b2386f29ee7",
"region:us"
] | null | 2025-01-21T15:05:44Z |
---
library_name: peft
base_model: rayonlabs/e3f77680-ac2a-4c6f-afed-0b2386f29ee7
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4673c11-c12b-4d99-99b1-c849d5e98ca5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/e3f77680-ac2a-4c6f-afed-0b2386f29ee7
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8a053a2d05ec29b3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8a053a2d05ec29b3_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/a4673c11-c12b-4d99-99b1-c849d5e98ca5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/8a053a2d05ec29b3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dab6df52-ba8c-40d3-8369-22e8245b6f81
wandb_project: Birthday-SN56-5-Gradients-On-Demand
wandb_run: your_name
wandb_runid: dab6df52-ba8c-40d3-8369-22e8245b6f81
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a4673c11-c12b-4d99-99b1-c849d5e98ca5
This model is a fine-tuned version of [rayonlabs/e3f77680-ac2a-4c6f-afed-0b2386f29ee7](https://huggingface.co/rayonlabs/e3f77680-ac2a-4c6f-afed-0b2386f29ee7) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5170
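As a rough usage sketch (not part of the original card), a LoRA adapter like this is typically loaded on top of the base model listed above with PEFT; the repository IDs below are taken from this card, everything else is assumed standard `transformers`/`peft` usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "rayonlabs/e3f77680-ac2a-4c6f-afed-0b2386f29ee7"            # base model from the config above
adapter_id = "nathanialhunt/a4673c11-c12b-4d99-99b1-c849d5e98ca5"      # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights to the base model
```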
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 13.2355 | 0.0000 | 1 | 13.6415 |
| 13.9149 | 0.0001 | 3 | 13.1123 |
| 9.2283 | 0.0002 | 6 | 6.7145 |
| 3.0629 | 0.0004 | 9 | 3.5170 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nblinh63/2567152b-8e91-41ea-993f-04c8a73e062a
|
nblinh63
| 2025-01-21T15:35:51Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T14:03:53Z |
---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2567152b-8e91-41ea-993f-04c8a73e062a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ea142ac27856f425_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ea142ac27856f425_train_data.json
type:
field_instruction: 'Purpose: Clinical Focus'
field_output: Description
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh63/2567152b-8e91-41ea-993f-04c8a73e062a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ea142ac27856f425_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8d608405-ef66-4e7a-9976-6a448c75bbeb
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8d608405-ef66-4e7a-9976-6a448c75bbeb
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2567152b-8e91-41ea-993f-04c8a73e062a
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6800
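Since this adapter was trained with the base model loaded in 8-bit (see `load_in_8bit: true` in the config above), a comparable loading sketch might look like the following; the IDs are taken from this card and the rest is assumed standard `transformers`/`peft`/`bitsandbytes` usage:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Nous-Hermes-llama-2-7b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # mirrors the training-time quantization
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "nblinh63/2567152b-8e91-41ea-993f-04c8a73e062a")
```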
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6217 | 0.0052 | 200 | 1.6800 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
fedovtt/7669b4ac-fd7d-43d8-93f5-e154fec08708
|
fedovtt
| 2025-01-21T15:35:45Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T15:34:48Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7669b4ac-fd7d-43d8-93f5-e154fec08708
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 50727019ec5046e2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/50727019ec5046e2_train_data.json
type:
field_input: category
field_instruction: text
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fedovtt/7669b4ac-fd7d-43d8-93f5-e154fec08708
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/50727019ec5046e2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 61ce37e2-d75b-4435-bf72-23ce5c13f0b8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 61ce37e2-d75b-4435-bf72-23ce5c13f0b8
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7669b4ac-fd7d-43d8-93f5-e154fec08708
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0013 | 1 | nan |
| 0.0 | 0.0066 | 5 | nan |
| 0.0 | 0.0133 | 10 | nan |
| 0.0 | 0.0199 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
chchen/Llama-3.1-8B-Instruct-dpo-1000
|
chchen
| 2025-01-21T15:32:46Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-01-21T15:10:51Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
license: llama3.1
tags:
- llama-factory
- lora
- trl
- dpo
- generated_from_trainer
model-index:
- name: Llama-3.1-8B-Instruct-dpo-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct-dpo-1000
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the bct_non_cot_dpo_1000 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2845
- Rewards/chosen: 0.1535
- Rewards/rejected: -1.8119
- Rewards/accuracies: 0.9000
- Rewards/margins: 1.9654
- Logps/chosen: -16.0986
- Logps/rejected: -38.2013
- Logits/chosen: -0.1852
- Logits/rejected: -0.3689
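Note that the reported margin is simply the gap between the chosen and rejected rewards: 0.1535 - (-1.8119) ≈ 1.9654.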
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/chosen | Logps/rejected | Logits/chosen | Logits/rejected |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:------------:|:--------------:|:-------------:|:---------------:|
| 0.6346 | 1.7778 | 50 | 0.6252 | 0.2207 | 0.0694 | 0.8000 | 0.1513 | -15.4268 | -19.3885 | -0.2672 | -0.3872 |
| 0.4162 | 3.5556 | 100 | 0.3736 | 0.3857 | -0.7267 | 0.8900 | 1.1125 | -13.7764 | -27.3496 | -0.2058 | -0.3821 |
| 0.2919 | 5.3333 | 150 | 0.3053 | 0.2701 | -1.3825 | 0.9000 | 1.6526 | -14.9327 | -33.9072 | -0.1906 | -0.3753 |
| 0.3007 | 7.1111 | 200 | 0.2881 | 0.1886 | -1.7019 | 0.9100 | 1.8905 | -15.7478 | -37.1016 | -0.1883 | -0.3726 |
| 0.2536 | 8.8889 | 250 | 0.2845 | 0.1535 | -1.8119 | 0.9000 | 1.9654 | -16.0986 | -38.2013 | -0.1852 | -0.3689 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.45.2
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.20.0
|
nhungphammmmm/7404cfd9-10e5-40cf-860b-672e90b76941
|
nhungphammmmm
| 2025-01-21T15:30:23Z | 11 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b-it",
"base_model:adapter:unsloth/gemma-2-9b-it",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T14:40:40Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7404cfd9-10e5-40cf-860b-672e90b76941
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2adafa906e5c58da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2adafa906e5c58da_train_data.json
type:
field_input: ''
field_instruction: anchor
field_output: positive
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhungphammmmm/7404cfd9-10e5-40cf-860b-672e90b76941
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2adafa906e5c58da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aa7d9493-c5b5-4e83-b6c1-5e33a1c2ac47
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: aa7d9493-c5b5-4e83-b6c1-5e33a1c2ac47
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7404cfd9-10e5-40cf-860b-672e90b76941
This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4328
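For deployment, a LoRA adapter like this one can usually be merged back into the base model with PEFT; a minimal sketch (repository IDs from this card, output path purely illustrative):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2-9b-it")
model = PeftModel.from_pretrained(base, "nhungphammmmm/7404cfd9-10e5-40cf-860b-672e90b76941")
merged = model.merge_and_unload()               # fold the LoRA deltas into the base weights
merged.save_pretrained("gemma-2-9b-it-merged")  # illustrative output directory
```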
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4057 | 0.2224 | 200 | 1.4328 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nhunglaaaaaaa/ec8cc4f0-be45-4918-88e5-2cb03233d31b
|
nhunglaaaaaaa
| 2025-01-21T15:30:10Z | 11 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b-it",
"base_model:adapter:unsloth/gemma-2-9b-it",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T14:40:43Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ec8cc4f0-be45-4918-88e5-2cb03233d31b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2adafa906e5c58da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2adafa906e5c58da_train_data.json
type:
field_input: ''
field_instruction: anchor
field_output: positive
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhunglaaaaaaa/ec8cc4f0-be45-4918-88e5-2cb03233d31b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2adafa906e5c58da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aa7d9493-c5b5-4e83-b6c1-5e33a1c2ac47
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: aa7d9493-c5b5-4e83-b6c1-5e33a1c2ac47
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ec8cc4f0-be45-4918-88e5-2cb03233d31b
This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4068 | 0.2224 | 200 | 1.4342 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k16_task7_organization
|
MayBashendy
| 2025-01-21T15:29:12Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-21T15:24:19Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k16_task7_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k16_task7_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5216
- Qwk: 0.0201
- Mse: 1.5216
- Rmse: 1.2335
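For reference, the Qwk/Mse/Rmse columns reported throughout this card correspond to quadratic weighted kappa and (root) mean squared error between predicted and gold scores; a minimal sketch of how such values can be computed with scikit-learn (the label arrays are placeholders, not data from this run):
```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([0, 1, 2, 1, 0])  # placeholder gold organization scores
y_pred = np.array([0, 2, 2, 1, 1])  # placeholder model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = float(np.sqrt(mse))  # the Rmse column is just the square root of Mse
```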
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0526 | 2 | 2.6272 | -0.0593 | 2.6272 | 1.6209 |
| No log | 0.1053 | 4 | 1.3533 | 0.0412 | 1.3533 | 1.1633 |
| No log | 0.1579 | 6 | 1.1248 | -0.1866 | 1.1248 | 1.0606 |
| No log | 0.2105 | 8 | 1.2809 | -0.2576 | 1.2809 | 1.1317 |
| No log | 0.2632 | 10 | 1.1955 | -0.0609 | 1.1955 | 1.0934 |
| No log | 0.3158 | 12 | 1.2034 | -0.1027 | 1.2034 | 1.0970 |
| No log | 0.3684 | 14 | 1.1322 | 0.0868 | 1.1322 | 1.0641 |
| No log | 0.4211 | 16 | 1.0782 | 0.0943 | 1.0782 | 1.0384 |
| No log | 0.4737 | 18 | 1.0863 | 0.0391 | 1.0863 | 1.0423 |
| No log | 0.5263 | 20 | 1.1633 | -0.1081 | 1.1633 | 1.0786 |
| No log | 0.5789 | 22 | 1.0268 | 0.0419 | 1.0268 | 1.0133 |
| No log | 0.6316 | 24 | 0.9608 | -0.0054 | 0.9608 | 0.9802 |
| No log | 0.6842 | 26 | 0.9309 | -0.0103 | 0.9309 | 0.9648 |
| No log | 0.7368 | 28 | 0.9532 | 0.0208 | 0.9532 | 0.9763 |
| No log | 0.7895 | 30 | 0.9817 | 0.1268 | 0.9817 | 0.9908 |
| No log | 0.8421 | 32 | 1.0772 | 0.0958 | 1.0772 | 1.0379 |
| No log | 0.8947 | 34 | 1.0455 | 0.1103 | 1.0455 | 1.0225 |
| No log | 0.9474 | 36 | 1.0125 | -0.0472 | 1.0125 | 1.0062 |
| No log | 1.0 | 38 | 1.3162 | -0.0624 | 1.3162 | 1.1473 |
| No log | 1.0526 | 40 | 1.5613 | -0.0844 | 1.5613 | 1.2495 |
| No log | 1.1053 | 42 | 1.4443 | -0.0219 | 1.4443 | 1.2018 |
| No log | 1.1579 | 44 | 1.1555 | -0.0579 | 1.1555 | 1.0749 |
| No log | 1.2105 | 46 | 0.9541 | -0.0915 | 0.9541 | 0.9768 |
| No log | 1.2632 | 48 | 1.0257 | 0.1028 | 1.0257 | 1.0128 |
| No log | 1.3158 | 50 | 1.1605 | -0.0201 | 1.1605 | 1.0773 |
| No log | 1.3684 | 52 | 1.2818 | -0.0529 | 1.2818 | 1.1322 |
| No log | 1.4211 | 54 | 1.1967 | -0.0753 | 1.1967 | 1.0939 |
| No log | 1.4737 | 56 | 1.0339 | -0.0569 | 1.0339 | 1.0168 |
| No log | 1.5263 | 58 | 1.1167 | -0.1355 | 1.1167 | 1.0567 |
| No log | 1.5789 | 60 | 1.0975 | -0.1120 | 1.0975 | 1.0476 |
| No log | 1.6316 | 62 | 1.1324 | -0.1389 | 1.1324 | 1.0641 |
| No log | 1.6842 | 64 | 1.4684 | -0.1344 | 1.4684 | 1.2118 |
| No log | 1.7368 | 66 | 1.6105 | -0.1020 | 1.6105 | 1.2691 |
| No log | 1.7895 | 68 | 1.4590 | -0.1956 | 1.4590 | 1.2079 |
| No log | 1.8421 | 70 | 1.3797 | -0.2625 | 1.3797 | 1.1746 |
| No log | 1.8947 | 72 | 1.3951 | -0.1445 | 1.3951 | 1.1812 |
| No log | 1.9474 | 74 | 1.4090 | -0.1745 | 1.4090 | 1.1870 |
| No log | 2.0 | 76 | 1.4591 | -0.1263 | 1.4591 | 1.2079 |
| No log | 2.0526 | 78 | 1.6217 | -0.0934 | 1.6217 | 1.2735 |
| No log | 2.1053 | 80 | 1.6123 | -0.0973 | 1.6123 | 1.2698 |
| No log | 2.1579 | 82 | 1.4721 | -0.0703 | 1.4721 | 1.2133 |
| No log | 2.2105 | 84 | 1.3864 | -0.0980 | 1.3864 | 1.1775 |
| No log | 2.2632 | 86 | 1.3088 | -0.0693 | 1.3088 | 1.1440 |
| No log | 2.3158 | 88 | 1.1973 | 0.0012 | 1.1973 | 1.0942 |
| No log | 2.3684 | 90 | 1.1401 | 0.0184 | 1.1401 | 1.0678 |
| No log | 2.4211 | 92 | 1.1138 | 0.0162 | 1.1138 | 1.0554 |
| No log | 2.4737 | 94 | 1.1471 | -0.0358 | 1.1471 | 1.0710 |
| No log | 2.5263 | 96 | 1.1660 | 0.0445 | 1.1660 | 1.0798 |
| No log | 2.5789 | 98 | 1.1249 | -0.0274 | 1.1249 | 1.0606 |
| No log | 2.6316 | 100 | 1.1856 | -0.0077 | 1.1856 | 1.0889 |
| No log | 2.6842 | 102 | 1.3302 | 0.1004 | 1.3302 | 1.1533 |
| No log | 2.7368 | 104 | 1.4543 | 0.0633 | 1.4543 | 1.2059 |
| No log | 2.7895 | 106 | 1.2951 | 0.1479 | 1.2951 | 1.1380 |
| No log | 2.8421 | 108 | 1.0956 | 0.0378 | 1.0956 | 1.0467 |
| No log | 2.8947 | 110 | 1.1455 | 0.0327 | 1.1455 | 1.0703 |
| No log | 2.9474 | 112 | 1.3362 | 0.0900 | 1.3362 | 1.1559 |
| No log | 3.0 | 114 | 1.4208 | 0.1093 | 1.4208 | 1.1920 |
| No log | 3.0526 | 116 | 1.3002 | 0.0849 | 1.3002 | 1.1403 |
| No log | 3.1053 | 118 | 1.2707 | -0.0670 | 1.2707 | 1.1273 |
| No log | 3.1579 | 120 | 1.2686 | 0.0155 | 1.2686 | 1.1263 |
| No log | 3.2105 | 122 | 1.3567 | 0.0802 | 1.3567 | 1.1648 |
| No log | 3.2632 | 124 | 1.4962 | 0.1244 | 1.4962 | 1.2232 |
| No log | 3.3158 | 126 | 1.4460 | 0.0881 | 1.4460 | 1.2025 |
| No log | 3.3684 | 128 | 1.3508 | 0.0979 | 1.3508 | 1.1622 |
| No log | 3.4211 | 130 | 1.2981 | 0.1031 | 1.2981 | 1.1393 |
| No log | 3.4737 | 132 | 1.1881 | -0.0025 | 1.1881 | 1.0900 |
| No log | 3.5263 | 134 | 1.1407 | -0.0025 | 1.1407 | 1.0680 |
| No log | 3.5789 | 136 | 1.1313 | -0.0347 | 1.1313 | 1.0636 |
| No log | 3.6316 | 138 | 1.1818 | 0.0445 | 1.1818 | 1.0871 |
| No log | 3.6842 | 140 | 1.3569 | 0.1145 | 1.3569 | 1.1649 |
| No log | 3.7368 | 142 | 1.4561 | 0.0791 | 1.4561 | 1.2067 |
| No log | 3.7895 | 144 | 1.2293 | 0.0122 | 1.2293 | 1.1087 |
| No log | 3.8421 | 146 | 1.1815 | 0.0184 | 1.1815 | 1.0870 |
| No log | 3.8947 | 148 | 1.2707 | -0.0557 | 1.2707 | 1.1273 |
| No log | 3.9474 | 150 | 1.5072 | 0.1044 | 1.5072 | 1.2277 |
| No log | 4.0 | 152 | 1.7639 | 0.1336 | 1.7639 | 1.3281 |
| No log | 4.0526 | 154 | 1.6838 | 0.0240 | 1.6838 | 1.2976 |
| No log | 4.1053 | 156 | 1.3476 | 0.1146 | 1.3476 | 1.1609 |
| No log | 4.1579 | 158 | 1.0604 | -0.0619 | 1.0604 | 1.0298 |
| No log | 4.2105 | 160 | 1.0329 | -0.0826 | 1.0329 | 1.0163 |
| No log | 4.2632 | 162 | 1.0057 | 0.0255 | 1.0057 | 1.0029 |
| No log | 4.3158 | 164 | 1.0843 | 0.0653 | 1.0843 | 1.0413 |
| No log | 4.3684 | 166 | 1.3021 | -0.0391 | 1.3021 | 1.1411 |
| No log | 4.4211 | 168 | 1.3976 | -0.0391 | 1.3976 | 1.1822 |
| No log | 4.4737 | 170 | 1.3075 | -0.0027 | 1.3075 | 1.1434 |
| No log | 4.5263 | 172 | 1.2089 | 0.0218 | 1.2089 | 1.0995 |
| No log | 4.5789 | 174 | 1.2620 | -0.0462 | 1.2620 | 1.1234 |
| No log | 4.6316 | 176 | 1.4129 | 0.0736 | 1.4129 | 1.1886 |
| No log | 4.6842 | 178 | 1.4863 | 0.0399 | 1.4863 | 1.2192 |
| No log | 4.7368 | 180 | 1.5802 | -0.0149 | 1.5802 | 1.2571 |
| No log | 4.7895 | 182 | 1.5562 | -0.0149 | 1.5562 | 1.2475 |
| No log | 4.8421 | 184 | 1.5297 | 0.0353 | 1.5297 | 1.2368 |
| No log | 4.8947 | 186 | 1.3550 | 0.0838 | 1.3550 | 1.1640 |
| No log | 4.9474 | 188 | 1.2241 | -0.0383 | 1.2241 | 1.1064 |
| No log | 5.0 | 190 | 1.1879 | -0.0383 | 1.1879 | 1.0899 |
| No log | 5.0526 | 192 | 1.2152 | 0.1110 | 1.2152 | 1.1024 |
| No log | 5.1053 | 194 | 1.3843 | 0.1568 | 1.3843 | 1.1766 |
| No log | 5.1579 | 196 | 1.4539 | 0.1272 | 1.4539 | 1.2058 |
| No log | 5.2105 | 198 | 1.3739 | 0.0727 | 1.3739 | 1.1721 |
| No log | 5.2632 | 200 | 1.3371 | 0.0347 | 1.3371 | 1.1563 |
| No log | 5.3158 | 202 | 1.4417 | 0.1093 | 1.4417 | 1.2007 |
| No log | 5.3684 | 204 | 1.4772 | 0.1019 | 1.4772 | 1.2154 |
| No log | 5.4211 | 206 | 1.5103 | 0.1364 | 1.5103 | 1.2289 |
| No log | 5.4737 | 208 | 1.3202 | 0.1626 | 1.3202 | 1.1490 |
| No log | 5.5263 | 210 | 1.2476 | 0.1422 | 1.2476 | 1.1169 |
| No log | 5.5789 | 212 | 1.3022 | 0.1144 | 1.3022 | 1.1411 |
| No log | 5.6316 | 214 | 1.4055 | 0.0704 | 1.4055 | 1.1855 |
| No log | 5.6842 | 216 | 1.6031 | 0.0971 | 1.6031 | 1.2661 |
| No log | 5.7368 | 218 | 1.5909 | 0.0531 | 1.5909 | 1.2613 |
| No log | 5.7895 | 220 | 1.3706 | 0.0873 | 1.3706 | 1.1707 |
| No log | 5.8421 | 222 | 1.0998 | 0.0683 | 1.0998 | 1.0487 |
| No log | 5.8947 | 224 | 1.0045 | -0.0548 | 1.0045 | 1.0023 |
| No log | 5.9474 | 226 | 0.9858 | -0.0232 | 0.9858 | 0.9929 |
| No log | 6.0 | 228 | 1.0412 | 0.0683 | 1.0412 | 1.0204 |
| No log | 6.0526 | 230 | 1.2110 | 0.1356 | 1.2110 | 1.1004 |
| No log | 6.1053 | 232 | 1.2900 | 0.0746 | 1.2900 | 1.1358 |
| No log | 6.1579 | 234 | 1.1768 | 0.1339 | 1.1768 | 1.0848 |
| No log | 6.2105 | 236 | 1.0820 | 0.0293 | 1.0820 | 1.0402 |
| No log | 6.2632 | 238 | 1.0496 | -0.0586 | 1.0496 | 1.0245 |
| No log | 6.3158 | 240 | 1.0748 | 0.0332 | 1.0748 | 1.0367 |
| No log | 6.3684 | 242 | 1.1750 | 0.1557 | 1.1750 | 1.0840 |
| No log | 6.4211 | 244 | 1.1079 | 0.1623 | 1.1079 | 1.0526 |
| No log | 6.4737 | 246 | 0.9989 | 0.0994 | 0.9989 | 0.9995 |
| No log | 6.5263 | 248 | 1.0298 | 0.2094 | 1.0298 | 1.0148 |
| No log | 6.5789 | 250 | 1.2194 | 0.1352 | 1.2194 | 1.1043 |
| No log | 6.6316 | 252 | 1.2990 | 0.0925 | 1.2990 | 1.1397 |
| No log | 6.6842 | 254 | 1.2397 | 0.1557 | 1.2397 | 1.1134 |
| No log | 6.7368 | 256 | 1.1328 | 0.1422 | 1.1328 | 1.0643 |
| No log | 6.7895 | 258 | 1.1646 | 0.0829 | 1.1646 | 1.0792 |
| No log | 6.8421 | 260 | 1.2065 | -0.0969 | 1.2065 | 1.0984 |
| No log | 6.8947 | 262 | 1.2767 | 0.0516 | 1.2767 | 1.1299 |
| No log | 6.9474 | 264 | 1.3506 | 0.0896 | 1.3506 | 1.1621 |
| No log | 7.0 | 266 | 1.4084 | 0.0250 | 1.4084 | 1.1867 |
| No log | 7.0526 | 268 | 1.4387 | 0.0250 | 1.4387 | 1.1995 |
| No log | 7.1053 | 270 | 1.3452 | 0.0927 | 1.3452 | 1.1598 |
| No log | 7.1579 | 272 | 1.3409 | 0.1115 | 1.3409 | 1.1580 |
| No log | 7.2105 | 274 | 1.3564 | 0.1260 | 1.3564 | 1.1646 |
| No log | 7.2632 | 276 | 1.3774 | 0.0976 | 1.3774 | 1.1736 |
| No log | 7.3158 | 278 | 1.3428 | 0.1618 | 1.3428 | 1.1588 |
| No log | 7.3684 | 280 | 1.3703 | 0.1320 | 1.3703 | 1.1706 |
| No log | 7.4211 | 282 | 1.3586 | 0.0607 | 1.3586 | 1.1656 |
| No log | 7.4737 | 284 | 1.3317 | 0.0607 | 1.3317 | 1.1540 |
| No log | 7.5263 | 286 | 1.3328 | 0.0561 | 1.3328 | 1.1545 |
| No log | 7.5789 | 288 | 1.2788 | 0.0929 | 1.2788 | 1.1308 |
| No log | 7.6316 | 290 | 1.3360 | 0.0539 | 1.3360 | 1.1559 |
| No log | 7.6842 | 292 | 1.5797 | 0.0708 | 1.5797 | 1.2569 |
| No log | 7.7368 | 294 | 1.7573 | 0.0225 | 1.7573 | 1.3256 |
| No log | 7.7895 | 296 | 1.8595 | 0.0225 | 1.8595 | 1.3636 |
| No log | 7.8421 | 298 | 1.6648 | 0.0455 | 1.6648 | 1.2903 |
| No log | 7.8947 | 300 | 1.4574 | 0.1278 | 1.4574 | 1.2072 |
| No log | 7.9474 | 302 | 1.3516 | 0.0820 | 1.3516 | 1.1626 |
| No log | 8.0 | 304 | 1.3100 | 0.0955 | 1.3100 | 1.1446 |
| No log | 8.0526 | 306 | 1.3089 | 0.1271 | 1.3089 | 1.1441 |
| No log | 8.1053 | 308 | 1.3591 | 0.1257 | 1.3591 | 1.1658 |
| No log | 8.1579 | 310 | 1.4105 | 0.1018 | 1.4105 | 1.1877 |
| No log | 8.2105 | 312 | 1.3486 | 0.1175 | 1.3486 | 1.1613 |
| No log | 8.2632 | 314 | 1.1931 | 0.1045 | 1.1931 | 1.0923 |
| No log | 8.3158 | 316 | 1.1342 | 0.0327 | 1.1342 | 1.0650 |
| No log | 8.3684 | 318 | 1.1787 | 0.1486 | 1.1787 | 1.0857 |
| No log | 8.4211 | 320 | 1.3549 | 0.1287 | 1.3549 | 1.1640 |
| No log | 8.4737 | 322 | 1.4294 | 0.1246 | 1.4294 | 1.1956 |
| No log | 8.5263 | 324 | 1.3287 | 0.1417 | 1.3287 | 1.1527 |
| No log | 8.5789 | 326 | 1.2734 | 0.2059 | 1.2734 | 1.1285 |
| No log | 8.6316 | 328 | 1.2472 | 0.1528 | 1.2472 | 1.1168 |
| No log | 8.6842 | 330 | 1.2534 | 0.2017 | 1.2534 | 1.1196 |
| No log | 8.7368 | 332 | 1.2175 | 0.1976 | 1.2175 | 1.1034 |
| No log | 8.7895 | 334 | 1.1991 | 0.2601 | 1.1991 | 1.0950 |
| No log | 8.8421 | 336 | 1.1013 | 0.2046 | 1.1013 | 1.0494 |
| No log | 8.8947 | 338 | 1.0782 | 0.1955 | 1.0782 | 1.0384 |
| No log | 8.9474 | 340 | 1.1517 | 0.2032 | 1.1517 | 1.0732 |
| No log | 9.0 | 342 | 1.2388 | 0.2100 | 1.2388 | 1.1130 |
| No log | 9.0526 | 344 | 1.2111 | 0.2209 | 1.2111 | 1.1005 |
| No log | 9.1053 | 346 | 1.1529 | 0.2343 | 1.1529 | 1.0737 |
| No log | 9.1579 | 348 | 1.1797 | 0.2343 | 1.1797 | 1.0862 |
| No log | 9.2105 | 350 | 1.3373 | 0.1561 | 1.3373 | 1.1564 |
| No log | 9.2632 | 352 | 1.4898 | 0.1823 | 1.4898 | 1.2206 |
| No log | 9.3158 | 354 | 1.4551 | 0.1823 | 1.4551 | 1.2063 |
| No log | 9.3684 | 356 | 1.2892 | 0.2330 | 1.2892 | 1.1354 |
| No log | 9.4211 | 358 | 1.2209 | 0.1827 | 1.2209 | 1.1049 |
| No log | 9.4737 | 360 | 1.1878 | 0.1827 | 1.1878 | 1.0898 |
| No log | 9.5263 | 362 | 1.1373 | 0.2259 | 1.1373 | 1.0664 |
| No log | 9.5789 | 364 | 1.1745 | 0.1869 | 1.1745 | 1.0838 |
| No log | 9.6316 | 366 | 1.2467 | 0.0866 | 1.2467 | 1.1166 |
| No log | 9.6842 | 368 | 1.2706 | 0.1922 | 1.2706 | 1.1272 |
| No log | 9.7368 | 370 | 1.2681 | 0.1680 | 1.2681 | 1.1261 |
| No log | 9.7895 | 372 | 1.2913 | 0.1379 | 1.2913 | 1.1364 |
| No log | 9.8421 | 374 | 1.2234 | 0.1109 | 1.2234 | 1.1061 |
| No log | 9.8947 | 376 | 1.2794 | 0.1143 | 1.2794 | 1.1311 |
| No log | 9.9474 | 378 | 1.4159 | 0.0733 | 1.4159 | 1.1899 |
| No log | 10.0 | 380 | 1.3992 | 0.0515 | 1.3992 | 1.1829 |
| No log | 10.0526 | 382 | 1.2418 | 0.1699 | 1.2418 | 1.1144 |
| No log | 10.1053 | 384 | 1.0707 | 0.1028 | 1.0707 | 1.0348 |
| No log | 10.1579 | 386 | 1.0396 | 0.0781 | 1.0396 | 1.0196 |
| No log | 10.2105 | 388 | 1.1039 | 0.1293 | 1.1039 | 1.0507 |
| No log | 10.2632 | 390 | 1.1998 | 0.1145 | 1.1998 | 1.0954 |
| No log | 10.3158 | 392 | 1.3749 | 0.0813 | 1.3749 | 1.1725 |
| No log | 10.3684 | 394 | 1.5529 | 0.0927 | 1.5529 | 1.2461 |
| No log | 10.4211 | 396 | 1.6228 | 0.1075 | 1.6228 | 1.2739 |
| No log | 10.4737 | 398 | 1.4991 | 0.0927 | 1.4991 | 1.2244 |
| No log | 10.5263 | 400 | 1.3037 | 0.0761 | 1.3037 | 1.1418 |
| No log | 10.5789 | 402 | 1.1463 | 0.0993 | 1.1463 | 1.0706 |
| No log | 10.6316 | 404 | 1.1016 | 0.0058 | 1.1016 | 1.0496 |
| No log | 10.6842 | 406 | 1.1157 | -0.0320 | 1.1157 | 1.0562 |
| No log | 10.7368 | 408 | 1.1974 | 0.0462 | 1.1974 | 1.0943 |
| No log | 10.7895 | 410 | 1.3775 | 0.0921 | 1.3775 | 1.1737 |
| No log | 10.8421 | 412 | 1.4886 | 0.0544 | 1.4886 | 1.2201 |
| No log | 10.8947 | 414 | 1.4853 | 0.0829 | 1.4853 | 1.2187 |
| No log | 10.9474 | 416 | 1.3911 | 0.1453 | 1.3911 | 1.1795 |
| No log | 11.0 | 418 | 1.3247 | 0.1360 | 1.3247 | 1.1509 |
| No log | 11.0526 | 420 | 1.2653 | 0.1360 | 1.2653 | 1.1249 |
| No log | 11.1053 | 422 | 1.2646 | 0.1360 | 1.2646 | 1.1245 |
| No log | 11.1579 | 424 | 1.2685 | 0.0154 | 1.2685 | 1.1263 |
| No log | 11.2105 | 426 | 1.2663 | -0.1056 | 1.2663 | 1.1253 |
| No log | 11.2632 | 428 | 1.2858 | 0.0462 | 1.2858 | 1.1339 |
| No log | 11.3158 | 430 | 1.3754 | 0.1145 | 1.3754 | 1.1728 |
| No log | 11.3684 | 432 | 1.3910 | 0.1679 | 1.3910 | 1.1794 |
| No log | 11.4211 | 434 | 1.3145 | 0.1265 | 1.3145 | 1.1465 |
| No log | 11.4737 | 436 | 1.1925 | 0.1044 | 1.1925 | 1.0920 |
| No log | 11.5263 | 438 | 1.1626 | 0.1332 | 1.1626 | 1.0782 |
| No log | 11.5789 | 440 | 1.1926 | 0.1584 | 1.1926 | 1.0921 |
| No log | 11.6316 | 442 | 1.2035 | 0.1178 | 1.2035 | 1.0970 |
| No log | 11.6842 | 444 | 1.2160 | 0.1178 | 1.2160 | 1.1027 |
| No log | 11.7368 | 446 | 1.2972 | 0.1653 | 1.2972 | 1.1389 |
| No log | 11.7895 | 448 | 1.5549 | 0.1098 | 1.5549 | 1.2470 |
| No log | 11.8421 | 450 | 1.6948 | 0.0566 | 1.6948 | 1.3019 |
| No log | 11.8947 | 452 | 1.6280 | 0.1011 | 1.6280 | 1.2759 |
| No log | 11.9474 | 454 | 1.5196 | 0.0949 | 1.5196 | 1.2327 |
| No log | 12.0 | 456 | 1.3540 | 0.0671 | 1.3540 | 1.1636 |
| No log | 12.0526 | 458 | 1.2187 | 0.2363 | 1.2187 | 1.1040 |
| No log | 12.1053 | 460 | 1.0734 | 0.2046 | 1.0734 | 1.0361 |
| No log | 12.1579 | 462 | 1.0405 | 0.0891 | 1.0405 | 1.0200 |
| No log | 12.2105 | 464 | 1.0814 | 0.1180 | 1.0814 | 1.0399 |
| No log | 12.2632 | 466 | 1.1792 | 0.2211 | 1.1792 | 1.0859 |
| No log | 12.3158 | 468 | 1.3373 | 0.1388 | 1.3373 | 1.1564 |
| No log | 12.3684 | 470 | 1.3915 | 0.1359 | 1.3915 | 1.1796 |
| No log | 12.4211 | 472 | 1.2989 | 0.2017 | 1.2989 | 1.1397 |
| No log | 12.4737 | 474 | 1.1876 | 0.1603 | 1.1876 | 1.0898 |
| No log | 12.5263 | 476 | 1.2592 | 0.1815 | 1.2592 | 1.1221 |
| No log | 12.5789 | 478 | 1.3422 | 0.2421 | 1.3422 | 1.1585 |
| No log | 12.6316 | 480 | 1.2959 | 0.1884 | 1.2959 | 1.1384 |
| No log | 12.6842 | 482 | 1.2439 | 0.1653 | 1.2439 | 1.1153 |
| No log | 12.7368 | 484 | 1.1140 | 0.1584 | 1.1140 | 1.0555 |
| No log | 12.7895 | 486 | 1.1081 | 0.1293 | 1.1081 | 1.0527 |
| No log | 12.8421 | 488 | 1.2119 | 0.1603 | 1.2119 | 1.1009 |
| No log | 12.8947 | 490 | 1.4286 | 0.1701 | 1.4286 | 1.1953 |
| No log | 12.9474 | 492 | 1.6398 | 0.0863 | 1.6398 | 1.2806 |
| No log | 13.0 | 494 | 1.6853 | 0.1098 | 1.6853 | 1.2982 |
| No log | 13.0526 | 496 | 1.6091 | 0.0949 | 1.6091 | 1.2685 |
| No log | 13.1053 | 498 | 1.4361 | 0.0453 | 1.4361 | 1.1984 |
| 0.3259 | 13.1579 | 500 | 1.3016 | 0.0779 | 1.3016 | 1.1409 |
| 0.3259 | 13.2105 | 502 | 1.2718 | 0.0575 | 1.2718 | 1.1277 |
| 0.3259 | 13.2632 | 504 | 1.3192 | 0.0673 | 1.3192 | 1.1486 |
| 0.3259 | 13.3158 | 506 | 1.4092 | 0.1606 | 1.4092 | 1.1871 |
| 0.3259 | 13.3684 | 508 | 1.5130 | 0.1445 | 1.5130 | 1.2300 |
| 0.3259 | 13.4211 | 510 | 1.4857 | 0.1508 | 1.4857 | 1.2189 |
| 0.3259 | 13.4737 | 512 | 1.3728 | 0.1638 | 1.3728 | 1.1717 |
| 0.3259 | 13.5263 | 514 | 1.3385 | 0.1638 | 1.3385 | 1.1570 |
| 0.3259 | 13.5789 | 516 | 1.3815 | 0.1606 | 1.3815 | 1.1754 |
| 0.3259 | 13.6316 | 518 | 1.4450 | 0.1646 | 1.4450 | 1.2021 |
| 0.3259 | 13.6842 | 520 | 1.4601 | 0.1782 | 1.4601 | 1.2083 |
| 0.3259 | 13.7368 | 522 | 1.3638 | 0.2130 | 1.3638 | 1.1678 |
| 0.3259 | 13.7895 | 524 | 1.2512 | 0.2109 | 1.2512 | 1.1186 |
| 0.3259 | 13.8421 | 526 | 1.2186 | 0.2192 | 1.2186 | 1.1039 |
| 0.3259 | 13.8947 | 528 | 1.2574 | 0.2192 | 1.2574 | 1.1213 |
| 0.3259 | 13.9474 | 530 | 1.2631 | 0.1789 | 1.2631 | 1.1239 |
| 0.3259 | 14.0 | 532 | 1.4203 | 0.1917 | 1.4203 | 1.1917 |
| 0.3259 | 14.0526 | 534 | 1.4799 | 0.1814 | 1.4799 | 1.2165 |
| 0.3259 | 14.1053 | 536 | 1.5447 | 0.1755 | 1.5447 | 1.2429 |
| 0.3259 | 14.1579 | 538 | 1.6538 | 0.0951 | 1.6538 | 1.2860 |
| 0.3259 | 14.2105 | 540 | 1.5808 | 0.0472 | 1.5808 | 1.2573 |
| 0.3259 | 14.2632 | 542 | 1.4312 | 0.1367 | 1.4312 | 1.1963 |
| 0.3259 | 14.3158 | 544 | 1.1936 | 0.2032 | 1.1936 | 1.0925 |
| 0.3259 | 14.3684 | 546 | 1.0483 | 0.2410 | 1.0483 | 1.0239 |
| 0.3259 | 14.4211 | 548 | 1.0373 | 0.2142 | 1.0373 | 1.0185 |
| 0.3259 | 14.4737 | 550 | 1.1436 | 0.2308 | 1.1436 | 1.0694 |
| 0.3259 | 14.5263 | 552 | 1.3117 | 0.2059 | 1.3117 | 1.1453 |
| 0.3259 | 14.5789 | 554 | 1.3648 | 0.2223 | 1.3648 | 1.1682 |
| 0.3259 | 14.6316 | 556 | 1.2660 | 0.2209 | 1.2660 | 1.1252 |
| 0.3259 | 14.6842 | 558 | 1.2117 | 0.2032 | 1.2117 | 1.1008 |
| 0.3259 | 14.7368 | 560 | 1.2640 | 0.2227 | 1.2640 | 1.1243 |
| 0.3259 | 14.7895 | 562 | 1.2589 | 0.2227 | 1.2589 | 1.1220 |
| 0.3259 | 14.8421 | 564 | 1.2401 | 0.2411 | 1.2401 | 1.1136 |
| 0.3259 | 14.8947 | 566 | 1.2666 | 0.2529 | 1.2666 | 1.1254 |
| 0.3259 | 14.9474 | 568 | 1.2995 | 0.1846 | 1.2995 | 1.1400 |
| 0.3259 | 15.0 | 570 | 1.4025 | 0.2031 | 1.4025 | 1.1843 |
| 0.3259 | 15.0526 | 572 | 1.4904 | 0.1275 | 1.4904 | 1.2208 |
| 0.3259 | 15.1053 | 574 | 1.4141 | 0.1784 | 1.4141 | 1.1892 |
| 0.3259 | 15.1579 | 576 | 1.3191 | 0.1784 | 1.3191 | 1.1485 |
| 0.3259 | 15.2105 | 578 | 1.2229 | 0.1671 | 1.2229 | 1.1058 |
| 0.3259 | 15.2632 | 580 | 1.1510 | 0.2411 | 1.1510 | 1.0729 |
| 0.3259 | 15.3158 | 582 | 1.1297 | 0.1542 | 1.1297 | 1.0629 |
| 0.3259 | 15.3684 | 584 | 1.1245 | 0.0623 | 1.1245 | 1.0604 |
| 0.3259 | 15.4211 | 586 | 1.1541 | 0.1254 | 1.1541 | 1.0743 |
| 0.3259 | 15.4737 | 588 | 1.2785 | 0.1394 | 1.2785 | 1.1307 |
| 0.3259 | 15.5263 | 590 | 1.5593 | 0.0826 | 1.5593 | 1.2487 |
| 0.3259 | 15.5789 | 592 | 1.7255 | 0.0865 | 1.7255 | 1.3136 |
| 0.3259 | 15.6316 | 594 | 1.7327 | 0.0655 | 1.7327 | 1.3163 |
| 0.3259 | 15.6842 | 596 | 1.6098 | 0.1145 | 1.6098 | 1.2688 |
| 0.3259 | 15.7368 | 598 | 1.5696 | 0.1195 | 1.5696 | 1.2528 |
| 0.3259 | 15.7895 | 600 | 1.6367 | 0.0704 | 1.6367 | 1.2793 |
| 0.3259 | 15.8421 | 602 | 1.6949 | 0.0290 | 1.6949 | 1.3019 |
| 0.3259 | 15.8947 | 604 | 1.6413 | 0.0057 | 1.6413 | 1.2811 |
| 0.3259 | 15.9474 | 606 | 1.4916 | 0.0057 | 1.4916 | 1.2213 |
| 0.3259 | 16.0 | 608 | 1.3184 | 0.0895 | 1.3184 | 1.1482 |
| 0.3259 | 16.0526 | 610 | 1.2577 | 0.1758 | 1.2577 | 1.1215 |
| 0.3259 | 16.1053 | 612 | 1.2451 | 0.1758 | 1.2451 | 1.1159 |
| 0.3259 | 16.1579 | 614 | 1.2468 | 0.2059 | 1.2468 | 1.1166 |
| 0.3259 | 16.2105 | 616 | 1.2825 | 0.2059 | 1.2825 | 1.1325 |
| 0.3259 | 16.2632 | 618 | 1.3432 | 0.1758 | 1.3432 | 1.1590 |
| 0.3259 | 16.3158 | 620 | 1.3414 | 0.2223 | 1.3414 | 1.1582 |
| 0.3259 | 16.3684 | 622 | 1.2666 | 0.1821 | 1.2666 | 1.1255 |
| 0.3259 | 16.4211 | 624 | 1.1430 | 0.2046 | 1.1430 | 1.0691 |
| 0.3259 | 16.4737 | 626 | 1.0881 | 0.0924 | 1.0881 | 1.0431 |
| 0.3259 | 16.5263 | 628 | 1.0945 | 0.1534 | 1.0945 | 1.0462 |
| 0.3259 | 16.5789 | 630 | 1.0896 | 0.1534 | 1.0896 | 1.0439 |
| 0.3259 | 16.6316 | 632 | 1.0943 | 0.0993 | 1.0943 | 1.0461 |
| 0.3259 | 16.6842 | 634 | 1.1262 | 0.1454 | 1.1262 | 1.0612 |
| 0.3259 | 16.7368 | 636 | 1.1243 | 0.1501 | 1.1243 | 1.0603 |
| 0.3259 | 16.7895 | 638 | 1.1451 | 0.1734 | 1.1451 | 1.0701 |
| 0.3259 | 16.8421 | 640 | 1.1876 | 0.1348 | 1.1876 | 1.0898 |
| 0.3259 | 16.8947 | 642 | 1.2514 | 0.0925 | 1.2514 | 1.1187 |
| 0.3259 | 16.9474 | 644 | 1.3550 | 0.1739 | 1.3550 | 1.1640 |
| 0.3259 | 17.0 | 646 | 1.4051 | 0.1630 | 1.4051 | 1.1854 |
| 0.3259 | 17.0526 | 648 | 1.4640 | 0.1042 | 1.4640 | 1.2099 |
| 0.3259 | 17.1053 | 650 | 1.5184 | 0.0551 | 1.5184 | 1.2322 |
| 0.3259 | 17.1579 | 652 | 1.5181 | 0.0564 | 1.5181 | 1.2321 |
| 0.3259 | 17.2105 | 654 | 1.3914 | 0.1093 | 1.3914 | 1.1796 |
| 0.3259 | 17.2632 | 656 | 1.2229 | 0.2183 | 1.2229 | 1.1059 |
| 0.3259 | 17.3158 | 658 | 1.1415 | 0.1110 | 1.1415 | 1.0684 |
| 0.3259 | 17.3684 | 660 | 1.0838 | 0.1293 | 1.0838 | 1.0411 |
| 0.3259 | 17.4211 | 662 | 1.0890 | 0.1332 | 1.0890 | 1.0436 |
| 0.3259 | 17.4737 | 664 | 1.1706 | 0.1584 | 1.1706 | 1.0820 |
| 0.3259 | 17.5263 | 666 | 1.2811 | 0.1485 | 1.2811 | 1.1319 |
| 0.3259 | 17.5789 | 668 | 1.4522 | 0.0832 | 1.4522 | 1.2051 |
| 0.3259 | 17.6316 | 670 | 1.6284 | 0.0619 | 1.6284 | 1.2761 |
| 0.3259 | 17.6842 | 672 | 1.6345 | 0.0845 | 1.6345 | 1.2785 |
| 0.3259 | 17.7368 | 674 | 1.5049 | 0.0577 | 1.5049 | 1.2268 |
| 0.3259 | 17.7895 | 676 | 1.3096 | 0.1417 | 1.3096 | 1.1444 |
| 0.3259 | 17.8421 | 678 | 1.1618 | 0.1584 | 1.1618 | 1.0779 |
| 0.3259 | 17.8947 | 680 | 1.1296 | 0.0338 | 1.1296 | 1.0628 |
| 0.3259 | 17.9474 | 682 | 1.1466 | 0.0653 | 1.1466 | 1.0708 |
| 0.3259 | 18.0 | 684 | 1.2110 | 0.1348 | 1.2110 | 1.1004 |
| 0.3259 | 18.0526 | 686 | 1.2509 | 0.1573 | 1.2509 | 1.1184 |
| 0.3259 | 18.1053 | 688 | 1.3272 | 0.1583 | 1.3272 | 1.1521 |
| 0.3259 | 18.1579 | 690 | 1.4979 | 0.1042 | 1.4979 | 1.2239 |
| 0.3259 | 18.2105 | 692 | 1.5777 | 0.0293 | 1.5777 | 1.2561 |
| 0.3259 | 18.2632 | 694 | 1.5916 | 0.0293 | 1.5916 | 1.2616 |
| 0.3259 | 18.3158 | 696 | 1.5494 | 0.0570 | 1.5494 | 1.2447 |
| 0.3259 | 18.3684 | 698 | 1.5091 | 0.0468 | 1.5091 | 1.2285 |
| 0.3259 | 18.4211 | 700 | 1.5216 | 0.0201 | 1.5216 | 1.2335 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Daemontatox/ImmyV2.5
|
Daemontatox
| 2025-01-21T15:29:12Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T14:26:13Z |
---
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Daemontatox
- **License:** apache-2.0
- **Finetuned from model:** HuggingFaceTB/SmolLM2-1.7B-Instruct
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
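A minimal generation sketch (not part of the original card; the prompt is illustrative and standard `transformers` usage is assumed):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Daemontatox/ImmyV2.5")
out = generator("Explain what a LoRA adapter is in one sentence.", max_new_tokens=64)
print(out[0]["generated_text"])
```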
|
vm29741/videomae-base-finetuned-midv-holo-500
|
vm29741
| 2025-01-21T15:28:20Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2025-01-13T15:33:17Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-midv-holo-500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-midv-holo-500
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8322
- Accuracy: 0.4264
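A rough inference sketch (not part of the original card): VideoMAE classifies a fixed-length clip of frames, so the example below feeds 16 dummy RGB frames through the processor and model; the checkpoint ID is this repository, and everything else is assumed standard `transformers` usage.
```python
import numpy as np
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

ckpt = "vm29741/videomae-base-finetuned-midv-holo-500"
processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

# 16 random frames stand in for a real video clip (H, W, C uint8 images)
frames = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(frames, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```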
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1925
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.2647 | 0.0405 | 78 | 1.2009 | 0.5736 |
| 1.3169 | 1.0405 | 156 | 1.2652 | 0.5078 |
| 1.3004 | 2.0405 | 234 | 1.2036 | 0.5736 |
| 1.1562 | 3.0405 | 312 | 1.1888 | 0.5736 |
| 1.265 | 4.0405 | 390 | 1.2808 | 0.4922 |
| 1.1946 | 5.0405 | 468 | 1.3394 | 0.3295 |
| 1.1222 | 6.0405 | 546 | 1.3204 | 0.5 |
| 1.0244 | 7.0405 | 624 | 1.3697 | 0.3643 |
| 1.1604 | 8.0405 | 702 | 1.3813 | 0.4535 |
| 0.8901 | 9.0405 | 780 | 1.4395 | 0.4225 |
| 0.6032 | 10.0405 | 858 | 1.6609 | 0.4535 |
| 0.6889 | 11.0405 | 936 | 1.7041 | 0.3605 |
| 0.5777 | 12.0405 | 1014 | 1.9075 | 0.3915 |
| 0.4317 | 13.0405 | 1092 | 1.6528 | 0.4690 |
| 0.5087 | 14.0405 | 1170 | 1.6126 | 0.4845 |
| 0.1257 | 15.0405 | 1248 | 2.0637 | 0.4147 |
| 0.3774 | 16.0405 | 1326 | 2.5347 | 0.3721 |
| 0.1259 | 17.0405 | 1404 | 2.6514 | 0.3760 |
| 0.2158 | 18.0405 | 1482 | 2.4290 | 0.4806 |
| 0.0505 | 19.0405 | 1560 | 2.6998 | 0.4225 |
| 0.0563 | 20.0405 | 1638 | 2.6474 | 0.4225 |
| 0.0671 | 21.0405 | 1716 | 2.6487 | 0.4496 |
| 0.1847 | 22.0405 | 1794 | 2.7012 | 0.4496 |
| 0.0041 | 23.0405 | 1872 | 2.7938 | 0.4341 |
| 0.0079 | 24.0275 | 1925 | 2.8322 | 0.4264 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu118
- Datasets 3.2.0
- Tokenizers 0.21.0
|
lesso14/ead220c7-d7af-400b-9069-33ef2e0d5260
|
lesso14
| 2025-01-21T15:27:05Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T15:24:13Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ead220c7-d7af-400b-9069-33ef2e0d5260
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
- 50727019ec5046e2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/50727019ec5046e2_train_data.json
type:
field_input: category
field_instruction: text
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso14/ead220c7-d7af-400b-9069-33ef2e0d5260
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/50727019ec5046e2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 61ce37e2-d75b-4435-bf72-23ce5c13f0b8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 61ce37e2-d75b-4435-bf72-23ce5c13f0b8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ead220c7-d7af-400b-9069-33ef2e0d5260
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0013 | 1 | nan |
| 0.0 | 0.0066 | 5 | nan |
| 0.0 | 0.0133 | 10 | nan |
| 0.0 | 0.0199 | 15 | nan |
| 0.0 | 0.0266 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kk-aivio/b437dc4b-7763-411e-a393-a2e5f52c48ed
|
kk-aivio
| 2025-01-21T15:26:29Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"region:us"
] | null | 2025-01-21T14:53:44Z |
---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b437dc4b-7763-411e-a393-a2e5f52c48ed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ea142ac27856f425_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ea142ac27856f425_train_data.json
type:
field_instruction: 'Purpose: Clinical Focus'
field_output: Description
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/b437dc4b-7763-411e-a393-a2e5f52c48ed
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/ea142ac27856f425_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8d608405-ef66-4e7a-9976-6a448c75bbeb
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8d608405-ef66-4e7a-9976-6a448c75bbeb
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b437dc4b-7763-411e-a393-a2e5f52c48ed
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 3 | nan |
| 0.0 | 0.0002 | 6 | nan |
| 0.0 | 0.0002 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mamung/7ff1eced-2e8b-4846-8663-e202ce37b08c
|
mamung
| 2025-01-21T15:25:43Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:adapter:NousResearch/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | 2025-01-21T14:38:55Z |
---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7ff1eced-2e8b-4846-8663-e202ce37b08c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9f860ccb7e806546_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9f860ccb7e806546_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 256
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: false
hub_model_id: mamung/7ff1eced-2e8b-4846-8663-e202ce37b08c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
lr_scheduler: cosine
max_grad_norm: 2
max_steps: 100
micro_batch_size: 2
mlflow_experiment_name: /tmp/9f860ccb7e806546_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 2048
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: eddysang
wandb_mode: online
wandb_name: 92906d73-ae0c-43b3-9735-14fe2124bf2a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 92906d73-ae0c-43b3-9735-14fe2124bf2a
warmup_steps: 20
weight_decay: 0.02
xformers_attention: false
```
</details><br>
# 7ff1eced-2e8b-4846-8663-e202ce37b08c
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8266
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0191 | 1 | 4.2603 |
| 3.1815 | 0.1720 | 9 | 2.6312 |
| 2.0577 | 0.3441 | 18 | 1.9758 |
| 1.9087 | 0.5161 | 27 | 1.8731 |
| 1.8836 | 0.6882 | 36 | 1.8448 |
| 1.8482 | 0.8602 | 45 | 1.8272 |
| 2.0545 | 1.0323 | 54 | 1.7873 |
| 1.4003 | 1.2043 | 63 | 1.8731 |
| 1.2632 | 1.3763 | 72 | 1.8426 |
| 1.3178 | 1.5484 | 81 | 1.8405 |
| 1.2796 | 1.7204 | 90 | 1.8262 |
| 1.2715 | 1.8925 | 99 | 1.8266 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
dimasik2987/1ce01585-3962-41d9-9d9a-34b2b9bb47e0
|
dimasik2987
| 2025-01-21T15:25:42Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T15:24:11Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1ce01585-3962-41d9-9d9a-34b2b9bb47e0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 50727019ec5046e2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/50727019ec5046e2_train_data.json
type:
field_input: category
field_instruction: text
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik2987/1ce01585-3962-41d9-9d9a-34b2b9bb47e0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/50727019ec5046e2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 61ce37e2-d75b-4435-bf72-23ce5c13f0b8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 61ce37e2-d75b-4435-bf72-23ce5c13f0b8
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 1ce01585-3962-41d9-9d9a-34b2b9bb47e0
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0027 | 1 | nan |
| 0.0 | 0.0133 | 5 | nan |
| 0.0 | 0.0266 | 10 | nan |
| 0.0 | 0.0399 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
alyzbane/2025-01-21-15-21-31-convnextv2-tiny-1k-224
|
alyzbane
| 2025-01-21T15:25:37Z | 72 | 0 |
transformers
|
[
"transformers",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnextv2-tiny-1k-224",
"base_model:finetune:facebook/convnextv2-tiny-1k-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-01-21T15:25:32Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/convnextv2-tiny-1k-224
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: 2025-01-21-15-21-31-convnextv2-tiny-1k-224
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2025-01-21-15-21-31-convnextv2-tiny-1k-224
This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0591
- Precision: 0.9799
- Recall: 0.9778
- F1: 0.9776
- Accuracy: 0.976
- Top1 Accuracy: 0.9778
- Error Rate: 0.0240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 3407
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|
| 0.7126 | 1.0 | 135 | 1.3514 | 0.7672 | 0.6593 | 0.6371 | 0.6614 | 0.6593 | 0.3386 |
| 0.4328 | 2.0 | 270 | 0.2026 | 0.9348 | 0.9333 | 0.9330 | 0.9347 | 0.9333 | 0.0653 |
| 0.3438 | 3.0 | 405 | 0.0591 | 0.9799 | 0.9778 | 0.9776 | 0.976 | 0.9778 | 0.0240 |
| 0.2082 | 4.0 | 540 | 0.0919 | 0.9725 | 0.9704 | 0.9703 | 0.9719 | 0.9704 | 0.0281 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
nhoxinh/4cc6bcd9-9c41-4126-a1a4-29c8c6531f14
|
nhoxinh
| 2025-01-21T15:25:07Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b-it",
"base_model:adapter:unsloth/gemma-2-9b-it",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T14:40:32Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4cc6bcd9-9c41-4126-a1a4-29c8c6531f14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2adafa906e5c58da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2adafa906e5c58da_train_data.json
type:
field_input: ''
field_instruction: anchor
field_output: positive
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/4cc6bcd9-9c41-4126-a1a4-29c8c6531f14
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2adafa906e5c58da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aa7d9493-c5b5-4e83-b6c1-5e33a1c2ac47
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: aa7d9493-c5b5-4e83-b6c1-5e33a1c2ac47
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4cc6bcd9-9c41-4126-a1a4-29c8c6531f14
This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4093 | 0.2224 | 200 | 1.4330 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
hashirconfinlaity/MMH_v1.8
|
hashirconfinlaity
| 2025-01-21T15:21:33Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-21T15:20:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
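In the absence of an official snippet, the following is a minimal sketch that assumes the checkpoint loads with the standard transformers sequence-classification classes; the input sentence and use of `id2label` are illustrative, not taken from this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "hashirconfinlaity/MMH_v1.8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Classify a single example sentence and map the top logit back to its label name.
inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```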
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JustinStrauch/t5-form-sent-and-quest
|
JustinStrauch
| 2025-01-21T15:19:56Z | 603 | 0 | null |
[
"safetensors",
"t5",
"region:us"
] | null | 2025-01-21T13:25:15Z |
Generate a question using the words provided in the prompt, or simply return the question if the words already form one. Use the prompt "Formuliere eine Frage basierend auf den folgenden Begriffen: {prompt}." ("Formulate a question based on the following terms: {prompt}.").
Provide a sentence that answers a specific question with the given answer, in German. Use the prompt "Beantworte die Frage: {question} mit der Antwort: {answer}" ("Answer the question: {question} with the answer: {answer}").
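A minimal sketch of applying the question-generation prompt with the transformers library; it assumes the checkpoint works with the standard T5 seq2seq classes, and the example terms ("Hauptstadt, Frankreich") and decoding settings are illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "JustinStrauch/t5-form-sent-and-quest"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Question-generation prompt described above; the terms are placeholders.
prompt = "Formuliere eine Frage basierend auf den folgenden Begriffen: Hauptstadt, Frankreich."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```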
|
RyanYr/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter2T1_b0.1
|
RyanYr
| 2025-01-21T15:18:38Z | 72 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:RyanYr/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b1.0",
"base_model:finetune:RyanYr/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T14:07:26Z |
---
base_model: RyanYr/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b1.0
library_name: transformers
model_name: reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter2T1_b0.1
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter2T1_b0.1
This model is a fine-tuned version of [RyanYr/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b1.0](https://huggingface.co/RyanYr/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter2T1_b0.1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/l11cs9v5)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
jebish7/QWEN-3B-B30
|
jebish7
| 2025-01-21T15:16:01Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T15:00:36Z |
---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jebish7
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-3B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
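A minimal inference sketch, assuming the uploaded weights load with the standard transformers causal-LM classes and the Qwen2.5 chat template; the prompt and generation settings are illustrative, not part of the original card.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "jebish7/QWEN-3B-B30"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "Summarise supervised fine-tuning in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```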
|
philip-hightech/a0ad1708-8935-445c-a57f-d305a49216bf
|
philip-hightech
| 2025-01-21T15:15:59Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"olmo",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-olmo-hf",
"base_model:adapter:katuni4ka/tiny-random-olmo-hf",
"region:us"
] | null | 2025-01-21T15:15:35Z |
---
library_name: peft
base_model: katuni4ka/tiny-random-olmo-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a0ad1708-8935-445c-a57f-d305a49216bf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-olmo-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 372a9aad3c15aaa5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/372a9aad3c15aaa5_train_data.json
type:
field_instruction: file_path
field_output: content
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/a0ad1708-8935-445c-a57f-d305a49216bf
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/372a9aad3c15aaa5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2ab42417-e3dd-430c-9b06-cae6baa46502
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2ab42417-e3dd-430c-9b06-cae6baa46502
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a0ad1708-8935-445c-a57f-d305a49216bf
This model is a fine-tuned version of [katuni4ka/tiny-random-olmo-hf](https://huggingface.co/katuni4ka/tiny-random-olmo-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.8300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.8344 | 0.0014 | 1 | 10.8340 |
| 10.8394 | 0.0043 | 3 | 10.8338 |
| 10.8415 | 0.0086 | 6 | 10.8324 |
| 10.8331 | 0.0129 | 9 | 10.8300 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
JustinStrauch/t5-form-sent
|
JustinStrauch
| 2025-01-21T15:15:41Z | 238 | 0 | null |
[
"safetensors",
"t5",
"region:us"
] | null | 2025-01-21T14:10:17Z |
Provide a sentence that answers a specific question with the given answer, in German.
Use the prompt "Beantworte die Frage: {question} mit der Antwort: {answer}" ("Answer the question: {question} with the answer: {answer}").
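A minimal sketch for this checkpoint, assuming a standard T5 seq2seq head; the example question/answer pair is illustrative, not from the card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "JustinStrauch/t5-form-sent"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Answer-sentence prompt described above; question and answer are placeholders.
prompt = "Beantworte die Frage: Wo liegt Berlin? mit der Antwort: in Deutschland"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```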
|
snu-bdai/palbot-classifier-full
|
snu-bdai
| 2025-01-21T15:15:32Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-21T13:56:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
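In the absence of an official snippet, a minimal sketch assuming the standard transformers text-classification pipeline; the repository id is taken from this card and the input sentence is only an illustration.

```python
from transformers import pipeline

# Load the classifier from the Hub and score one example sentence.
classifier = pipeline("text-classification", model="snu-bdai/palbot-classifier-full")
print(classifier("Please bring the red cup to the table."))
```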
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mergekit-community/llasa-3b-upscaled
|
mergekit-community
| 2025-01-21T15:15:02Z | 12 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:srinivasbilla/llasa-3b",
"base_model:finetune:srinivasbilla/llasa-3b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T15:12:16Z |
---
base_model:
- srinivasbilla/llasa-3b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [srinivasbilla/llasa-3b](https://huggingface.co/srinivasbilla/llasa-3b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 21]
model: srinivasbilla/llasa-3b
- sources:
- layer_range: [11, 22]
model: srinivasbilla/llasa-3b
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [11, 22]
model: srinivasbilla/llasa-3b
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [22, 28]
model: srinivasbilla/llasa-3b
```
|
havinash-ai/aa68b77f-7fdc-41ad-bad5-e164a605672b
|
havinash-ai
| 2025-01-21T15:14:44Z | 12 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b-it",
"base_model:adapter:unsloth/gemma-2-9b-it",
"license:gemma",
"region:us"
] | null | 2025-01-21T15:11:28Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aa68b77f-7fdc-41ad-bad5-e164a605672b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2adafa906e5c58da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2adafa906e5c58da_train_data.json
type:
field_input: ''
field_instruction: anchor
field_output: positive
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/aa68b77f-7fdc-41ad-bad5-e164a605672b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/2adafa906e5c58da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aa7d9493-c5b5-4e83-b6c1-5e33a1c2ac47
wandb_project: Mine-SN56-2-Gradients-On-Demand
wandb_run: your_name
wandb_runid: aa7d9493-c5b5-4e83-b6c1-5e33a1c2ac47
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# aa68b77f-7fdc-41ad-bad5-e164a605672b
This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6761 | 0.0011 | 1 | 1.7808 |
| 1.6812 | 0.0033 | 3 | 1.7725 |
| 1.6486 | 0.0067 | 6 | 1.6792 |
| 1.5698 | 0.0100 | 9 | 1.6612 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
JacksonBrune/30000cd4-346b-477d-90cc-2dd936641928
|
JacksonBrune
| 2025-01-21T15:14:42Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:adapter:NousResearch/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | 2025-01-21T15:12:59Z |
---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 30000cd4-346b-477d-90cc-2dd936641928
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9f860ccb7e806546_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9f860ccb7e806546_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/30000cd4-346b-477d-90cc-2dd936641928
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/9f860ccb7e806546_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 92906d73-ae0c-43b3-9735-14fe2124bf2a
wandb_project: birthdya-sn56-18-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 92906d73-ae0c-43b3-9735-14fe2124bf2a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 30000cd4-346b-477d-90cc-2dd936641928
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1368
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.1942 | 0.0024 | 1 | 4.2598 |
| 3.7557 | 0.0072 | 3 | 4.2341 |
| 3.9389 | 0.0143 | 6 | 3.7675 |
| 3.0219 | 0.0215 | 9 | 3.1368 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
dangon8k/abarmando
|
dangon8k
| 2025-01-21T15:13:50Z | 15 | 1 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-21T14:36:11Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: abarmando
---
# Abarmando
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `abarmando` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('dangon8k/abarmando', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k12_task7_organization
|
MayBashendy
| 2025-01-21T15:10:34Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-21T15:05:39Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k12_task7_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k12_task7_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1268
- Qwk: 0.2059
- Mse: 1.1268
- Rmse: 1.0615
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0690 | 2 | 2.5453 | -0.0702 | 2.5453 | 1.5954 |
| No log | 0.1379 | 4 | 1.4224 | 0.0698 | 1.4224 | 1.1926 |
| No log | 0.2069 | 6 | 1.3452 | -0.2040 | 1.3452 | 1.1598 |
| No log | 0.2759 | 8 | 1.0964 | -0.1887 | 1.0964 | 1.0471 |
| No log | 0.3448 | 10 | 0.9759 | -0.0475 | 0.9759 | 0.9879 |
| No log | 0.4138 | 12 | 0.8375 | 0.0410 | 0.8375 | 0.9151 |
| No log | 0.4828 | 14 | 0.7690 | 0.0410 | 0.7690 | 0.8769 |
| No log | 0.5517 | 16 | 0.7869 | 0.0410 | 0.7869 | 0.8871 |
| No log | 0.6207 | 18 | 0.8093 | 0.0410 | 0.8093 | 0.8996 |
| No log | 0.6897 | 20 | 0.7982 | 0.0444 | 0.7982 | 0.8934 |
| No log | 0.7586 | 22 | 0.7896 | 0.0410 | 0.7896 | 0.8886 |
| No log | 0.8276 | 24 | 0.9492 | 0.0520 | 0.9492 | 0.9743 |
| No log | 0.8966 | 26 | 1.4011 | -0.1847 | 1.4011 | 1.1837 |
| No log | 0.9655 | 28 | 1.4887 | -0.3187 | 1.4887 | 1.2201 |
| No log | 1.0345 | 30 | 1.3264 | -0.2966 | 1.3264 | 1.1517 |
| No log | 1.1034 | 32 | 1.1235 | -0.1224 | 1.1235 | 1.0599 |
| No log | 1.1724 | 34 | 0.9535 | -0.0444 | 0.9535 | 0.9765 |
| No log | 1.2414 | 36 | 0.8409 | 0.0 | 0.8409 | 0.9170 |
| No log | 1.3103 | 38 | 0.7680 | 0.0 | 0.7680 | 0.8764 |
| No log | 1.3793 | 40 | 0.7320 | 0.0840 | 0.7320 | 0.8556 |
| No log | 1.4483 | 42 | 0.7055 | 0.2046 | 0.7055 | 0.8400 |
| No log | 1.5172 | 44 | 0.7217 | 0.2471 | 0.7217 | 0.8495 |
| No log | 1.5862 | 46 | 0.8735 | 0.3042 | 0.8735 | 0.9346 |
| No log | 1.6552 | 48 | 0.9523 | 0.2651 | 0.9523 | 0.9759 |
| No log | 1.7241 | 50 | 0.9411 | 0.2651 | 0.9411 | 0.9701 |
| No log | 1.7931 | 52 | 0.7864 | 0.3894 | 0.7864 | 0.8868 |
| No log | 1.8621 | 54 | 0.6670 | 0.1660 | 0.6670 | 0.8167 |
| No log | 1.9310 | 56 | 0.7295 | 0.1700 | 0.7295 | 0.8541 |
| No log | 2.0 | 58 | 0.7406 | 0.2736 | 0.7406 | 0.8606 |
| No log | 2.0690 | 60 | 0.7391 | 0.2736 | 0.7391 | 0.8597 |
| No log | 2.1379 | 62 | 0.7837 | 0.2736 | 0.7837 | 0.8853 |
| No log | 2.2069 | 64 | 0.7412 | 0.1304 | 0.7412 | 0.8609 |
| No log | 2.2759 | 66 | 0.6872 | 0.1136 | 0.6872 | 0.8290 |
| No log | 2.3448 | 68 | 0.6557 | 0.2621 | 0.6557 | 0.8098 |
| No log | 2.4138 | 70 | 0.7477 | 0.4470 | 0.7477 | 0.8647 |
| No log | 2.4828 | 72 | 0.6980 | 0.3372 | 0.6980 | 0.8355 |
| No log | 2.5517 | 74 | 0.6719 | 0.3020 | 0.6719 | 0.8197 |
| No log | 2.6207 | 76 | 0.8024 | 0.3562 | 0.8024 | 0.8958 |
| No log | 2.6897 | 78 | 0.9731 | 0.3538 | 0.9731 | 0.9864 |
| No log | 2.7586 | 80 | 1.0102 | 0.2756 | 1.0102 | 1.0051 |
| No log | 2.8276 | 82 | 0.9947 | 0.3082 | 0.9947 | 0.9974 |
| No log | 2.8966 | 84 | 1.2863 | 0.1919 | 1.2863 | 1.1342 |
| No log | 2.9655 | 86 | 1.2776 | 0.2417 | 1.2776 | 1.1303 |
| No log | 3.0345 | 88 | 1.0469 | 0.2252 | 1.0469 | 1.0232 |
| No log | 3.1034 | 90 | 0.8326 | 0.2492 | 0.8326 | 0.9125 |
| No log | 3.1724 | 92 | 0.8370 | 0.2492 | 0.8370 | 0.9149 |
| No log | 3.2414 | 94 | 1.0226 | 0.2781 | 1.0226 | 1.0112 |
| No log | 3.3103 | 96 | 1.1287 | 0.1810 | 1.1287 | 1.0624 |
| No log | 3.3793 | 98 | 1.2322 | 0.2421 | 1.2322 | 1.1100 |
| No log | 3.4483 | 100 | 1.3511 | 0.1810 | 1.3511 | 1.1624 |
| No log | 3.5172 | 102 | 1.3720 | 0.1705 | 1.3720 | 1.1713 |
| No log | 3.5862 | 104 | 1.1706 | 0.1739 | 1.1706 | 1.0819 |
| No log | 3.6552 | 106 | 1.1696 | 0.2306 | 1.1696 | 1.0815 |
| No log | 3.7241 | 108 | 1.2230 | 0.1931 | 1.2230 | 1.1059 |
| No log | 3.7931 | 110 | 1.3401 | 0.0684 | 1.3401 | 1.1576 |
| No log | 3.8621 | 112 | 1.2987 | 0.1422 | 1.2987 | 1.1396 |
| No log | 3.9310 | 114 | 1.3404 | 0.1171 | 1.3404 | 1.1577 |
| No log | 4.0 | 116 | 1.2589 | 0.1611 | 1.2589 | 1.1220 |
| No log | 4.0690 | 118 | 1.0669 | 0.3082 | 1.0669 | 1.0329 |
| No log | 4.1379 | 120 | 0.9152 | 0.2275 | 0.9152 | 0.9567 |
| No log | 4.2069 | 122 | 0.9295 | 0.2604 | 0.9295 | 0.9641 |
| No log | 4.2759 | 124 | 1.0276 | 0.3193 | 1.0276 | 1.0137 |
| No log | 4.3448 | 126 | 1.0569 | 0.2910 | 1.0569 | 1.0281 |
| No log | 4.4138 | 128 | 1.0820 | 0.2806 | 1.0820 | 1.0402 |
| No log | 4.4828 | 130 | 1.0495 | 0.2677 | 1.0495 | 1.0245 |
| No log | 4.5517 | 132 | 0.9602 | 0.2518 | 0.9602 | 0.9799 |
| No log | 4.6207 | 134 | 0.9603 | 0.2518 | 0.9603 | 0.9799 |
| No log | 4.6897 | 136 | 1.0480 | 0.2389 | 1.0480 | 1.0237 |
| No log | 4.7586 | 138 | 1.1838 | 0.1873 | 1.1838 | 1.0880 |
| No log | 4.8276 | 140 | 1.1795 | 0.1561 | 1.1795 | 1.0860 |
| No log | 4.8966 | 142 | 1.2737 | 0.1961 | 1.2737 | 1.1286 |
| No log | 4.9655 | 144 | 1.2553 | 0.2264 | 1.2553 | 1.1204 |
| No log | 5.0345 | 146 | 1.2432 | 0.2499 | 1.2432 | 1.1150 |
| No log | 5.1034 | 148 | 1.2924 | 0.1502 | 1.2924 | 1.1368 |
| No log | 5.1724 | 150 | 1.3153 | 0.1230 | 1.3153 | 1.1469 |
| No log | 5.2414 | 152 | 1.2418 | 0.1787 | 1.2418 | 1.1144 |
| No log | 5.3103 | 154 | 1.0938 | 0.3193 | 1.0938 | 1.0458 |
| No log | 5.3793 | 156 | 1.0401 | 0.2602 | 1.0401 | 1.0198 |
| No log | 5.4483 | 158 | 1.0799 | 0.2271 | 1.0799 | 1.0392 |
| No log | 5.5172 | 160 | 1.0130 | 0.3251 | 1.0130 | 1.0065 |
| No log | 5.5862 | 162 | 0.9178 | 0.2967 | 0.9178 | 0.9580 |
| No log | 5.6552 | 164 | 0.8603 | 0.3372 | 0.8603 | 0.9275 |
| No log | 5.7241 | 166 | 0.8260 | 0.3518 | 0.8260 | 0.9088 |
| No log | 5.7931 | 168 | 0.7966 | 0.2883 | 0.7966 | 0.8925 |
| No log | 5.8621 | 170 | 0.8056 | 0.2171 | 0.8056 | 0.8976 |
| No log | 5.9310 | 172 | 0.8280 | 0.2171 | 0.8280 | 0.9100 |
| No log | 6.0 | 174 | 0.9096 | 0.2917 | 0.9096 | 0.9537 |
| No log | 6.0690 | 176 | 1.0177 | 0.2659 | 1.0177 | 1.0088 |
| No log | 6.1379 | 178 | 1.0625 | 0.2613 | 1.0625 | 1.0308 |
| No log | 6.2069 | 180 | 1.0355 | 0.2547 | 1.0355 | 1.0176 |
| No log | 6.2759 | 182 | 0.9736 | 0.1586 | 0.9736 | 0.9867 |
| No log | 6.3448 | 184 | 0.9372 | 0.1998 | 0.9372 | 0.9681 |
| No log | 6.4138 | 186 | 0.9627 | 0.2394 | 0.9627 | 0.9812 |
| No log | 6.4828 | 188 | 0.9926 | 0.2881 | 0.9926 | 0.9963 |
| No log | 6.5517 | 190 | 1.0013 | 0.2554 | 1.0013 | 1.0006 |
| No log | 6.6207 | 192 | 1.0249 | 0.2872 | 1.0249 | 1.0124 |
| No log | 6.6897 | 194 | 1.0761 | 0.2872 | 1.0761 | 1.0374 |
| No log | 6.7586 | 196 | 1.0180 | 0.2881 | 1.0180 | 1.0089 |
| No log | 6.8276 | 198 | 1.0663 | 0.3597 | 1.0663 | 1.0326 |
| No log | 6.8966 | 200 | 1.1288 | 0.3161 | 1.1288 | 1.0625 |
| No log | 6.9655 | 202 | 1.1331 | 0.3059 | 1.1331 | 1.0645 |
| No log | 7.0345 | 204 | 1.1162 | 0.3110 | 1.1162 | 1.0565 |
| No log | 7.1034 | 206 | 1.0572 | 0.2977 | 1.0572 | 1.0282 |
| No log | 7.1724 | 208 | 1.0731 | 0.3228 | 1.0731 | 1.0359 |
| No log | 7.2414 | 210 | 1.2394 | 0.2258 | 1.2394 | 1.1133 |
| No log | 7.3103 | 212 | 1.4367 | 0.1464 | 1.4367 | 1.1986 |
| No log | 7.3793 | 214 | 1.3517 | 0.1939 | 1.3517 | 1.1626 |
| No log | 7.4483 | 216 | 1.2243 | 0.2045 | 1.2243 | 1.1065 |
| No log | 7.5172 | 218 | 1.0628 | 0.2460 | 1.0628 | 1.0309 |
| No log | 7.5862 | 220 | 0.9252 | 0.2193 | 0.9252 | 0.9619 |
| No log | 7.6552 | 222 | 0.8722 | 0.1718 | 0.8722 | 0.9339 |
| No log | 7.7241 | 224 | 0.9319 | 0.2726 | 0.9319 | 0.9654 |
| No log | 7.7931 | 226 | 0.9832 | 0.2389 | 0.9832 | 0.9916 |
| No log | 7.8621 | 228 | 1.0887 | 0.2682 | 1.0887 | 1.0434 |
| No log | 7.9310 | 230 | 1.1154 | 0.3059 | 1.1154 | 1.0561 |
| No log | 8.0 | 232 | 1.0497 | 0.2252 | 1.0497 | 1.0246 |
| No log | 8.0690 | 234 | 1.0813 | 0.2209 | 1.0813 | 1.0399 |
| No log | 8.1379 | 236 | 1.2322 | 0.2559 | 1.2322 | 1.1100 |
| No log | 8.2069 | 238 | 1.3467 | 0.1195 | 1.3467 | 1.1605 |
| No log | 8.2759 | 240 | 1.1966 | 0.2358 | 1.1966 | 1.0939 |
| No log | 8.3448 | 242 | 0.9679 | 0.3137 | 0.9679 | 0.9838 |
| No log | 8.4138 | 244 | 0.8394 | 0.2142 | 0.8394 | 0.9162 |
| No log | 8.4828 | 246 | 0.7723 | 0.1866 | 0.7723 | 0.8788 |
| No log | 8.5517 | 248 | 0.7801 | 0.2149 | 0.7801 | 0.8832 |
| No log | 8.6207 | 250 | 0.8350 | 0.2643 | 0.8350 | 0.9138 |
| No log | 8.6897 | 252 | 1.0078 | 0.2850 | 1.0078 | 1.0039 |
| No log | 8.7586 | 254 | 1.2352 | 0.1732 | 1.2352 | 1.1114 |
| No log | 8.8276 | 256 | 1.2438 | 0.2056 | 1.2438 | 1.1152 |
| No log | 8.8966 | 258 | 1.1438 | 0.2567 | 1.1438 | 1.0695 |
| No log | 8.9655 | 260 | 0.9768 | 0.3347 | 0.9768 | 0.9883 |
| No log | 9.0345 | 262 | 0.8724 | 0.1718 | 0.8724 | 0.9340 |
| No log | 9.1034 | 264 | 0.8107 | 0.1142 | 0.8107 | 0.9004 |
| No log | 9.1724 | 266 | 0.8257 | 0.2632 | 0.8257 | 0.9087 |
| No log | 9.2414 | 268 | 0.9729 | 0.2615 | 0.9729 | 0.9863 |
| No log | 9.3103 | 270 | 1.2462 | 0.2184 | 1.2462 | 1.1163 |
| No log | 9.3793 | 272 | 1.3690 | 0.1222 | 1.3690 | 1.1701 |
| No log | 9.4483 | 274 | 1.2990 | 0.1067 | 1.2990 | 1.1397 |
| No log | 9.5172 | 276 | 1.1337 | 0.2075 | 1.1337 | 1.0648 |
| No log | 9.5862 | 278 | 0.9737 | 0.1461 | 0.9737 | 0.9867 |
| No log | 9.6552 | 280 | 0.9036 | 0.1672 | 0.9036 | 0.9506 |
| No log | 9.7241 | 282 | 0.9262 | 0.1962 | 0.9262 | 0.9624 |
| No log | 9.7931 | 284 | 1.0436 | 0.1651 | 1.0436 | 1.0216 |
| No log | 9.8621 | 286 | 1.2188 | 0.1057 | 1.2188 | 1.1040 |
| No log | 9.9310 | 288 | 1.3176 | 0.0947 | 1.3176 | 1.1479 |
| No log | 10.0 | 290 | 1.2297 | 0.1328 | 1.2297 | 1.1089 |
| No log | 10.0690 | 292 | 1.1279 | 0.1949 | 1.1279 | 1.0620 |
| No log | 10.1379 | 294 | 1.0514 | 0.1747 | 1.0514 | 1.0254 |
| No log | 10.2069 | 296 | 0.9404 | 0.2358 | 0.9404 | 0.9697 |
| No log | 10.2759 | 298 | 0.8677 | 0.3359 | 0.8677 | 0.9315 |
| No log | 10.3448 | 300 | 0.8773 | 0.3359 | 0.8773 | 0.9366 |
| No log | 10.4138 | 302 | 0.9634 | 0.2703 | 0.9634 | 0.9816 |
| No log | 10.4828 | 304 | 1.1425 | 0.2799 | 1.1425 | 1.0689 |
| No log | 10.5517 | 306 | 1.3152 | 0.2259 | 1.3152 | 1.1468 |
| No log | 10.6207 | 308 | 1.3419 | 0.1864 | 1.3419 | 1.1584 |
| No log | 10.6897 | 310 | 1.2340 | 0.2145 | 1.2340 | 1.1109 |
| No log | 10.7586 | 312 | 1.0281 | 0.2635 | 1.0281 | 1.0140 |
| No log | 10.8276 | 314 | 0.9060 | 0.3110 | 0.9060 | 0.9518 |
| No log | 10.8966 | 316 | 0.9223 | 0.3347 | 0.9223 | 0.9603 |
| No log | 10.9655 | 318 | 1.0634 | 0.3213 | 1.0634 | 1.0312 |
| No log | 11.0345 | 320 | 1.2036 | 0.1849 | 1.2036 | 1.0971 |
| No log | 11.1034 | 322 | 1.1860 | 0.1870 | 1.1860 | 1.0890 |
| No log | 11.1724 | 324 | 1.1287 | 0.1635 | 1.1287 | 1.0624 |
| No log | 11.2414 | 326 | 1.0516 | 0.2756 | 1.0516 | 1.0255 |
| No log | 11.3103 | 328 | 0.9724 | 0.2779 | 0.9724 | 0.9861 |
| No log | 11.3793 | 330 | 0.8987 | 0.3473 | 0.8987 | 0.9480 |
| No log | 11.4483 | 332 | 0.8950 | 0.3473 | 0.8950 | 0.9461 |
| No log | 11.5172 | 334 | 1.0123 | 0.2926 | 1.0123 | 1.0062 |
| No log | 11.5862 | 336 | 1.2553 | 0.1973 | 1.2553 | 1.1204 |
| No log | 11.6552 | 338 | 1.3704 | 0.1678 | 1.3704 | 1.1707 |
| No log | 11.7241 | 340 | 1.3078 | 0.1985 | 1.3078 | 1.1436 |
| No log | 11.7931 | 342 | 1.0969 | 0.2733 | 1.0969 | 1.0473 |
| No log | 11.8621 | 344 | 0.9092 | 0.2562 | 0.9092 | 0.9535 |
| No log | 11.9310 | 346 | 0.8589 | 0.1217 | 0.8589 | 0.9268 |
| No log | 12.0 | 348 | 0.8591 | 0.2358 | 0.8591 | 0.9269 |
| No log | 12.0690 | 350 | 0.9603 | 0.2601 | 0.9603 | 0.9800 |
| No log | 12.1379 | 352 | 1.0927 | 0.2853 | 1.0927 | 1.0453 |
| No log | 12.2069 | 354 | 1.1822 | 0.2436 | 1.1822 | 1.0873 |
| No log | 12.2759 | 356 | 1.2399 | 0.2206 | 1.2399 | 1.1135 |
| No log | 12.3448 | 358 | 1.2793 | 0.2086 | 1.2793 | 1.1310 |
| No log | 12.4138 | 360 | 1.1402 | 0.2579 | 1.1402 | 1.0678 |
| No log | 12.4828 | 362 | 0.9911 | 0.3337 | 0.9911 | 0.9955 |
| No log | 12.5517 | 364 | 0.9211 | 0.3739 | 0.9211 | 0.9597 |
| No log | 12.6207 | 366 | 0.7966 | 0.1254 | 0.7966 | 0.8925 |
| No log | 12.6897 | 368 | 0.7697 | 0.2530 | 0.7697 | 0.8773 |
| No log | 12.7586 | 370 | 0.7956 | 0.2813 | 0.7956 | 0.8920 |
| No log | 12.8276 | 372 | 0.8864 | 0.1822 | 0.8864 | 0.9415 |
| No log | 12.8966 | 374 | 0.9749 | 0.2703 | 0.9749 | 0.9874 |
| No log | 12.9655 | 376 | 1.0574 | 0.2683 | 1.0574 | 1.0283 |
| No log | 13.0345 | 378 | 1.1364 | 0.2805 | 1.1364 | 1.0660 |
| No log | 13.1034 | 380 | 1.1175 | 0.2853 | 1.1175 | 1.0571 |
| No log | 13.1724 | 382 | 1.1097 | 0.2059 | 1.1097 | 1.0534 |
| No log | 13.2414 | 384 | 1.0509 | 0.1787 | 1.0509 | 1.0251 |
| No log | 13.3103 | 386 | 1.0176 | 0.1955 | 1.0176 | 1.0087 |
| No log | 13.3793 | 388 | 1.0844 | 0.1662 | 1.0844 | 1.0414 |
| No log | 13.4483 | 390 | 1.1733 | 0.1293 | 1.1733 | 1.0832 |
| No log | 13.5172 | 392 | 1.1925 | 0.2115 | 1.1925 | 1.0920 |
| No log | 13.5862 | 394 | 1.1660 | 0.2319 | 1.1660 | 1.0798 |
| No log | 13.6552 | 396 | 1.1356 | 0.2417 | 1.1356 | 1.0656 |
| No log | 13.7241 | 398 | 1.1431 | 0.2501 | 1.1431 | 1.0692 |
| No log | 13.7931 | 400 | 1.0807 | 0.2075 | 1.0807 | 1.0395 |
| No log | 13.8621 | 402 | 1.0667 | 0.2075 | 1.0667 | 1.0328 |
| No log | 13.9310 | 404 | 1.1108 | 0.2032 | 1.1108 | 1.0539 |
| No log | 14.0 | 406 | 1.1794 | 0.1535 | 1.1794 | 1.0860 |
| No log | 14.0690 | 408 | 1.1869 | 0.1146 | 1.1869 | 1.0894 |
| No log | 14.1379 | 410 | 1.1716 | 0.1146 | 1.1716 | 1.0824 |
| No log | 14.2069 | 412 | 1.1896 | 0.2459 | 1.1896 | 1.0907 |
| No log | 14.2759 | 414 | 1.0928 | 0.2125 | 1.0928 | 1.0454 |
| No log | 14.3448 | 416 | 1.0205 | 0.2756 | 1.0205 | 1.0102 |
| No log | 14.4138 | 418 | 1.0495 | 0.2125 | 1.0495 | 1.0245 |
| No log | 14.4828 | 420 | 1.0959 | 0.2567 | 1.0959 | 1.0468 |
| No log | 14.5517 | 422 | 0.9856 | 0.3029 | 0.9856 | 0.9928 |
| No log | 14.6207 | 424 | 0.9658 | 0.3029 | 0.9658 | 0.9827 |
| No log | 14.6897 | 426 | 0.9227 | 0.3170 | 0.9227 | 0.9606 |
| No log | 14.7586 | 428 | 0.9029 | 0.3516 | 0.9029 | 0.9502 |
| No log | 14.8276 | 430 | 0.8452 | 0.3709 | 0.8452 | 0.9194 |
| No log | 14.8966 | 432 | 0.7839 | 0.2604 | 0.7839 | 0.8854 |
| No log | 14.9655 | 434 | 0.6884 | 0.2204 | 0.6884 | 0.8297 |
| No log | 15.0345 | 436 | 0.6691 | 0.2345 | 0.6691 | 0.8180 |
| No log | 15.1034 | 438 | 0.6985 | 0.2171 | 0.6985 | 0.8358 |
| No log | 15.1724 | 440 | 0.7806 | 0.2518 | 0.7806 | 0.8835 |
| No log | 15.2414 | 442 | 0.9461 | 0.2756 | 0.9462 | 0.9727 |
| No log | 15.3103 | 444 | 1.0573 | 0.3399 | 1.0573 | 1.0283 |
| No log | 15.3793 | 446 | 1.1502 | 0.3486 | 1.1502 | 1.0725 |
| No log | 15.4483 | 448 | 1.0732 | 0.3110 | 1.0732 | 1.0359 |
| No log | 15.5172 | 450 | 0.9770 | 0.3214 | 0.9770 | 0.9884 |
| No log | 15.5862 | 452 | 0.8948 | 0.2343 | 0.8948 | 0.9459 |
| No log | 15.6552 | 454 | 0.8177 | 0.2094 | 0.8177 | 0.9042 |
| No log | 15.7241 | 456 | 0.8148 | 0.2670 | 0.8148 | 0.9027 |
| No log | 15.7931 | 458 | 0.9017 | 0.3579 | 0.9017 | 0.9496 |
| No log | 15.8621 | 460 | 1.0208 | 0.3517 | 1.0208 | 1.0103 |
| No log | 15.9310 | 462 | 1.0848 | 0.1931 | 1.0848 | 1.0415 |
| No log | 16.0 | 464 | 1.0526 | 0.2567 | 1.0526 | 1.0260 |
| No log | 16.0690 | 466 | 1.0483 | 0.2367 | 1.0483 | 1.0239 |
| No log | 16.1379 | 468 | 0.9972 | 0.2732 | 0.9972 | 0.9986 |
| No log | 16.2069 | 470 | 0.9784 | 0.2732 | 0.9784 | 0.9891 |
| No log | 16.2759 | 472 | 0.9925 | 0.2782 | 0.9925 | 0.9963 |
| No log | 16.3448 | 474 | 1.0099 | 0.2782 | 1.0099 | 1.0050 |
| No log | 16.4138 | 476 | 1.0268 | 0.3029 | 1.0268 | 1.0133 |
| No log | 16.4828 | 478 | 0.9923 | 0.2529 | 0.9923 | 0.9961 |
| No log | 16.5517 | 480 | 1.0032 | 0.2529 | 1.0032 | 1.0016 |
| No log | 16.6207 | 482 | 1.0193 | 0.2682 | 1.0193 | 1.0096 |
| No log | 16.6897 | 484 | 1.0125 | 0.3481 | 1.0125 | 1.0062 |
| No log | 16.7586 | 486 | 0.9792 | 0.2779 | 0.9792 | 0.9896 |
| No log | 16.8276 | 488 | 0.8717 | 0.1723 | 0.8717 | 0.9337 |
| No log | 16.8966 | 490 | 0.8205 | 0.2463 | 0.8205 | 0.9058 |
| No log | 16.9655 | 492 | 0.8503 | 0.1867 | 0.8503 | 0.9221 |
| No log | 17.0345 | 494 | 0.9786 | 0.3269 | 0.9786 | 0.9893 |
| No log | 17.1034 | 496 | 1.2209 | 0.2559 | 1.2209 | 1.1049 |
| No log | 17.1724 | 498 | 1.3928 | 0.2041 | 1.3928 | 1.1802 |
| 0.3191 | 17.2414 | 500 | 1.3962 | 0.2074 | 1.3962 | 1.1816 |
| 0.3191 | 17.3103 | 502 | 1.2199 | 0.2153 | 1.2199 | 1.1045 |
| 0.3191 | 17.3793 | 504 | 1.0404 | 0.2109 | 1.0404 | 1.0200 |
| 0.3191 | 17.4483 | 506 | 0.9734 | 0.2164 | 0.9734 | 0.9866 |
| 0.3191 | 17.5172 | 508 | 0.9450 | 0.2211 | 0.9450 | 0.9721 |
| 0.3191 | 17.5862 | 510 | 0.9168 | 0.1422 | 0.9168 | 0.9575 |
| 0.3191 | 17.6552 | 512 | 0.9271 | 0.1692 | 0.9271 | 0.9628 |
| 0.3191 | 17.7241 | 514 | 0.9883 | 0.2363 | 0.9883 | 0.9941 |
| 0.3191 | 17.7931 | 516 | 1.1032 | 0.2780 | 1.1032 | 1.0503 |
| 0.3191 | 17.8621 | 518 | 1.1113 | 0.2780 | 1.1113 | 1.0542 |
| 0.3191 | 17.9310 | 520 | 1.0438 | 0.2612 | 1.0438 | 1.0217 |
| 0.3191 | 18.0 | 522 | 0.9722 | 0.2075 | 0.9722 | 0.9860 |
| 0.3191 | 18.0690 | 524 | 0.9171 | 0.2923 | 0.9171 | 0.9577 |
| 0.3191 | 18.1379 | 526 | 0.8797 | 0.2193 | 0.8797 | 0.9379 |
| 0.3191 | 18.2069 | 528 | 0.8692 | 0.2244 | 0.8692 | 0.9323 |
| 0.3191 | 18.2759 | 530 | 0.8871 | 0.2193 | 0.8871 | 0.9419 |
| 0.3191 | 18.3448 | 532 | 0.9605 | 0.1612 | 0.9605 | 0.9800 |
| 0.3191 | 18.4138 | 534 | 1.0073 | 0.3006 | 1.0073 | 1.0037 |
| 0.3191 | 18.4828 | 536 | 1.1021 | 0.2903 | 1.1021 | 1.0498 |
| 0.3191 | 18.5517 | 538 | 1.2427 | 0.1621 | 1.2427 | 1.1148 |
| 0.3191 | 18.6207 | 540 | 1.3046 | 0.1550 | 1.3046 | 1.1422 |
| 0.3191 | 18.6897 | 542 | 1.2771 | 0.1825 | 1.2771 | 1.1301 |
| 0.3191 | 18.7586 | 544 | 1.1838 | 0.2543 | 1.1838 | 1.0880 |
| 0.3191 | 18.8276 | 546 | 1.0547 | 0.3059 | 1.0547 | 1.0270 |
| 0.3191 | 18.8966 | 548 | 0.9850 | 0.2833 | 0.9850 | 0.9925 |
| 0.3191 | 18.9655 | 550 | 0.8833 | 0.2995 | 0.8833 | 0.9398 |
| 0.3191 | 19.0345 | 552 | 0.8150 | 0.3294 | 0.8150 | 0.9028 |
| 0.3191 | 19.1034 | 554 | 0.8461 | 0.3538 | 0.8461 | 0.9199 |
| 0.3191 | 19.1724 | 556 | 0.8462 | 0.3538 | 0.8462 | 0.9199 |
| 0.3191 | 19.2414 | 558 | 0.8813 | 0.3455 | 0.8813 | 0.9388 |
| 0.3191 | 19.3103 | 560 | 0.9183 | 0.3059 | 0.9183 | 0.9583 |
| 0.3191 | 19.3793 | 562 | 0.9153 | 0.3059 | 0.9153 | 0.9567 |
| 0.3191 | 19.4483 | 564 | 0.9542 | 0.3280 | 0.9542 | 0.9768 |
| 0.3191 | 19.5172 | 566 | 0.9975 | 0.2954 | 0.9975 | 0.9987 |
| 0.3191 | 19.5862 | 568 | 1.0434 | 0.2499 | 1.0434 | 1.0215 |
| 0.3191 | 19.6552 | 570 | 1.0337 | 0.2499 | 1.0337 | 1.0167 |
| 0.3191 | 19.7241 | 572 | 0.9722 | 0.3225 | 0.9722 | 0.9860 |
| 0.3191 | 19.7931 | 574 | 0.9153 | 0.3739 | 0.9153 | 0.9567 |
| 0.3191 | 19.8621 | 576 | 0.8565 | 0.3473 | 0.8565 | 0.9255 |
| 0.3191 | 19.9310 | 578 | 0.8089 | 0.3105 | 0.8089 | 0.8994 |
| 0.3191 | 20.0 | 580 | 0.8166 | 0.2574 | 0.8166 | 0.9036 |
| 0.3191 | 20.0690 | 582 | 0.8680 | 0.2463 | 0.8680 | 0.9316 |
| 0.3191 | 20.1379 | 584 | 0.9500 | 0.2756 | 0.9500 | 0.9747 |
| 0.3191 | 20.2069 | 586 | 1.0641 | 0.2504 | 1.0641 | 1.0316 |
| 0.3191 | 20.2759 | 588 | 1.1553 | 0.2522 | 1.1553 | 1.0748 |
| 0.3191 | 20.3448 | 590 | 1.1547 | 0.2059 | 1.1547 | 1.0746 |
| 0.3191 | 20.4138 | 592 | 1.1744 | 0.1688 | 1.1744 | 1.0837 |
| 0.3191 | 20.4828 | 594 | 1.1474 | 0.1688 | 1.1474 | 1.0712 |
| 0.3191 | 20.5517 | 596 | 1.1295 | 0.1723 | 1.1295 | 1.0628 |
| 0.3191 | 20.6207 | 598 | 1.0577 | 0.2782 | 1.0577 | 1.0285 |
| 0.3191 | 20.6897 | 600 | 1.0228 | 0.3114 | 1.0228 | 1.0113 |
| 0.3191 | 20.7586 | 602 | 0.9943 | 0.2651 | 0.9943 | 0.9971 |
| 0.3191 | 20.8276 | 604 | 0.9281 | 0.3709 | 0.9281 | 0.9634 |
| 0.3191 | 20.8966 | 606 | 0.8694 | 0.2574 | 0.8694 | 0.9324 |
| 0.3191 | 20.9655 | 608 | 0.8275 | 0.2297 | 0.8275 | 0.9097 |
| 0.3191 | 21.0345 | 610 | 0.8427 | 0.2297 | 0.8427 | 0.9180 |
| 0.3191 | 21.1034 | 612 | 0.9285 | 0.3709 | 0.9285 | 0.9636 |
| 0.3191 | 21.1724 | 614 | 1.0567 | 0.2367 | 1.0567 | 1.0279 |
| 0.3191 | 21.2414 | 616 | 1.2050 | 0.1944 | 1.2050 | 1.0977 |
| 0.3191 | 21.3103 | 618 | 1.2365 | 0.1621 | 1.2365 | 1.1120 |
| 0.3191 | 21.3793 | 620 | 1.1261 | 0.2282 | 1.1261 | 1.0612 |
| 0.3191 | 21.4483 | 622 | 0.9719 | 0.3739 | 0.9719 | 0.9859 |
| 0.3191 | 21.5172 | 624 | 0.9406 | 0.3110 | 0.9406 | 0.9698 |
| 0.3191 | 21.5862 | 626 | 0.9441 | 0.3169 | 0.9441 | 0.9717 |
| 0.3191 | 21.6552 | 628 | 0.9875 | 0.3287 | 0.9875 | 0.9937 |
| 0.3191 | 21.7241 | 630 | 1.0743 | 0.2504 | 1.0743 | 1.0365 |
| 0.3191 | 21.7931 | 632 | 1.1332 | 0.2141 | 1.1332 | 1.0645 |
| 0.3191 | 21.8621 | 634 | 1.2131 | 0.1884 | 1.2131 | 1.1014 |
| 0.3191 | 21.9310 | 636 | 1.2510 | 0.1884 | 1.2510 | 1.1185 |
| 0.3191 | 22.0 | 638 | 1.2384 | 0.2336 | 1.2384 | 1.1128 |
| 0.3191 | 22.0690 | 640 | 1.2260 | 0.2336 | 1.2260 | 1.1072 |
| 0.3191 | 22.1379 | 642 | 1.1516 | 0.2459 | 1.1516 | 1.0731 |
| 0.3191 | 22.2069 | 644 | 1.0521 | 0.2850 | 1.0521 | 1.0257 |
| 0.3191 | 22.2759 | 646 | 0.9533 | 0.3371 | 0.9533 | 0.9764 |
| 0.3191 | 22.3448 | 648 | 0.9010 | 0.3384 | 0.9010 | 0.9492 |
| 0.3191 | 22.4138 | 650 | 0.8998 | 0.3384 | 0.8998 | 0.9486 |
| 0.3191 | 22.4828 | 652 | 0.9101 | 0.3319 | 0.9101 | 0.9540 |
| 0.3191 | 22.5517 | 654 | 0.9524 | 0.3601 | 0.9524 | 0.9759 |
| 0.3191 | 22.6207 | 656 | 0.9424 | 0.3371 | 0.9424 | 0.9708 |
| 0.3191 | 22.6897 | 658 | 0.9976 | 0.3110 | 0.9976 | 0.9988 |
| 0.3191 | 22.7586 | 660 | 0.9895 | 0.3214 | 0.9895 | 0.9947 |
| 0.3191 | 22.8276 | 662 | 0.9138 | 0.3740 | 0.9138 | 0.9559 |
| 0.3191 | 22.8966 | 664 | 0.8354 | 0.3630 | 0.8354 | 0.9140 |
| 0.3191 | 22.9655 | 666 | 0.8584 | 0.3869 | 0.8584 | 0.9265 |
| 0.3191 | 23.0345 | 668 | 0.9549 | 0.3761 | 0.9549 | 0.9772 |
| 0.3191 | 23.1034 | 670 | 1.0170 | 0.3425 | 1.0170 | 1.0085 |
| 0.3191 | 23.1724 | 672 | 1.0268 | 0.3425 | 1.0268 | 1.0133 |
| 0.3191 | 23.2414 | 674 | 1.0660 | 0.3183 | 1.0660 | 1.0325 |
| 0.3191 | 23.3103 | 676 | 1.0135 | 0.3787 | 1.0135 | 1.0067 |
| 0.3191 | 23.3793 | 678 | 0.9069 | 0.3251 | 0.9069 | 0.9523 |
| 0.3191 | 23.4483 | 680 | 0.8996 | 0.3310 | 0.8996 | 0.9485 |
| 0.3191 | 23.5172 | 682 | 0.9143 | 0.3310 | 0.9143 | 0.9562 |
| 0.3191 | 23.5862 | 684 | 0.8977 | 0.3731 | 0.8977 | 0.9474 |
| 0.3191 | 23.6552 | 686 | 0.9740 | 0.2964 | 0.9740 | 0.9869 |
| 0.3191 | 23.7241 | 688 | 1.0142 | 0.3636 | 1.0142 | 1.0071 |
| 0.3191 | 23.7931 | 690 | 0.9973 | 0.3417 | 0.9973 | 0.9987 |
| 0.3191 | 23.8621 | 692 | 1.0250 | 0.3302 | 1.0250 | 1.0124 |
| 0.3191 | 23.9310 | 694 | 1.1086 | 0.2827 | 1.1086 | 1.0529 |
| 0.3191 | 24.0 | 696 | 1.1506 | 0.2223 | 1.1506 | 1.0726 |
| 0.3191 | 24.0690 | 698 | 1.0693 | 0.2876 | 1.0693 | 1.0341 |
| 0.3191 | 24.1379 | 700 | 0.9858 | 0.3302 | 0.9858 | 0.9929 |
| 0.3191 | 24.2069 | 702 | 0.9774 | 0.3359 | 0.9774 | 0.9887 |
| 0.3191 | 24.2759 | 704 | 0.9548 | 0.3761 | 0.9548 | 0.9772 |
| 0.3191 | 24.3448 | 706 | 0.9741 | 0.3082 | 0.9741 | 0.9870 |
| 0.3191 | 24.4138 | 708 | 0.9945 | 0.2125 | 0.9945 | 0.9973 |
| 0.3191 | 24.4828 | 710 | 1.0045 | 0.1870 | 1.0045 | 1.0023 |
| 0.3191 | 24.5517 | 712 | 1.0783 | 0.1564 | 1.0783 | 1.0384 |
| 0.3191 | 24.6207 | 714 | 1.1379 | 0.2020 | 1.1379 | 1.0667 |
| 0.3191 | 24.6897 | 716 | 1.1374 | 0.2020 | 1.1374 | 1.0665 |
| 0.3191 | 24.7586 | 718 | 1.1268 | 0.2059 | 1.1268 | 1.0615 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
reda2002/MONSLIPPERS
|
reda2002
| 2025-01-21T15:10:05Z | 15 | 1 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-21T14:46:36Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MONSLIPPERS
---
# Monslippers
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MONSLIPPERS` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('reda2002/MONSLIPPERS', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
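As a minimal sketch of adjusting the LoRA's influence (assuming a recent diffusers release where `fuse_lora` accepts a `lora_scale` argument; the 0.8 strength and the prompt are illustrative, not taken from the training data):
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('reda2002/MONSLIPPERS', weight_name='lora.safetensors')
# Fold the LoRA into the base weights at a reduced strength (0.8 is illustrative).
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('MONSLIPPERS fluffy slippers on a wooden floor').images[0]
```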
|
IAmSkyDra/BARTBana_Translation_v1
|
IAmSkyDra
| 2025-01-21T15:06:45Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-01-21T00:54:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
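Until the authors document usage, here is a minimal sketch based only on the card's metadata (an mbart text2text-generation checkpoint); the example input is illustrative and the model's language pair is not stated in this card:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "IAmSkyDra/BARTBana_Translation_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative input only; the expected source/target languages are not documented here.
inputs = tokenizer("Xin chào", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```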
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Romain-XV/f0e65db1-9e2c-40dc-961b-bb25593ee140
|
Romain-XV
| 2025-01-21T15:06:37Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:adapter:NousResearch/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | 2025-01-21T14:38:59Z |
---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f0e65db1-9e2c-40dc-961b-bb25593ee140
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9f860ccb7e806546_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9f860ccb7e806546_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 30
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/f0e65db1-9e2c-40dc-961b-bb25593ee140
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: true
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
micro_batch_size: 4
mlflow_experiment_name: /tmp/9f860ccb7e806546_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 2048
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 92906d73-ae0c-43b3-9735-14fe2124bf2a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 92906d73-ae0c-43b3-9735-14fe2124bf2a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f0e65db1-9e2c-40dc-961b-bb25593ee140
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.1138 | 0.0191 | 1 | 4.2181 |
| 1.9066 | 0.9558 | 50 | 1.8719 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
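Since this repository contains a LoRA adapter rather than full model weights, a minimal loading sketch (assuming the adapter was pushed in the standard PEFT layout; the prompt is illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Meta-Llama-3-8B"
adapter_id = "Romain-XV/f0e65db1-9e2c-40dc-961b-bb25593ee140"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative prompt; the adapter was trained on prompt/chosen pairs per the config above.
inputs = tokenizer("Write a short poem about the sea.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```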
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k11_task7_organization
|
MayBashendy
| 2025-01-21T15:05:12Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-01-21T15:00:57Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k11_task7_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run2_AugV5_k11_task7_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2725
- Qwk: 0.1497
- Mse: 1.2725
- Rmse: 1.1280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0769 | 2 | 2.5880 | -0.0262 | 2.5880 | 1.6087 |
| No log | 0.1538 | 4 | 1.3290 | 0.0511 | 1.3290 | 1.1528 |
| No log | 0.2308 | 6 | 1.0293 | -0.1255 | 1.0293 | 1.0145 |
| No log | 0.3077 | 8 | 0.9372 | 0.0 | 0.9372 | 0.9681 |
| No log | 0.3846 | 10 | 0.9429 | -0.0425 | 0.9429 | 0.9710 |
| No log | 0.4615 | 12 | 0.9954 | -0.0392 | 0.9954 | 0.9977 |
| No log | 0.5385 | 14 | 0.9280 | -0.0070 | 0.9280 | 0.9633 |
| No log | 0.6154 | 16 | 0.9033 | 0.1504 | 0.9033 | 0.9504 |
| No log | 0.6923 | 18 | 0.9861 | 0.1962 | 0.9861 | 0.9930 |
| No log | 0.7692 | 20 | 1.0962 | -0.0128 | 1.0962 | 1.0470 |
| No log | 0.8462 | 22 | 1.1395 | -0.0281 | 1.1395 | 1.0675 |
| No log | 0.9231 | 24 | 1.1856 | -0.2126 | 1.1856 | 1.0889 |
| No log | 1.0 | 26 | 1.0757 | -0.0472 | 1.0757 | 1.0372 |
| No log | 1.0769 | 28 | 1.0227 | 0.1217 | 1.0227 | 1.0113 |
| No log | 1.1538 | 30 | 1.0181 | 0.0982 | 1.0181 | 1.0090 |
| No log | 1.2308 | 32 | 0.9611 | 0.1542 | 0.9611 | 0.9804 |
| No log | 1.3077 | 34 | 0.9876 | 0.1918 | 0.9876 | 0.9938 |
| No log | 1.3846 | 36 | 1.0594 | -0.0634 | 1.0594 | 1.0293 |
| No log | 1.4615 | 38 | 1.0132 | -0.0479 | 1.0132 | 1.0066 |
| No log | 1.5385 | 40 | 0.9048 | 0.0966 | 0.9048 | 0.9512 |
| No log | 1.6154 | 42 | 0.8778 | 0.2171 | 0.8778 | 0.9369 |
| No log | 1.6923 | 44 | 0.8321 | 0.1972 | 0.8321 | 0.9122 |
| No log | 1.7692 | 46 | 0.7792 | 0.1407 | 0.7792 | 0.8827 |
| No log | 1.8462 | 48 | 0.7523 | 0.1508 | 0.7523 | 0.8673 |
| No log | 1.9231 | 50 | 0.7500 | 0.1236 | 0.7500 | 0.8661 |
| No log | 2.0 | 52 | 0.7803 | 0.0 | 0.7803 | 0.8833 |
| No log | 2.0769 | 54 | 0.8039 | 0.0 | 0.8039 | 0.8966 |
| No log | 2.1538 | 56 | 0.8265 | 0.0359 | 0.8265 | 0.9091 |
| No log | 2.2308 | 58 | 0.8628 | 0.1094 | 0.8628 | 0.9289 |
| No log | 2.3077 | 60 | 0.9631 | 0.1504 | 0.9631 | 0.9814 |
| No log | 2.3846 | 62 | 1.1027 | 0.0487 | 1.1027 | 1.0501 |
| No log | 2.4615 | 64 | 1.1507 | -0.0103 | 1.1507 | 1.0727 |
| No log | 2.5385 | 66 | 1.1130 | 0.1183 | 1.1130 | 1.0550 |
| No log | 2.6154 | 68 | 1.0896 | 0.0573 | 1.0896 | 1.0439 |
| No log | 2.6923 | 70 | 1.1578 | -0.1665 | 1.1578 | 1.0760 |
| No log | 2.7692 | 72 | 1.1796 | -0.0345 | 1.1796 | 1.0861 |
| No log | 2.8462 | 74 | 1.0470 | -0.0960 | 1.0470 | 1.0232 |
| No log | 2.9231 | 76 | 0.9014 | 0.1737 | 0.9014 | 0.9494 |
| No log | 3.0 | 78 | 0.8859 | 0.1313 | 0.8859 | 0.9412 |
| No log | 3.0769 | 80 | 0.9550 | 0.1268 | 0.9550 | 0.9773 |
| No log | 3.1538 | 82 | 1.0242 | 0.0933 | 1.0242 | 1.0120 |
| No log | 3.2308 | 84 | 1.0977 | -0.0197 | 1.0977 | 1.0477 |
| No log | 3.3077 | 86 | 1.1815 | 0.0236 | 1.1815 | 1.0870 |
| No log | 3.3846 | 88 | 1.1975 | 0.0563 | 1.1975 | 1.0943 |
| No log | 3.4615 | 90 | 1.1961 | -0.0160 | 1.1961 | 1.0936 |
| No log | 3.5385 | 92 | 1.1858 | -0.0358 | 1.1858 | 1.0890 |
| No log | 3.6154 | 94 | 1.1988 | 0.1348 | 1.1988 | 1.0949 |
| No log | 3.6923 | 96 | 1.1991 | 0.1384 | 1.1991 | 1.0950 |
| No log | 3.7692 | 98 | 1.1142 | 0.0205 | 1.1142 | 1.0556 |
| No log | 3.8462 | 100 | 1.0679 | 0.0616 | 1.0679 | 1.0334 |
| No log | 3.9231 | 102 | 1.0353 | 0.0839 | 1.0353 | 1.0175 |
| No log | 4.0 | 104 | 1.0270 | 0.0134 | 1.0270 | 1.0134 |
| No log | 4.0769 | 106 | 1.0919 | 0.1781 | 1.0919 | 1.0449 |
| No log | 4.1538 | 108 | 1.1002 | 0.1113 | 1.1002 | 1.0489 |
| No log | 4.2308 | 110 | 0.9776 | 0.0112 | 0.9776 | 0.9888 |
| No log | 4.3077 | 112 | 0.9881 | 0.1672 | 0.9881 | 0.9940 |
| No log | 4.3846 | 114 | 1.0446 | 0.1209 | 1.0446 | 1.0221 |
| No log | 4.4615 | 116 | 1.0382 | 0.1201 | 1.0382 | 1.0189 |
| No log | 4.5385 | 118 | 0.8993 | 0.1733 | 0.8993 | 0.9483 |
| No log | 4.6154 | 120 | 0.8918 | 0.1835 | 0.8918 | 0.9443 |
| No log | 4.6923 | 122 | 1.0720 | 0.1550 | 1.0720 | 1.0354 |
| No log | 4.7692 | 124 | 1.1172 | 0.1058 | 1.1172 | 1.0570 |
| No log | 4.8462 | 126 | 1.0092 | 0.0691 | 1.0092 | 1.0046 |
| No log | 4.9231 | 128 | 0.9939 | 0.1091 | 0.9939 | 0.9969 |
| No log | 5.0 | 130 | 1.1074 | 0.1814 | 1.1074 | 1.0523 |
| No log | 5.0769 | 132 | 1.0850 | 0.0753 | 1.0850 | 1.0416 |
| No log | 5.1538 | 134 | 1.0663 | 0.0682 | 1.0663 | 1.0326 |
| No log | 5.2308 | 136 | 1.1592 | 0.0904 | 1.1592 | 1.0767 |
| No log | 5.3077 | 138 | 1.1171 | 0.0784 | 1.1171 | 1.0569 |
| No log | 5.3846 | 140 | 1.0340 | 0.0379 | 1.0340 | 1.0169 |
| No log | 5.4615 | 142 | 1.0183 | 0.1500 | 1.0183 | 1.0091 |
| No log | 5.5385 | 144 | 1.0498 | 0.2193 | 1.0498 | 1.0246 |
| No log | 5.6154 | 146 | 0.9950 | 0.0748 | 0.9950 | 0.9975 |
| No log | 5.6923 | 148 | 1.0192 | 0.0031 | 1.0192 | 1.0095 |
| No log | 5.7692 | 150 | 1.0541 | 0.0600 | 1.0541 | 1.0267 |
| No log | 5.8462 | 152 | 1.0698 | 0.0934 | 1.0698 | 1.0343 |
| No log | 5.9231 | 154 | 1.0586 | 0.0934 | 1.0586 | 1.0289 |
| No log | 6.0 | 156 | 0.9983 | 0.0442 | 0.9983 | 0.9991 |
| No log | 6.0769 | 158 | 1.0270 | 0.0934 | 1.0270 | 1.0134 |
| No log | 6.1538 | 160 | 1.0252 | 0.1205 | 1.0252 | 1.0125 |
| No log | 6.2308 | 162 | 1.1064 | 0.2081 | 1.1064 | 1.0518 |
| No log | 6.3077 | 164 | 1.1990 | 0.0763 | 1.1990 | 1.0950 |
| No log | 6.3846 | 166 | 1.1675 | 0.0421 | 1.1675 | 1.0805 |
| No log | 6.4615 | 168 | 1.1031 | 0.1267 | 1.1031 | 1.0503 |
| No log | 6.5385 | 170 | 1.0497 | 0.0758 | 1.0497 | 1.0246 |
| No log | 6.6154 | 172 | 1.0380 | 0.1033 | 1.0380 | 1.0188 |
| No log | 6.6923 | 174 | 1.0331 | 0.0696 | 1.0331 | 1.0164 |
| No log | 6.7692 | 176 | 1.0045 | 0.0724 | 1.0045 | 1.0023 |
| No log | 6.8462 | 178 | 0.9824 | 0.0724 | 0.9824 | 0.9911 |
| No log | 6.9231 | 180 | 0.9414 | 0.0839 | 0.9414 | 0.9702 |
| No log | 7.0 | 182 | 0.9232 | 0.0899 | 0.9232 | 0.9608 |
| No log | 7.0769 | 184 | 0.9531 | 0.1031 | 0.9531 | 0.9763 |
| No log | 7.1538 | 186 | 0.9784 | 0.1031 | 0.9784 | 0.9892 |
| No log | 7.2308 | 188 | 1.0060 | 0.0968 | 1.0060 | 1.0030 |
| No log | 7.3077 | 190 | 1.0908 | 0.1615 | 1.0908 | 1.0444 |
| No log | 7.3846 | 192 | 1.0782 | 0.1615 | 1.0782 | 1.0384 |
| No log | 7.4615 | 194 | 1.0124 | 0.2026 | 1.0124 | 1.0062 |
| No log | 7.5385 | 196 | 1.0139 | 0.1771 | 1.0139 | 1.0069 |
| No log | 7.6154 | 198 | 1.0536 | 0.1472 | 1.0536 | 1.0264 |
| No log | 7.6923 | 200 | 1.1895 | 0.1310 | 1.1895 | 1.0906 |
| No log | 7.7692 | 202 | 1.2772 | 0.1462 | 1.2772 | 1.1301 |
| No log | 7.8462 | 204 | 1.2817 | 0.1307 | 1.2817 | 1.1321 |
| No log | 7.9231 | 206 | 1.2388 | 0.1230 | 1.2388 | 1.1130 |
| No log | 8.0 | 208 | 1.1612 | 0.0275 | 1.1612 | 1.0776 |
| No log | 8.0769 | 210 | 1.2015 | 0.1370 | 1.2015 | 1.0961 |
| No log | 8.1538 | 212 | 1.2220 | 0.1169 | 1.2220 | 1.1055 |
| No log | 8.2308 | 214 | 1.0821 | 0.2070 | 1.0821 | 1.0402 |
| No log | 8.3077 | 216 | 0.9369 | 0.2857 | 0.9369 | 0.9679 |
| No log | 8.3846 | 218 | 0.9066 | 0.3221 | 0.9066 | 0.9521 |
| No log | 8.4615 | 220 | 0.9160 | 0.2967 | 0.9160 | 0.9571 |
| No log | 8.5385 | 222 | 0.9793 | 0.2781 | 0.9793 | 0.9896 |
| No log | 8.6154 | 224 | 1.0682 | 0.2220 | 1.0682 | 1.0335 |
| No log | 8.6923 | 226 | 1.0427 | 0.2364 | 1.0427 | 1.0211 |
| No log | 8.7692 | 228 | 1.0549 | 0.1859 | 1.0549 | 1.0271 |
| No log | 8.8462 | 230 | 1.0550 | 0.2081 | 1.0550 | 1.0271 |
| No log | 8.9231 | 232 | 1.0520 | 0.1961 | 1.0520 | 1.0257 |
| No log | 9.0 | 234 | 0.9981 | 0.2310 | 0.9981 | 0.9990 |
| No log | 9.0769 | 236 | 0.9468 | 0.2099 | 0.9468 | 0.9731 |
| No log | 9.1538 | 238 | 0.9866 | 0.1899 | 0.9866 | 0.9933 |
| No log | 9.2308 | 240 | 1.0632 | 0.2014 | 1.0632 | 1.0311 |
| No log | 9.3077 | 242 | 1.0937 | 0.1636 | 1.0937 | 1.0458 |
| No log | 9.3846 | 244 | 0.9715 | 0.2633 | 0.9715 | 0.9857 |
| No log | 9.4615 | 246 | 0.8899 | 0.2270 | 0.8899 | 0.9433 |
| No log | 9.5385 | 248 | 0.8661 | 0.2135 | 0.8661 | 0.9306 |
| No log | 9.6154 | 250 | 0.8964 | 0.1970 | 0.8964 | 0.9468 |
| No log | 9.6923 | 252 | 1.1055 | 0.1569 | 1.1055 | 1.0514 |
| No log | 9.7692 | 254 | 1.2708 | 0.0861 | 1.2708 | 1.1273 |
| No log | 9.8462 | 256 | 1.1493 | 0.1031 | 1.1493 | 1.0721 |
| No log | 9.9231 | 258 | 0.9836 | 0.2498 | 0.9836 | 0.9918 |
| No log | 10.0 | 260 | 0.9285 | 0.2838 | 0.9285 | 0.9636 |
| No log | 10.0769 | 262 | 0.9052 | 0.2564 | 0.9052 | 0.9514 |
| No log | 10.1538 | 264 | 0.9367 | 0.2888 | 0.9367 | 0.9679 |
| No log | 10.2308 | 266 | 1.0791 | 0.1141 | 1.0791 | 1.0388 |
| No log | 10.3077 | 268 | 1.1145 | 0.1530 | 1.1145 | 1.0557 |
| No log | 10.3846 | 270 | 0.9867 | 0.2420 | 0.9867 | 0.9933 |
| No log | 10.4615 | 272 | 0.9728 | 0.2471 | 0.9728 | 0.9863 |
| No log | 10.5385 | 274 | 0.9952 | 0.2301 | 0.9952 | 0.9976 |
| No log | 10.6154 | 276 | 1.0330 | 0.1265 | 1.0330 | 1.0164 |
| No log | 10.6923 | 278 | 1.1188 | 0.1814 | 1.1188 | 1.0577 |
| No log | 10.7692 | 280 | 1.1027 | 0.0931 | 1.1027 | 1.0501 |
| No log | 10.8462 | 282 | 1.0163 | 0.1242 | 1.0163 | 1.0081 |
| No log | 10.9231 | 284 | 1.0520 | 0.1207 | 1.0520 | 1.0257 |
| No log | 11.0 | 286 | 1.1820 | 0.0852 | 1.1820 | 1.0872 |
| No log | 11.0769 | 288 | 1.1824 | 0.1085 | 1.1824 | 1.0874 |
| No log | 11.1538 | 290 | 1.0389 | 0.2116 | 1.0389 | 1.0193 |
| No log | 11.2308 | 292 | 0.9184 | 0.1886 | 0.9184 | 0.9584 |
| No log | 11.3077 | 294 | 0.8912 | 0.1592 | 0.8912 | 0.9440 |
| No log | 11.3846 | 296 | 0.9435 | 0.2359 | 0.9435 | 0.9713 |
| No log | 11.4615 | 298 | 1.0659 | 0.1998 | 1.0659 | 1.0324 |
| No log | 11.5385 | 300 | 1.1165 | 0.1113 | 1.1165 | 1.0567 |
| No log | 11.6154 | 302 | 1.0759 | 0.1681 | 1.0759 | 1.0373 |
| No log | 11.6923 | 304 | 1.1140 | 0.1620 | 1.1140 | 1.0555 |
| No log | 11.7692 | 306 | 1.0790 | 0.1494 | 1.0790 | 1.0388 |
| No log | 11.8462 | 308 | 1.0459 | 0.1606 | 1.0459 | 1.0227 |
| No log | 11.9231 | 310 | 0.9533 | 0.1651 | 0.9533 | 0.9764 |
| No log | 12.0 | 312 | 0.9501 | 0.1945 | 0.9501 | 0.9748 |
| No log | 12.0769 | 314 | 0.9727 | 0.2212 | 0.9727 | 0.9863 |
| No log | 12.1538 | 316 | 1.0478 | 0.1662 | 1.0478 | 1.0236 |
| No log | 12.2308 | 318 | 1.0816 | 0.1764 | 1.0816 | 1.0400 |
| No log | 12.3077 | 320 | 1.0384 | 0.2589 | 1.0384 | 1.0190 |
| No log | 12.3846 | 322 | 0.9470 | 0.3195 | 0.9470 | 0.9731 |
| No log | 12.4615 | 324 | 0.9089 | 0.2605 | 0.9089 | 0.9533 |
| No log | 12.5385 | 326 | 0.9739 | 0.2617 | 0.9739 | 0.9869 |
| No log | 12.6154 | 328 | 1.0920 | 0.1803 | 1.0920 | 1.0450 |
| No log | 12.6923 | 330 | 1.2393 | 0.1525 | 1.2393 | 1.1132 |
| No log | 12.7692 | 332 | 1.2031 | 0.1578 | 1.2031 | 1.0969 |
| No log | 12.8462 | 334 | 1.0994 | 0.1140 | 1.0994 | 1.0485 |
| No log | 12.9231 | 336 | 1.0377 | 0.1265 | 1.0377 | 1.0187 |
| No log | 13.0 | 338 | 1.0615 | 0.2044 | 1.0615 | 1.0303 |
| No log | 13.0769 | 340 | 1.1483 | 0.1870 | 1.1483 | 1.0716 |
| No log | 13.1538 | 342 | 1.1549 | 0.2141 | 1.1549 | 1.0747 |
| No log | 13.2308 | 344 | 1.0842 | 0.2626 | 1.0842 | 1.0412 |
| No log | 13.3077 | 346 | 1.0019 | 0.3043 | 1.0019 | 1.0010 |
| No log | 13.3846 | 348 | 1.0251 | 0.2975 | 1.0251 | 1.0125 |
| No log | 13.4615 | 350 | 1.0723 | 0.1926 | 1.0723 | 1.0355 |
| No log | 13.5385 | 352 | 1.1779 | 0.1468 | 1.1779 | 1.0853 |
| No log | 13.6154 | 354 | 1.2477 | 0.1196 | 1.2477 | 1.1170 |
| No log | 13.6923 | 356 | 1.2466 | 0.1196 | 1.2466 | 1.1165 |
| No log | 13.7692 | 358 | 1.1993 | 0.1536 | 1.1993 | 1.0951 |
| No log | 13.8462 | 360 | 1.1113 | 0.2029 | 1.1113 | 1.0542 |
| No log | 13.9231 | 362 | 1.0695 | 0.2348 | 1.0695 | 1.0342 |
| No log | 14.0 | 364 | 1.0972 | 0.2303 | 1.0972 | 1.0475 |
| No log | 14.0769 | 366 | 1.1402 | 0.2200 | 1.1402 | 1.0678 |
| No log | 14.1538 | 368 | 1.2260 | 0.1468 | 1.2260 | 1.1072 |
| No log | 14.2308 | 370 | 1.1719 | 0.1779 | 1.1719 | 1.0825 |
| No log | 14.3077 | 372 | 1.1355 | 0.1898 | 1.1355 | 1.0656 |
| No log | 14.3846 | 374 | 1.0978 | 0.2070 | 1.0978 | 1.0478 |
| No log | 14.4615 | 376 | 1.0772 | 0.2113 | 1.0772 | 1.0379 |
| No log | 14.5385 | 378 | 1.1238 | 0.2070 | 1.1238 | 1.0601 |
| No log | 14.6154 | 380 | 1.1039 | 0.2113 | 1.1039 | 1.0507 |
| No log | 14.6923 | 382 | 1.0236 | 0.1662 | 1.0236 | 1.0117 |
| No log | 14.7692 | 384 | 1.0151 | 0.2059 | 1.0151 | 1.0075 |
| No log | 14.8462 | 386 | 0.9724 | 0.2839 | 0.9724 | 0.9861 |
| No log | 14.9231 | 388 | 0.9189 | 0.2576 | 0.9189 | 0.9586 |
| No log | 15.0 | 390 | 0.9004 | 0.1961 | 0.9004 | 0.9489 |
| No log | 15.0769 | 392 | 0.9584 | 0.2839 | 0.9584 | 0.9790 |
| No log | 15.1538 | 394 | 1.1356 | 0.1636 | 1.1356 | 1.0656 |
| No log | 15.2308 | 396 | 1.2127 | 0.1687 | 1.2127 | 1.1012 |
| No log | 15.3077 | 398 | 1.1528 | 0.1824 | 1.1528 | 1.0737 |
| No log | 15.3846 | 400 | 1.0122 | 0.3280 | 1.0122 | 1.0061 |
| No log | 15.4615 | 402 | 0.9582 | 0.2523 | 0.9582 | 0.9789 |
| No log | 15.5385 | 404 | 0.9714 | 0.2262 | 0.9714 | 0.9856 |
| No log | 15.6154 | 406 | 1.0135 | 0.2054 | 1.0135 | 1.0067 |
| No log | 15.6923 | 408 | 1.1015 | 0.2055 | 1.1015 | 1.0495 |
| No log | 15.7692 | 410 | 1.0614 | 0.1330 | 1.0614 | 1.0303 |
| No log | 15.8462 | 412 | 1.0175 | 0.1441 | 1.0175 | 1.0087 |
| No log | 15.9231 | 414 | 0.9897 | 0.1783 | 0.9897 | 0.9948 |
| No log | 16.0 | 416 | 1.0096 | 0.1691 | 1.0096 | 1.0048 |
| No log | 16.0769 | 418 | 0.9507 | 0.2495 | 0.9507 | 0.9750 |
| No log | 16.1538 | 420 | 0.9267 | 0.1715 | 0.9267 | 0.9627 |
| No log | 16.2308 | 422 | 0.9329 | 0.1760 | 0.9329 | 0.9659 |
| No log | 16.3077 | 424 | 0.9243 | 0.1760 | 0.9243 | 0.9614 |
| No log | 16.3846 | 426 | 0.9595 | 0.1870 | 0.9595 | 0.9796 |
| No log | 16.4615 | 428 | 1.0631 | 0.1819 | 1.0631 | 1.0311 |
| No log | 16.5385 | 430 | 1.1692 | 0.1779 | 1.1692 | 1.0813 |
| No log | 16.6154 | 432 | 1.1831 | 0.2065 | 1.1831 | 1.0877 |
| No log | 16.6923 | 434 | 1.0933 | 0.2488 | 1.0933 | 1.0456 |
| No log | 16.7692 | 436 | 1.0014 | 0.1940 | 1.0014 | 1.0007 |
| No log | 16.8462 | 438 | 1.0096 | 0.2183 | 1.0096 | 1.0048 |
| No log | 16.9231 | 440 | 1.0929 | 0.2603 | 1.0929 | 1.0454 |
| No log | 17.0 | 442 | 1.1386 | 0.2143 | 1.1386 | 1.0670 |
| No log | 17.0769 | 444 | 1.1449 | 0.2359 | 1.1449 | 1.0700 |
| No log | 17.1538 | 446 | 1.0952 | 0.3394 | 1.0952 | 1.0465 |
| No log | 17.2308 | 448 | 1.0126 | 0.3159 | 1.0126 | 1.0063 |
| No log | 17.3077 | 450 | 0.9722 | 0.3159 | 0.9722 | 0.9860 |
| No log | 17.3846 | 452 | 0.9505 | 0.3219 | 0.9505 | 0.9749 |
| No log | 17.4615 | 454 | 1.0061 | 0.3159 | 1.0061 | 1.0030 |
| No log | 17.5385 | 456 | 0.9923 | 0.3159 | 0.9923 | 0.9961 |
| No log | 17.6154 | 458 | 0.9556 | 0.2670 | 0.9556 | 0.9775 |
| No log | 17.6923 | 460 | 0.9883 | 0.2616 | 0.9883 | 0.9941 |
| No log | 17.7692 | 462 | 0.9892 | 0.2670 | 0.9892 | 0.9946 |
| No log | 17.8462 | 464 | 0.9903 | 0.2670 | 0.9903 | 0.9951 |
| No log | 17.9231 | 466 | 1.0084 | 0.2616 | 1.0084 | 1.0042 |
| No log | 18.0 | 468 | 1.0485 | 0.3159 | 1.0485 | 1.0239 |
| No log | 18.0769 | 470 | 0.9913 | 0.2616 | 0.9913 | 0.9957 |
| No log | 18.1538 | 472 | 0.9333 | 0.2839 | 0.9333 | 0.9660 |
| No log | 18.2308 | 474 | 0.9187 | 0.2839 | 0.9187 | 0.9585 |
| No log | 18.3077 | 476 | 0.9258 | 0.2975 | 0.9258 | 0.9622 |
| No log | 18.3846 | 478 | 0.9281 | 0.2975 | 0.9281 | 0.9634 |
| No log | 18.4615 | 480 | 0.9763 | 0.3159 | 0.9763 | 0.9881 |
| No log | 18.5385 | 482 | 0.9998 | 0.3159 | 0.9998 | 0.9999 |
| No log | 18.6154 | 484 | 1.0420 | 0.3159 | 1.0420 | 1.0208 |
| No log | 18.6923 | 486 | 1.0898 | 0.3100 | 1.0898 | 1.0439 |
| No log | 18.7692 | 488 | 1.0684 | 0.3159 | 1.0684 | 1.0336 |
| No log | 18.8462 | 490 | 1.0393 | 0.2670 | 1.0393 | 1.0194 |
| No log | 18.9231 | 492 | 0.9973 | 0.2751 | 0.9973 | 0.9987 |
| No log | 19.0 | 494 | 0.9664 | 0.2495 | 0.9664 | 0.9830 |
| No log | 19.0769 | 496 | 0.9822 | 0.2617 | 0.9822 | 0.9910 |
| No log | 19.1538 | 498 | 0.9966 | 0.2943 | 0.9966 | 0.9983 |
| 0.3114 | 19.2308 | 500 | 1.0656 | 0.2097 | 1.0656 | 1.0323 |
| 0.3114 | 19.3077 | 502 | 1.0683 | 0.2905 | 1.0683 | 1.0336 |
| 0.3114 | 19.3846 | 504 | 1.0200 | 0.2905 | 1.0200 | 1.0100 |
| 0.3114 | 19.4615 | 506 | 0.9818 | 0.3095 | 0.9818 | 0.9908 |
| 0.3114 | 19.5385 | 508 | 0.9591 | 0.3095 | 0.9591 | 0.9793 |
| 0.3114 | 19.6154 | 510 | 0.9573 | 0.3095 | 0.9573 | 0.9784 |
| 0.3114 | 19.6923 | 512 | 0.9966 | 0.2943 | 0.9966 | 0.9983 |
| 0.3114 | 19.7692 | 514 | 1.0009 | 0.2943 | 1.0009 | 1.0004 |
| 0.3114 | 19.8462 | 516 | 1.0760 | 0.2097 | 1.0760 | 1.0373 |
| 0.3114 | 19.9231 | 518 | 1.1806 | 0.2317 | 1.1806 | 1.0865 |
| 0.3114 | 20.0 | 520 | 1.1615 | 0.1628 | 1.1615 | 1.0777 |
| 0.3114 | 20.0769 | 522 | 1.0525 | 0.2643 | 1.0525 | 1.0259 |
| 0.3114 | 20.1538 | 524 | 0.9980 | 0.1856 | 0.9980 | 0.9990 |
| 0.3114 | 20.2308 | 526 | 1.0265 | 0.2696 | 1.0265 | 1.0131 |
| 0.3114 | 20.3077 | 528 | 1.1166 | 0.2401 | 1.1166 | 1.0567 |
| 0.3114 | 20.3846 | 530 | 1.1002 | 0.2401 | 1.1002 | 1.0489 |
| 0.3114 | 20.4615 | 532 | 1.0084 | 0.3012 | 1.0084 | 1.0042 |
| 0.3114 | 20.5385 | 534 | 0.9238 | 0.2839 | 0.9238 | 0.9611 |
| 0.3114 | 20.6154 | 536 | 0.8798 | 0.2077 | 0.8798 | 0.9380 |
| 0.3114 | 20.6923 | 538 | 0.8929 | 0.2077 | 0.8929 | 0.9449 |
| 0.3114 | 20.7692 | 540 | 0.9635 | 0.2183 | 0.9635 | 0.9816 |
| 0.3114 | 20.8462 | 542 | 1.1025 | 0.1923 | 1.1025 | 1.0500 |
| 0.3114 | 20.9231 | 544 | 1.1957 | 0.1594 | 1.1957 | 1.0935 |
| 0.3114 | 21.0 | 546 | 1.1812 | 0.2317 | 1.1812 | 1.0868 |
| 0.3114 | 21.0769 | 548 | 1.1079 | 0.2285 | 1.1079 | 1.0525 |
| 0.3114 | 21.1538 | 550 | 0.9961 | 0.2315 | 0.9961 | 0.9981 |
| 0.3114 | 21.2308 | 552 | 0.9203 | 0.2383 | 0.9203 | 0.9593 |
| 0.3114 | 21.3077 | 554 | 0.8681 | 0.1850 | 0.8681 | 0.9317 |
| 0.3114 | 21.3846 | 556 | 0.8525 | 0.1850 | 0.8525 | 0.9233 |
| 0.3114 | 21.4615 | 558 | 0.8767 | 0.2096 | 0.8767 | 0.9363 |
| 0.3114 | 21.5385 | 560 | 0.9360 | 0.2781 | 0.9360 | 0.9675 |
| 0.3114 | 21.6154 | 562 | 0.9958 | 0.2113 | 0.9958 | 0.9979 |
| 0.3114 | 21.6923 | 564 | 1.0443 | 0.1594 | 1.0443 | 1.0219 |
| 0.3114 | 21.7692 | 566 | 1.0221 | 0.1697 | 1.0221 | 1.0110 |
| 0.3114 | 21.8462 | 568 | 0.9428 | 0.2139 | 0.9428 | 0.9710 |
| 0.3114 | 21.9231 | 570 | 0.9138 | 0.2370 | 0.9138 | 0.9559 |
| 0.3114 | 22.0 | 572 | 0.8716 | 0.3146 | 0.8716 | 0.9336 |
| 0.3114 | 22.0769 | 574 | 0.8403 | 0.2633 | 0.8403 | 0.9167 |
| 0.3114 | 22.1538 | 576 | 0.8307 | 0.2359 | 0.8307 | 0.9114 |
| 0.3114 | 22.2308 | 578 | 0.8812 | 0.2576 | 0.8812 | 0.9387 |
| 0.3114 | 22.3077 | 580 | 1.0000 | 0.1926 | 1.0000 | 1.0000 |
| 0.3114 | 22.3846 | 582 | 1.1450 | 0.1920 | 1.1450 | 1.0701 |
| 0.3114 | 22.4615 | 584 | 1.1682 | 0.2308 | 1.1682 | 1.0809 |
| 0.3114 | 22.5385 | 586 | 1.0816 | 0.2159 | 1.0816 | 1.0400 |
| 0.3114 | 22.6154 | 588 | 0.9121 | 0.2359 | 0.9121 | 0.9550 |
| 0.3114 | 22.6923 | 590 | 0.8491 | 0.2009 | 0.8491 | 0.9215 |
| 0.3114 | 22.7692 | 592 | 0.8662 | 0.2359 | 0.8662 | 0.9307 |
| 0.3114 | 22.8462 | 594 | 0.9967 | 0.2220 | 0.9967 | 0.9983 |
| 0.3114 | 22.9231 | 596 | 1.1470 | 0.1922 | 1.1470 | 1.0710 |
| 0.3114 | 23.0 | 598 | 1.2457 | 0.1819 | 1.2457 | 1.1161 |
| 0.3114 | 23.0769 | 600 | 1.4038 | 0.1511 | 1.4038 | 1.1848 |
| 0.3114 | 23.1538 | 602 | 1.4013 | 0.1511 | 1.4013 | 1.1838 |
| 0.3114 | 23.2308 | 604 | 1.2725 | 0.1497 | 1.2725 | 1.1280 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
nhung02/db2c9545-4607-487b-9cc3-b7a0587316fc
|
nhung02
| 2025-01-21T15:04:41Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T14:43:07Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: db2c9545-4607-487b-9cc3-b7a0587316fc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9c65c4353c858f05_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9c65c4353c858f05_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung02/db2c9545-4607-487b-9cc3-b7a0587316fc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/9c65c4353c858f05_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4200e893-cfbb-40b3-bdb2-0eb0fdaa99b8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4200e893-cfbb-40b3-bdb2-0eb0fdaa99b8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# db2c9545-4607-487b-9cc3-b7a0587316fc
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2298 | 0.0164 | 200 | 0.8664 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
frnka/Llama-3.2-3B-Instruct-DMP-synthetic-context
|
frnka
| 2025-01-21T15:03:31Z | 26 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"question-answering",
"dataset:frnka/dmp-qa-with-context-2",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"doi:10.57967/hf/4188",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-12-30T10:19:14Z |
---
library_name: transformers
tags:
- unsloth
datasets:
- frnka/dmp-qa-with-context-2
base_model:
- meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: question-answering
---
## Improved with data generated by Qwen 2.5 14B
# Model info
The model is used for finding answers to questions in Data Management Plans. It achieves an F1 score of 49.1 on the [user-annotated DMPs dataset](https://doi.org/10.57967/hf/4175).
## Prompts used:
**System:**
```
You are an AI assistant specialized in extracting exact answers from given contexts. Your task is to:
1. Read the provided Data management plan carefully.
2. Find the specific answer to the given question within the context.
3. Extract and present the answer exactly as it appears, without any modifications.
Guidelines:
- Provide the answer word-for-word, preserving all original formatting, punctuation, and capitalization.
- If no exact answer is found, respond with "NO ANSWER"
- Do not offer any explanations, interpretations, or additional information beyond the exact answer found in the context.
- Do not modify, paraphrase, or summarize the answer in any way.
If you follow the guidelines exactly, I will give you a 10$ tip.
```
**User:**
```
Context:
{dmp}
Question:
{question}
```
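A minimal sketch of assembling these prompts with the standard transformers chat-template API (the example context and question are illustrative, not taken from the dataset):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "frnka/Llama-3.2-3B-Instruct-DMP-synthetic-context"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system_prompt = "You are an AI assistant specialized in extracting exact answers from given contexts. ..."  # full system prompt from above
dmp = "Data will be stored in the institutional repository for ten years."  # illustrative DMP excerpt
question = "How long will the data be stored?"  # illustrative question

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": f"Context:\n{dmp}\nQuestion:\n{question}"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```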
|
paramedik/phi-4-Q4_K_M-GGUF
|
paramedik
| 2025-01-21T15:02:33Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"phi",
"nlp",
"math",
"code",
"chat",
"conversational",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/phi-4",
"base_model:quantized:microsoft/phi-4",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T15:01:54Z |
---
license: mit
license_link: https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- llama-cpp
- gguf-my-repo
inference:
parameters:
temperature: 0
widget:
- messages:
- role: user
content: How should I explain the Internet?
library_name: transformers
base_model: microsoft/phi-4
---
# paramedik/phi-4-Q4_K_M-GGUF
This model was converted to GGUF format from [`microsoft/phi-4`](https://huggingface.co/microsoft/phi-4) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/phi-4) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo paramedik/phi-4-Q4_K_M-GGUF --hf-file phi-4-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo paramedik/phi-4-Q4_K_M-GGUF --hf-file phi-4-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo paramedik/phi-4-Q4_K_M-GGUF --hf-file phi-4-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo paramedik/phi-4-Q4_K_M-GGUF --hf-file phi-4-q4_k_m.gguf -c 2048
```
|
cvoffer/b36d7c73-0527-464f-9f48-f03978f44fbe
|
cvoffer
| 2025-01-21T15:01:31Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-64k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-64k",
"region:us"
] | null | 2025-01-21T12:38:49Z |
---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b36d7c73-0527-464f-9f48-f03978f44fbe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 80c30cd5e02284af_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/80c30cd5e02284af_train_data.json
type:
field_instruction: prompt
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: cvoffer/b36d7c73-0527-464f-9f48-f03978f44fbe
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/80c30cd5e02284af_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 711e6929-304b-493a-a64e-98f30c7ab861
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 711e6929-304b-493a-a64e-98f30c7ab861
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# b36d7c73-0527-464f-9f48-f03978f44fbe
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-64k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 2.6747 |
| 7.1131 | 0.0002 | 5 | 2.6087 |
| 8.4284 | 0.0003 | 10 | 2.5324 |
| 7.3734 | 0.0005 | 15 | 2.6186 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
fahd200581/AILOOOGOAI
|
fahd200581
| 2025-01-21T15:00:21Z | 7 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-21T14:41:20Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AILOOOGOAI
---
# Ailooogoai
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AILOOOGOAI` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('fahd200581/AILOOOGOAI', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
kucher7serg/ruz1l
|
kucher7serg
| 2025-01-21T14:59:23Z | 140 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-21T14:57:52Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/ruz1l_000150_05_20250121130650_42.png
text: ruz1l a young man smiles --d 42
- output:
url: sample/ruz1l_000300_05_20250121131236_42.png
text: ruz1l a young man looks into the distance in surprise --d 42
- output:
url: sample/ruz1l_000450_05_20250121131824_42.png
text: ruz1l a young man is wildly scared --d 42
- output:
url: sample/ruz1l_000600_05_20250121132408_42.png
text: ruz1l a young man happy --d 42
- output:
url: sample/ruz1l_000750_05_20250121132955_42.png
text: ruz1l a young man full length photo --d 42
- output:
url: sample/ruz1l_000900_05_20250121133540_42.png
text: ruz1l a young man goes surfing --d 42
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ruz1l a young man
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# ruz1l
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `ruz1l a young man` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
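If you want to run it with the 🧨 diffusers library instead, a minimal sketch could look like this (the LoRA weight filename below is an assumption; check the repository's file listing for the actual name):
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline (the FLUX.1-dev non-commercial license applies).
pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')

# Load the LoRA from this repository.
# NOTE: 'ruz1l.safetensors' is an assumed filename.
pipeline.load_lora_weights('kucher7serg/ruz1l', weight_name='ruz1l.safetensors')

# The trigger phrase must appear in the prompt.
image = pipeline('ruz1l a young man smiles').images[0]
image.save('ruz1l.png')
```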
|
kostiantynk/6a09c1ea-4493-40af-9df4-50424480a269
|
kostiantynk
| 2025-01-21T14:59:04Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"license:gemma",
"region:us"
] | null | 2025-01-21T14:52:23Z |
---
library_name: peft
license: gemma
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6a09c1ea-4493-40af-9df4-50424480a269
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 98d61748b31bcb23_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/98d61748b31bcb23_train_data.json
type:
field_input: src
field_instruction: task
field_output: tgt
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/6a09c1ea-4493-40af-9df4-50424480a269
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/98d61748b31bcb23_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e4807618-cf8c-4200-9ce1-34371a66a038
wandb_project: Mine-SN56-22-Gradients-On-Demand
wandb_run: your_name
wandb_runid: e4807618-cf8c-4200-9ce1-34371a66a038
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6a09c1ea-4493-40af-9df4-50424480a269
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4129 | 0.0001 | 1 | 3.0375 |
| 2.2835 | 0.0004 | 3 | 2.9579 |
| 1.9105 | 0.0007 | 6 | 2.0824 |
| 1.2443 | 0.0011 | 9 | 1.2474 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nathanialhunt/5b13285f-38a0-4c51-a18d-e8025a07e8c9
|
nathanialhunt
| 2025-01-21T14:57:58Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"license:gemma",
"region:us"
] | null | 2025-01-21T14:50:18Z |
---
library_name: peft
license: gemma
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5b13285f-38a0-4c51-a18d-e8025a07e8c9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 98d61748b31bcb23_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/98d61748b31bcb23_train_data.json
type:
field_input: src
field_instruction: task
field_output: tgt
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/5b13285f-38a0-4c51-a18d-e8025a07e8c9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/98d61748b31bcb23_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e4807618-cf8c-4200-9ce1-34371a66a038
wandb_project: Birthday-SN56-5-Gradients-On-Demand
wandb_run: your_name
wandb_runid: e4807618-cf8c-4200-9ce1-34371a66a038
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5b13285f-38a0-4c51-a18d-e8025a07e8c9
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4129 | 0.0001 | 1 | 3.0375 |
| 2.2805 | 0.0004 | 3 | 2.9507 |
| 1.8911 | 0.0007 | 6 | 2.0529 |
| 1.2268 | 0.0011 | 9 | 1.2346 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
dimasik87/02984339-5881-4d63-b827-3e1a1186dd03
|
dimasik87
| 2025-01-21T14:57:06Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama_v1.1",
"base_model:adapter:TinyLlama/TinyLlama_v1.1",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T14:55:20Z |
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama_v1.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 02984339-5881-4d63-b827-3e1a1186dd03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama_v1.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- da3ed2ddf3136e5d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/da3ed2ddf3136e5d_train_data.json
type:
field_input: Documents
field_instruction: Question
field_output: Answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik87/02984339-5881-4d63-b827-3e1a1186dd03
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/da3ed2ddf3136e5d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b59b5a01-befc-45d4-a9af-2ea93cb31c85
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b59b5a01-befc-45d4-a9af-2ea93cb31c85
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 02984339-5881-4d63-b827-3e1a1186dd03
This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 8.1071 |
| 5.3075 | 0.0090 | 5 | 3.6301 |
| 2.7418 | 0.0180 | 10 | 2.5294 |
| 2.2275 | 0.0270 | 15 | 2.0721 |
| 2.1091 | 0.0360 | 20 | 1.8647 |
| 1.8717 | 0.0450 | 25 | 1.7659 |
| 1.7748 | 0.0541 | 30 | 1.7420 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
kokovova/f29bfaa2-dc23-4fa5-810a-c9cc9535777c
|
kokovova
| 2025-01-21T14:56:53Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama_v1.1",
"base_model:adapter:TinyLlama/TinyLlama_v1.1",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T14:55:05Z |
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama_v1.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f29bfaa2-dc23-4fa5-810a-c9cc9535777c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama_v1.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- da3ed2ddf3136e5d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/da3ed2ddf3136e5d_train_data.json
type:
field_input: Documents
field_instruction: Question
field_output: Answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: kokovova/f29bfaa2-dc23-4fa5-810a-c9cc9535777c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/da3ed2ddf3136e5d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b59b5a01-befc-45d4-a9af-2ea93cb31c85
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b59b5a01-befc-45d4-a9af-2ea93cb31c85
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# f29bfaa2-dc23-4fa5-810a-c9cc9535777c
This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 8.1071 |
| 5.3176 | 0.0090 | 5 | 3.6362 |
| 2.7207 | 0.0180 | 10 | 2.5180 |
| 2.1964 | 0.0270 | 15 | 2.0322 |
| 2.0648 | 0.0360 | 20 | 1.8257 |
| 1.839 | 0.0450 | 25 | 1.7338 |
| 1.7392 | 0.0541 | 30 | 1.7079 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
fedovtt/26d46fd0-6d97-4263-a6cd-32079ce6117e
|
fedovtt
| 2025-01-21T14:55:39Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T14:42:31Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 26d46fd0-6d97-4263-a6cd-32079ce6117e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9c65c4353c858f05_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9c65c4353c858f05_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fedovtt/26d46fd0-6d97-4263-a6cd-32079ce6117e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/9c65c4353c858f05_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4200e893-cfbb-40b3-bdb2-0eb0fdaa99b8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4200e893-cfbb-40b3-bdb2-0eb0fdaa99b8
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 26d46fd0-6d97-4263-a6cd-32079ce6117e
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.5574 |
| 2.5043 | 0.0004 | 5 | 2.3664 |
| 2.223 | 0.0008 | 10 | 1.9743 |
| 1.7793 | 0.0012 | 15 | 1.6835 |
| 1.6453 | 0.0016 | 20 | 1.5297 |
| 1.4489 | 0.0021 | 25 | 1.4597 |
| 1.4061 | 0.0025 | 30 | 1.4442 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
metricspace/GDPR_Input_Detection_and_Anonymization_0.5B
|
metricspace
| 2025-01-21T14:54:17Z | 113 | 3 | null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2024-09-09T12:15:37Z |
---
license: apache-2.0
---
# The GDPR Input Detection and Anonymization model
The **GDPR Input Detection and Anonymization model** is designed to protect sensitive information locally before it is processed by larger AI models in external clouds.
## Intended Use
The model sits between user inputs and external LLMs, acting like a firewall or proxy.
It analyzes each user prompt and computes two scores.
The first score indicates whether a small model is sufficient or a more capable model is needed to process the input.
The second score rates the sensitivity of the prompt. When sensitive information is detected, further cloud processing can be blocked, or the prompt can be replaced by an anonymized version.
## Complexity Scoring
The score rates the complexity of a task on a scale from 1 to 10, where **1 represents simple information retrieval of world knowledge** and **10 involves highly complex, expert-level analysis of domain-specific content**. At lower scores, smaller models such as **LLaMA** are sufficient, while mid to high scores suggest more powerful models such as those from OpenAI or Anthropic.
The score also increases as the number of instructions and constraints in a prompt grows.
This scoring system guides model selection, helping to **save costs** and **optimize resources** by matching the task's complexity to the appropriate computational power (a rough routing sketch follows the table below).
| **Score** | **Description** | **Example Tasks** | **Number of Instructions and Constraints** | **Suggested Models** |
|:---------:|--------------------------------------------|-------------------------------------------------------------------|:------:|------------------------------------|
| **1**     | Basic fact retrieval or identification      | "What is the capital of France?"                                   | 1 | Llama, smaller models |
| **2**     | Simple rephrasing or minor adjustments      | "Paraphrase the sentence: 'The dog barked loudly.'"                | 1 | Llama, GPT-4o Mini |
| **3**     | Summaries or brief overviews                | "Summarize the key points of a short text."                        | 1 | Llama, GPT-4o Mini |
| **4**     | List creation or short reasoning            | "List three reasons why the following exercise is more beneficial in Greece than in Italy. Here is the exercise: ..." | 2 | GPT-4o Mini |
| **5**     | Key point extraction, organized info        | "Extract challenges and solutions as bullet points. No summary, no intro. Here is the text: ..." | 2 | GPT-4o Mini |
| **6**     | Basic narrative or simple analysis          | "Write a short story about a character learning a lesson, written in Harry Potter style but not about Harry Potter, aimed at a coding amateur, humorous." | 3 | GPT-4o Mini |
| **7**     | Multi-step reasoning, scenario-based tasks  | "Based on the following sprint reports, describe the challenges for the project manager: team-building, psychological strain, deadline pressure, accumulating technical debt, etc." | 3+ | GPT-4o Mini or GPT-4o |
| **8**     | In-depth explanations or complex analysis   | "I will give you 3 text snippets. Explain how climate change affects ocean currents, find overlapping arguments and disagreements between the authors, and use the Harvard Concept to resolve the discrepancies." | 3+ | GPT-4o |
| **9**     | Advanced analysis or cross-disciplinary     | "Check the next 200 emails and classify conversations between engineering and sales as helpful or not helpful for the company. Give me the result as a CSV table." | 3+ | GPT-4o |
| **10**    | Expert-level analysis and synthesis         | "Write a report on AI's impact on middle-class real estate investment decisions, covering the main trends from 2025 to 2030 in consumption development, based on the following reports from ten different organizations. Keep cultural differences around the world in mind and adapt the findings to German culture." | 3+ | GPT-4o |
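As an illustration of that routing idea (not part of the model itself; the thresholds and model names below are assumptions), the complexity score can be mapped to a model tier:
```python
def route_by_complexity(complexity_score: int) -> str:
    """Map the 1-10 complexity score to a model tier.

    Thresholds and model names are illustrative assumptions, not part
    of the GDPR Input Detection model.
    """
    if complexity_score <= 3:
        return "local-llama"    # simple retrieval, rephrasing, short summaries
    elif complexity_score <= 7:
        return "gpt-4o-mini"    # list building, extraction, moderate reasoning
    else:
        return "gpt-4o"         # in-depth, cross-disciplinary, expert-level work


print(route_by_complexity(3))   # -> local-llama
print(route_by_complexity(9))   # -> gpt-4o
```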
## Sensitivity Scoring
The sensitivity score rates the **confidentiality** of a task on a scale from 0 to 3, where **0 represents non-sensitive, public information** (e.g., event announcements) and **3 involves highly confidential, critical data** (e.g., medical reports). Lower scores indicate content that can be shared freely, while higher scores require **secure handling** and **anonymization**. This system ensures sensitive content is properly protected, preventing any **unauthorized exposure**.
The table below includes a GDPR-related example for each sensitivity level; a simple gating sketch follows it.
| **Score** | **Sensitivity and Description** | **Example 1** | **Example 2** |
|:---------:|-----------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------|
| **0** | **Public, Non-Sensitive** — Free to share; contains no confidential or personal information. | “Community Picnic this Saturday at Greenfield Park, 11 AM. RSVP by Thursday!” | "Company Press Release: New sustainability initiatives for 2024." |
| **1**     | **Internal, Low Sensitivity** — For internal use only; minor impact if disclosed.                                | “New office hours starting next month: 9 AM - 6 PM Mon-Thurs, half-day Fri.”                              | "Employee List for Training Day: Names only, no personal data."                                        |
| **2** | **Confidential, Medium Sensitivity** — For managers or leadership; could impact finances or reputation if disclosed. | “Q3 Budget Breakdown: Marketing: $150K, R&D: $200K” | "Customer Analysis Report: Aggregate data only, segmented by region." |
| **3** | **Highly Confidential, Critical Sensitivity** — Restricted to top executives or medical professionals; severe harm if disclosed. | “Merger Negotiation: Acme to acquire 75% of BetaTech for $2.5B…” | "Employee Health Records: Only accessible to HR and necessary medical staff in compliance with GDPR." |
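A correspondingly simple gating rule could look like this (only a sketch; the policy mapping is an assumption and must follow your own compliance requirements):
```python
def gate_by_sensitivity(sensitivity_score: int) -> str:
    """Decide how a prompt may leave the local environment.

    The policy mapping is an illustrative assumption, not a fixed
    behaviour of the model.
    """
    if sensitivity_score == 0:
        return "forward"      # public content, safe to send as-is
    elif sensitivity_score in (1, 2):
        return "anonymize"    # replace entities before cloud processing
    else:  # score 3
        return "block"        # keep strictly on-premises
```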
## Anonymization and Re-Anonymization
The model can be used locally before and after processing data with external models.
| Step | Content |
|-------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **Anonymization and Re-Anonymization** | |
| **User Input** | We have a community picnic this Saturday at Greenfield Park, 11 AM. RSVP by Thursday. Write me an e-mail announcement. |
| **Anonymized Version by Local Model** | We have a community picnic this Saturday at Maplewood Park, Late Morning. RSVP by Thursday. Write me an e-mail announcement. |
| **Processed with Main LLM (GPT-4)** | **Subject:** 🍃 Community Picnic This Saturday at Maplewood Park! 🌞 <br> <br> Hello [Community/Neighborhood] Friends, We’re excited to announce our Community Picnic happening this Saturday at Maplewood Park! This is a wonderful opportunity to relax, connect with neighbors, and enjoy some outdoor fun. We’ll gather late morning (around 10:30 AM), so bring your favorite snacks, picnic blankets, and plenty of smiles! <br> <br> **Details:** <br> 📅 **Date:** Saturday, [Exact Date] <br> 📍 **Location:** Maplewood Park <br> ⏰ **Time:** Late Morning (around 10:30 AM) <br> <br> We’ll have games, music, and a few surprises, so it’ll be a great time for all ages! Feel free to bring family, friends, and any special treats you’d like to share with the group. <br> <br> **RSVP** <br> To help us plan, please RSVP by this Thursday by replying to this email or contacting [Contact Information]. We can’t wait to see you all there and make this picnic a memorable one for our community. Don’t forget your sunscreen and a lawn chair if you’d like some extra comfort! <br> <br> **Warm regards,** [Your Name/Community Group] |
| **Entities Replaced by Local Model** | **Subject:** 🍃 Community Picnic This Saturday at Greenfield Park! 🌞 <br> <br> Hello [Community/Neighborhood] Friends, We’re excited to announce our Community Picnic happening this Saturday at Greenfield Park! This is a wonderful opportunity to relax, connect with neighbors, and enjoy some outdoor fun. We’ll gather late morning (around 10:30 AM), so bring your favorite snacks, picnic blankets, and plenty of smiles! <br> <br> **Details:** <br> 📅 **Date:** Saturday, [Exact Date] <br> 📍 **Location:** Greenfield Park <br> ⏰ **Time:** Late Morning (around 10:30 AM) <br> <br> We’ll have games, music, and a few surprises, so it’ll be a great time for all ages! Feel free to bring family, friends, and any special treats you’d like to share with the group. <br> <br> **RSVP** <br> To help us plan, please RSVP by this Thursday by replying to this email or contacting [Contact Information]. We can’t wait to see you all there and make this picnic a memorable one for our community. Don’t forget your sunscreen and a lawn chair if you’d like some extra comfort! <br> <br> **Warm regards,** [Your Name/Community Group] |
## How to Use the Model
This model provides functionality for sensitivity assessment, complexity assessment, and entity anonymization. Each function is illustrated below with example code snippets.
<details>
<summary>Model setup and inference</summary>
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_path = "metricspace/GDPR_Input_Detection_and_Anonymization_0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16 if device == "cuda" else torch.float32).to(device)
import re
import json
# Set tokenizer tokens and model padding
tokenizer.pad_token = "<|im_start|>"
tokenizer.eos_token = "<|im_end|>"
tokenizer.padding_side = "left"
model.generation_config.pad_token_id = tokenizer.pad_token_id
# Define formats for different analysis types
formats = {
"sensitivity": """<|im_start|>system\nSensitivity<|im_end|>\n<|im_start|>user\n{text}<|im_end|>\n<|im_start|>assistant\n""",
"complexity": """<|im_start|>system\nComplexity<|im_end|>\n<|im_start|>user\n{text}<|im_end|>\n<|im_start|>assistant\n""",
"entity_detection": """<|im_start|>system\nEntity Detection<|im_end|>\n<|im_start|>user\n{text}<|im_end|>\n<|im_start|>assistant\n""",
"entity_swapping": """<|im_start|>system\nEntity Swapping<|im_end|>\n<|im_start|>user\nentities:\n{entities}\ntext:\n{text}<|im_end|>\n<|im_start|>assistant\n"""
}
def model_inference(text, mode="anonymization", max_new_tokens=2028, config=None, entity_mapping=None, return_entities=False, reverse_mapping=False):
if mode not in formats and mode != "anonymization":
raise ValueError("Invalid mode. Choose from 'sensitivity', 'complexity', 'entity_detection', 'anonymization'.")
# Configuration for anonymization
# The `config` dictionary specifies the anonymization behavior for each type of entity detected.
# Each key in `config` represents an entity type (e.g., "LOC" for location, "PERSON" for personal names),
# and the value assigned to that key determines how entities of that type should be anonymized:
#
# - "RANDOM": Replaces the entity with a randomly selected placeholder.
# - "GENERAL LOW", "GENERAL MEDIUM", "GENERAL HIGH": Replaces the entity with a generalized label,
# with the intensity level (LOW, MEDIUM, HIGH) controlling the specificity. For example,
# "GENERAL LOW" might use a more specific label ("Local Park") while "GENERAL HIGH" would use
# a broader label ("Recreational Area").
#
# This allows fine-grained control over anonymization, ensuring that different types of sensitive
# information can be replaced in ways that are appropriate for the context. For example:
# - "LOC": "RANDOM" replaces any detected location with a random placeholder.
# - "DATETIME": "GENERAL LOW" uses a lower-intensity generalization for dates and times.
#
# This flexibility enables custom anonymization policies to suit different privacy or obfuscation needs.
if config is None:
config = {
"LOC": "RANDOM",
"PERSON": "RANDOM",
"DEM": "RANDOM",
"CODE": "RANDOM",
"ORG": "GENERAL MEDIUM",
"DATETIME": "GENERAL LOW",
"QUANTITY": "RANDOM",
"MISC": "RANDOM",
}
# Anonymization Mode
if mode == "anonymization":
# Step 1: Entity detection
detection_prompt = formats["entity_detection"].format(text=text)
detection_inputs = tokenizer(detection_prompt, return_tensors="pt").to(device)
detection_output = model.generate(
**detection_inputs,
max_new_tokens=max_new_tokens,
use_cache=True,
eos_token_id=151645
)
detection_text = tokenizer.decode(detection_output[0], skip_special_tokens=True)
detected_entities = postprocess_entity_recognition(detection_text)
# Step 2: Select entities based on config
selected_entities = select_entities_based_on_json(detected_entities, config)
entities_str = "\n".join([f"{entity} : {label}" for entity, label in selected_entities])
# Step 3: Entity swapping for anonymization
swapping_prompt = formats["entity_swapping"].format(entities=entities_str, text=text)
swapping_inputs = tokenizer(swapping_prompt, return_tensors="pt").to(device)
swapping_output = model.generate(
**swapping_inputs,
max_new_tokens=max_new_tokens,
use_cache=True,
eos_token_id=151645
)
anonymized_text = tokenizer.decode(swapping_output[0], skip_special_tokens=True)
anonymized_text = anonymized_text.split("assistant\n", 1)[-1].strip() # Extract only the assistant's response
if return_entities:
return anonymized_text, entities_str
return anonymized_text
# Entity Restoration Mode using entity_swapping
elif mode == "entity_swapping" and entity_mapping:
# Reverse the entity mapping
if reverse_mapping:
reversed_mapping = []
for line in entity_mapping.splitlines():
if ':' in line: # Ensure the line contains a colon
left, right = map(str.strip, line.split(":", 1)) # Split and strip spaces
reversed_mapping.append(f"{right} : {left}") # Reverse and format
entity_mapping = "\n".join(reversed_mapping)
# Create the swapping prompt with the aggregated reversed mappings
swapping_prompt = formats["entity_swapping"].format(entities=entity_mapping, text=text)
swapping_inputs = tokenizer(swapping_prompt, return_tensors="pt").to(device)
swapping_output = model.generate(
**swapping_inputs,
max_new_tokens=max_new_tokens,
use_cache=True,
eos_token_id=151645
)
# Decode and extract the restored text
output_text = tokenizer.decode(swapping_output[0], skip_special_tokens=True)
output_text = output_text.split("assistant\n", 1)[-1].strip() # Extract only the assistant's response
return output_text
# Other modes (sensitivity, complexity, entity_detection)
else:
prompt = formats[mode].format(text=text)
model_inputs = tokenizer(prompt, return_tensors="pt").to(device)
generation_output = model.generate(
**model_inputs,
max_new_tokens=5,
use_cache=True,
eos_token_id=151645
)
full_output = tokenizer.decode(generation_output[0], skip_special_tokens=True)
if mode in ["sensitivity", "complexity"]:
assistant_text = full_output.split("assistant\n", 1)[-1].strip()
return assistant_text
elif mode == "entity_detection":
return postprocess_entity_recognition(full_output)
# Function to parse entity detection output
def postprocess_entity_recognition(detection_output: str) -> dict:
output_json = {}
entity_pattern = re.compile(
r'(?P<entity>[\w\s]+)--(?P<type>[\w]+)--(?P<random>[\w\s]+)--(?P<generalizations>.+)'
)
generalization_pattern = re.compile(r'([\w\s]+)::([\w\s]+)')
lines = detection_output.strip().split("\n")
for line in lines:
match = entity_pattern.search(line)
if match:
entity_name = match.group("entity").strip()
entity_type = match.group("type").strip()
random_replacement = match.group("random").strip()
generalizations = []
for gen_match in generalization_pattern.findall(match.group("generalizations")):
first, second = gen_match
# Check if the first part is a digit (score) and swap if needed
if first.isdigit() and not second.isdigit():
score = first
label = second
generalizations.append([label.strip(), score.strip()])
elif not first.isdigit() and second.isdigit():
label = first
score = second
generalizations.append([label.strip(), score.strip()])
output_json[entity_name] = {
"TYPE": entity_type,
"RANDOM": random_replacement,
"GENERAL": generalizations
}
return output_json
# Function to select entities based on config
def select_entities_based_on_json(prediction_json, entity_json):
entities = []
for key, value in prediction_json.items():
entity_type = value["TYPE"]
if entity_type.upper() in entity_json:
            anonymization_type = entity_json[entity_type.upper()]  # use the upper-cased key that was checked above
if anonymization_type == "RANDOM":
entities.append([key, value["RANDOM"]])
elif "GENERAL" in anonymization_type:
intensity = anonymization_type.split(" ")[1]
if intensity == "LOW" and value["GENERAL"]:
entities.append([key, value["GENERAL"][0][0]])
elif intensity == "MEDIUM":
for gen in value["GENERAL"]:
if int(gen[1]) >= 4:
entities.append([key, gen[0]])
break
elif intensity == "HIGH":
if value["GENERAL"]:
entities.append([key, value["GENERAL"][0][0]])
return entities
```
</details>
### 1. Sensitivity and Complexity Analysis
Example text
```
We have a community picnic at Greenfield Park, it is on thursday at 11 AM. Write me an e-mail announcement!
```
The sensitivity feature scores how confidential a given text is, and the complexity feature rates how demanding the task is.
```python
text = "We have a community picnic at Greenfield Park, it is on thursday at 11 AM. Write me an e-mail annoucment!"
# Generate sensitivity score
sensitivity_score = model_inference(text, mode="sensitivity")
print(f"Sensitivity Score: {sensitivity_score}")
# Generate complexity score
complexity_score = model_inference(text, mode="complexity")
print(f"Complexity: {complexity_score}")
```
Output
```
Sensitivity Score: 0
Complexity: 3
```
### 2. Anonymization and Re-Anonymization
To protect sensitive information, the model detects specific entities in the text and anonymizes them based on a pre-configured setting.
```python
# Anonymize the text
anonymized_text = model_inference(text, mode="anonymization")
print(f"Anonymized Text: {anonymized_text}\n")
```
```python
# Restore the original text
anonymized_text, entity_mapping = model_inference(text, mode="anonymization", return_entities=True)
print(f"Entity Mapping:\n{entity_mapping}\n")
restored_text = model_inference(anonymized_text, mode="entity_swapping", entity_mapping=entity_mapping, reverse_mapping=True)
print(f"Restored Text: {restored_text}")
```
Output
```
Anonymized Text: We have a community picnic at Sunnyvale Park, it is on A Day of the Week at Morning. Write me an e-mail announcement!
Entity Mapping:
Greenfield Park : Sunnyvale Park
thursday : A Day of the Week
11 AM : Morning
Restored Text: We have a community picnic at Greenfield Park, it is on thursday at 11 AM. Write me an e-mail announcement!
```
Normally you would process the anonymized version with an external LLM and then swap the original entities back into the result, as sketched below.
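A minimal sketch of that round trip, reusing `model_inference` from the setup above (`call_external_llm` is a hypothetical placeholder for whichever cloud API client you use):
```python
# 1. Anonymize locally and keep the entity mapping.
anonymized_text, entity_mapping = model_inference(
    text, mode="anonymization", return_entities=True
)

# 2. Send only the anonymized version to the external LLM.
#    `call_external_llm` is a placeholder for your own API client.
llm_response = call_external_llm(anonymized_text)

# 3. Swap the original entities back into the response locally.
restored_response = model_inference(
    llm_response,
    mode="entity_swapping",
    entity_mapping=entity_mapping,
    reverse_mapping=True,
)
print(restored_response)
```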
## Limitations
For **complexity and sensitivity scoring**, the model can process inputs up to **2,048 tokens**. However, for **entity detection**, the combined limit for input and output is **3,000 tokens**. If a text is too long or contains numerous entities, this limit may be exceeded, which could result in truncated outputs or inconsistent behavior. To ensure accurate results, it's recommended to keep input lengths within these token limits.
## Language Support
The model supports 29 languages. It was trained on a mixture of 80% English and 20% multilingual examples.
## License
Apache License 2.0
|
kucher7serg/ruz1l2
|
kucher7serg
| 2025-01-21T14:53:14Z | 141 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-16T19:38:55Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/ruz1l_000150_05_20250121130650_42.png
text: ruz1l a young man smiles --d 42
- output:
url: sample/ruz1l_000300_05_20250121131236_42.png
text: ruz1l a young man looks into the distance in surprise --d 42
- output:
url: sample/ruz1l_000450_05_20250121131824_42.png
text: ruz1l a young man is wildly scared --d 42
- output:
url: sample/ruz1l_000600_05_20250121132408_42.png
text: ruz1l a young man happy --d 42
- output:
url: sample/ruz1l_000750_05_20250121132955_42.png
text: ruz1l a young man full length photo --d 42
- output:
url: sample/ruz1l_000900_05_20250121133540_42.png
text: ruz1l a young man goes surfing --d 42
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ruz1l a young man
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# ruz1l
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `ruz1l a young man` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
gaianet/DeepSeek-R1-Distill-Qwen-7B-GGUF
|
gaianet
| 2025-01-21T14:51:57Z | 993 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-01-21T01:50:55Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
license: apache-2.0
license_link: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B/blob/main/LICENSE
model_creator: deepseek-ai
model_name: DeepSeek-R1-Distill-Qwen-7B
quantized_by: Second State Inc.
library_name: transformers
---
# DeepSeek-R1-Distill-Qwen-7B-GGUF
## Original Model
[deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)
## Run with GaiaNet
**Prompt template**
prompt template: coming soon
**Context size**
chat_ctx_size: `128000`
**Run with GaiaNet**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
*Quantized with llama.cpp b4519*
|
great0001/13e5f5de-bc2c-43ab-9c16-ff0337aa04ca
|
great0001
| 2025-01-21T14:50:44Z | 12 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:peft-internal-testing/tiny-dummy-qwen2",
"base_model:adapter:peft-internal-testing/tiny-dummy-qwen2",
"region:us"
] | null | 2025-01-21T14:50:19Z |
---
library_name: peft
base_model: peft-internal-testing/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 13e5f5de-bc2c-43ab-9c16-ff0337aa04ca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: peft-internal-testing/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0e23f2f8e5ee0c7e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0e23f2f8e5ee0c7e_train_data.json
type:
field_input: chosen
field_instruction: prompt
field_output: rejected
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/13e5f5de-bc2c-43ab-9c16-ff0337aa04ca
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/0e23f2f8e5ee0c7e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 377241ed-8c47-4895-a452-0912cff45921
wandb_project: Birthday-SN56-14-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 377241ed-8c47-4895-a452-0912cff45921
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 13e5f5de-bc2c-43ab-9c16-ff0337aa04ca
This model is a fine-tuned version of [peft-internal-testing/tiny-dummy-qwen2](https://huggingface.co/peft-internal-testing/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.9375 | 0.0028 | 1 | 11.9321 |
| 11.9333 | 0.0083 | 3 | 11.9321 |
| 11.9323 | 0.0167 | 6 | 11.9320 |
| 11.935 | 0.0250 | 9 | 11.9320 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
adammandic87/eb977de0-ecdc-4a78-a7ff-65d542503fca
|
adammandic87
| 2025-01-21T14:49:05Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:peft-internal-testing/tiny-dummy-qwen2",
"base_model:adapter:peft-internal-testing/tiny-dummy-qwen2",
"region:us"
] | null | 2025-01-21T14:48:42Z |
---
library_name: peft
base_model: peft-internal-testing/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: eb977de0-ecdc-4a78-a7ff-65d542503fca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: peft-internal-testing/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0e23f2f8e5ee0c7e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0e23f2f8e5ee0c7e_train_data.json
type:
field_input: chosen
field_instruction: prompt
field_output: rejected
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/eb977de0-ecdc-4a78-a7ff-65d542503fca
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/0e23f2f8e5ee0c7e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 377241ed-8c47-4895-a452-0912cff45921
wandb_project: Birthday-SN56-13-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 377241ed-8c47-4895-a452-0912cff45921
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# eb977de0-ecdc-4a78-a7ff-65d542503fca
This model is a fine-tuned version of [peft-internal-testing/tiny-dummy-qwen2](https://huggingface.co/peft-internal-testing/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.9375 | 0.0028 | 1 | 11.9321 |
| 11.9333 | 0.0083 | 3 | 11.9321 |
| 11.9323 | 0.0167 | 6 | 11.9320 |
| 11.935 | 0.0250 | 9 | 11.9320 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso12/758c18d6-6353-43b9-827a-9e2a20f3f1ed
|
lesso12
| 2025-01-21T14:46:02Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/zephyr-sft",
"base_model:adapter:unsloth/zephyr-sft",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T11:44:53Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/zephyr-sft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 758c18d6-6353-43b9-827a-9e2a20f3f1ed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/zephyr-sft
bf16: true
chat_template: llama3
datasets:
- data_files:
- 6bb273fb8d3c0253_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6bb273fb8d3c0253_train_data.json
type:
field_input: condition
field_instruction: drugName
field_output: review
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso12/758c18d6-6353-43b9-827a-9e2a20f3f1ed
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/6bb273fb8d3c0253_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f44a8599-bd2c-4b24-9468-fb17670debf8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f44a8599-bd2c-4b24-9468-fb17670debf8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 758c18d6-6353-43b9-827a-9e2a20f3f1ed
This model is a fine-tuned version of [unsloth/zephyr-sft](https://huggingface.co/unsloth/zephyr-sft) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0004 | 10 | nan |
| 0.0 | 0.0006 | 15 | nan |
| 0.0 | 0.0008 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
vertings6/fe5697ba-eae2-45bb-a32c-bffb7663e094
|
vertings6
| 2025-01-21T14:46:00Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"license:gemma",
"region:us"
] | null | 2025-01-21T14:14:53Z |
---
library_name: peft
license: gemma
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fe5697ba-eae2-45bb-a32c-bffb7663e094
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 98d61748b31bcb23_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/98d61748b31bcb23_train_data.json
type:
field_input: src
field_instruction: task
field_output: tgt
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: vertings6/fe5697ba-eae2-45bb-a32c-bffb7663e094
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/98d61748b31bcb23_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e4807618-cf8c-4200-9ce1-34371a66a038
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e4807618-cf8c-4200-9ce1-34371a66a038
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# fe5697ba-eae2-45bb-a32c-bffb7663e094
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 4.7657 |
| 2.9262 | 0.0012 | 5 | 3.4368 |
| 2.2913 | 0.0024 | 10 | 2.7031 |
| 2.2022 | 0.0036 | 15 | 2.4745 |
| 2.1667 | 0.0048 | 20 | 2.3293 |
| 2.18 | 0.0060 | 25 | 2.2495 |
| 2.3111 | 0.0072 | 30 | 2.2318 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
sergioalves/5fe532b3-3f3c-46f1-967d-c14a99d7a24f
|
sergioalves
| 2025-01-21T14:45:15Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:adapter:NousResearch/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | 2025-01-21T14:39:00Z |
---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5fe532b3-3f3c-46f1-967d-c14a99d7a24f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9f860ccb7e806546_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9f860ccb7e806546_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: sergioalves/5fe532b3-3f3c-46f1-967d-c14a99d7a24f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/9f860ccb7e806546_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 92906d73-ae0c-43b3-9735-14fe2124bf2a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 92906d73-ae0c-43b3-9735-14fe2124bf2a
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 5fe532b3-3f3c-46f1-967d-c14a99d7a24f
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4153
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0048 | 1 | 4.3587 |
| 3.4221 | 0.0239 | 5 | 3.6436 |
| 3.0062 | 0.0478 | 10 | 2.7344 |
| 2.4666 | 0.0717 | 15 | 2.5904 |
| 2.5231 | 0.0956 | 20 | 2.4611 |
| 2.3598 | 0.1195 | 25 | 2.4224 |
| 2.4138 | 0.1434 | 30 | 2.4153 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mradermacher/Deepseeker-Kunou-Qwen2.5-14b-GGUF
|
mradermacher
| 2025-01-21T14:45:06Z | 460 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Statuo/Deepseeker-Kunou-Qwen2.5-14b",
"base_model:quantized:Statuo/Deepseeker-Kunou-Qwen2.5-14b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-21T12:16:17Z |
---
base_model: Statuo/Deepseeker-Kunou-Qwen2.5-14b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Statuo/Deepseeker-Kunou-Qwen2.5-14b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Deepseeker-Kunou-Qwen2.5-14b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
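As a rough starting point, here is a minimal loading sketch using `llama-cpp-python` (not part of the original card; the local filename and generation settings are assumptions, and any quant from the table below works the same way):
```python
# Minimal sketch, assuming llama-cpp-python is installed and one quant file
# (here Q4_K_M, chosen arbitrarily) has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="Deepseeker-Kunou-Qwen2.5-14b.Q4_K_M.gguf",  # any quant from the table
    n_ctx=4096,        # context window; adjust to your memory budget
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU-only
)

out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```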
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Deepseeker-Kunou-Qwen2.5-14b-GGUF/resolve/main/Deepseeker-Kunou-Qwen2.5-14b.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Deepseeker-Kunou-Qwen2.5-14b-GGUF/resolve/main/Deepseeker-Kunou-Qwen2.5-14b.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Deepseeker-Kunou-Qwen2.5-14b-GGUF/resolve/main/Deepseeker-Kunou-Qwen2.5-14b.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Deepseeker-Kunou-Qwen2.5-14b-GGUF/resolve/main/Deepseeker-Kunou-Qwen2.5-14b.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Deepseeker-Kunou-Qwen2.5-14b-GGUF/resolve/main/Deepseeker-Kunou-Qwen2.5-14b.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deepseeker-Kunou-Qwen2.5-14b-GGUF/resolve/main/Deepseeker-Kunou-Qwen2.5-14b.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deepseeker-Kunou-Qwen2.5-14b-GGUF/resolve/main/Deepseeker-Kunou-Qwen2.5-14b.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deepseeker-Kunou-Qwen2.5-14b-GGUF/resolve/main/Deepseeker-Kunou-Qwen2.5-14b.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deepseeker-Kunou-Qwen2.5-14b-GGUF/resolve/main/Deepseeker-Kunou-Qwen2.5-14b.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Deepseeker-Kunou-Qwen2.5-14b-GGUF/resolve/main/Deepseeker-Kunou-Qwen2.5-14b.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Deepseeker-Kunou-Qwen2.5-14b-GGUF/resolve/main/Deepseeker-Kunou-Qwen2.5-14b.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ValiantLabs/Llama2-70B-ShiningValiant
|
ValiantLabs
| 2025-01-21T14:44:43Z | 124 | 74 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"shining-valiant",
"valiant",
"valiant-labs",
"llama-2",
"llama-2-chat",
"70b",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-31T17:22:30Z |
---
language:
- en
pipeline_tag: text-generation
tags:
- shining-valiant
- valiant
- valiant-labs
- llama
- llama-2
- llama-2-chat
- 70b
model_type: llama
license: llama2
---
**This model is legacy - we recommend [Shining Valiant 2](https://huggingface.co/ValiantLabs/Llama3.1-70B-ShiningValiant2) for Llama 3.1 70b!**

Shining Valiant is a chat model built on the Llama 2 architecture, finetuned on our data for insight, creativity, passion, and friendliness.
- Uses the llama-2-70b-chat model, with safetensors
- Finetuned on multiple runs across private and public data
- Data focused on knowledge, enthusiasm, and structured reasoning
- **Our new release features greatly expanded personality capability**, bringing a more immersive chat experience
## Version
The current version is **1.4!** We've greatly expanded our personality dataset and fixed some bugs to deliver our strongest real-chat experience so far.
(We're also exploring **new models and architectures**, to deliver helpful open source capabilities for users and creators!)
Previous versions remain available in the repository. New models will be released for everyone once our team's training and validation process is complete.
## Evaluation
Version 1.4 is awaiting results from the Open LLM leaderboard.
## Prompting Guide
Shining Valiant uses the same prompt format as Llama 2 Chat - feel free to use your existing prompts and scripts!
A few examples of different formats:
1. [INST] Good morning! Can you let me know how to parse a text file and turn the semicolons into commas? [/INST]
2. [INST] (You are an intelligent, helpful AI assistant.) Hello, can you write me a thank you letter? [/INST]
3. [INST] <<SYS>>You are an intelligent, helpful AI assistant.<</SYS>> Deep dive about a country with an interesting history: [/INST]
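As an illustration, a minimal generation sketch using format 2 above with the `transformers` pipeline (the sampling settings are our assumptions, not a recommendation from this card):
```python
# Minimal sketch: builds a Llama 2 Chat style prompt by hand and generates with
# the transformers pipeline. Sampling parameters are illustrative assumptions.
import torch
import transformers

pipe = transformers.pipeline(
    "text-generation",
    model="ValiantLabs/Llama2-70B-ShiningValiant",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = (
    "[INST] <<SYS>>You are an intelligent, helpful AI assistant.<</SYS>> "
    "Hello, can you write me a thank you letter? [/INST]"
)
out = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```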
## The Model
Shining Valiant is built on top of Spell Blade, which uses Llama 2's 70b parameter architecture and features upgraded general and chat capability.
Our private data focuses primarily on applying Shining Valiant's personality: she's friendly, enthusiastic, insightful, knowledgeable, and loves to learn!
With this release, the personality component of our Shining Valiant dataset has been greatly improved. We're excited to use it in future releases of this model and others.

Shining Valiant is created by [Valiant Labs.](http://valiantlabs.ca/)
We care about open source.
For everyone to use.
We encourage others to finetune further from our models.
|
kostiantynk1205/5fab3088-ce91-4ec5-90e3-847e944bbc7d
|
kostiantynk1205
| 2025-01-21T14:44:41Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"region:us"
] | null | 2025-01-21T14:11:39Z |
---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5fab3088-ce91-4ec5-90e3-847e944bbc7d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ea142ac27856f425_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ea142ac27856f425_train_data.json
type:
field_instruction: 'Purpose: Clinical Focus'
field_output: Description
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/5fab3088-ce91-4ec5-90e3-847e944bbc7d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/ea142ac27856f425_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8d608405-ef66-4e7a-9976-6a448c75bbeb
wandb_project: Birthday-SN56-6-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8d608405-ef66-4e7a-9976-6a448c75bbeb
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5fab3088-ce91-4ec5-90e3-847e944bbc7d
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 3 | nan |
| 0.0 | 0.0002 | 6 | nan |
| 0.0 | 0.0002 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ValiantLabs/Llama3.1-70B-ShiningValiant2
|
ValiantLabs
| 2025-01-21T14:43:45Z | 11 | 3 | null |
[
"safetensors",
"llama",
"shining-valiant",
"shining-valiant-2",
"valiant",
"valiant-labs",
"llama-3.1",
"llama-3.1-instruct",
"llama-3.1-instruct-70b",
"llama-3",
"llama-3-instruct",
"llama-3-instruct-70b",
"70b",
"science",
"physics",
"biology",
"chemistry",
"compsci",
"computer-science",
"engineering",
"logic",
"rationality",
"advanced",
"expert",
"technical",
"conversational",
"chat",
"instruct",
"text-generation",
"en",
"dataset:sequelbox/Celestia",
"dataset:sequelbox/Spurline",
"dataset:sequelbox/Supernova",
"base_model:meta-llama/Llama-3.1-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-70B-Instruct",
"license:llama3.1",
"model-index",
"region:us"
] |
text-generation
| 2024-10-30T00:27:06Z |
---
language:
- en
pipeline_tag: text-generation
tags:
- shining-valiant
- shining-valiant-2
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-70b
- llama-3
- llama-3-instruct
- llama-3-instruct-70b
- 70b
- science
- physics
- biology
- chemistry
- compsci
- computer-science
- engineering
- logic
- rationality
- advanced
- expert
- technical
- conversational
- chat
- instruct
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
datasets:
- sequelbox/Celestia
- sequelbox/Spurline
- sequelbox/Supernova
model_type: llama
model-index:
- name: Llama3.1-70B-ShiningValiant2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-Shot)
type: Winogrande
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.93
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU College Biology (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 93.75
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU High School Biology (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 91.94
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU Conceptual Physics (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.70
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU College Physics (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.78
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU High School Physics (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.91
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU College Chemistry (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.00
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU High School Chemistry (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.86
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU Astronomy (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 89.47
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU College Computer Science (5-Shot)
type: MMLU
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.00
name: acc
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 53.55
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-70B-ShiningValiant2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 52.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-70B-ShiningValiant2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 27.19
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-70B-ShiningValiant2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 19.02
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-70B-ShiningValiant2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 18.48
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-70B-ShiningValiant2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 46.37
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-70B-ShiningValiant2
name: Open LLM Leaderboard
license: llama3.1
---

Shining Valiant 2 is a chat model built on Llama 3.1 70b, finetuned on our data for friendship, insight, knowledge and enthusiasm.
- Finetuned on [meta-llama/Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct) for best available general performance
- Trained on a variety of high quality open source data; focused on science, engineering, technical knowledge, and structured reasoning
- Also available for [Llama 3.1 8b](https://huggingface.co/ValiantLabs/Llama3.1-8B-ShiningValiant2) and [Llama 3.2 3b!](https://huggingface.co/ValiantLabs/Llama3.2-3B-ShiningValiant2)
## Version
This is the **2024-10-30** release of Shining Valiant 2 for Llama 3.1 70b.
This release uses our newest datasets, open-sourced for everyone's use, including our expanded [science-instruct dataset](https://huggingface.co/datasets/sequelbox/Celestia). This release features improvements in logical thinking and structured reasoning as well as physics, chemistry, biology, astronomy, Earth science, computer science, and information theory.
Future upgrades will continue to expand Shining Valiant's technical knowledge base.
Help us and recommend Shining Valiant 2 to your friends!
## Prompting Guide
Shining Valiant 2 uses the [Llama 3.1 Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct) prompt format. The example script below can be used as a starting point for general chat:
```python
import transformers
import torch
model_id = "ValiantLabs/Llama3.1-70B-ShiningValiant2"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are an AI assistant."},
{"role": "user", "content": "What is the role of lysosomal enzymes in the regulation of cellular processes?"}
]
outputs = pipeline(
messages,
max_new_tokens=2048,
)
print(outputs[0]["generated_text"][-1])
```
## The Model
Shining Valiant 2 is built on top of Llama 3.1 70b Instruct.
The current version of Shining Valiant 2 is trained on technical knowledge using [sequelbox/Celestia](https://huggingface.co/datasets/sequelbox/Celestia), complex reasoning using [sequelbox/Spurline](https://huggingface.co/datasets/sequelbox/Spurline), and general chat capability using [sequelbox/Supernova.](https://huggingface.co/datasets/sequelbox/Supernova)
We're super excited that Shining Valiant's dataset has been fully open-sourced! She's friendly, enthusiastic, insightful, knowledgeable, and loves to learn! Magical.

Shining Valiant 2 is created by [Valiant Labs.](http://valiantlabs.ca/)
[Check out our HuggingFace page for our open-source Build Tools models, including code-specialist Enigma and DevOps-expert Esper 2!](https://huggingface.co/ValiantLabs)
We care about open source.
For everyone to use.
We encourage others to finetune further from our models.
|
dimasik87/d2cd3ae6-17b4-4dcf-b76b-671ea7c636e8
|
dimasik87
| 2025-01-21T14:42:28Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:peft-internal-testing/tiny-dummy-qwen2",
"base_model:adapter:peft-internal-testing/tiny-dummy-qwen2",
"region:us"
] | null | 2025-01-21T14:42:01Z |
---
library_name: peft
base_model: peft-internal-testing/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d2cd3ae6-17b4-4dcf-b76b-671ea7c636e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: peft-internal-testing/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0e23f2f8e5ee0c7e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0e23f2f8e5ee0c7e_train_data.json
type:
field_input: chosen
field_instruction: prompt
field_output: rejected
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik87/d2cd3ae6-17b4-4dcf-b76b-671ea7c636e8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/0e23f2f8e5ee0c7e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 377241ed-8c47-4895-a452-0912cff45921
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 377241ed-8c47-4895-a452-0912cff45921
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# d2cd3ae6-17b4-4dcf-b76b-671ea7c636e8
This model is a fine-tuned version of [peft-internal-testing/tiny-dummy-qwen2](https://huggingface.co/peft-internal-testing/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0056 | 1 | 11.9330 |
| 11.9332 | 0.0278 | 5 | 11.9331 |
| 11.9326 | 0.0556 | 10 | 11.9330 |
| 11.9331 | 0.0834 | 15 | 11.9328 |
| 11.9327 | 0.1113 | 20 | 11.9328 |
| 11.9323 | 0.1391 | 25 | 11.9329 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lisetteahubnerh/watercolor
|
lisetteahubnerh
| 2025-01-21T14:42:22Z | 81 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-01-21T14:17:27Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
A little fairy princess with sparkling wings and flowing dress, in an
enchanted garden filled with glowing flowers and fireflies,The overall style
of the illustration is cartoon watercolor style
output:
url: images/1.png
- text: >-
A magical candy house with gingerbread walls, in a lollipop forest with
chocolate rivers and cotton candy clouds,The overall style of the
illustration is cartoon watercolor style
output:
url: images/2.png
- text: >-
A sleeping unicorn with rainbow mane, in a misty meadow covered with morning
dew and silver moonflowers.The overall style of the illustration is cartoon
watercolor style
output:
url: images/3.png
- text: >-
Two dolphins are playing with a ball in the ocean.The overall style of the
illustration is cartoon watercolor style
output:
url: images/4.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: '{}.The overall style of the illustration is cartoon watercolor style'
license: apache-2.0
---
# fluxLora_watercolor
<Gallery />
## Model description
Generates illustrations in the style of children's watercolor picture books.
## Trigger words
You should use `{}.The overall style of the illustration is cartoon watercolor style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
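A minimal loading sketch with 🧨 diffusers follows; this mirrors standard FLUX LoRA usage and is an assumption, not taken from the original card:
```python
# Minimal sketch, assuming standard diffusers FLUX LoRA usage and that the repo
# contains a single LoRA safetensors file that load_lora_weights can pick up.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("lisetteahubnerh/watercolor")

prompt = (
    "A sleeping fox curled under a giant mushroom."
    "The overall style of the illustration is cartoon watercolor style"
)
image = pipe(prompt).images[0]
image.save("watercolor_fox.png")
```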
|
BKKSPY/Masmodel
|
BKKSPY
| 2025-01-21T14:40:12Z | 92 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-01-21T14:21:04Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MLLW
---
# Masmodel
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MLLW` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BKKSPY/Masmodel', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
ValiantLabs/CodeLlama-70B-Esper
|
ValiantLabs
| 2025-01-21T14:39:04Z | 15 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"esper",
"dev-ops",
"developer",
"code",
"code-instruct",
"valiant",
"valiant-labs",
"code-llama",
"llama-2",
"llama-2-chat",
"70b",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-12T13:39:37Z |
---
language:
- en
pipeline_tag: text-generation
tags:
- esper
- dev-ops
- developer
- code
- code-instruct
- valiant
- valiant-labs
- code-llama
- llama
- llama-2
- llama-2-chat
- 70b
model_type: llama
license: apache-2.0
---

Esper-70b is the DevOps code specialist!
- Overall code capabilities with a DevOps focus: specialized in scripting language code, Terraform files, Dockerfiles, YAML, and more!
- Also trained on further code-instruct and chat-instruct data for generally improved chat quality.
- Built on llama-2-70b architecture, using [CodeLlama-70b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf) as the base model.
(If you're looking for a friendly general-purpose chat model, try ours: [llama-13b](https://huggingface.co/ValiantLabs/ShiningValiantXS) and [70b](https://huggingface.co/ValiantLabs/ShiningValiant))
## Version
This is Version **1.0** of Esper-70b.
The current version of Esper-70b uses [CodeLlama-70b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf) trained on two sets of data:
- code from [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup), with our sub-selection focused on scripting languages, Terraform/build scripts, and YAML files.
- our private data for general code-instruct performance, chat-quality response, and user satisfaction. (A portion of this data was also used in [Shining Valiant 1.4](https://huggingface.co/ValiantLabs/ShiningValiant), our previous general-purpose Llama 70b finetune.)
Esper-70b is the newest release in our Build Tools campaign, to deliver helpful open source capabilities for users and creators. We're working on more tools to come! For everyone to use :)
We're planning on continually upgrading this model with more data, to improve existing capabilities and add new ones relevant to a DevOps user base.
## Prompting Guide
Esper-70b uses the following recommended chat format, based on CodeLlama-70b chat format:
Source: system\n\n You are Esper, an expert technical assistant AI. Provide high quality code to the user. <step> Source: user\n\n Hi! Can you explain this Terraform code, thank you:
(Generally, anything that works with [CodeLlama-70b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf) will work with Esper-70b.)
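As a sketch only, the base tokenizer's chat template can be used to build this format programmatically; this assumes the chat template inherited from CodeLlama-70b-Instruct reproduces the `Source: ... <step> ...` layout above, and the generation settings are illustrative:
```python
# Sketch, not from the original card: relies on the chat template inherited from
# CodeLlama-70b-Instruct to produce the "Source: ... <step> ..." prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ValiantLabs/CodeLlama-70B-Esper"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Esper, an expert technical assistant AI. Provide high quality code to the user."},
    {"role": "user", "content": "Hi! Can you explain this Terraform code, thank you: <your code here>"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```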

Esper-70b is created by [Valiant Labs.](http://valiantlabs.ca/)
Try our flagship chat model, [Shining Valiant!](https://huggingface.co/ValiantLabs/ShiningValiant)
Check out our function-calling model [Fireplace](https://huggingface.co/ValiantLabs/Fireplace-13b) for Llama-13b!
We care about open source.
For everyone to use.
We encourage others to finetune further from our models.
|
jiinking/3_layer_GQA2_llama_model
|
jiinking
| 2025-01-21T14:38:44Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T14:09:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
adammandic87/faaf7534-3d39-48b1-a783-a136b82e51a5
|
adammandic87
| 2025-01-21T14:38:39Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna-open-llama-3b-v2",
"base_model:adapter:heegyu/WizardVicuna-open-llama-3b-v2",
"license:apache-2.0",
"region:us"
] | null | 2025-01-21T14:36:46Z |
---
library_name: peft
license: apache-2.0
base_model: heegyu/WizardVicuna-open-llama-3b-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: faaf7534-3d39-48b1-a783-a136b82e51a5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna-open-llama-3b-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f2dedbddc7d6df54_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f2dedbddc7d6df54_train_data.json
type:
field_input: structural_annotation
field_instruction: sequence
field_output: functional_annotation
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/faaf7534-3d39-48b1-a783-a136b82e51a5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f2dedbddc7d6df54_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6948954f-ff52-442b-9973-a3e89572c631
wandb_project: birthday-sn56-19-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6948954f-ff52-442b-9973-a3e89572c631
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# faaf7534-3d39-48b1-a783-a136b82e51a5
This model is a fine-tuned version of [heegyu/WizardVicuna-open-llama-3b-v2](https://huggingface.co/heegyu/WizardVicuna-open-llama-3b-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1967
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3105 | 0.0006 | 1 | 0.4071 |
| 0.4019 | 0.0019 | 3 | 0.4002 |
| 0.2902 | 0.0038 | 6 | 0.2981 |
| 0.1965 | 0.0057 | 9 | 0.1967 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
AmberYifan/Qwen2.5-7B-sft-peers-pool
|
AmberYifan
| 2025-01-21T14:38:08Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-17T09:25:52Z |
---
base_model: AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Qwen2.5-7B-sft-peers-pool
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen2.5-7B-sft-peers-pool
This model is a fine-tuned version of [AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-7B-sft-peers-pool", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/7yo5f5zf)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
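For orientation, a condensed sketch of what a TRL DPO run typically looks like is shown below; the dataset, hyperparameters, and column handling are our assumptions, not the exact recipe used for this model:
```python
# Illustrative sketch only; the actual training data and hyperparameters for this
# model are not documented here. Assumes TRL >= 0.12 (processing_class argument).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Any preference dataset with prompt / chosen / rejected pairs works here.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="Qwen2.5-7B-dpo", beta=0.1, per_device_train_batch_size=1)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset,
                     processing_class=tokenizer)
trainer.train()
```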
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu118
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
adammandic87/420b1233-32ee-46ae-9c58-606c8ee9d0da
|
adammandic87
| 2025-01-21T14:35:32Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"license:gemma",
"region:us"
] | null | 2025-01-21T14:27:57Z |
---
library_name: peft
license: gemma
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 420b1233-32ee-46ae-9c58-606c8ee9d0da
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 98d61748b31bcb23_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/98d61748b31bcb23_train_data.json
type:
field_input: src
field_instruction: task
field_output: tgt
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/420b1233-32ee-46ae-9c58-606c8ee9d0da
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/98d61748b31bcb23_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e4807618-cf8c-4200-9ce1-34371a66a038
wandb_project: birthday-sn56-19-Gradients-On-Demand
wandb_run: your_name
wandb_runid: e4807618-cf8c-4200-9ce1-34371a66a038
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 420b1233-32ee-46ae-9c58-606c8ee9d0da
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4129 | 0.0001 | 1 | 3.0375 |
| 2.2895 | 0.0004 | 3 | 2.9496 |
| 1.8981 | 0.0007 | 6 | 2.0524 |
| 1.2361 | 0.0011 | 9 | 1.2382 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso08/59058c38-6d3a-49c1-84b6-77ab8ff13dd2
|
lesso08
| 2025-01-21T14:34:50Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-21T12:49:13Z |
---
library_name: peft
license: other
base_model: huggyllama/llama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 59058c38-6d3a-49c1-84b6-77ab8ff13dd2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: huggyllama/llama-7b
bf16: true
chat_template: llama3
datasets:
- data_files:
- 79b65eb626883bbf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/79b65eb626883bbf_train_data.json
type:
field_instruction: x
field_output: y1
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso08/59058c38-6d3a-49c1-84b6-77ab8ff13dd2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/79b65eb626883bbf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ff05e196-bdf5-4bc9-89bd-189c7fd219ba
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ff05e196-bdf5-4bc9-89bd-189c7fd219ba
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 59058c38-6d3a-49c1-84b6-77ab8ff13dd2
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (`adamw_bnb_8bit`, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0005 | 10 | nan |
| 0.0 | 0.0007 | 15 | nan |
| 0.0 | 0.0010 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
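## Usage example
The config above loads the base model in 8-bit (`load_in_8bit: true`), so inference would typically do the same before attaching the adapter. A sketch under that assumption; note that the reported validation losses are `nan`, so the adapter may not produce useful outputs as-is.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "huggyllama/llama-7b"                              # base model from this card
adapter_id = "lesso08/59058c38-6d3a-49c1-84b6-77ab8ff13dd2"  # hub_model_id from the config

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # mirrors load_in_8bit: true
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```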
|
paramedik/Codestral-22B-v0.1-Q4_K_M-GGUF
|
paramedik
| 2025-01-21T14:31:18Z | 40 | 0 | null |
[
"gguf",
"code",
"llama-cpp",
"gguf-my-repo",
"base_model:mistralai/Codestral-22B-v0.1",
"base_model:quantized:mistralai/Codestral-22B-v0.1",
"license:other",
"region:us",
"conversational"
] | null | 2025-01-21T14:30:17Z |
---
language:
- code
license: other
tags:
- code
- llama-cpp
- gguf-my-repo
inference: false
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
base_model: mistralai/Codestral-22B-v0.1
---
# paramedik/Codestral-22B-v0.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`mistralai/Codestral-22B-v0.1`](https://huggingface.co/mistralai/Codestral-22B-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Codestral-22B-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo paramedik/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo paramedik/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo paramedik/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo paramedik/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -c 2048
```
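Recent llama.cpp builds of `llama-server` also expose an OpenAI-compatible HTTP API (default port 8080). A minimal query sketch in Python follows; the exact endpoint and defaults can vary between versions, so treat this as illustrative.
```python
# Query a running `llama-server` instance via its OpenAI-compatible endpoint.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Write a Python function that reverses a string."}],
        "max_tokens": 128,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```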
|
kk-aivio/c14f7460-7d0f-48c8-b195-215dd00a23f4
|
kk-aivio
| 2025-01-21T14:30:25Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"olmo",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-olmo-hf",
"base_model:adapter:katuni4ka/tiny-random-olmo-hf",
"region:us"
] | null | 2025-01-21T14:29:34Z |
---
library_name: peft
base_model: katuni4ka/tiny-random-olmo-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c14f7460-7d0f-48c8-b195-215dd00a23f4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-olmo-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0592b2f60790c3af_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0592b2f60790c3af_train_data.json
type:
field_input: Artist
field_instruction: Song
field_output: Lyrics
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/c14f7460-7d0f-48c8-b195-215dd00a23f4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/0592b2f60790c3af_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0b450307-3da3-4928-8fcc-03c61c5025d9
wandb_project: Birthday-SN56-11-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0b450307-3da3-4928-8fcc-03c61c5025d9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
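In the dataset section above, `Song` is the instruction field, `Artist` is the input field, and `Lyrics` is the output, joined by the `'{instruction} {input}'` template. Roughly, a record is rendered as in the sketch below; the record values are made up and the exact prompt assembly is handled by Axolotl.

```python
# Hypothetical record shaped like 0592b2f60790c3af_train_data.json
record = {"Song": "Example Song", "Artist": "Example Artist", "Lyrics": "La la la..."}

# format '{instruction} {input}' builds the prompt; the output field becomes the completion target.
prompt = "{instruction} {input}".format(instruction=record["Song"], input=record["Artist"])
completion = record["Lyrics"]

print(prompt)      # Example Song Example Artist
print(completion)  # La la la...
```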
# c14f7460-7d0f-48c8-b195-215dd00a23f4
This model is a fine-tuned version of [katuni4ka/tiny-random-olmo-hf](https://huggingface.co/katuni4ka/tiny-random-olmo-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 10.8332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (`adamw_bnb_8bit`, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.8356 | 0.0001 | 1 | 10.8343 |
| 10.8309 | 0.0004 | 3 | 10.8342 |
| 10.8325 | 0.0008 | 6 | 10.8339 |
| 10.8287 | 0.0013 | 9 | 10.8332 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
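## Usage example
If a standalone checkpoint is preferred over base-model-plus-adapter, the LoRA weights can usually be merged back into the base model with PEFT. A sketch, assuming the adapter in this repo loads cleanly; `trust_remote_code=True` mirrors the training config.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "katuni4ka/tiny-random-olmo-hf"                     # base model from this card
adapter_id = "kk-aivio/c14f7460-7d0f-48c8-b195-215dd00a23f4"  # hub_model_id from the config

base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()  # fold LoRA deltas into the base weights

merged.save_pretrained("c14f7460-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("c14f7460-merged")
```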
|