modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
nhunglaaaaaaa/fccd41c3-68c9-4af6-9694-bcf592003ec3 | nhunglaaaaaaa | 2025-01-20T23:58:40Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:adapter:NousResearch/Hermes-2-Pro-Mistral-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T23:37:37Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fccd41c3-68c9-4af6-9694-bcf592003ec3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 670127402f937a76_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/670127402f937a76_train_data.json
type:
field_input: content
field_instruction: aspect
field_output: sentiment
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhunglaaaaaaa/fccd41c3-68c9-4af6-9694-bcf592003ec3
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/670127402f937a76_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e13dcc00-4d7d-439f-bbb5-ed9061820333
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e13dcc00-4d7d-439f-bbb5-ed9061820333
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fccd41c3-68c9-4af6-9694-bcf592003ec3
This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2271
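A minimal usage sketch (an assumption, not generated by the Trainer): load the adapter with PEFT on top of the base model named above.
```python
# Sketch only: repo ids come from this card; the prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Hermes-2-Pro-Mistral-7B"
adapter_id = "nhunglaaaaaaa/fccd41c3-68c9-4af6-9694-bcf592003ec3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

# The axolotl config formats prompts as "{instruction} {input}" (aspect followed by content);
# the concrete text below is only an example.
prompt = "service The staff were friendly and the food arrived quickly."
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0], skip_special_tokens=True))
```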
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes, `adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9569 | 0.6070 | 200 | 0.2271 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ClarenceDan/eb986324-3dd5-4e2b-868e-a31fb3ad18d2 | ClarenceDan | 2025-01-20T23:57:26Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-20T23:41:06Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: eb986324-3dd5-4e2b-868e-a31fb3ad18d2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 854bca96bed40197_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/854bca96bed40197_train_data.json
type:
field_input: state_before
field_instruction: tactic
field_output: state_after
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/eb986324-3dd5-4e2b-868e-a31fb3ad18d2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/854bca96bed40197_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cff9d1c5-a847-4707-b347-d0451baf6b24
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cff9d1c5-a847-4707-b347-d0451baf6b24
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# eb986324-3dd5-4e2b-868e-a31fb3ad18d2
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1671
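A possible follow-up step (a sketch under assumptions, not part of the generated card): merge the LoRA weights into the Qwen2.5-0.5B-Instruct base so the result can be used as a plain Transformers checkpoint.
```python
# Sketch only: repo ids come from this card; the output directory name is hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-0.5B-Instruct"
adapter_id = "ClarenceDan/eb986324-3dd5-4e2b-868e-a31fb3ad18d2"

base = AutoModelForCausalLM.from_pretrained(base_id)
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()  # fold LoRA weights into the base

tokenizer = AutoTokenizer.from_pretrained(base_id)
merged.save_pretrained("eb986324-merged")
tokenizer.save_pretrained("eb986324-merged")
```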
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes, `adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1411 | 0.0000 | 1 | 2.8415 |
| 0.8263 | 0.0001 | 3 | 2.8217 |
| 0.3436 | 0.0002 | 6 | 2.6084 |
| 0.4392 | 0.0003 | 9 | 2.1671 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso11/1db4e142-5823-4549-b311-c5325f577241 | lesso11 | 2025-01-20T23:55:46Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:adapter:NovaSearch/stella_en_1.5B_v5",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T23:48:03Z | ---
library_name: peft
license: mit
base_model: dunzhang/stella_en_1.5B_v5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1db4e142-5823-4549-b311-c5325f577241
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dunzhang/stella_en_1.5B_v5
bf16: true
chat_template: llama3
datasets:
- data_files:
- 09f295eb7fd803c2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/09f295eb7fd803c2_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso11/1db4e142-5823-4549-b311-c5325f577241
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/09f295eb7fd803c2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a727ab38-5312-4e68-8885-5980a4cae8a9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a727ab38-5312-4e68-8885-5980a4cae8a9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1db4e142-5823-4549-b311-c5325f577241
This model is a fine-tuned version of [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
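The base model ships custom modeling code (`custom_code` tag, `trust_remote_code: true` in the config), so loading it presumably needs the same flag; a minimal sketch under that assumption.
```python
# Sketch only: repo ids come from this card; trust_remote_code is assumed to be required.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "dunzhang/stella_en_1.5B_v5"
adapter_id = "lesso11/1db4e142-5823-4549-b311-c5325f577241"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)
```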
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes, `adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0004 | 1 | nan |
| 0.0 | 0.0021 | 5 | nan |
| 0.0 | 0.0042 | 10 | nan |
| 0.0 | 0.0063 | 15 | nan |
| 0.0 | 0.0085 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
joboffer/c80d23f2-e3e7-4dab-a9fa-432251a8e9d7 | joboffer | 2025-01-20T23:54:50Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:adapter:NovaSearch/stella_en_1.5B_v5",
"license:mit",
"region:us"
] | null | 2025-01-20T23:48:23Z | ---
library_name: peft
license: mit
base_model: dunzhang/stella_en_1.5B_v5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c80d23f2-e3e7-4dab-a9fa-432251a8e9d7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dunzhang/stella_en_1.5B_v5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 09f295eb7fd803c2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/09f295eb7fd803c2_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: joboffer/c80d23f2-e3e7-4dab-a9fa-432251a8e9d7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 80GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/09f295eb7fd803c2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a727ab38-5312-4e68-8885-5980a4cae8a9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a727ab38-5312-4e68-8885-5980a4cae8a9
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c80d23f2-e3e7-4dab-a9fa-432251a8e9d7
This model is a fine-tuned version of [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch, `adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | nan |
| 0.0 | 0.0021 | 5 | nan |
| 0.0 | 0.0042 | 10 | nan |
| 0.0 | 0.0063 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nadejdatarabukina/c739481e-a786-4a3a-b2b4-3cd78c486811 | nadejdatarabukina | 2025-01-20T23:54:08Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:adapter:NovaSearch/stella_en_1.5B_v5",
"license:mit",
"region:us"
] | null | 2025-01-20T23:48:13Z | ---
library_name: peft
license: mit
base_model: dunzhang/stella_en_1.5B_v5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c739481e-a786-4a3a-b2b4-3cd78c486811
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dunzhang/stella_en_1.5B_v5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 09f295eb7fd803c2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/09f295eb7fd803c2_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: nadejdatarabukina/c739481e-a786-4a3a-b2b4-3cd78c486811
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/09f295eb7fd803c2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a727ab38-5312-4e68-8885-5980a4cae8a9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a727ab38-5312-4e68-8885-5980a4cae8a9
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c739481e-a786-4a3a-b2b4-3cd78c486811
This model is a fine-tuned version of [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch, `adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | nan |
| 0.0 | 0.0021 | 5 | nan |
| 0.0 | 0.0042 | 10 | nan |
| 0.0 | 0.0063 | 15 | nan |
| 0.0 | 0.0085 | 20 | nan |
| 0.0 | 0.0106 | 25 | nan |
| 0.0 | 0.0127 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LHRuig/aireduxreal | LHRuig | 2025-01-20T23:53:34Z | 22 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-20T23:53:25Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# aireduxreal
<Gallery />
## Model description
aireduxreal LoRA
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/aireduxreal/tree/main) them in the Files & versions tab.
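A minimal generation sketch (an assumption, not included in the card): load the LoRA onto FLUX.1-dev with Diffusers and trigger it with `man`.
```python
# Sketch only: repo ids and the trigger word come from this card; prompt, steps and device are illustrative.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("LHRuig/aireduxreal")
pipe.to("cuda")  # assumes a CUDA GPU with enough memory for FLUX.1-dev

image = pipe("photo of a man wearing a suit", num_inference_steps=28).images[0]
image.save("suit.png")
```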
|
lesso09/f177f676-835b-4da0-b0b0-ede96e7966ba | lesso09 | 2025-01-20T23:52:14Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/Sheared-LLaMA-1.3B",
"base_model:adapter:princeton-nlp/Sheared-LLaMA-1.3B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T23:50:43Z | ---
library_name: peft
license: apache-2.0
base_model: princeton-nlp/Sheared-LLaMA-1.3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f177f676-835b-4da0-b0b0-ede96e7966ba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: princeton-nlp/Sheared-LLaMA-1.3B
bf16: true
chat_template: llama3
datasets:
- data_files:
- 641edd60ca44ac19_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/641edd60ca44ac19_train_data.json
type:
field_input: tokens
field_instruction: sentence
field_output: corrupted
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso09/f177f676-835b-4da0-b0b0-ede96e7966ba
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/641edd60ca44ac19_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ab7df99e-418b-42ee-9175-b39bc5d0f0ce
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ab7df99e-418b-42ee-9175-b39bc5d0f0ce
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f177f676-835b-4da0-b0b0-ede96e7966ba
This model is a fine-tuned version of [princeton-nlp/Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes, `adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0026 | 1 | nan |
| 0.0 | 0.0130 | 5 | nan |
| 0.0 | 0.0260 | 10 | nan |
| 0.0 | 0.0390 | 15 | nan |
| 0.0 | 0.0520 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MayBashendy/ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k18_task5_organization | MayBashendy | 2025-01-20T23:52:05Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-20T16:11:17Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k18_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k18_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0523
- Qwk: 0.4790
- Mse: 1.0523
- Rmse: 1.0258
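A minimal inference sketch (an assumption, not generated by the Trainer): run the fine-tuned classifier through the Transformers pipeline; the label scheme is not documented in this card.
```python
# Sketch only: the model id comes from this card; the input text and label interpretation are illustrative.
from transformers import pipeline

model_id = "MayBashendy/ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k18_task5_organization"
classifier = pipeline("text-classification", model=model_id)

print(classifier("هذا نص تجريبي لتقييم تنظيم المقال."))  # returns label/score pairs whose meaning depends on the training setup
```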
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0345 | 2 | 3.8827 | -0.0151 | 3.8827 | 1.9705 |
| No log | 0.0690 | 4 | 2.3503 | -0.0189 | 2.3503 | 1.5331 |
| No log | 0.1034 | 6 | 3.3976 | -0.0343 | 3.3976 | 1.8433 |
| No log | 0.1379 | 8 | 4.3908 | 0.0171 | 4.3908 | 2.0954 |
| No log | 0.1724 | 10 | 2.9131 | -0.0295 | 2.9131 | 1.7068 |
| No log | 0.2069 | 12 | 1.3737 | 0.0894 | 1.3737 | 1.1720 |
| No log | 0.2414 | 14 | 1.0962 | 0.2416 | 1.0962 | 1.0470 |
| No log | 0.2759 | 16 | 1.0484 | 0.3449 | 1.0484 | 1.0239 |
| No log | 0.3103 | 18 | 1.1433 | 0.3927 | 1.1433 | 1.0692 |
| No log | 0.3448 | 20 | 1.5454 | 0.1429 | 1.5454 | 1.2431 |
| No log | 0.3793 | 22 | 1.9269 | 0.1296 | 1.9269 | 1.3881 |
| No log | 0.4138 | 24 | 2.1738 | 0.1465 | 2.1738 | 1.4744 |
| No log | 0.4483 | 26 | 2.2513 | 0.1221 | 2.2513 | 1.5004 |
| No log | 0.4828 | 28 | 1.6165 | 0.2351 | 1.6165 | 1.2714 |
| No log | 0.5172 | 30 | 1.1390 | 0.2560 | 1.1390 | 1.0672 |
| No log | 0.5517 | 32 | 0.9792 | 0.2526 | 0.9792 | 0.9895 |
| No log | 0.5862 | 34 | 1.0101 | 0.3066 | 1.0101 | 1.0050 |
| No log | 0.6207 | 36 | 1.1235 | 0.4012 | 1.1235 | 1.0600 |
| No log | 0.6552 | 38 | 1.1648 | 0.3542 | 1.1648 | 1.0793 |
| No log | 0.6897 | 40 | 1.1022 | 0.3243 | 1.1022 | 1.0499 |
| No log | 0.7241 | 42 | 1.1218 | 0.3243 | 1.1218 | 1.0591 |
| No log | 0.7586 | 44 | 1.0679 | 0.2977 | 1.0679 | 1.0334 |
| No log | 0.7931 | 46 | 1.0611 | 0.2567 | 1.0611 | 1.0301 |
| No log | 0.8276 | 48 | 0.9578 | 0.3691 | 0.9578 | 0.9787 |
| No log | 0.8621 | 50 | 0.8676 | 0.2770 | 0.8676 | 0.9315 |
| No log | 0.8966 | 52 | 0.8749 | 0.2770 | 0.8749 | 0.9354 |
| No log | 0.9310 | 54 | 0.8883 | 0.3498 | 0.8883 | 0.9425 |
| No log | 0.9655 | 56 | 0.9563 | 0.3250 | 0.9563 | 0.9779 |
| No log | 1.0 | 58 | 1.1407 | 0.3283 | 1.1407 | 1.0681 |
| No log | 1.0345 | 60 | 1.0584 | 0.2711 | 1.0584 | 1.0288 |
| No log | 1.0690 | 62 | 0.9629 | 0.3129 | 0.9629 | 0.9813 |
| No log | 1.1034 | 64 | 1.0630 | 0.3902 | 1.0630 | 1.0310 |
| No log | 1.1379 | 66 | 1.1496 | 0.2471 | 1.1496 | 1.0722 |
| No log | 1.1724 | 68 | 1.1000 | 0.3024 | 1.1000 | 1.0488 |
| No log | 1.2069 | 70 | 0.9757 | 0.3326 | 0.9757 | 0.9878 |
| No log | 1.2414 | 72 | 0.9358 | 0.3198 | 0.9358 | 0.9674 |
| No log | 1.2759 | 74 | 0.9934 | 0.1794 | 0.9934 | 0.9967 |
| No log | 1.3103 | 76 | 1.0084 | 0.2188 | 1.0084 | 1.0042 |
| No log | 1.3448 | 78 | 1.0067 | 0.3713 | 1.0067 | 1.0033 |
| No log | 1.3793 | 80 | 0.9656 | 0.3455 | 0.9656 | 0.9827 |
| No log | 1.4138 | 82 | 0.9832 | 0.3198 | 0.9832 | 0.9915 |
| No log | 1.4483 | 84 | 1.0364 | 0.2795 | 1.0364 | 1.0180 |
| No log | 1.4828 | 86 | 1.0609 | 0.3063 | 1.0609 | 1.0300 |
| No log | 1.5172 | 88 | 1.0794 | 0.2582 | 1.0794 | 1.0389 |
| No log | 1.5517 | 90 | 1.0631 | 0.3860 | 1.0631 | 1.0311 |
| No log | 1.5862 | 92 | 1.0703 | 0.3915 | 1.0703 | 1.0346 |
| No log | 1.6207 | 94 | 1.2684 | 0.3070 | 1.2684 | 1.1262 |
| No log | 1.6552 | 96 | 1.4565 | 0.3333 | 1.4565 | 1.2068 |
| No log | 1.6897 | 98 | 1.3804 | 0.3318 | 1.3804 | 1.1749 |
| No log | 1.7241 | 100 | 1.1446 | 0.2837 | 1.1446 | 1.0698 |
| No log | 1.7586 | 102 | 1.0397 | 0.3133 | 1.0397 | 1.0197 |
| No log | 1.7931 | 104 | 1.0443 | 0.4373 | 1.0443 | 1.0219 |
| No log | 1.8276 | 106 | 1.0965 | 0.4392 | 1.0965 | 1.0471 |
| No log | 1.8621 | 108 | 1.0463 | 0.4206 | 1.0463 | 1.0229 |
| No log | 1.8966 | 110 | 1.0460 | 0.3482 | 1.0460 | 1.0228 |
| No log | 1.9310 | 112 | 1.0853 | 0.3711 | 1.0853 | 1.0418 |
| No log | 1.9655 | 114 | 1.1043 | 0.3758 | 1.1043 | 1.0509 |
| No log | 2.0 | 116 | 1.1332 | 0.3718 | 1.1332 | 1.0645 |
| No log | 2.0345 | 118 | 1.1377 | 0.3917 | 1.1377 | 1.0666 |
| No log | 2.0690 | 120 | 1.1435 | 0.3648 | 1.1435 | 1.0693 |
| No log | 2.1034 | 122 | 1.1013 | 0.4232 | 1.1013 | 1.0494 |
| No log | 2.1379 | 124 | 0.9759 | 0.3607 | 0.9759 | 0.9879 |
| No log | 2.1724 | 126 | 0.8873 | 0.3820 | 0.8873 | 0.9420 |
| No log | 2.2069 | 128 | 0.8663 | 0.3625 | 0.8663 | 0.9307 |
| No log | 2.2414 | 130 | 0.8625 | 0.4089 | 0.8625 | 0.9287 |
| No log | 2.2759 | 132 | 0.8833 | 0.4056 | 0.8833 | 0.9398 |
| No log | 2.3103 | 134 | 0.9124 | 0.4313 | 0.9124 | 0.9552 |
| No log | 2.3448 | 136 | 0.9654 | 0.3286 | 0.9654 | 0.9826 |
| No log | 2.3793 | 138 | 1.1335 | 0.3687 | 1.1335 | 1.0647 |
| No log | 2.4138 | 140 | 1.0551 | 0.3918 | 1.0551 | 1.0272 |
| No log | 2.4483 | 142 | 1.0378 | 0.4737 | 1.0378 | 1.0187 |
| No log | 2.4828 | 144 | 1.1433 | 0.4264 | 1.1433 | 1.0693 |
| No log | 2.5172 | 146 | 1.0562 | 0.4503 | 1.0562 | 1.0277 |
| No log | 2.5517 | 148 | 1.0218 | 0.2694 | 1.0218 | 1.0109 |
| No log | 2.5862 | 150 | 0.9906 | 0.3514 | 0.9906 | 0.9953 |
| No log | 2.6207 | 152 | 0.9382 | 0.3224 | 0.9382 | 0.9686 |
| No log | 2.6552 | 154 | 0.9202 | 0.2941 | 0.9202 | 0.9592 |
| No log | 2.6897 | 156 | 0.9203 | 0.3357 | 0.9203 | 0.9593 |
| No log | 2.7241 | 158 | 0.9347 | 0.4272 | 0.9347 | 0.9668 |
| No log | 2.7586 | 160 | 0.9881 | 0.4273 | 0.9881 | 0.9940 |
| No log | 2.7931 | 162 | 1.0588 | 0.4309 | 1.0588 | 1.0290 |
| No log | 2.8276 | 164 | 1.1410 | 0.4973 | 1.1410 | 1.0682 |
| No log | 2.8621 | 166 | 1.3562 | 0.2730 | 1.3562 | 1.1646 |
| No log | 2.8966 | 168 | 1.4988 | 0.2789 | 1.4988 | 1.2242 |
| No log | 2.9310 | 170 | 1.3422 | 0.2230 | 1.3422 | 1.1585 |
| No log | 2.9655 | 172 | 1.1816 | 0.4020 | 1.1816 | 1.0870 |
| No log | 3.0 | 174 | 1.1237 | 0.3924 | 1.1237 | 1.0601 |
| No log | 3.0345 | 176 | 1.1163 | 0.3843 | 1.1163 | 1.0566 |
| No log | 3.0690 | 178 | 1.2763 | 0.4032 | 1.2763 | 1.1297 |
| No log | 3.1034 | 180 | 1.3986 | 0.3243 | 1.3986 | 1.1826 |
| No log | 3.1379 | 182 | 1.2453 | 0.3345 | 1.2453 | 1.1159 |
| No log | 3.1724 | 184 | 0.9703 | 0.4161 | 0.9703 | 0.9850 |
| No log | 3.2069 | 186 | 0.9294 | 0.3804 | 0.9294 | 0.9641 |
| No log | 3.2414 | 188 | 0.9516 | 0.4845 | 0.9516 | 0.9755 |
| No log | 3.2759 | 190 | 0.9303 | 0.4223 | 0.9303 | 0.9645 |
| No log | 3.3103 | 192 | 1.0026 | 0.4196 | 1.0026 | 1.0013 |
| No log | 3.3448 | 194 | 1.1127 | 0.4976 | 1.1127 | 1.0548 |
| No log | 3.3793 | 196 | 1.0237 | 0.4568 | 1.0237 | 1.0118 |
| No log | 3.4138 | 198 | 0.9068 | 0.3454 | 0.9068 | 0.9523 |
| No log | 3.4483 | 200 | 0.8817 | 0.4118 | 0.8817 | 0.9390 |
| No log | 3.4828 | 202 | 0.9082 | 0.4729 | 0.9082 | 0.9530 |
| No log | 3.5172 | 204 | 0.9099 | 0.4581 | 0.9099 | 0.9539 |
| No log | 3.5517 | 206 | 0.8902 | 0.4350 | 0.8902 | 0.9435 |
| No log | 3.5862 | 208 | 0.9101 | 0.4215 | 0.9101 | 0.9540 |
| No log | 3.6207 | 210 | 0.9443 | 0.4807 | 0.9443 | 0.9717 |
| No log | 3.6552 | 212 | 0.9715 | 0.4369 | 0.9715 | 0.9856 |
| No log | 3.6897 | 214 | 0.9362 | 0.3792 | 0.9362 | 0.9676 |
| No log | 3.7241 | 216 | 0.9121 | 0.3742 | 0.9121 | 0.9550 |
| No log | 3.7586 | 218 | 0.9766 | 0.4585 | 0.9766 | 0.9882 |
| No log | 3.7931 | 220 | 1.1081 | 0.4787 | 1.1081 | 1.0527 |
| No log | 3.8276 | 222 | 1.1726 | 0.4301 | 1.1726 | 1.0829 |
| No log | 3.8621 | 224 | 1.0732 | 0.4585 | 1.0732 | 1.0360 |
| No log | 3.8966 | 226 | 0.9207 | 0.4230 | 0.9207 | 0.9595 |
| No log | 3.9310 | 228 | 0.8838 | 0.4114 | 0.8838 | 0.9401 |
| No log | 3.9655 | 230 | 0.9041 | 0.4230 | 0.9041 | 0.9508 |
| No log | 4.0 | 232 | 0.9400 | 0.4745 | 0.9400 | 0.9696 |
| No log | 4.0345 | 234 | 0.9597 | 0.4930 | 0.9597 | 0.9796 |
| No log | 4.0690 | 236 | 0.9359 | 0.4306 | 0.9359 | 0.9674 |
| No log | 4.1034 | 238 | 0.8728 | 0.4069 | 0.8728 | 0.9342 |
| No log | 4.1379 | 240 | 0.8607 | 0.3981 | 0.8607 | 0.9277 |
| No log | 4.1724 | 242 | 0.8876 | 0.4405 | 0.8876 | 0.9421 |
| No log | 4.2069 | 244 | 0.8922 | 0.4160 | 0.8922 | 0.9445 |
| No log | 4.2414 | 246 | 0.9119 | 0.5073 | 0.9119 | 0.9549 |
| No log | 4.2759 | 248 | 0.9036 | 0.4364 | 0.9036 | 0.9506 |
| No log | 4.3103 | 250 | 0.9250 | 0.4760 | 0.9250 | 0.9618 |
| No log | 4.3448 | 252 | 0.9678 | 0.4076 | 0.9678 | 0.9838 |
| No log | 4.3793 | 254 | 0.9431 | 0.4063 | 0.9431 | 0.9711 |
| No log | 4.4138 | 256 | 0.9360 | 0.3705 | 0.9360 | 0.9675 |
| No log | 4.4483 | 258 | 0.9044 | 0.4186 | 0.9044 | 0.9510 |
| No log | 4.4828 | 260 | 0.9198 | 0.3705 | 0.9198 | 0.9590 |
| No log | 4.5172 | 262 | 1.0117 | 0.4058 | 1.0117 | 1.0058 |
| No log | 4.5517 | 264 | 1.0365 | 0.4286 | 1.0365 | 1.0181 |
| No log | 4.5862 | 266 | 0.9587 | 0.4033 | 0.9587 | 0.9791 |
| No log | 4.6207 | 268 | 0.9625 | 0.3842 | 0.9625 | 0.9811 |
| No log | 4.6552 | 270 | 1.0778 | 0.4405 | 1.0778 | 1.0382 |
| No log | 4.6897 | 272 | 1.2736 | 0.3972 | 1.2736 | 1.1286 |
| No log | 4.7241 | 274 | 1.2229 | 0.4874 | 1.2229 | 1.1059 |
| No log | 4.7586 | 276 | 1.0118 | 0.4510 | 1.0118 | 1.0059 |
| No log | 4.7931 | 278 | 0.8990 | 0.3640 | 0.8990 | 0.9482 |
| No log | 4.8276 | 280 | 0.8903 | 0.3640 | 0.8903 | 0.9435 |
| No log | 4.8621 | 282 | 0.9633 | 0.4171 | 0.9633 | 0.9815 |
| No log | 4.8966 | 284 | 1.1899 | 0.4491 | 1.1899 | 1.0908 |
| No log | 4.9310 | 286 | 1.1471 | 0.4487 | 1.1471 | 1.0710 |
| No log | 4.9655 | 288 | 0.9659 | 0.3512 | 0.9659 | 0.9828 |
| No log | 5.0 | 290 | 0.9598 | 0.3607 | 0.9598 | 0.9797 |
| No log | 5.0345 | 292 | 1.0185 | 0.3629 | 1.0185 | 1.0092 |
| No log | 5.0690 | 294 | 1.0927 | 0.3766 | 1.0927 | 1.0453 |
| No log | 5.1034 | 296 | 1.2132 | 0.4186 | 1.2132 | 1.1015 |
| No log | 5.1379 | 298 | 1.1753 | 0.3972 | 1.1753 | 1.0841 |
| No log | 5.1724 | 300 | 1.0267 | 0.3436 | 1.0267 | 1.0132 |
| No log | 5.2069 | 302 | 0.9602 | 0.3144 | 0.9602 | 0.9799 |
| No log | 5.2414 | 304 | 0.9394 | 0.3308 | 0.9394 | 0.9692 |
| No log | 5.2759 | 306 | 0.9721 | 0.3463 | 0.9721 | 0.9859 |
| No log | 5.3103 | 308 | 1.0627 | 0.3972 | 1.0627 | 1.0309 |
| No log | 5.3448 | 310 | 1.0346 | 0.4589 | 1.0346 | 1.0172 |
| No log | 5.3793 | 312 | 0.8910 | 0.3809 | 0.8910 | 0.9439 |
| No log | 5.4138 | 314 | 0.8535 | 0.3437 | 0.8535 | 0.9238 |
| No log | 5.4483 | 316 | 0.8935 | 0.3447 | 0.8935 | 0.9453 |
| No log | 5.4828 | 318 | 0.9043 | 0.3842 | 0.9043 | 0.9510 |
| No log | 5.5172 | 320 | 1.0219 | 0.4613 | 1.0219 | 1.0109 |
| No log | 5.5517 | 322 | 1.1213 | 0.4693 | 1.1213 | 1.0589 |
| No log | 5.5862 | 324 | 1.0448 | 0.4615 | 1.0448 | 1.0222 |
| No log | 5.6207 | 326 | 0.9554 | 0.4376 | 0.9554 | 0.9775 |
| No log | 5.6552 | 328 | 0.9215 | 0.4273 | 0.9215 | 0.9600 |
| No log | 5.6897 | 330 | 0.9059 | 0.3678 | 0.9059 | 0.9518 |
| No log | 5.7241 | 332 | 0.9500 | 0.3728 | 0.9500 | 0.9747 |
| No log | 5.7586 | 334 | 0.9382 | 0.3688 | 0.9382 | 0.9686 |
| No log | 5.7931 | 336 | 0.8925 | 0.2939 | 0.8925 | 0.9447 |
| No log | 5.8276 | 338 | 0.8878 | 0.4229 | 0.8878 | 0.9422 |
| No log | 5.8621 | 340 | 0.8877 | 0.4013 | 0.8877 | 0.9422 |
| No log | 5.8966 | 342 | 0.9140 | 0.3923 | 0.9140 | 0.9560 |
| No log | 5.9310 | 344 | 0.9673 | 0.4822 | 0.9673 | 0.9835 |
| No log | 5.9655 | 346 | 1.0367 | 0.4589 | 1.0367 | 1.0182 |
| No log | 6.0 | 348 | 0.9746 | 0.4792 | 0.9746 | 0.9872 |
| No log | 6.0345 | 350 | 0.8739 | 0.3535 | 0.8739 | 0.9348 |
| No log | 6.0690 | 352 | 0.8633 | 0.3572 | 0.8633 | 0.9291 |
| No log | 6.1034 | 354 | 0.8985 | 0.4677 | 0.8985 | 0.9479 |
| No log | 6.1379 | 356 | 0.9457 | 0.4449 | 0.9457 | 0.9725 |
| No log | 6.1724 | 358 | 1.0018 | 0.3975 | 1.0018 | 1.0009 |
| No log | 6.2069 | 360 | 1.0198 | 0.3728 | 1.0198 | 1.0099 |
| No log | 6.2414 | 362 | 0.9638 | 0.3624 | 0.9638 | 0.9817 |
| No log | 6.2759 | 364 | 0.9300 | 0.3772 | 0.9300 | 0.9644 |
| No log | 6.3103 | 366 | 0.9246 | 0.3860 | 0.9246 | 0.9616 |
| No log | 6.3448 | 368 | 0.9481 | 0.3877 | 0.9481 | 0.9737 |
| No log | 6.3793 | 370 | 1.0751 | 0.4497 | 1.0751 | 1.0369 |
| No log | 6.4138 | 372 | 1.0104 | 0.4400 | 1.0104 | 1.0052 |
| No log | 6.4483 | 374 | 0.8589 | 0.4003 | 0.8589 | 0.9267 |
| No log | 6.4828 | 376 | 0.8292 | 0.5179 | 0.8292 | 0.9106 |
| No log | 6.5172 | 378 | 0.8432 | 0.5071 | 0.8432 | 0.9183 |
| No log | 6.5517 | 380 | 0.8499 | 0.3601 | 0.8499 | 0.9219 |
| No log | 6.5862 | 382 | 1.0001 | 0.4694 | 1.0001 | 1.0001 |
| No log | 6.6207 | 384 | 1.1535 | 0.4681 | 1.1535 | 1.0740 |
| No log | 6.6552 | 386 | 1.1194 | 0.4592 | 1.1194 | 1.0580 |
| No log | 6.6897 | 388 | 0.9916 | 0.4228 | 0.9916 | 0.9958 |
| No log | 6.7241 | 390 | 0.9721 | 0.3065 | 0.9721 | 0.9859 |
| No log | 6.7586 | 392 | 0.9257 | 0.3159 | 0.9257 | 0.9621 |
| No log | 6.7931 | 394 | 0.8857 | 0.2416 | 0.8857 | 0.9411 |
| No log | 6.8276 | 396 | 0.8897 | 0.2951 | 0.8897 | 0.9432 |
| No log | 6.8621 | 398 | 0.9185 | 0.3029 | 0.9185 | 0.9584 |
| No log | 6.8966 | 400 | 0.9333 | 0.3134 | 0.9333 | 0.9661 |
| No log | 6.9310 | 402 | 0.9419 | 0.3207 | 0.9419 | 0.9705 |
| No log | 6.9655 | 404 | 1.0028 | 0.3917 | 1.0028 | 1.0014 |
| No log | 7.0 | 406 | 1.0369 | 0.3946 | 1.0369 | 1.0183 |
| No log | 7.0345 | 408 | 1.0417 | 0.4116 | 1.0417 | 1.0206 |
| No log | 7.0690 | 410 | 1.0446 | 0.3348 | 1.0446 | 1.0221 |
| No log | 7.1034 | 412 | 1.0095 | 0.2742 | 1.0095 | 1.0047 |
| No log | 7.1379 | 414 | 1.0162 | 0.3430 | 1.0162 | 1.0081 |
| No log | 7.1724 | 416 | 0.9666 | 0.3404 | 0.9666 | 0.9832 |
| No log | 7.2069 | 418 | 0.9279 | 0.2786 | 0.9279 | 0.9633 |
| No log | 7.2414 | 420 | 0.9377 | 0.3457 | 0.9377 | 0.9683 |
| No log | 7.2759 | 422 | 0.9122 | 0.3457 | 0.9122 | 0.9551 |
| No log | 7.3103 | 424 | 0.8928 | 0.2899 | 0.8928 | 0.9449 |
| No log | 7.3448 | 426 | 0.9117 | 0.3601 | 0.9117 | 0.9549 |
| No log | 7.3793 | 428 | 0.8795 | 0.3289 | 0.8795 | 0.9378 |
| No log | 7.4138 | 430 | 0.8721 | 0.3357 | 0.8721 | 0.9339 |
| No log | 7.4483 | 432 | 0.8763 | 0.3224 | 0.8763 | 0.9361 |
| No log | 7.4828 | 434 | 0.9084 | 0.5222 | 0.9084 | 0.9531 |
| No log | 7.5172 | 436 | 0.8770 | 0.4268 | 0.8770 | 0.9365 |
| No log | 7.5517 | 438 | 0.8508 | 0.3980 | 0.8508 | 0.9224 |
| No log | 7.5862 | 440 | 0.9223 | 0.4205 | 0.9223 | 0.9604 |
| No log | 7.6207 | 442 | 1.0067 | 0.4400 | 1.0067 | 1.0034 |
| No log | 7.6552 | 444 | 0.9343 | 0.4822 | 0.9343 | 0.9666 |
| No log | 7.6897 | 446 | 0.8419 | 0.4247 | 0.8419 | 0.9175 |
| No log | 7.7241 | 448 | 0.8061 | 0.4463 | 0.8061 | 0.8979 |
| No log | 7.7586 | 450 | 0.8104 | 0.4221 | 0.8104 | 0.9002 |
| No log | 7.7931 | 452 | 0.8290 | 0.4327 | 0.8290 | 0.9105 |
| No log | 7.8276 | 454 | 0.8036 | 0.4088 | 0.8036 | 0.8964 |
| No log | 7.8621 | 456 | 0.7561 | 0.5431 | 0.7561 | 0.8695 |
| No log | 7.8966 | 458 | 0.7412 | 0.5057 | 0.7412 | 0.8609 |
| No log | 7.9310 | 460 | 0.7442 | 0.4706 | 0.7442 | 0.8627 |
| No log | 7.9655 | 462 | 0.7870 | 0.4421 | 0.7870 | 0.8871 |
| No log | 8.0 | 464 | 0.8435 | 0.4103 | 0.8435 | 0.9184 |
| No log | 8.0345 | 466 | 0.8581 | 0.4306 | 0.8581 | 0.9263 |
| No log | 8.0690 | 468 | 0.8054 | 0.4425 | 0.8054 | 0.8974 |
| No log | 8.1034 | 470 | 0.7978 | 0.4444 | 0.7978 | 0.8932 |
| No log | 8.1379 | 472 | 0.7923 | 0.3985 | 0.7923 | 0.8901 |
| No log | 8.1724 | 474 | 0.7926 | 0.4235 | 0.7926 | 0.8903 |
| No log | 8.2069 | 476 | 0.7973 | 0.4297 | 0.7973 | 0.8929 |
| No log | 8.2414 | 478 | 0.7842 | 0.3967 | 0.7842 | 0.8856 |
| No log | 8.2759 | 480 | 0.7854 | 0.3970 | 0.7854 | 0.8862 |
| No log | 8.3103 | 482 | 0.8013 | 0.4622 | 0.8013 | 0.8952 |
| No log | 8.3448 | 484 | 0.8078 | 0.4622 | 0.8078 | 0.8988 |
| No log | 8.3793 | 486 | 0.7642 | 0.5328 | 0.7642 | 0.8742 |
| No log | 8.4138 | 488 | 0.8224 | 0.4719 | 0.8224 | 0.9069 |
| No log | 8.4483 | 490 | 1.0762 | 0.5161 | 1.0762 | 1.0374 |
| No log | 8.4828 | 492 | 1.1014 | 0.5145 | 1.1014 | 1.0495 |
| No log | 8.5172 | 494 | 0.9148 | 0.4810 | 0.9148 | 0.9565 |
| No log | 8.5517 | 496 | 0.7722 | 0.4577 | 0.7722 | 0.8787 |
| No log | 8.5862 | 498 | 0.8145 | 0.5435 | 0.8145 | 0.9025 |
| 0.312 | 8.6207 | 500 | 0.8249 | 0.4903 | 0.8249 | 0.9083 |
| 0.312 | 8.6552 | 502 | 0.8091 | 0.4072 | 0.8091 | 0.8995 |
| 0.312 | 8.6897 | 504 | 0.8141 | 0.3455 | 0.8141 | 0.9023 |
| 0.312 | 8.7241 | 506 | 0.8404 | 0.3563 | 0.8404 | 0.9167 |
| 0.312 | 8.7586 | 508 | 0.9525 | 0.4156 | 0.9525 | 0.9759 |
| 0.312 | 8.7931 | 510 | 1.0523 | 0.4790 | 1.0523 | 1.0258 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
dimasik2987/c4fd30f8-9bed-4199-bfca-cc8c495b627b | dimasik2987 | 2025-01-20T23:52:01Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:adapter:NovaSearch/stella_en_1.5B_v5",
"license:mit",
"region:us"
] | null | 2025-01-20T23:48:29Z | ---
library_name: peft
license: mit
base_model: dunzhang/stella_en_1.5B_v5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c4fd30f8-9bed-4199-bfca-cc8c495b627b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dunzhang/stella_en_1.5B_v5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 09f295eb7fd803c2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/09f295eb7fd803c2_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik2987/c4fd30f8-9bed-4199-bfca-cc8c495b627b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/09f295eb7fd803c2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a727ab38-5312-4e68-8885-5980a4cae8a9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a727ab38-5312-4e68-8885-5980a4cae8a9
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# c4fd30f8-9bed-4199-bfca-cc8c495b627b
This model is a fine-tuned version of [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (PyTorch, `adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | nan |
| 0.0 | 0.0042 | 5 | nan |
| 0.0 | 0.0085 | 10 | nan |
| 0.0 | 0.0127 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
VERSIL91/8e83e5ba-b92a-4907-bf39-218caf42228a | VERSIL91 | 2025-01-20T23:51:49Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceM4/tiny-random-LlamaForCausalLM",
"base_model:adapter:HuggingFaceM4/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-01-20T23:51:46Z | ---
library_name: peft
base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bef97220-cdcf-4144-9f98-4582cf4a902b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7da10487b55868a6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7da10487b55868a6_train_data.json
type:
field_instruction: hyps
field_output: ref
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: null
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/7da10487b55868a6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bef97220-cdcf-4144-9f98-4582cf4a902b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bef97220-cdcf-4144-9f98-4582cf4a902b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bef97220-cdcf-4144-9f98-4582cf4a902b
This model is a fine-tuned version of [HuggingFaceM4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceM4/tiny-random-LlamaForCausalLM) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: 8-bit AdamW (bitsandbytes, `adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3772 | 0.0002 | 1 | 10.3803 |
| 10.3751 | 0.0031 | 13 | 10.3758 |
| 10.3722 | 0.0062 | 26 | 10.3686 |
| 10.3688 | 0.0093 | 39 | 10.3648 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kostiantynk1205/41925306-0ef0-4197-b10b-98e1dc6ca5d3 | kostiantynk1205 | 2025-01-20T23:51:38Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-20T23:51:09Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 41925306-0ef0-4197-b10b-98e1dc6ca5d3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dd06633aceb12410_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dd06633aceb12410_train_data.json
type:
field_instruction: tests
field_output: prompt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/41925306-0ef0-4197-b10b-98e1dc6ca5d3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/dd06633aceb12410_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 370ef635-02c6-4a8f-be9e-f46f2205d9d9
wandb_project: Birthday-SN56-23-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 370ef635-02c6-4a8f-be9e-f46f2205d9d9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 41925306-0ef0-4197-b10b-98e1dc6ca5d3
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes, `adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.1905 | 1 | nan |
| 0.0 | 0.3810 | 2 | nan |
| 0.0 | 0.7619 | 4 | nan |
| 0.0 | 1.1429 | 6 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik87/150164aa-8220-47b5-a42e-e3e1914c42b8 | dimasik87 | 2025-01-20T23:50:30Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:adapter:NovaSearch/stella_en_1.5B_v5",
"license:mit",
"region:us"
] | null | 2025-01-20T23:48:18Z | ---
library_name: peft
license: mit
base_model: dunzhang/stella_en_1.5B_v5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 150164aa-8220-47b5-a42e-e3e1914c42b8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dunzhang/stella_en_1.5B_v5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 09f295eb7fd803c2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/09f295eb7fd803c2_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik87/150164aa-8220-47b5-a42e-e3e1914c42b8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/09f295eb7fd803c2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a727ab38-5312-4e68-8885-5980a4cae8a9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a727ab38-5312-4e68-8885-5980a4cae8a9
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 150164aa-8220-47b5-a42e-e3e1914c42b8
This model is a fine-tuned version of [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | nan |
| 0.0 | 0.0042 | 5 | nan |
| 0.0 | 0.0085 | 10 | nan |
| 0.0 | 0.0127 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ardaspear/32cc01b9-8751-4d53-b346-be211e673aa6 | ardaspear | 2025-01-20T23:49:47Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-01-20T22:33:09Z | ---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 32cc01b9-8751-4d53-b346-be211e673aa6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 25670fa5d5514c5b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/25670fa5d5514c5b_train_data.json
type:
field_input: facts
field_instruction: prompt_serial
field_output: hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: ardaspear/32cc01b9-8751-4d53-b346-be211e673aa6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/25670fa5d5514c5b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: ac3b7fa4-8d2e-4dc9-b139-aaeea5f132ac
wandb_project: Gradients-On-Five
wandb_run: your_name
wandb_runid: ac3b7fa4-8d2e-4dc9-b139-aaeea5f132ac
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 32cc01b9-8751-4d53-b346-be211e673aa6
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0776
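Since this repository contains only a LoRA adapter, inference requires loading the CodeLlama base model first and then attaching the adapter with PEFT. The sketch below is illustrative, not taken from the card: the repo ids follow the card, while the dtype, device placement, and prompt are assumptions.
```python
# Minimal inference sketch for this LoRA adapter (illustrative; prompt and dtype are assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "codellama/CodeLlama-7b-hf"
adapter_id = "ardaspear/32cc01b9-8751-4d53-b346-be211e673aa6"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter

prompt = "State the hypothesis entailed by the following facts: ..."  # '{instruction} {input}' style, per the config
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```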
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | 0.7974 |
| 0.7688 | 0.0076 | 9 | 0.7061 |
| 0.3383 | 0.0152 | 18 | 0.3166 |
| 0.174 | 0.0228 | 27 | 0.1894 |
| 0.147 | 0.0305 | 36 | 0.1512 |
| 0.1256 | 0.0381 | 45 | 0.1198 |
| 0.1727 | 0.0457 | 54 | 0.0931 |
| 0.0712 | 0.0533 | 63 | 0.0830 |
| 0.0857 | 0.0609 | 72 | 0.0793 |
| 0.0679 | 0.0685 | 81 | 0.0780 |
| 0.039 | 0.0762 | 90 | 0.0777 |
| 0.0732 | 0.0838 | 99 | 0.0776 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tuanna08go/db6fb768-4f34-45e9-a10c-21c12445199e | tuanna08go | 2025-01-20T23:49:36Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-01-20T23:09:11Z | ---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: db6fb768-4f34-45e9-a10c-21c12445199e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 25670fa5d5514c5b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/25670fa5d5514c5b_train_data.json
type:
field_input: facts
field_instruction: prompt_serial
field_output: hypothesis
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/db6fb768-4f34-45e9-a10c-21c12445199e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/25670fa5d5514c5b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ac3b7fa4-8d2e-4dc9-b139-aaeea5f132ac
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ac3b7fa4-8d2e-4dc9-b139-aaeea5f132ac
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# db6fb768-4f34-45e9-a10c-21c12445199e
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.2075 |
| 1.0753 | 0.0021 | 10 | 1.0732 |
| 0.7292 | 0.0042 | 20 | 0.6758 |
| 0.5962 | 0.0063 | 30 | 0.5538 |
| 0.4682 | 0.0085 | 40 | 0.5197 |
| 0.4209 | 0.0106 | 50 | 0.5158 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
charlesniswander/fruit_freshness_demo | charlesniswander | 2025-01-20T23:49:26Z | 28 | 0 | null | [
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] | image-classification | 2025-01-20T23:49:14Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: fruit_freshness_demo
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.3483146131038666
---
# fruit_freshness_demo
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
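For inference, a standard `transformers` image-classification pipeline should work with this ViT checkpoint; the sketch below is an illustrative assumption (the image path is a placeholder).
```python
# Minimal usage sketch (illustrative): classify a local fruit photo with this checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="charlesniswander/fruit_freshness_demo")
print(classifier("banana.jpg"))  # e.g. [{'label': 'rotten banana', 'score': ...}, ...]
```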
## Example Images
#### fresh apple

#### fresh banana

#### rotten apple

#### rotten banana
 |
taopanda-2/a31b520a-ea45-423d-a41b-09df0cfc856c | taopanda-2 | 2025-01-20T23:49:24Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T23:06:04Z | ---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: Qwen/Qwen2.5-0.5B-Instruct
model-index:
- name: a31b520a-ea45-423d-a41b-09df0cfc856c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
dataset_prepared_path: null
datasets:
- data_files:
- a7412d8c8f805ddf_train_data.json
ds_type: json
format: custom
path: a7412d8c8f805ddf_train_data.json
type:
field: null
field_input: null
field_instruction: premise
field_output: hypothesis
field_system: null
format: null
no_input_format: null
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 2
flash_attention: null
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: taopanda-2/a31b520a-ea45-423d-a41b-09df0cfc856c
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 2
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: ./outputs/lora-out/taopanda-2_ba28db81-f399-44e5-bdef-7af8dcf5a4ca
pad_to_sequence_len: null
resume_from_checkpoint: null
sample_packing: false
saves_per_epoch: 1
seed: 91813
sequence_len: 2048
special_tokens: null
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: fatcat87-taopanda
wandb_log_model: null
wandb_mode: online
wandb_name: taopanda-2_ba28db81-f399-44e5-bdef-7af8dcf5a4ca
wandb_project: subnet56
wandb_runid: taopanda-2_ba28db81-f399-44e5-bdef-7af8dcf5a4ca
wandb_watch: null
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/fatcat87-taopanda/subnet56/runs/sk5fvbjg)
# a31b520a-ea45-423d-a41b-09df0cfc856c
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 91813
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.4358 | 0.0006 | 1 | 4.7686 |
| 0.8153 | 0.4998 | 823 | 1.0209 |
| 0.9222 | 0.9997 | 1646 | 0.9854 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
nat-hunt/13925d41-82bd-407e-b7bd-93b7227d91e6 | nat-hunt | 2025-01-20T23:49:00Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:llama3.1",
"region:us"
] | null | 2025-01-20T23:45:49Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 13925d41-82bd-407e-b7bd-93b7227d91e6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 91e193d3dca1611f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/91e193d3dca1611f_train_data.json
type:
field_input: parent_id
field_instruction: role
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nat-hunt/13925d41-82bd-407e-b7bd-93b7227d91e6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/91e193d3dca1611f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 856a9aac-189f-40f7-b27c-c5616995b0d1
wandb_project: Birthday-SN56-25-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 856a9aac-189f-40f7-b27c-c5616995b0d1
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 13925d41-82bd-407e-b7bd-93b7227d91e6
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0004 | 1 | nan |
| 0.0 | 0.0011 | 3 | nan |
| 0.0 | 0.0021 | 6 | nan |
| 0.0 | 0.0032 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LHRuig/doubleexpo | LHRuig | 2025-01-20T23:48:34Z | 9 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-20T23:48:28Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# doubleexpo
<Gallery />
## Model description
doubleexpo lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/doubleexpo/tree/main) them in the Files & versions tab.
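A minimal inference sketch (an assumption, not part of this card): load the FLUX.1-dev base pipeline with diffusers and attach this LoRA, using the `man` trigger word in the prompt. It assumes access to the gated base weights, a GPU with enough memory, and that diffusers can auto-detect the LoRA weight file in this repo.
```python
# Illustrative sketch: apply the doubleexpo LoRA on top of FLUX.1-dev.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("LHRuig/doubleexpo")
image = pipe("man, double exposure portrait", num_inference_steps=28).images[0]
image.save("doubleexpo.png")
```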
|
mradermacher/QwQ-56B-Ghost-i1-GGUF | mradermacher | 2025-01-20T23:48:26Z | 513 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"uncensored",
"abliterated",
"chat",
"en",
"base_model:JackCloudman/QwQ-56B-Ghost",
"base_model:quantized:JackCloudman/QwQ-56B-Ghost",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-20T12:39:13Z | ---
base_model: JackCloudman/QwQ-56B-Ghost
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- uncensored
- abliterated
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/JackCloudman/QwQ-56B-Ghost
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/QwQ-56B-Ghost-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
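As a minimal sketch of one way to use these files (an illustration, not part of this card): download a single-file quant from the table below and load it with llama-cpp-python. The quant choice, context size, and prompt are assumptions, and the i1-Q4_K_M file is roughly 34 GB, so make sure you have the memory for it.
```python
# Illustrative sketch: fetch one quant from this repo and run a short chat completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/QwQ-56B-Ghost-i1-GGUF",
    filename="QwQ-56B-Ghost.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm.create_chat_completion(messages=[{"role": "user", "content": "Hello!"}]))
```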
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-IQ1_S.gguf) | i1-IQ1_S | 12.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-IQ1_M.gguf) | i1-IQ1_M | 13.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 15.3 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-IQ2_XS.gguf) | i1-IQ2_XS | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-IQ2_S.gguf) | i1-IQ2_S | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-IQ2_M.gguf) | i1-IQ2_M | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-Q2_K_S.gguf) | i1-Q2_K_S | 19.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-Q2_K.gguf) | i1-Q2_K | 21.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 21.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-IQ3_XS.gguf) | i1-IQ3_XS | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-Q3_K_S.gguf) | i1-Q3_K_S | 24.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-IQ3_S.gguf) | i1-IQ3_S | 24.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-IQ3_M.gguf) | i1-IQ3_M | 25.3 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-Q3_K_M.gguf) | i1-Q3_K_M | 27.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-Q3_K_L.gguf) | i1-Q3_K_L | 29.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-IQ4_XS.gguf) | i1-IQ4_XS | 30.3 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-Q4_0.gguf) | i1-Q4_0 | 32.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-Q4_K_S.gguf) | i1-Q4_K_S | 32.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-Q4_K_M.gguf) | i1-Q4_K_M | 34.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-Q4_1.gguf) | i1-Q4_1 | 35.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-Q5_K_S.gguf) | i1-Q5_K_S | 38.8 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-Q5_K_M.gguf) | i1-Q5_K_M | 39.9 | |
| [GGUF](https://huggingface.co/mradermacher/QwQ-56B-Ghost-i1-GGUF/resolve/main/QwQ-56B-Ghost.i1-Q6_K.gguf) | i1-Q6_K | 46.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/LwQ-Reasoner-10B-i1-GGUF | mradermacher | 2025-01-20T23:48:26Z | 580 | 0 | transformers | [
"transformers",
"gguf",
"LlamaWithQuestions",
"CoT",
"Reasoner",
"LWQ",
"en",
"base_model:prithivMLmods/LwQ-Reasoner-10B",
"base_model:quantized:prithivMLmods/LwQ-Reasoner-10B",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-01-20T22:25:16Z | ---
base_model: prithivMLmods/LwQ-Reasoner-10B
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- LlamaWithQuestions
- CoT
- Reasoner
- LWQ
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/prithivMLmods/LwQ-Reasoner-10B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/LwQ-Reasoner-10B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-Q2_K.gguf) | i1-Q2_K | 4.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 6.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-Q4_0.gguf) | i1-Q4_0 | 6.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-Q4_1.gguf) | i1-Q4_1 | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF/resolve/main/LwQ-Reasoner-10B.i1-Q6_K.gguf) | i1-Q6_K | 8.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jncraton/DeepSeek-R1-Distill-Llama-8B-ct2-int8 | jncraton | 2025-01-20T23:47:44Z | 12 | 0 | null | [
"region:us"
] | null | 2025-01-20T23:33:07Z | # DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
Through RL, DeepSeek-R1-Zero naturally developed numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open-source DeepSeek-R1, as well as its API, will help the research community distill better, smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to the [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly changed their configs and tokenizers. Please use our settings to run these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
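As a small illustration of the pass@1 estimate described above (an assumption on my part: with a single draw, the unbiased estimator reduces to the fraction of the 64 sampled responses that are correct, averaged over queries):
```python
# Illustrative pass@1 estimate: average per-query fraction of correct samples.
def pass_at_1(correct_counts, n=64):
    return sum(c / n for c in correct_counts) / len(correct_counts)

print(pass_at_1([40, 64, 12]))  # hypothetical per-query counts of correct responses
```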
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the "DeepThink" button.
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
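A minimal sketch of calling that OpenAI-compatible endpoint with the `openai` Python client (the base URL and model name follow DeepSeek's public API documentation and may change; the API key and prompt are placeholders):
```python
# Illustrative call to the OpenAI-compatible DeepSeek API.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.deepseek.com")
resp = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed model name for DeepSeek-R1
    messages=[{"role": "user", "content": "What is 1 + 1?"}],
)
print(resp.choices[0].message.content)
```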
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
**NOTE: We recommend setting an appropriate temperature (between 0.5 and 0.7) when running these models, otherwise you may encounter issues with endless repetition or incoherent output.**
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
Sakalti/SJT-4.5B | Sakalti | 2025-01-20T23:47:23Z | 152 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-12-23T08:33:51Z | ---
base_model: unsloth/qwen2.5-7b-instruct-bnb-4bit
inference: true
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
- ja
widget:
- messages:
- role: user
content: こんにちは!
- messages:
- role: user
content: 魚を捌くのは難しいですか?
- messages:
- role: user
content: 日本の首都はどこですか?
- messages:
- role: user
content: hello!
- messages:
- role: user
content: こんにちは!
- messages:
- role: user
content: whats is the capital of japan?
- messages:
- role: user
content: Who are you?
- messages:
- role: user
content: 你好
---
# Uploaded model
- **Developed by:** Sakalti
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JacksonBrune/bf94c46f-78a2-4147-b4e1-1ca46e6cb0ef | JacksonBrune | 2025-01-20T23:45:12Z | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-medium-4k-instruct",
"base_model:adapter:unsloth/Phi-3-medium-4k-instruct",
"license:mit",
"region:us"
] | null | 2025-01-20T22:29:38Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-medium-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bf94c46f-78a2-4147-b4e1-1ca46e6cb0ef
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3-medium-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 41d403c8b37c92fc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/41d403c8b37c92fc_train_data.json
type:
field_input: mesh_terms
field_instruction: title
field_output: abstract
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/bf94c46f-78a2-4147-b4e1-1ca46e6cb0ef
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/41d403c8b37c92fc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 075ea541-bd04-429e-a989-c49dabc36fc3
wandb_project: birthdya-sn56-18-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 075ea541-bd04-429e-a989-c49dabc36fc3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bf94c46f-78a2-4147-b4e1-1ca46e6cb0ef
This model is a fine-tuned version of [unsloth/Phi-3-medium-4k-instruct](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 3 | nan |
| 0.0 | 0.0001 | 6 | nan |
| 0.0 | 0.0002 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
marialvsantiago/7b9a7390-1246-446a-87dd-03f5c20759c1 | marialvsantiago | 2025-01-20T23:45:06Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B",
"base_model:adapter:unsloth/SmolLM-1.7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-20T23:15:21Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7b9a7390-1246-446a-87dd-03f5c20759c1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b50bb242ea24ad3f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b50bb242ea24ad3f_train_data.json
type:
field_input: my_solu
field_instruction: prompt
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: marialvsantiago/7b9a7390-1246-446a-87dd-03f5c20759c1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/b50bb242ea24ad3f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8caf838c-eaff-45bc-b751-9573db70c518
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8caf838c-eaff-45bc-b751-9573db70c518
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 7b9a7390-1246-446a-87dd-03f5c20759c1
This model is a fine-tuned version of [unsloth/SmolLM-1.7B](https://huggingface.co/unsloth/SmolLM-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0005 | 5 | nan |
| 0.0 | 0.0010 | 10 | nan |
| 0.0 | 0.0015 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mmnga/DeepSeek-R1-Distill-Qwen-32B-gguf | mmnga | 2025-01-20T23:44:54Z | 3,380 | 3 | null | [
"gguf",
"qwen2",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-20T16:54:59Z | ---
license: mit
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
tags:
- qwen2
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
---
# DeepSeek-R1-Distill-Qwen-32B-gguf
This is a gguf-format conversion of [DeepSeek-R1-Distill-Qwen-32B, published by deepseek-ai](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B).
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
build/bin/llama-cli -m 'DeepSeek-R1-Distill-Qwen-32B-gguf' -n 128 -c 128 -p 'あなたはプロの料理人です。レシピを教えて' -cnv
``` |
Tejasvisudugureddy/distilled_dpo_chatbot | Tejasvisudugureddy | 2025-01-20T23:44:50Z | 213 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-20T23:44:34Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: distilled_dpo_chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled_dpo_chatbot
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
LHRuig/proultra | LHRuig | 2025-01-20T23:44:49Z | 9 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-20T23:44:29Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# proultra
<Gallery />
## Model description
proultra lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/proultra/tree/main) them in the Files & versions tab.
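If you prefer loading the LoRA programmatically, here is a minimal, unofficial sketch using 🧨 diffusers. It assumes access to the gated `black-forest-labs/FLUX.1-dev` base model listed above; the prompt is only an example, and `weight_name` may need to be set explicitly depending on the file name stored in this repo.
```py
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model (gated on the Hub; accept its license first).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach this LoRA adapter; pass weight_name="..." if the file is not the default name.
pipe.load_lora_weights("LHRuig/proultra")

# `man` is the trigger word for this LoRA.
image = pipe("man in a tailored suit, studio portrait").images[0]
image.save("proultra_sample.png")
```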
|
LHRuig/reallora | LHRuig | 2025-01-20T23:41:59Z | 7 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-20T23:41:55Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# reallora
<Gallery />
## Model description
reallora lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/reallora/tree/main) them in the Files & versions tab.
|
LHRuig/db0real | LHRuig | 2025-01-20T23:41:19Z | 16 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-20T23:41:17Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# db0real
<Gallery />
## Model description
db0real lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/db0real/tree/main) them in the Files & versions tab.
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k19_task5_organization | MayBashendy | 2025-01-20T23:41:10Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-20T23:22:32Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k19_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k19_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3225
- Qwk: 0.0
- Mse: 1.3225
- Rmse: 1.1500
## Model description
More information needed
## Intended uses & limitations
More information needed
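Until the sections above are filled in, the checkpoint can be loaded as a standard 🤗 Transformers sequence-classification model; the sketch below is only an assumption-laden example, and the mapping from raw outputs to organization scores should be confirmed against the training setup (the reported QWK/MSE/RMSE suggest a regression-style target).
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k19_task5_organization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "..."  # placeholder: an Arabic essay to be scored for organization
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# Print the raw outputs rather than assuming a particular label/score mapping.
print(outputs.logits)
```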
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0435 | 2 | 3.9365 | -0.0232 | 3.9365 | 1.9841 |
| No log | 0.0870 | 4 | 2.7213 | -0.0305 | 2.7213 | 1.6496 |
| No log | 0.1304 | 6 | 2.3416 | -0.0372 | 2.3416 | 1.5302 |
| No log | 0.1739 | 8 | 1.4991 | 0.0854 | 1.4991 | 1.2244 |
| No log | 0.2174 | 10 | 1.1113 | 0.2268 | 1.1113 | 1.0542 |
| No log | 0.2609 | 12 | 1.1950 | 0.0731 | 1.1950 | 1.0932 |
| No log | 0.3043 | 14 | 1.1082 | 0.2094 | 1.1082 | 1.0527 |
| No log | 0.3478 | 16 | 1.2264 | 0.0884 | 1.2264 | 1.1074 |
| No log | 0.3913 | 18 | 1.6641 | -0.0661 | 1.6641 | 1.2900 |
| No log | 0.4348 | 20 | 1.5899 | -0.1122 | 1.5899 | 1.2609 |
| No log | 0.4783 | 22 | 1.3669 | 0.0519 | 1.3669 | 1.1692 |
| No log | 0.5217 | 24 | 1.3565 | -0.0417 | 1.3565 | 1.1647 |
| No log | 0.5652 | 26 | 1.3698 | -0.0032 | 1.3698 | 1.1704 |
| No log | 0.6087 | 28 | 1.3109 | -0.0497 | 1.3109 | 1.1450 |
| No log | 0.6522 | 30 | 1.0949 | 0.1589 | 1.0949 | 1.0464 |
| No log | 0.6957 | 32 | 1.0503 | 0.2114 | 1.0503 | 1.0248 |
| No log | 0.7391 | 34 | 1.1274 | 0.1160 | 1.1274 | 1.0618 |
| No log | 0.7826 | 36 | 1.4414 | -0.0881 | 1.4414 | 1.2006 |
| No log | 0.8261 | 38 | 1.6338 | -0.1445 | 1.6338 | 1.2782 |
| No log | 0.8696 | 40 | 1.2646 | 0.0622 | 1.2646 | 1.1245 |
| No log | 0.9130 | 42 | 1.0736 | 0.1864 | 1.0736 | 1.0361 |
| No log | 0.9565 | 44 | 1.0625 | 0.2911 | 1.0625 | 1.0308 |
| No log | 1.0 | 46 | 1.1263 | 0.2647 | 1.1263 | 1.0613 |
| No log | 1.0435 | 48 | 1.3547 | -0.0249 | 1.3547 | 1.1639 |
| No log | 1.0870 | 50 | 1.6222 | -0.1135 | 1.6222 | 1.2737 |
| No log | 1.1304 | 52 | 1.4905 | -0.0623 | 1.4905 | 1.2209 |
| No log | 1.1739 | 54 | 1.2550 | 0.0911 | 1.2550 | 1.1203 |
| No log | 1.2174 | 56 | 1.0641 | 0.1857 | 1.0641 | 1.0315 |
| No log | 1.2609 | 58 | 1.0561 | 0.2179 | 1.0561 | 1.0277 |
| No log | 1.3043 | 60 | 1.1071 | 0.1761 | 1.1071 | 1.0522 |
| No log | 1.3478 | 62 | 1.2148 | 0.2021 | 1.2148 | 1.1022 |
| No log | 1.3913 | 64 | 1.3043 | 0.1489 | 1.3043 | 1.1421 |
| No log | 1.4348 | 66 | 1.2530 | 0.1609 | 1.2530 | 1.1194 |
| No log | 1.4783 | 68 | 1.1703 | 0.2312 | 1.1703 | 1.0818 |
| No log | 1.5217 | 70 | 1.1921 | 0.1801 | 1.1921 | 1.0918 |
| No log | 1.5652 | 72 | 1.2234 | 0.1370 | 1.2234 | 1.1061 |
| No log | 1.6087 | 74 | 1.4115 | 0.0939 | 1.4115 | 1.1881 |
| No log | 1.6522 | 76 | 1.5486 | 0.1111 | 1.5486 | 1.2444 |
| No log | 1.6957 | 78 | 1.4303 | 0.0861 | 1.4303 | 1.1959 |
| No log | 1.7391 | 80 | 1.2314 | 0.1500 | 1.2314 | 1.1097 |
| No log | 1.7826 | 82 | 1.0416 | 0.2226 | 1.0416 | 1.0206 |
| No log | 1.8261 | 84 | 0.9985 | 0.2818 | 0.9985 | 0.9993 |
| No log | 1.8696 | 86 | 1.0136 | 0.2251 | 1.0136 | 1.0068 |
| No log | 1.9130 | 88 | 1.1889 | 0.0961 | 1.1889 | 1.0904 |
| No log | 1.9565 | 90 | 1.3523 | 0.0541 | 1.3523 | 1.1629 |
| No log | 2.0 | 92 | 1.3806 | 0.0541 | 1.3806 | 1.1750 |
| No log | 2.0435 | 94 | 1.2957 | 0.1170 | 1.2957 | 1.1383 |
| No log | 2.0870 | 96 | 1.2446 | 0.2577 | 1.2446 | 1.1156 |
| No log | 2.1304 | 98 | 1.2931 | 0.2934 | 1.2931 | 1.1371 |
| No log | 2.1739 | 100 | 1.3752 | 0.2647 | 1.3752 | 1.1727 |
| No log | 2.2174 | 102 | 1.5603 | 0.2239 | 1.5603 | 1.2491 |
| No log | 2.2609 | 104 | 1.5673 | 0.1943 | 1.5673 | 1.2519 |
| No log | 2.3043 | 106 | 1.3581 | 0.1880 | 1.3581 | 1.1654 |
| No log | 2.3478 | 108 | 1.2639 | 0.2203 | 1.2639 | 1.1242 |
| No log | 2.3913 | 110 | 1.3721 | 0.1744 | 1.3721 | 1.1713 |
| No log | 2.4348 | 112 | 1.3828 | 0.1744 | 1.3828 | 1.1759 |
| No log | 2.4783 | 114 | 1.3706 | 0.0931 | 1.3706 | 1.1707 |
| No log | 2.5217 | 116 | 1.3150 | 0.0278 | 1.3150 | 1.1467 |
| No log | 2.5652 | 118 | 1.3244 | 0.0278 | 1.3244 | 1.1508 |
| No log | 2.6087 | 120 | 1.3610 | 0.0571 | 1.3610 | 1.1666 |
| No log | 2.6522 | 122 | 1.4379 | 0.1438 | 1.4379 | 1.1991 |
| No log | 2.6957 | 124 | 1.5829 | 0.2084 | 1.5829 | 1.2581 |
| No log | 2.7391 | 126 | 1.6933 | 0.1630 | 1.6933 | 1.3013 |
| No log | 2.7826 | 128 | 1.7564 | 0.1565 | 1.7564 | 1.3253 |
| No log | 2.8261 | 130 | 1.6753 | 0.1531 | 1.6753 | 1.2943 |
| No log | 2.8696 | 132 | 1.5731 | 0.2084 | 1.5731 | 1.2542 |
| No log | 2.9130 | 134 | 1.3438 | 0.1935 | 1.3438 | 1.1592 |
| No log | 2.9565 | 136 | 1.1707 | 0.3322 | 1.1707 | 1.0820 |
| No log | 3.0 | 138 | 1.1864 | 0.2986 | 1.1864 | 1.0892 |
| No log | 3.0435 | 140 | 1.3843 | 0.1727 | 1.3843 | 1.1765 |
| No log | 3.0870 | 142 | 1.3959 | 0.2026 | 1.3959 | 1.1815 |
| No log | 3.1304 | 144 | 1.2520 | 0.1750 | 1.2520 | 1.1189 |
| No log | 3.1739 | 146 | 1.0363 | 0.2836 | 1.0363 | 1.0180 |
| No log | 3.2174 | 148 | 1.0039 | 0.2814 | 1.0039 | 1.0020 |
| No log | 3.2609 | 150 | 1.0200 | 0.2747 | 1.0200 | 1.0099 |
| No log | 3.3043 | 152 | 1.1611 | 0.1751 | 1.1611 | 1.0775 |
| No log | 3.3478 | 154 | 1.2712 | 0.2241 | 1.2712 | 1.1275 |
| No log | 3.3913 | 156 | 1.3275 | 0.2026 | 1.3275 | 1.1522 |
| No log | 3.4348 | 158 | 1.3712 | 0.2015 | 1.3712 | 1.1710 |
| No log | 3.4783 | 160 | 1.4292 | 0.2126 | 1.4292 | 1.1955 |
| No log | 3.5217 | 162 | 1.3521 | 0.1744 | 1.3521 | 1.1628 |
| No log | 3.5652 | 164 | 1.2403 | 0.1052 | 1.2403 | 1.1137 |
| No log | 3.6087 | 166 | 1.2213 | 0.1052 | 1.2213 | 1.1051 |
| No log | 3.6522 | 168 | 1.2553 | 0.1407 | 1.2553 | 1.1204 |
| No log | 3.6957 | 170 | 1.3345 | 0.1744 | 1.3345 | 1.1552 |
| No log | 3.7391 | 172 | 1.3148 | 0.1407 | 1.3148 | 1.1467 |
| No log | 3.7826 | 174 | 1.2434 | 0.1407 | 1.2434 | 1.1151 |
| No log | 3.8261 | 176 | 1.1953 | 0.1744 | 1.1953 | 1.0933 |
| No log | 3.8696 | 178 | 1.1782 | 0.1838 | 1.1782 | 1.0855 |
| No log | 3.9130 | 180 | 1.2119 | 0.2372 | 1.2119 | 1.1009 |
| No log | 3.9565 | 182 | 1.2574 | 0.1952 | 1.2574 | 1.1213 |
| No log | 4.0 | 184 | 1.2950 | 0.1288 | 1.2950 | 1.1380 |
| No log | 4.0435 | 186 | 1.2514 | 0.1952 | 1.2514 | 1.1187 |
| No log | 4.0870 | 188 | 1.0793 | 0.1628 | 1.0793 | 1.0389 |
| No log | 4.1304 | 190 | 0.9864 | 0.0990 | 0.9864 | 0.9932 |
| No log | 4.1739 | 192 | 1.0281 | 0.1654 | 1.0281 | 1.0139 |
| No log | 4.2174 | 194 | 1.2053 | 0.1943 | 1.2053 | 1.0979 |
| No log | 4.2609 | 196 | 1.4822 | 0.2733 | 1.4822 | 1.2174 |
| No log | 4.3043 | 198 | 1.6190 | 0.2733 | 1.6190 | 1.2724 |
| No log | 4.3478 | 200 | 1.5802 | 0.2568 | 1.5802 | 1.2571 |
| No log | 4.3913 | 202 | 1.4305 | 0.2089 | 1.4305 | 1.1961 |
| No log | 4.4348 | 204 | 1.2446 | 0.1654 | 1.2446 | 1.1156 |
| No log | 4.4783 | 206 | 1.1579 | 0.1226 | 1.1579 | 1.0761 |
| No log | 4.5217 | 208 | 1.2399 | -0.0929 | 1.2399 | 1.1135 |
| No log | 4.5652 | 210 | 1.3674 | -0.0263 | 1.3674 | 1.1693 |
| No log | 4.6087 | 212 | 1.4571 | 0.0935 | 1.4571 | 1.2071 |
| No log | 4.6522 | 214 | 1.6547 | 0.1565 | 1.6547 | 1.2864 |
| No log | 4.6957 | 216 | 1.7420 | 0.1476 | 1.7420 | 1.3198 |
| No log | 4.7391 | 218 | 1.6654 | 0.1423 | 1.6654 | 1.2905 |
| No log | 4.7826 | 220 | 1.4261 | 0.2062 | 1.4261 | 1.1942 |
| No log | 4.8261 | 222 | 1.1989 | 0.2038 | 1.1989 | 1.0950 |
| No log | 4.8696 | 224 | 1.1364 | 0.1976 | 1.1364 | 1.0660 |
| No log | 4.9130 | 226 | 1.1866 | 0.2126 | 1.1866 | 1.0893 |
| No log | 4.9565 | 228 | 1.3037 | 0.2126 | 1.3037 | 1.1418 |
| No log | 5.0 | 230 | 1.4564 | 0.2342 | 1.4564 | 1.2068 |
| No log | 5.0435 | 232 | 1.5256 | 0.2694 | 1.5256 | 1.2352 |
| No log | 5.0870 | 234 | 1.5762 | 0.2940 | 1.5762 | 1.2555 |
| No log | 5.1304 | 236 | 1.5906 | 0.2940 | 1.5906 | 1.2612 |
| No log | 5.1739 | 238 | 1.6182 | 0.2974 | 1.6182 | 1.2721 |
| No log | 5.2174 | 240 | 1.6432 | 0.2974 | 1.6432 | 1.2819 |
| No log | 5.2609 | 242 | 1.5135 | 0.2694 | 1.5135 | 1.2302 |
| No log | 5.3043 | 244 | 1.3818 | 0.2170 | 1.3818 | 1.1755 |
| No log | 5.3478 | 246 | 1.2951 | 0.2123 | 1.2951 | 1.1380 |
| No log | 5.3913 | 248 | 1.2601 | 0.1850 | 1.2601 | 1.1225 |
| No log | 5.4348 | 250 | 1.2575 | 0.2170 | 1.2574 | 1.1214 |
| No log | 5.4783 | 252 | 1.2783 | 0.2117 | 1.2783 | 1.1306 |
| No log | 5.5217 | 254 | 1.3535 | 0.2117 | 1.3535 | 1.1634 |
| No log | 5.5652 | 256 | 1.4060 | 0.2391 | 1.4060 | 1.1857 |
| No log | 5.6087 | 258 | 1.3457 | 0.2117 | 1.3457 | 1.1600 |
| No log | 5.6522 | 260 | 1.2294 | 0.2170 | 1.2294 | 1.1088 |
| No log | 5.6957 | 262 | 1.2760 | 0.2170 | 1.2760 | 1.1296 |
| No log | 5.7391 | 264 | 1.4787 | 0.2653 | 1.4787 | 1.2160 |
| No log | 5.7826 | 266 | 1.5640 | 0.2391 | 1.5640 | 1.2506 |
| No log | 5.8261 | 268 | 1.5706 | 0.2062 | 1.5706 | 1.2532 |
| No log | 5.8696 | 270 | 1.4463 | 0.2239 | 1.4463 | 1.2026 |
| No log | 5.9130 | 272 | 1.2451 | 0.1407 | 1.2451 | 1.1158 |
| No log | 5.9565 | 274 | 1.1629 | 0.0401 | 1.1629 | 1.0784 |
| No log | 6.0 | 276 | 1.1448 | 0.0401 | 1.1448 | 1.0699 |
| No log | 6.0435 | 278 | 1.1363 | 0.0 | 1.1363 | 1.0660 |
| No log | 6.0870 | 280 | 1.2335 | 0.0781 | 1.2335 | 1.1106 |
| No log | 6.1304 | 282 | 1.4538 | 0.1814 | 1.4538 | 1.2057 |
| No log | 6.1739 | 284 | 1.6059 | 0.2694 | 1.6059 | 1.2673 |
| No log | 6.2174 | 286 | 1.5940 | 0.2315 | 1.5940 | 1.2625 |
| No log | 6.2609 | 288 | 1.5500 | 0.2270 | 1.5500 | 1.2450 |
| No log | 6.3043 | 290 | 1.4412 | 0.1850 | 1.4412 | 1.2005 |
| No log | 6.3478 | 292 | 1.3873 | 0.1769 | 1.3873 | 1.1778 |
| No log | 6.3913 | 294 | 1.2601 | 0.0 | 1.2601 | 1.1226 |
| No log | 6.4348 | 296 | 1.2080 | 0.0 | 1.2080 | 1.0991 |
| No log | 6.4783 | 298 | 1.2249 | 0.0 | 1.2249 | 1.1067 |
| No log | 6.5217 | 300 | 1.2305 | 0.0 | 1.2305 | 1.1093 |
| No log | 6.5652 | 302 | 1.2659 | 0.0033 | 1.2659 | 1.1251 |
| No log | 6.6087 | 304 | 1.3705 | 0.1255 | 1.3705 | 1.1707 |
| No log | 6.6522 | 306 | 1.5111 | 0.1370 | 1.5111 | 1.2293 |
| No log | 6.6957 | 308 | 1.6192 | 0.2342 | 1.6192 | 1.2725 |
| No log | 6.7391 | 310 | 1.5472 | 0.2424 | 1.5472 | 1.2439 |
| No log | 6.7826 | 312 | 1.4260 | 0.1113 | 1.4260 | 1.1942 |
| No log | 6.8261 | 314 | 1.3601 | 0.0 | 1.3601 | 1.1662 |
| No log | 6.8696 | 316 | 1.4354 | 0.0 | 1.4354 | 1.1981 |
| No log | 6.9130 | 318 | 1.4593 | 0.0 | 1.4593 | 1.2080 |
| No log | 6.9565 | 320 | 1.5025 | 0.0661 | 1.5025 | 1.2258 |
| No log | 7.0 | 322 | 1.5379 | 0.1052 | 1.5379 | 1.2401 |
| No log | 7.0435 | 324 | 1.5436 | 0.1744 | 1.5436 | 1.2424 |
| No log | 7.0870 | 326 | 1.5589 | 0.0661 | 1.5589 | 1.2486 |
| No log | 7.1304 | 328 | 1.5395 | 0.0883 | 1.5395 | 1.2408 |
| No log | 7.1739 | 330 | 1.5589 | 0.1734 | 1.5589 | 1.2485 |
| No log | 7.2174 | 332 | 1.7114 | 0.1808 | 1.7114 | 1.3082 |
| No log | 7.2609 | 334 | 1.8738 | 0.1002 | 1.8738 | 1.3689 |
| No log | 7.3043 | 336 | 1.9387 | 0.1002 | 1.9387 | 1.3924 |
| No log | 7.3478 | 338 | 1.8955 | 0.1807 | 1.8955 | 1.3768 |
| No log | 7.3913 | 340 | 1.6579 | 0.2117 | 1.6579 | 1.2876 |
| No log | 7.4348 | 342 | 1.4418 | 0.1288 | 1.4418 | 1.2008 |
| No log | 7.4783 | 344 | 1.3060 | 0.0931 | 1.3060 | 1.1428 |
| No log | 7.5217 | 346 | 1.3268 | 0.0541 | 1.3268 | 1.1519 |
| No log | 7.5652 | 348 | 1.3956 | 0.1744 | 1.3956 | 1.1813 |
| No log | 7.6087 | 350 | 1.5431 | 0.2474 | 1.5431 | 1.2422 |
| No log | 7.6522 | 352 | 1.6069 | 0.2611 | 1.6069 | 1.2676 |
| No log | 7.6957 | 354 | 1.6304 | 0.2174 | 1.6304 | 1.2769 |
| No log | 7.7391 | 356 | 1.5382 | 0.2123 | 1.5382 | 1.2402 |
| No log | 7.7826 | 358 | 1.3799 | 0.1727 | 1.3799 | 1.1747 |
| No log | 7.8261 | 360 | 1.2919 | 0.1316 | 1.2919 | 1.1366 |
| No log | 7.8696 | 362 | 1.3619 | 0.1024 | 1.3619 | 1.1670 |
| No log | 7.9130 | 364 | 1.4500 | 0.2239 | 1.4500 | 1.2041 |
| No log | 7.9565 | 366 | 1.4819 | 0.2342 | 1.4819 | 1.2173 |
| No log | 8.0 | 368 | 1.3776 | 0.1197 | 1.3776 | 1.1737 |
| No log | 8.0435 | 370 | 1.1953 | 0.1019 | 1.1953 | 1.0933 |
| No log | 8.0870 | 372 | 1.1063 | 0.0406 | 1.1063 | 1.0518 |
| No log | 8.1304 | 374 | 1.1168 | 0.0406 | 1.1168 | 1.0568 |
| No log | 8.1739 | 376 | 1.2230 | 0.1705 | 1.2230 | 1.1059 |
| No log | 8.2174 | 378 | 1.4200 | 0.1370 | 1.4200 | 1.1916 |
| No log | 8.2609 | 380 | 1.4560 | 0.0931 | 1.4560 | 1.2066 |
| No log | 8.3043 | 382 | 1.4686 | 0.0931 | 1.4686 | 1.2119 |
| No log | 8.3478 | 384 | 1.4976 | 0.1744 | 1.4976 | 1.2237 |
| No log | 8.3913 | 386 | 1.4849 | 0.1744 | 1.4849 | 1.2186 |
| No log | 8.4348 | 388 | 1.5288 | 0.2665 | 1.5288 | 1.2365 |
| No log | 8.4783 | 390 | 1.5550 | 0.2752 | 1.5550 | 1.2470 |
| No log | 8.5217 | 392 | 1.4539 | 0.2372 | 1.4539 | 1.2058 |
| No log | 8.5652 | 394 | 1.3677 | 0.2065 | 1.3677 | 1.1695 |
| No log | 8.6087 | 396 | 1.3685 | 0.1407 | 1.3685 | 1.1698 |
| No log | 8.6522 | 398 | 1.4109 | 0.1744 | 1.4109 | 1.1878 |
| No log | 8.6957 | 400 | 1.5116 | 0.2424 | 1.5116 | 1.2295 |
| No log | 8.7391 | 402 | 1.5118 | 0.2474 | 1.5118 | 1.2295 |
| No log | 8.7826 | 404 | 1.5575 | 0.2522 | 1.5575 | 1.2480 |
| No log | 8.8261 | 406 | 1.5527 | 0.2126 | 1.5527 | 1.2461 |
| No log | 8.8696 | 408 | 1.4464 | 0.1407 | 1.4464 | 1.2027 |
| No log | 8.9130 | 410 | 1.4063 | 0.0 | 1.4063 | 1.1859 |
| No log | 8.9565 | 412 | 1.3768 | 0.0 | 1.3768 | 1.1734 |
| No log | 9.0 | 414 | 1.3550 | -0.0411 | 1.3550 | 1.1641 |
| No log | 9.0435 | 416 | 1.3643 | 0.0 | 1.3643 | 1.1680 |
| No log | 9.0870 | 418 | 1.4810 | 0.0401 | 1.4810 | 1.2170 |
| No log | 9.1304 | 420 | 1.6359 | 0.2126 | 1.6359 | 1.2790 |
| No log | 9.1739 | 422 | 1.7412 | 0.1892 | 1.7412 | 1.3196 |
| No log | 9.2174 | 424 | 1.8050 | 0.1487 | 1.8050 | 1.3435 |
| No log | 9.2609 | 426 | 1.7355 | 0.1955 | 1.7355 | 1.3174 |
| No log | 9.3043 | 428 | 1.5760 | 0.2653 | 1.5760 | 1.2554 |
| No log | 9.3478 | 430 | 1.4390 | 0.2015 | 1.4390 | 1.1996 |
| No log | 9.3913 | 432 | 1.3894 | 0.1700 | 1.3894 | 1.1787 |
| No log | 9.4348 | 434 | 1.3953 | 0.1744 | 1.3953 | 1.1812 |
| No log | 9.4783 | 436 | 1.4498 | 0.1744 | 1.4498 | 1.2041 |
| No log | 9.5217 | 438 | 1.4705 | 0.1744 | 1.4705 | 1.2126 |
| No log | 9.5652 | 440 | 1.3951 | 0.1744 | 1.3951 | 1.1811 |
| No log | 9.6087 | 442 | 1.2963 | 0.1052 | 1.2963 | 1.1385 |
| No log | 9.6522 | 444 | 1.2880 | 0.1052 | 1.2880 | 1.1349 |
| No log | 9.6957 | 446 | 1.3230 | 0.1407 | 1.3230 | 1.1502 |
| No log | 9.7391 | 448 | 1.3931 | 0.1407 | 1.3931 | 1.1803 |
| No log | 9.7826 | 450 | 1.4239 | 0.1052 | 1.4239 | 1.1933 |
| No log | 9.8261 | 452 | 1.4655 | 0.1744 | 1.4655 | 1.2106 |
| No log | 9.8696 | 454 | 1.4982 | 0.2372 | 1.4982 | 1.2240 |
| No log | 9.9130 | 456 | 1.5002 | 0.2372 | 1.5002 | 1.2248 |
| No log | 9.9565 | 458 | 1.4132 | 0.2372 | 1.4132 | 1.1888 |
| No log | 10.0 | 460 | 1.2400 | 0.0 | 1.2400 | 1.1135 |
| No log | 10.0435 | 462 | 1.1297 | 0.0155 | 1.1297 | 1.0629 |
| No log | 10.0870 | 464 | 1.1569 | 0.0155 | 1.1569 | 1.0756 |
| No log | 10.1304 | 466 | 1.2550 | 0.0 | 1.2550 | 1.1203 |
| No log | 10.1739 | 468 | 1.2879 | 0.1113 | 1.2879 | 1.1348 |
| No log | 10.2174 | 470 | 1.3146 | 0.0390 | 1.3146 | 1.1466 |
| No log | 10.2609 | 472 | 1.3950 | 0.0781 | 1.3950 | 1.1811 |
| No log | 10.3043 | 474 | 1.4311 | 0.0781 | 1.4311 | 1.1963 |
| No log | 10.3478 | 476 | 1.4025 | 0.0781 | 1.4025 | 1.1843 |
| No log | 10.3913 | 478 | 1.3441 | 0.0 | 1.3441 | 1.1593 |
| No log | 10.4348 | 480 | 1.3662 | 0.0 | 1.3662 | 1.1689 |
| No log | 10.4783 | 482 | 1.4121 | 0.1370 | 1.4121 | 1.1883 |
| No log | 10.5217 | 484 | 1.5203 | 0.2015 | 1.5203 | 1.2330 |
| No log | 10.5652 | 486 | 1.7276 | 0.1963 | 1.7276 | 1.3144 |
| No log | 10.6087 | 488 | 1.8236 | 0.2056 | 1.8236 | 1.3504 |
| No log | 10.6522 | 490 | 1.7907 | 0.2317 | 1.7907 | 1.3382 |
| No log | 10.6957 | 492 | 1.6974 | 0.2391 | 1.6974 | 1.3028 |
| No log | 10.7391 | 494 | 1.5156 | 0.1744 | 1.5156 | 1.2311 |
| No log | 10.7826 | 496 | 1.4285 | 0.0390 | 1.4285 | 1.1952 |
| No log | 10.8261 | 498 | 1.3874 | 0.0390 | 1.3874 | 1.1779 |
| 0.2613 | 10.8696 | 500 | 1.3506 | 0.0 | 1.3506 | 1.1622 |
| 0.2613 | 10.9130 | 502 | 1.3637 | 0.0661 | 1.3637 | 1.1678 |
| 0.2613 | 10.9565 | 504 | 1.4279 | 0.1370 | 1.4279 | 1.1950 |
| 0.2613 | 11.0 | 506 | 1.5500 | 0.2315 | 1.5500 | 1.2450 |
| 0.2613 | 11.0435 | 508 | 1.6369 | 0.2465 | 1.6369 | 1.2794 |
| 0.2613 | 11.0870 | 510 | 1.7018 | 0.2832 | 1.7018 | 1.3045 |
| 0.2613 | 11.1304 | 512 | 1.6085 | 0.2665 | 1.6085 | 1.2683 |
| 0.2613 | 11.1739 | 514 | 1.4327 | 0.1744 | 1.4327 | 1.1969 |
| 0.2613 | 11.2174 | 516 | 1.3146 | 0.0401 | 1.3146 | 1.1466 |
| 0.2613 | 11.2609 | 518 | 1.2902 | 0.0401 | 1.2902 | 1.1359 |
| 0.2613 | 11.3043 | 520 | 1.3147 | 0.0401 | 1.3147 | 1.1466 |
| 0.2613 | 11.3478 | 522 | 1.3176 | 0.1407 | 1.3176 | 1.1479 |
| 0.2613 | 11.3913 | 524 | 1.3261 | 0.1744 | 1.3261 | 1.1516 |
| 0.2613 | 11.4348 | 526 | 1.2363 | 0.2206 | 1.2363 | 1.1119 |
| 0.2613 | 11.4783 | 528 | 1.2522 | 0.2313 | 1.2522 | 1.1190 |
| 0.2613 | 11.5217 | 530 | 1.4130 | 0.2431 | 1.4130 | 1.1887 |
| 0.2613 | 11.5652 | 532 | 1.7543 | 0.1911 | 1.7543 | 1.3245 |
| 0.2613 | 11.6087 | 534 | 1.8516 | 0.2406 | 1.8516 | 1.3607 |
| 0.2613 | 11.6522 | 536 | 1.7143 | 0.1892 | 1.7143 | 1.3093 |
| 0.2613 | 11.6957 | 538 | 1.5454 | 0.1288 | 1.5454 | 1.2431 |
| 0.2613 | 11.7391 | 540 | 1.4371 | 0.0781 | 1.4371 | 1.1988 |
| 0.2613 | 11.7826 | 542 | 1.3778 | 0.1370 | 1.3778 | 1.1738 |
| 0.2613 | 11.8261 | 544 | 1.3337 | 0.1370 | 1.3337 | 1.1549 |
| 0.2613 | 11.8696 | 546 | 1.3540 | 0.1700 | 1.3540 | 1.1636 |
| 0.2613 | 11.9130 | 548 | 1.4217 | 0.2126 | 1.4217 | 1.1924 |
| 0.2613 | 11.9565 | 550 | 1.3881 | 0.2065 | 1.3881 | 1.1782 |
| 0.2613 | 12.0 | 552 | 1.2795 | 0.0781 | 1.2795 | 1.1312 |
| 0.2613 | 12.0435 | 554 | 1.1736 | 0.0155 | 1.1736 | 1.0834 |
| 0.2613 | 12.0870 | 556 | 1.1141 | 0.0587 | 1.1141 | 1.0555 |
| 0.2613 | 12.1304 | 558 | 1.0583 | 0.0618 | 1.0583 | 1.0288 |
| 0.2613 | 12.1739 | 560 | 1.0693 | 0.1407 | 1.0693 | 1.0341 |
| 0.2613 | 12.2174 | 562 | 1.1314 | 0.0961 | 1.1314 | 1.0637 |
| 0.2613 | 12.2609 | 564 | 1.2606 | 0.1769 | 1.2606 | 1.1228 |
| 0.2613 | 12.3043 | 566 | 1.3739 | 0.2126 | 1.3739 | 1.1721 |
| 0.2613 | 12.3478 | 568 | 1.4466 | 0.2126 | 1.4466 | 1.2028 |
| 0.2613 | 12.3913 | 570 | 1.4543 | 0.2126 | 1.4543 | 1.2060 |
| 0.2613 | 12.4348 | 572 | 1.5068 | 0.2424 | 1.5068 | 1.2275 |
| 0.2613 | 12.4783 | 574 | 1.6092 | 0.2832 | 1.6092 | 1.2685 |
| 0.2613 | 12.5217 | 576 | 1.7369 | 0.2940 | 1.7369 | 1.3179 |
| 0.2613 | 12.5652 | 578 | 1.7533 | 0.2940 | 1.7533 | 1.3241 |
| 0.2613 | 12.6087 | 580 | 1.6560 | 0.2474 | 1.6560 | 1.2868 |
| 0.2613 | 12.6522 | 582 | 1.5699 | 0.1228 | 1.5699 | 1.2530 |
| 0.2613 | 12.6957 | 584 | 1.5091 | 0.0781 | 1.5091 | 1.2285 |
| 0.2613 | 12.7391 | 586 | 1.4348 | 0.0401 | 1.4348 | 1.1978 |
| 0.2613 | 12.7826 | 588 | 1.3822 | 0.0401 | 1.3822 | 1.1757 |
| 0.2613 | 12.8261 | 590 | 1.3843 | 0.0401 | 1.3843 | 1.1766 |
| 0.2613 | 12.8696 | 592 | 1.4302 | 0.1407 | 1.4302 | 1.1959 |
| 0.2613 | 12.9130 | 594 | 1.4860 | 0.2424 | 1.4860 | 1.2190 |
| 0.2613 | 12.9565 | 596 | 1.5159 | 0.2474 | 1.5159 | 1.2312 |
| 0.2613 | 13.0 | 598 | 1.5296 | 0.2793 | 1.5296 | 1.2368 |
| 0.2613 | 13.0435 | 600 | 1.6018 | 0.2832 | 1.6018 | 1.2656 |
| 0.2613 | 13.0870 | 602 | 1.5336 | 0.2832 | 1.5336 | 1.2384 |
| 0.2613 | 13.1304 | 604 | 1.4550 | 0.2474 | 1.4550 | 1.2062 |
| 0.2613 | 13.1739 | 606 | 1.4058 | 0.2424 | 1.4058 | 1.1856 |
| 0.2613 | 13.2174 | 608 | 1.3353 | 0.2075 | 1.3353 | 1.1556 |
| 0.2613 | 13.2609 | 610 | 1.3624 | 0.2075 | 1.3624 | 1.1672 |
| 0.2613 | 13.3043 | 612 | 1.3326 | 0.1769 | 1.3326 | 1.1544 |
| 0.2613 | 13.3478 | 614 | 1.3241 | 0.1769 | 1.3241 | 1.1507 |
| 0.2613 | 13.3913 | 616 | 1.3316 | 0.1769 | 1.3316 | 1.1539 |
| 0.2613 | 13.4348 | 618 | 1.3209 | 0.1769 | 1.3209 | 1.1493 |
| 0.2613 | 13.4783 | 620 | 1.3151 | 0.1769 | 1.3151 | 1.1468 |
| 0.2613 | 13.5217 | 622 | 1.3106 | 0.1769 | 1.3106 | 1.1448 |
| 0.2613 | 13.5652 | 624 | 1.3274 | 0.2424 | 1.3274 | 1.1521 |
| 0.2613 | 13.6087 | 626 | 1.4479 | 0.2752 | 1.4479 | 1.2033 |
| 0.2613 | 13.6522 | 628 | 1.5377 | 0.2568 | 1.5377 | 1.2400 |
| 0.2613 | 13.6957 | 630 | 1.5216 | 0.2568 | 1.5216 | 1.2336 |
| 0.2613 | 13.7391 | 632 | 1.5517 | 0.2568 | 1.5517 | 1.2457 |
| 0.2613 | 13.7826 | 634 | 1.6063 | 0.2568 | 1.6063 | 1.2674 |
| 0.2613 | 13.8261 | 636 | 1.5487 | 0.2752 | 1.5487 | 1.2445 |
| 0.2613 | 13.8696 | 638 | 1.4303 | 0.2065 | 1.4303 | 1.1960 |
| 0.2613 | 13.9130 | 640 | 1.3684 | 0.2065 | 1.3684 | 1.1698 |
| 0.2613 | 13.9565 | 642 | 1.3211 | 0.0401 | 1.3211 | 1.1494 |
| 0.2613 | 14.0 | 644 | 1.3409 | 0.0401 | 1.3409 | 1.1580 |
| 0.2613 | 14.0435 | 646 | 1.4071 | 0.0781 | 1.4071 | 1.1862 |
| 0.2613 | 14.0870 | 648 | 1.4992 | 0.2065 | 1.4992 | 1.2244 |
| 0.2613 | 14.1304 | 650 | 1.6000 | 0.2709 | 1.6000 | 1.2649 |
| 0.2613 | 14.1739 | 652 | 1.6450 | 0.2752 | 1.6450 | 1.2826 |
| 0.2613 | 14.2174 | 654 | 1.6421 | 0.2752 | 1.6421 | 1.2814 |
| 0.2613 | 14.2609 | 656 | 1.6102 | 0.2417 | 1.6102 | 1.2689 |
| 0.2613 | 14.3043 | 658 | 1.5413 | 0.2372 | 1.5413 | 1.2415 |
| 0.2613 | 14.3478 | 660 | 1.5115 | 0.2372 | 1.5115 | 1.2294 |
| 0.2613 | 14.3913 | 662 | 1.5279 | 0.2372 | 1.5279 | 1.2361 |
| 0.2613 | 14.4348 | 664 | 1.5128 | 0.2372 | 1.5128 | 1.2300 |
| 0.2613 | 14.4783 | 666 | 1.5337 | 0.2424 | 1.5337 | 1.2384 |
| 0.2613 | 14.5217 | 668 | 1.5367 | 0.2752 | 1.5367 | 1.2397 |
| 0.2613 | 14.5652 | 670 | 1.5846 | 0.2752 | 1.5846 | 1.2588 |
| 0.2613 | 14.6087 | 672 | 1.6122 | 0.2832 | 1.6122 | 1.2697 |
| 0.2613 | 14.6522 | 674 | 1.5983 | 0.2752 | 1.5983 | 1.2642 |
| 0.2613 | 14.6957 | 676 | 1.5849 | 0.2752 | 1.5849 | 1.2589 |
| 0.2613 | 14.7391 | 678 | 1.5418 | 0.2752 | 1.5418 | 1.2417 |
| 0.2613 | 14.7826 | 680 | 1.4476 | 0.1744 | 1.4476 | 1.2032 |
| 0.2613 | 14.8261 | 682 | 1.3866 | 0.1142 | 1.3866 | 1.1776 |
| 0.2613 | 14.8696 | 684 | 1.3755 | 0.0401 | 1.3755 | 1.1728 |
| 0.2613 | 14.9130 | 686 | 1.3934 | 0.0781 | 1.3934 | 1.1804 |
| 0.2613 | 14.9565 | 688 | 1.4456 | 0.1142 | 1.4456 | 1.2023 |
| 0.2613 | 15.0 | 690 | 1.5034 | 0.1769 | 1.5034 | 1.2261 |
| 0.2613 | 15.0435 | 692 | 1.5961 | 0.2752 | 1.5961 | 1.2634 |
| 0.2613 | 15.0870 | 694 | 1.7191 | 0.2832 | 1.7191 | 1.3111 |
| 0.2613 | 15.1304 | 696 | 1.8074 | 0.2406 | 1.8074 | 1.3444 |
| 0.2613 | 15.1739 | 698 | 1.7995 | 0.2770 | 1.7995 | 1.3414 |
| 0.2613 | 15.2174 | 700 | 1.6816 | 0.2752 | 1.6816 | 1.2968 |
| 0.2613 | 15.2609 | 702 | 1.5217 | 0.1142 | 1.5217 | 1.2336 |
| 0.2613 | 15.3043 | 704 | 1.3637 | 0.0401 | 1.3637 | 1.1678 |
| 0.2613 | 15.3478 | 706 | 1.2650 | 0.0 | 1.2650 | 1.1247 |
| 0.2613 | 15.3913 | 708 | 1.2534 | 0.0 | 1.2534 | 1.1196 |
| 0.2613 | 15.4348 | 710 | 1.2924 | 0.0 | 1.2924 | 1.1368 |
| 0.2613 | 15.4783 | 712 | 1.3811 | 0.0401 | 1.3811 | 1.1752 |
| 0.2613 | 15.5217 | 714 | 1.5260 | 0.0781 | 1.5260 | 1.2353 |
| 0.2613 | 15.5652 | 716 | 1.6227 | 0.2065 | 1.6227 | 1.2738 |
| 0.2613 | 15.6087 | 718 | 1.6985 | 0.2709 | 1.6985 | 1.3033 |
| 0.2613 | 15.6522 | 720 | 1.7279 | 0.2709 | 1.7279 | 1.3145 |
| 0.2613 | 15.6957 | 722 | 1.6787 | 0.0781 | 1.6787 | 1.2956 |
| 0.2613 | 15.7391 | 724 | 1.5888 | 0.0401 | 1.5888 | 1.2605 |
| 0.2613 | 15.7826 | 726 | 1.5083 | 0.0 | 1.5083 | 1.2281 |
| 0.2613 | 15.8261 | 728 | 1.4668 | 0.0 | 1.4668 | 1.2111 |
| 0.2613 | 15.8696 | 730 | 1.4395 | 0.0 | 1.4395 | 1.1998 |
| 0.2613 | 15.9130 | 732 | 1.4284 | 0.0401 | 1.4284 | 1.1952 |
| 0.2613 | 15.9565 | 734 | 1.4698 | 0.0401 | 1.4698 | 1.2123 |
| 0.2613 | 16.0 | 736 | 1.4808 | 0.0781 | 1.4808 | 1.2169 |
| 0.2613 | 16.0435 | 738 | 1.4348 | 0.0401 | 1.4348 | 1.1978 |
| 0.2613 | 16.0870 | 740 | 1.3826 | 0.0401 | 1.3826 | 1.1758 |
| 0.2613 | 16.1304 | 742 | 1.3705 | 0.0401 | 1.3705 | 1.1707 |
| 0.2613 | 16.1739 | 744 | 1.4221 | 0.0401 | 1.4221 | 1.1925 |
| 0.2613 | 16.2174 | 746 | 1.5149 | 0.0781 | 1.5149 | 1.2308 |
| 0.2613 | 16.2609 | 748 | 1.6344 | 0.1142 | 1.6344 | 1.2785 |
| 0.2613 | 16.3043 | 750 | 1.6958 | 0.1142 | 1.6958 | 1.3022 |
| 0.2613 | 16.3478 | 752 | 1.7174 | 0.1744 | 1.7174 | 1.3105 |
| 0.2613 | 16.3913 | 754 | 1.6890 | 0.1744 | 1.6890 | 1.2996 |
| 0.2613 | 16.4348 | 756 | 1.6066 | 0.1142 | 1.6066 | 1.2675 |
| 0.2613 | 16.4783 | 758 | 1.5513 | 0.0390 | 1.5513 | 1.2455 |
| 0.2613 | 16.5217 | 760 | 1.5064 | 0.0390 | 1.5064 | 1.2273 |
| 0.2613 | 16.5652 | 762 | 1.4922 | 0.0390 | 1.4922 | 1.2215 |
| 0.2613 | 16.6087 | 764 | 1.4445 | 0.0390 | 1.4445 | 1.2019 |
| 0.2613 | 16.6522 | 766 | 1.4327 | 0.1407 | 1.4327 | 1.1970 |
| 0.2613 | 16.6957 | 768 | 1.4487 | 0.2065 | 1.4487 | 1.2036 |
| 0.2613 | 16.7391 | 770 | 1.3882 | 0.0781 | 1.3882 | 1.1782 |
| 0.2613 | 16.7826 | 772 | 1.2968 | 0.0781 | 1.2968 | 1.1388 |
| 0.2613 | 16.8261 | 774 | 1.2913 | 0.0781 | 1.2913 | 1.1363 |
| 0.2613 | 16.8696 | 776 | 1.3576 | 0.2372 | 1.3576 | 1.1652 |
| 0.2613 | 16.9130 | 778 | 1.4646 | 0.2424 | 1.4646 | 1.2102 |
| 0.2613 | 16.9565 | 780 | 1.4935 | 0.2522 | 1.4935 | 1.2221 |
| 0.2613 | 17.0 | 782 | 1.4955 | 0.2709 | 1.4955 | 1.2229 |
| 0.2613 | 17.0435 | 784 | 1.4958 | 0.2075 | 1.4958 | 1.2230 |
| 0.2613 | 17.0870 | 786 | 1.4286 | 0.1113 | 1.4286 | 1.1952 |
| 0.2613 | 17.1304 | 788 | 1.3892 | 0.0155 | 1.3892 | 1.1786 |
| 0.2613 | 17.1739 | 790 | 1.3827 | 0.0155 | 1.3827 | 1.1759 |
| 0.2613 | 17.2174 | 792 | 1.3759 | 0.0155 | 1.3759 | 1.1730 |
| 0.2613 | 17.2609 | 794 | 1.4105 | 0.0155 | 1.4105 | 1.1876 |
| 0.2613 | 17.3043 | 796 | 1.4572 | 0.1113 | 1.4572 | 1.2071 |
| 0.2613 | 17.3478 | 798 | 1.4896 | 0.2424 | 1.4896 | 1.2205 |
| 0.2613 | 17.3913 | 800 | 1.4728 | 0.2424 | 1.4728 | 1.2136 |
| 0.2613 | 17.4348 | 802 | 1.4114 | 0.2126 | 1.4114 | 1.1880 |
| 0.2613 | 17.4783 | 804 | 1.3548 | 0.0781 | 1.3548 | 1.1640 |
| 0.2613 | 17.5217 | 806 | 1.3085 | 0.0401 | 1.3085 | 1.1439 |
| 0.2613 | 17.5652 | 808 | 1.2499 | 0.0 | 1.2499 | 1.1180 |
| 0.2613 | 17.6087 | 810 | 1.2468 | 0.0 | 1.2468 | 1.1166 |
| 0.2613 | 17.6522 | 812 | 1.3334 | 0.0401 | 1.3334 | 1.1547 |
| 0.2613 | 17.6957 | 814 | 1.4046 | 0.2424 | 1.4046 | 1.1852 |
| 0.2613 | 17.7391 | 816 | 1.4401 | 0.2424 | 1.4401 | 1.2000 |
| 0.2613 | 17.7826 | 818 | 1.4071 | 0.2075 | 1.4071 | 1.1862 |
| 0.2613 | 17.8261 | 820 | 1.4065 | 0.2367 | 1.4065 | 1.1860 |
| 0.2613 | 17.8696 | 822 | 1.3824 | 0.2367 | 1.3824 | 1.1758 |
| 0.2613 | 17.9130 | 824 | 1.3945 | 0.2367 | 1.3945 | 1.1809 |
| 0.2613 | 17.9565 | 826 | 1.4377 | 0.2647 | 1.4377 | 1.1990 |
| 0.2613 | 18.0 | 828 | 1.4406 | 0.2015 | 1.4406 | 1.2002 |
| 0.2613 | 18.0435 | 830 | 1.4659 | 0.1370 | 1.4659 | 1.2108 |
| 0.2613 | 18.0870 | 832 | 1.4324 | 0.1370 | 1.4324 | 1.1968 |
| 0.2613 | 18.1304 | 834 | 1.3917 | 0.0390 | 1.3917 | 1.1797 |
| 0.2613 | 18.1739 | 836 | 1.3690 | 0.1024 | 1.3690 | 1.1700 |
| 0.2613 | 18.2174 | 838 | 1.3763 | 0.1370 | 1.3763 | 1.1732 |
| 0.2613 | 18.2609 | 840 | 1.4114 | 0.1449 | 1.4114 | 1.1880 |
| 0.2613 | 18.3043 | 842 | 1.4059 | 0.1769 | 1.4059 | 1.1857 |
| 0.2613 | 18.3478 | 844 | 1.4071 | 0.2038 | 1.4071 | 1.1862 |
| 0.2613 | 18.3913 | 846 | 1.4425 | 0.1663 | 1.4425 | 1.2010 |
| 0.2613 | 18.4348 | 848 | 1.5222 | 0.2291 | 1.5222 | 1.2338 |
| 0.2613 | 18.4783 | 850 | 1.4608 | 0.1769 | 1.4608 | 1.2086 |
| 0.2613 | 18.5217 | 852 | 1.3935 | 0.0390 | 1.3935 | 1.1805 |
| 0.2613 | 18.5652 | 854 | 1.3725 | 0.0 | 1.3725 | 1.1715 |
| 0.2613 | 18.6087 | 856 | 1.3666 | 0.0390 | 1.3666 | 1.1690 |
| 0.2613 | 18.6522 | 858 | 1.3549 | 0.0760 | 1.3549 | 1.1640 |
| 0.2613 | 18.6957 | 860 | 1.3186 | 0.0760 | 1.3186 | 1.1483 |
| 0.2613 | 18.7391 | 862 | 1.2866 | 0.0760 | 1.2866 | 1.1343 |
| 0.2613 | 18.7826 | 864 | 1.2285 | 0.0390 | 1.2285 | 1.1084 |
| 0.2613 | 18.8261 | 866 | 1.2261 | 0.0 | 1.2261 | 1.1073 |
| 0.2613 | 18.8696 | 868 | 1.2744 | 0.0 | 1.2744 | 1.1289 |
| 0.2613 | 18.9130 | 870 | 1.3796 | 0.2065 | 1.3796 | 1.1746 |
| 0.2613 | 18.9565 | 872 | 1.4450 | 0.2065 | 1.4450 | 1.2021 |
| 0.2613 | 19.0 | 874 | 1.4751 | 0.2126 | 1.4751 | 1.2146 |
| 0.2613 | 19.0435 | 876 | 1.5427 | 0.2424 | 1.5427 | 1.2420 |
| 0.2613 | 19.0870 | 878 | 1.5107 | 0.2424 | 1.5107 | 1.2291 |
| 0.2613 | 19.1304 | 880 | 1.4398 | 0.2065 | 1.4398 | 1.1999 |
| 0.2613 | 19.1739 | 882 | 1.3891 | 0.2065 | 1.3891 | 1.1786 |
| 0.2613 | 19.2174 | 884 | 1.3882 | 0.2065 | 1.3882 | 1.1782 |
| 0.2613 | 19.2609 | 886 | 1.3677 | 0.2372 | 1.3677 | 1.1695 |
| 0.2613 | 19.3043 | 888 | 1.3458 | 0.2372 | 1.3458 | 1.1601 |
| 0.2613 | 19.3478 | 890 | 1.3543 | 0.2065 | 1.3543 | 1.1638 |
| 0.2613 | 19.3913 | 892 | 1.4147 | 0.2065 | 1.4147 | 1.1894 |
| 0.2613 | 19.4348 | 894 | 1.4954 | 0.2065 | 1.4954 | 1.2229 |
| 0.2613 | 19.4783 | 896 | 1.5624 | 0.2372 | 1.5624 | 1.2500 |
| 0.2613 | 19.5217 | 898 | 1.5194 | 0.2065 | 1.5194 | 1.2326 |
| 0.2613 | 19.5652 | 900 | 1.4135 | 0.1142 | 1.4135 | 1.1889 |
| 0.2613 | 19.6087 | 902 | 1.3467 | 0.0760 | 1.3467 | 1.1605 |
| 0.2613 | 19.6522 | 904 | 1.3329 | 0.0760 | 1.3329 | 1.1545 |
| 0.2613 | 19.6957 | 906 | 1.3788 | 0.0760 | 1.3788 | 1.1742 |
| 0.2613 | 19.7391 | 908 | 1.3941 | 0.1142 | 1.3941 | 1.1807 |
| 0.2613 | 19.7826 | 910 | 1.3446 | 0.0760 | 1.3446 | 1.1596 |
| 0.2613 | 19.8261 | 912 | 1.3464 | 0.0781 | 1.3464 | 1.1603 |
| 0.2613 | 19.8696 | 914 | 1.3786 | 0.0781 | 1.3786 | 1.1741 |
| 0.2613 | 19.9130 | 916 | 1.3890 | 0.0781 | 1.3890 | 1.1786 |
| 0.2613 | 19.9565 | 918 | 1.4255 | 0.1142 | 1.4255 | 1.1940 |
| 0.2613 | 20.0 | 920 | 1.4037 | 0.1142 | 1.4037 | 1.1848 |
| 0.2613 | 20.0435 | 922 | 1.3277 | 0.0760 | 1.3277 | 1.1523 |
| 0.2613 | 20.0870 | 924 | 1.3014 | 0.1700 | 1.3014 | 1.1408 |
| 0.2613 | 20.1304 | 926 | 1.3892 | 0.2075 | 1.3892 | 1.1786 |
| 0.2613 | 20.1739 | 928 | 1.5929 | 0.2317 | 1.5929 | 1.2621 |
| 0.2613 | 20.2174 | 930 | 1.6970 | 0.2317 | 1.6970 | 1.3027 |
| 0.2613 | 20.2609 | 932 | 1.7191 | 0.2363 | 1.7191 | 1.3112 |
| 0.2613 | 20.3043 | 934 | 1.6680 | 0.2568 | 1.6680 | 1.2915 |
| 0.2613 | 20.3478 | 936 | 1.6164 | 0.2239 | 1.6164 | 1.2714 |
| 0.2613 | 20.3913 | 938 | 1.5876 | 0.2372 | 1.5876 | 1.2600 |
| 0.2613 | 20.4348 | 940 | 1.5170 | 0.1744 | 1.5170 | 1.2317 |
| 0.2613 | 20.4783 | 942 | 1.4762 | 0.0781 | 1.4762 | 1.2150 |
| 0.2613 | 20.5217 | 944 | 1.4673 | 0.0781 | 1.4673 | 1.2113 |
| 0.2613 | 20.5652 | 946 | 1.4335 | 0.0781 | 1.4335 | 1.1973 |
| 0.2613 | 20.6087 | 948 | 1.4069 | 0.0390 | 1.4069 | 1.1861 |
| 0.2613 | 20.6522 | 950 | 1.3963 | 0.0390 | 1.3963 | 1.1816 |
| 0.2613 | 20.6957 | 952 | 1.3900 | 0.0781 | 1.3900 | 1.1790 |
| 0.2613 | 20.7391 | 954 | 1.4040 | 0.0781 | 1.4040 | 1.1849 |
| 0.2613 | 20.7826 | 956 | 1.4156 | 0.1142 | 1.4156 | 1.1898 |
| 0.2613 | 20.8261 | 958 | 1.4111 | 0.2126 | 1.4111 | 1.1879 |
| 0.2613 | 20.8696 | 960 | 1.4425 | 0.2474 | 1.4425 | 1.2011 |
| 0.2613 | 20.9130 | 962 | 1.4642 | 0.2474 | 1.4642 | 1.2100 |
| 0.2613 | 20.9565 | 964 | 1.4979 | 0.2474 | 1.4979 | 1.2239 |
| 0.2613 | 21.0 | 966 | 1.4797 | 0.2474 | 1.4797 | 1.2164 |
| 0.2613 | 21.0435 | 968 | 1.4519 | 0.0878 | 1.4519 | 1.2049 |
| 0.2613 | 21.0870 | 970 | 1.4357 | 0.0401 | 1.4357 | 1.1982 |
| 0.2613 | 21.1304 | 972 | 1.4458 | 0.0510 | 1.4458 | 1.2024 |
| 0.2613 | 21.1739 | 974 | 1.4380 | 0.1228 | 1.4380 | 1.1992 |
| 0.2613 | 21.2174 | 976 | 1.4340 | 0.0781 | 1.4340 | 1.1975 |
| 0.2613 | 21.2609 | 978 | 1.4431 | 0.0781 | 1.4431 | 1.2013 |
| 0.2613 | 21.3043 | 980 | 1.4430 | 0.0781 | 1.4430 | 1.2012 |
| 0.2613 | 21.3478 | 982 | 1.4556 | 0.0781 | 1.4556 | 1.2065 |
| 0.2613 | 21.3913 | 984 | 1.4817 | 0.1142 | 1.4817 | 1.2172 |
| 0.2613 | 21.4348 | 986 | 1.4795 | 0.1142 | 1.4795 | 1.2164 |
| 0.2613 | 21.4783 | 988 | 1.4291 | 0.0390 | 1.4291 | 1.1955 |
| 0.2613 | 21.5217 | 990 | 1.3709 | 0.0 | 1.3709 | 1.1709 |
| 0.2613 | 21.5652 | 992 | 1.3485 | 0.0 | 1.3485 | 1.1613 |
| 0.2613 | 21.6087 | 994 | 1.3507 | 0.0 | 1.3507 | 1.1622 |
| 0.2613 | 21.6522 | 996 | 1.3278 | 0.0 | 1.3278 | 1.1523 |
| 0.2613 | 21.6957 | 998 | 1.3225 | 0.0 | 1.3225 | 1.1500 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
thaffggg/94add4fb-1b6b-498e-bdda-09689212b56f | thaffggg | 2025-01-20T23:40:46Z | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T22:10:03Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 94add4fb-1b6b-498e-bdda-09689212b56f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2ae5e8a9f5305db8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2ae5e8a9f5305db8_train_data.json
type:
field_instruction: cleaned_description
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thaffggg/94add4fb-1b6b-498e-bdda-09689212b56f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2ae5e8a9f5305db8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5dae0642-01e7-4e34-8316-2fb97377e93c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5dae0642-01e7-4e34-8316-2fb97377e93c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 94add4fb-1b6b-498e-bdda-09689212b56f
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5853
## Model description
More information needed
## Intended uses & limitations
More information needed
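As a rough, unofficial sketch, the LoRA adapter can be loaded on top of the base model with 🤗 PEFT. The prompt below is illustrative (the training data paired `cleaned_description` inputs with `title` outputs), and `trust_remote_code=True` mirrors the training config rather than a verified requirement.
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "thaffggg/94add4fb-1b6b-498e-bdda-09689212b56f"

# Loads microsoft/Phi-3-mini-4k-instruct and applies this LoRA adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True)

prompt = "Wireless noise-cancelling over-ear headphones with 40-hour battery life"  # example description
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```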
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4551 | 0.0023 | 200 | 0.5853 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LHRuig/realfoto | LHRuig | 2025-01-20T23:40:19Z | 10 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-20T23:39:14Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# realfoto
<Gallery />
## Model description
realfoto lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/realfoto/tree/main) them in the Files & versions tab.
|
thabel/whisper-medium-yo | thabel | 2025-01-20T23:40:06Z | 98 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-12-20T11:34:37Z | ---
library_name: transformers
language:
- dv
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: yo
split: test
args: yo
metrics:
- name: Wer
type: wer
value: 47.33077228772023
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8186
- Wer Ortho: 69.8365
- Wer: 47.3308
## Model description
More information needed
## Intended uses & limitations
More information needed
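A minimal transcription sketch with the 🤗 Transformers pipeline is shown below; the audio path is a placeholder, and the target language (Yoruba, per the evaluation config) should be confirmed by the model author.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for automatic speech recognition.
asr = pipeline("automatic-speech-recognition", model="thabel/whisper-medium-yo")

# "audio.wav" is a placeholder path to a local recording.
result = asr("audio.wav")
print(result["text"])
```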
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.061 | 5.8824 | 500 | 0.8186 | 69.8365 | 47.3308 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mradermacher/Eunoia-Gemma-9B-o1-Indo-GGUF | mradermacher | 2025-01-20T23:39:50Z | 504 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"sft",
"en",
"id",
"dataset:gmonsoon/CoT-id",
"base_model:gmonsoon/Eunoia-Gemma-9B-o1-Indo",
"base_model:quantized:gmonsoon/Eunoia-Gemma-9B-o1-Indo",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-14T14:09:42Z | ---
base_model: gmonsoon/Eunoia-Gemma-9B-o1-Indo
datasets:
- gmonsoon/CoT-id
language:
- en
- id
library_name: transformers
license: gemma
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/gmonsoon/Eunoia-Gemma-9B-o1-Indo
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
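As one hedged, non-official example, a single quant from the table below can be fetched with `huggingface_hub` and run through the `llama-cpp-python` bindings; the file name matches the Q4_K_S entry listed in this repo.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one of the provided quants (Q4_K_S is listed as "fast, recommended").
path = hf_hub_download(
    repo_id="mradermacher/Eunoia-Gemma-9B-o1-Indo-GGUF",
    filename="Eunoia-Gemma-9B-o1-Indo.Q4_K_S.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Explain the difference between supervised and unsupervised learning.", max_tokens=128)
print(out["choices"][0]["text"])
```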
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
beichenxie/nikeai-test-2 | beichenxie | 2025-01-20T23:38:30Z | 8 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"replicate",
"template:sd-lora",
"sd3.5-large",
"sd3.5",
"sd3.5-diffusers",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"license:other",
"region:us"
] | text-to-image | 2025-01-20T23:32:22Z | ---
license: other
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- replicate
- template:sd-lora
- sd3.5-large
- sd3.5
- sd3.5-diffusers
base_model: stabilityai/stable-diffusion-3.5-large
instance_prompt: HILOS, nikeai test
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3.5-Large DreamBooth LoRA - beichenxie/nikeai-test-2
<Gallery />
## Model description
These are beichenxie/nikeai-test-2 DreamBooth LoRA weights for stable-diffusion-3.5-large.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
LoRA was not enabled for the text encoder.
## Trigger words
You should use `HILOS, nikeai test` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](beichenxie/nikeai-test-2/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights('beichenxie/nikeai-test-2', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('HILOS, nikeai test').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/beichenxie/nikeai-test-2/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md).
## Training details
Trained on Replicate using: [lucataco/stable-diffusion-3.5-large-lora-trainer](https://replicate.com/lucataco/stable-diffusion-3.5-large-lora-trainer)
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
leixa/d711f2b7-a885-4745-987c-446462155057 | leixa | 2025-01-20T23:38:29Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceM4/tiny-random-LlamaForCausalLM",
"base_model:adapter:HuggingFaceM4/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-01-20T23:37:59Z | ---
library_name: peft
base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d711f2b7-a885-4745-987c-446462155057
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceM4/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 923c6e5b442f353f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/923c6e5b442f353f_train_data.json
type:
field_input: confidence
field_instruction: report
field_output: statement
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: leixa/d711f2b7-a885-4745-987c-446462155057
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/923c6e5b442f353f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 531f15f8-749c-479d-84e5-335387dc7e76
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 531f15f8-749c-479d-84e5-335387dc7e76
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# d711f2b7-a885-4745-987c-446462155057
This model is a fine-tuned version of [HuggingFaceM4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceM4/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0042 | 1 | 10.3767 |
| 10.3769 | 0.0379 | 9 | 10.3764 |
| 10.3753 | 0.0758 | 18 | 10.3757 |
| 10.3743 | 0.1137 | 27 | 10.3749 |
| 10.3749 | 0.1516 | 36 | 10.3739 |
| 10.3718 | 0.1895 | 45 | 10.3729 |
| 10.3726 | 0.2274 | 54 | 10.3719 |
| 10.3692 | 0.2653 | 63 | 10.3710 |
| 10.3699 | 0.3032 | 72 | 10.3703 |
| 10.369 | 0.3411 | 81 | 10.3699 |
| 10.3684 | 0.3789 | 90 | 10.3698 |
| 10.3699 | 0.4168 | 99 | 10.3697 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF | mradermacher | 2025-01-20T23:38:02Z | 573 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"sft",
"en",
"id",
"dataset:gmonsoon/CoT-id",
"base_model:gmonsoon/Eunoia-Gemma-9B-o1-Indo",
"base_model:quantized:gmonsoon/Eunoia-Gemma-9B-o1-Indo",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-17T09:56:31Z | ---
base_model: gmonsoon/Eunoia-Gemma-9B-o1-Indo
datasets:
- gmonsoon/CoT-id
language:
- en
- id
library_name: transformers
license: gemma
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/gmonsoon/Eunoia-Gemma-9B-o1-Indo
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-IQ4_NL.gguf) | i1-IQ4_NL | 5.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-Q4_1.gguf) | i1-Q4_1 | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Eunoia-Gemma-9B-o1-Indo-i1-GGUF/resolve/main/Eunoia-Gemma-9B-o1-Indo.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nathanialhunt/a5b35d19-809b-4474-b3e7-2109f98d9b84 | nathanialhunt | 2025-01-20T23:38:00Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mixtral",
"axolotl",
"generated_from_trainer",
"base_model:Eurdem/Defne_llama3_2x8B",
"base_model:adapter:Eurdem/Defne_llama3_2x8B",
"license:llama3",
"region:us"
] | null | 2025-01-20T22:39:16Z | ---
library_name: peft
license: llama3
base_model: Eurdem/Defne_llama3_2x8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a5b35d19-809b-4474-b3e7-2109f98d9b84
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Eurdem/Defne_llama3_2x8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7e3b096caae7eb1c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7e3b096caae7eb1c_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/a5b35d19-809b-4474-b3e7-2109f98d9b84
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/7e3b096caae7eb1c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 319fd169-6b6a-48ba-95dd-c64f160d3657
wandb_project: Birthday-SN56-24-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 319fd169-6b6a-48ba-95dd-c64f160d3657
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a5b35d19-809b-4474-b3e7-2109f98d9b84
This model is a fine-tuned version of [Eurdem/Defne_llama3_2x8B](https://huggingface.co/Eurdem/Defne_llama3_2x8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes; `OptimizerNames.ADAMW_BNB`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 3 | nan |
| 0.0 | 0.0002 | 6 | nan |
| 0.0 | 0.0002 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso10/a4fd6254-1a96-4603-aea9-5a2d20a2f3bc | lesso10 | 2025-01-20T23:37:34Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T22:55:41Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4fd6254-1a96-4603-aea9-5a2d20a2f3bc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 5da4fdb4f9d40cf6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5da4fdb4f9d40cf6_train_data.json
type:
field_input: topic;
field_instruction: message_1
field_output: message_2
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: true
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso10/a4fd6254-1a96-4603-aea9-5a2d20a2f3bc
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/5da4fdb4f9d40cf6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 971c59eb-5de8-4a78-8d22-6a7da4c9ee82
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 971c59eb-5de8-4a78-8d22-6a7da4c9ee82
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# a4fd6254-1a96-4603-aea9-5a2d20a2f3bc
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0005 | 5 | nan |
| 0.0 | 0.0011 | 10 | nan |
| 0.0 | 0.0016 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LHRuig/detailsharp | LHRuig | 2025-01-20T23:37:03Z | 28 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-20T23:36:29Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# detailsharp
<Gallery />
## Model description
detailsharp lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/detailsharp/tree/main) them in the Files & versions tab.
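A minimal, hedged sketch of using the weights with diffusers is shown below; the pipeline, LoRA auto-detection, and sampling parameters are assumptions rather than documented settings for this LoRA.
```python
# Minimal sketch (not from the card): load the LoRA on top of FLUX.1-dev with diffusers
# and include the `man` trigger word in the prompt. Steps/guidance values are assumptions.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("LHRuig/detailsharp")  # pass weight_name=... if auto-detection fails
pipe.to("cuda")  # needs a large GPU; pipe.enable_model_cpu_offload() is an alternative

image = pipe(
    "man in a tailored suit, studio portrait",  # prompt contains the trigger word `man`
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("detailsharp_sample.png")
```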
|
matrixportal/Llama-3.1-8B-BookAdventures-GGUF | matrixportal | 2025-01-20T23:36:24Z | 28 | 0 | null | [
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:KoboldAI/Llama-3.1-8B-BookAdventures",
"base_model:quantized:KoboldAI/Llama-3.1-8B-BookAdventures",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-20T23:35:43Z | ---
license: cc-by-nc-sa-4.0
base_model: KoboldAI/Llama-3.1-8B-BookAdventures
tags:
- llama-factory
- full
- generated_from_trainer
- llama-cpp
- gguf-my-repo
model-index:
- name: KoboldAI/Llama-3.1-8B-BookAdventures
results: []
---
# matrixportal/Llama-3.1-8B-BookAdventures-GGUF
This model was converted to GGUF format from [`KoboldAI/Llama-3.1-8B-BookAdventures`](https://huggingface.co/KoboldAI/Llama-3.1-8B-BookAdventures) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/KoboldAI/Llama-3.1-8B-BookAdventures) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo matrixportal/Llama-3.1-8B-BookAdventures-GGUF --hf-file llama-3.1-8b-bookadventures-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo matrixportal/Llama-3.1-8B-BookAdventures-GGUF --hf-file llama-3.1-8b-bookadventures-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo matrixportal/Llama-3.1-8B-BookAdventures-GGUF --hf-file llama-3.1-8b-bookadventures-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo matrixportal/Llama-3.1-8B-BookAdventures-GGUF --hf-file llama-3.1-8b-bookadventures-q4_k_m.gguf -c 2048
```
|
fpadovani/english_childes_context_42 | fpadovani | 2025-01-20T23:36:06Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-01-20T22:20:37Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: childes_mlm_unmasking_context
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# childes_mlm_unmasking_context
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100000
- training_steps: 400000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| No log | 1.2698 | 2000 | 5.5274 |
| 6.237 | 2.5397 | 4000 | 5.4702 |
| 6.237 | 3.8095 | 6000 | 5.3851 |
| 5.4081 | 5.0794 | 8000 | 5.3058 |
| 5.4081 | 6.3492 | 10000 | 4.0259 |
| 4.2991 | 7.6190 | 12000 | 3.3054 |
| 4.2991 | 8.8889 | 14000 | 3.0169 |
| 3.104 | 10.1587 | 16000 | 2.7758 |
| 3.104 | 11.4286 | 18000 | 2.6658 |
| 2.7387 | 12.6984 | 20000 | 2.5578 |
| 2.7387 | 13.9683 | 22000 | 2.5101 |
| 2.5352 | 15.2381 | 24000 | 2.4254 |
| 2.5352 | 16.5079 | 26000 | 2.3459 |
| 2.4005 | 17.7778 | 28000 | 2.3221 |
| 2.4005 | 19.0476 | 30000 | 2.2965 |
| 2.3081 | 20.3175 | 32000 | 2.2853 |
| 2.3081 | 21.5873 | 34000 | 2.2284 |
| 2.2426 | 22.8571 | 36000 | 2.1734 |
| 2.2426 | 24.1270 | 38000 | 2.1705 |
| 2.193 | 25.3968 | 40000 | 2.1931 |
| 2.193 | 26.6667 | 42000 | 2.1680 |
| 2.1678 | 27.9365 | 44000 | 2.1337 |
| 2.1678 | 29.2063 | 46000 | 2.1411 |
| 2.1464 | 30.4762 | 48000 | 2.1310 |
| 2.1464 | 31.7460 | 50000 | 2.1229 |
| 2.1285 | 33.0159 | 52000 | 2.1331 |
| 2.1285 | 34.2857 | 54000 | 2.1592 |
| 2.1199 | 35.5556 | 56000 | 2.1318 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
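Since the card leaves usage unspecified, the following is a minimal sketch assuming the checkpoint loads as a standard RoBERTa masked-LM with its bundled tokenizer; the example sentence is purely illustrative.
```python
# Minimal sketch: masked-token prediction with this checkpoint via the fill-mask pipeline.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="fpadovani/english_childes_context_42")
for candidate in unmasker("The baby wants to <mask> the ball."):
    print(f"{candidate['token_str']:>12}  {candidate['score']:.3f}")
```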
|
dimasik1987/d0b444d3-4c6e-4c6f-a3e0-c745e20ca3f8 | dimasik1987 | 2025-01-20T23:36:04Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-01-20T23:35:41Z | ---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d0b444d3-4c6e-4c6f-a3e0-c745e20ca3f8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dd22c8863ed4176b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dd22c8863ed4176b_train_data.json
type:
field_input: text
field_instruction: title
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik1987/d0b444d3-4c6e-4c6f-a3e0-c745e20ca3f8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/dd22c8863ed4176b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ee0747e5-378f-43ac-83d3-8dd08d6876bf
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ee0747e5-378f-43ac-83d3-8dd08d6876bf
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# d0b444d3-4c6e-4c6f-a3e0-c745e20ca3f8
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9,0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0292 | 1 | 10.3601 |
| 10.3594 | 0.1460 | 5 | 10.3562 |
| 10.3515 | 0.2920 | 10 | 10.3464 |
| 10.3409 | 0.4380 | 15 | 10.3383 |
| 10.337 | 0.5839 | 20 | 10.3330 |
| 10.3318 | 0.7299 | 25 | 10.3307 |
| 10.3292 | 0.8759 | 30 | 10.3303 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LHRuig/moodcine | LHRuig | 2025-01-20T23:35:59Z | 12 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-20T23:35:47Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# moodcine
<Gallery />
## Model description
moodcine lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/moodcine/tree/main) them in the Files & versions tab.
|
prxy5604/37578cc4-aa06-437b-80e1-561bf03536ef | prxy5604 | 2025-01-20T23:35:57Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-01-20T23:35:36Z | ---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 37578cc4-aa06-437b-80e1-561bf03536ef
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- dd22c8863ed4176b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dd22c8863ed4176b_train_data.json
type:
field_input: text
field_instruction: title
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/37578cc4-aa06-437b-80e1-561bf03536ef
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/dd22c8863ed4176b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ee0747e5-378f-43ac-83d3-8dd08d6876bf
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ee0747e5-378f-43ac-83d3-8dd08d6876bf
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 37578cc4-aa06-437b-80e1-561bf03536ef
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.2951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes; `OptimizerNames.ADAMW_BNB`) with betas=(0.9,0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 52
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3598 | 0.0580 | 1 | 10.3596 |
| 10.5029 | 2.8986 | 50 | 10.2951 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
JacksonBrune/919b9850-bcff-4436-97bc-01c41a6c1517 | JacksonBrune | 2025-01-20T23:35:13Z | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-20T20:14:32Z | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 919b9850-bcff-4436-97bc-01c41a6c1517
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 50587f38ed52161f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/50587f38ed52161f_train_data.json
type:
field_input: file
field_instruction: directory
field_output: content
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/919b9850-bcff-4436-97bc-01c41a6c1517
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/50587f38ed52161f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3f448567-d842-4770-a570-924da07f2a6c
wandb_project: Birthday-SN56-12-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3f448567-d842-4770-a570-924da07f2a6c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 919b9850-bcff-4436-97bc-01c41a6c1517
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes; `OptimizerNames.ADAMW_BNB`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0000 | 3 | nan |
| 0.0 | 0.0000 | 6 | nan |
| 0.0 | 0.0000 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/NBeerbower-ConversationalMix-8b-GGUF | mradermacher | 2025-01-20T23:34:47Z | 328 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:MrRobotoAI/NBeerbower-ConversationalMix-8b",
"base_model:quantized:MrRobotoAI/NBeerbower-ConversationalMix-8b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-19T21:42:40Z | ---
base_model: MrRobotoAI/NBeerbower-ConversationalMix-8b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MrRobotoAI/NBeerbower-ConversationalMix-8b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/NBeerbower-ConversationalMix-8b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
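For example, a single quant from the table below can be fetched programmatically and handed to whichever GGUF runtime you prefer (llama.cpp, llama-cpp-python, LM Studio, ...); this is a sketch using huggingface_hub, not a required workflow.
```python
# Minimal sketch: download one file from this repo, then point your GGUF runtime at the local path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/NBeerbower-ConversationalMix-8b-GGUF",
    filename="NBeerbower-ConversationalMix-8b.Q4_K_M.gguf",  # "fast, recommended" in the table below
)
print(path)  # cached location under ~/.cache/huggingface/hub by default
```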
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NBeerbower-ConversationalMix-8b-GGUF/resolve/main/NBeerbower-ConversationalMix-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NBeerbower-ConversationalMix-8b-GGUF/resolve/main/NBeerbower-ConversationalMix-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/NBeerbower-ConversationalMix-8b-GGUF/resolve/main/NBeerbower-ConversationalMix-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NBeerbower-ConversationalMix-8b-GGUF/resolve/main/NBeerbower-ConversationalMix-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/NBeerbower-ConversationalMix-8b-GGUF/resolve/main/NBeerbower-ConversationalMix-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/NBeerbower-ConversationalMix-8b-GGUF/resolve/main/NBeerbower-ConversationalMix-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NBeerbower-ConversationalMix-8b-GGUF/resolve/main/NBeerbower-ConversationalMix-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NBeerbower-ConversationalMix-8b-GGUF/resolve/main/NBeerbower-ConversationalMix-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/NBeerbower-ConversationalMix-8b-GGUF/resolve/main/NBeerbower-ConversationalMix-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/NBeerbower-ConversationalMix-8b-GGUF/resolve/main/NBeerbower-ConversationalMix-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NBeerbower-ConversationalMix-8b-GGUF/resolve/main/NBeerbower-ConversationalMix-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NBeerbower-ConversationalMix-8b-GGUF/resolve/main/NBeerbower-ConversationalMix-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lesso13/d9c12229-8dbb-45fa-ba4f-9717c95db112 | lesso13 | 2025-01-20T23:34:35Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f",
"base_model:adapter:samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T23:03:26Z | ---
library_name: peft
base_model: samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d9c12229-8dbb-45fa-ba4f-9717c95db112
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f
bf16: true
chat_template: llama3
datasets:
- data_files:
- train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
ds_type: json
format: custom
path: /workspace/input_data/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso13/d9c12229-8dbb-45fa-ba4f-9717c95db112
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 45GiB
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 075526eb-32e0-4485-aab7-014e4d302171
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 075526eb-32e0-4485-aab7-014e4d302171
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d9c12229-8dbb-45fa-ba4f-9717c95db112
This model is a fine-tuned version of [samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f](https://huggingface.co/samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes; `OptimizerNames.ADAMW_BNB`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0010 | 5 | nan |
| 0.0 | 0.0020 | 10 | nan |
| 0.0 | 0.0031 | 15 | nan |
| 0.0 | 0.0041 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
demohong/7f82ae83-b307-42ac-b3a7-914fcf126608 | demohong | 2025-01-20T23:30:24Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T22:59:26Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7f82ae83-b307-42ac-b3a7-914fcf126608
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 91e193d3dca1611f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/91e193d3dca1611f_train_data.json
type:
field_input: parent_id
field_instruction: role
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: demohong/7f82ae83-b307-42ac-b3a7-914fcf126608
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/91e193d3dca1611f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 856a9aac-189f-40f7-b27c-c5616995b0d1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 856a9aac-189f-40f7-b27c-c5616995b0d1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7f82ae83-b307-42ac-b3a7-914fcf126608
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5966
## Model description
More information needed
## Intended uses & limitations
More information needed
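In the meantime, the sketch below shows one way to try the adapter; it is not part of the original training recipe, and the dtype, device placement, and prompt (the card does not document a prompt format) are assumptions.
```python
# Minimal sketch (not from the card): load the LoRA adapter from this repo on top of its
# base model and generate. dtype/device and the prompt text are illustrative choices.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Meta-Llama-3.1-8B"
adapter_id = "demohong/7f82ae83-b307-42ac-b3a7-914fcf126608"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attaches the LoRA weights

inputs = tokenizer("Hello, how are you today?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```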
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes; `OptimizerNames.ADAMW_BNB`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3105 | 0.0705 | 200 | 1.5966 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nbninh/d2c84520-d1af-4aa3-b9a1-d0cdf04d4d4d | nbninh | 2025-01-20T23:29:25Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-medium-4k-instruct",
"base_model:adapter:unsloth/Phi-3-medium-4k-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T21:45:32Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-medium-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d2c84520-d1af-4aa3-b9a1-d0cdf04d4d4d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3-medium-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 41d403c8b37c92fc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/41d403c8b37c92fc_train_data.json
type:
field_input: mesh_terms
field_instruction: title
field_output: abstract
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/d2c84520-d1af-4aa3-b9a1-d0cdf04d4d4d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/41d403c8b37c92fc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 075ea541-bd04-429e-a989-c49dabc36fc3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 075ea541-bd04-429e-a989-c49dabc36fc3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d2c84520-d1af-4aa3-b9a1-d0cdf04d4d4d
This model is a fine-tuned version of [unsloth/Phi-3-medium-4k-instruct](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4790
## Model description
More information needed
## Intended uses & limitations
More information needed
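In the absence of documented usage, here is a minimal, hedged sketch of attaching the LoRA adapter to its base model and merging it for adapter-free inference; the output directory name is hypothetical.
```python
# Minimal sketch (not from the card): attach the LoRA adapter to its base model, merge it,
# and save the merged weights locally. The output directory name is a placeholder.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Phi-3-medium-4k-instruct"
adapter_id = "nbninh/d2c84520-d1af-4aa3-b9a1-d0cdf04d4d4d"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)  # mirrors the training config
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(model, adapter_id)

merged = model.merge_and_unload()             # folds the LoRA deltas into the base weights
merged.save_pretrained("phi3-medium-merged")  # hypothetical output directory
tokenizer.save_pretrained("phi3-medium-merged")
```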
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes; `OptimizerNames.ADAMW_BNB`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.7635 | 0.0048 | 200 | 1.4790 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LHRuig/detail2k | LHRuig | 2025-01-20T23:29:11Z | 5 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-20T23:28:58Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# detail2k
<Gallery />
## Model description
detail2k lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/detail2k/tree/main) them in the Files & versions tab.
|
MikeRoz/deepseek-ai_DeepSeek-R1-Distill-Llama-70B-4.25bpw-h6-exl2 | MikeRoz | 2025-01-20T23:28:11Z | 227 | 4 | null | [
"safetensors",
"llama",
"exl2",
"region:us"
] | null | 2025-01-20T20:50:11Z | # DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to the [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly change their configs and tokenizers. Please use our settings to run these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
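Concretely, under this protocol pass@1 for a benchmark is estimated as the per-query fraction of correct samples averaged over all queries; the sketch below illustrates the computation (the correctness checker is benchmark-specific and omitted here).
```python
# Minimal sketch of the pass@1 estimate described above: sample k responses per query
# (k = 64 in the evaluation) and average the per-query fraction of correct samples.
from statistics import mean

def pass_at_1(per_query_results: list[list[bool]]) -> float:
    """per_query_results[i] holds the correctness of the k samples drawn for query i."""
    return mean(mean(samples) for samples in per_query_results)

# toy example: two queries, 4 samples each -> (3/4 + 1/4) / 2 = 0.5
print(pass_at_1([[True, False, True, True], [False, False, True, False]]))
```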
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
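The cons@64 column in the table above reports majority voting (self-consistency) over the same 64 samples; the sketch below shows that aggregation under the assumption that each generation has already been reduced to a comparable final answer (answer extraction is task-specific and omitted here).
```python
# Minimal sketch of cons@64-style majority voting over sampled answers.
# Assumes answers were already reduced to comparable final forms
# (e.g. a boxed numeric answer); extraction is task-specific and omitted.
from collections import Counter

def majority_vote(final_answers: list[str]) -> str:
    """Return the most common final answer among the sampled generations."""
    return Counter(final_answers).most_common(1)[0][0]

# Example: sampled answers collapse to a single consensus prediction.
consensus = majority_vote(["34", "34", "36", "34"])  # -> "34"
```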
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website, [chat.deepseek.com](https://chat.deepseek.com), by switching on the "DeepThink" button.
We also provide an OpenAI-compatible API on the DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/).
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
**NOTE: We recommend setting an appropriate temperature (between 0.5 and 0.7) when running these models, otherwise you may encounter issues with endless repetition or incoherent output.**
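For example, once the vLLM server above is running, it can be queried through its OpenAI-compatible endpoint with the temperature set inside the recommended range; this is a minimal sketch that assumes vLLM's default address of `http://localhost:8000/v1`.
```python
# Minimal sketch of querying the local vLLM server started above.
# Assumes vLLM's default OpenAI-compatible endpoint at http://localhost:8000/v1.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "Solve: what is 17 * 24?"}],
    temperature=0.6,   # within the recommended 0.5-0.7 range
    max_tokens=2048,
)
print(response.choices[0].message.content)
```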
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from the [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which is originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and are now fine-tuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under the [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under the [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
LHRuig/adddetails | LHRuig | 2025-01-20T23:26:11Z | 9 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-20T23:26:06Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# adddetail
<Gallery />
## Model description
adddetail lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/adddetails/tree/main) them in the Files & versions tab.
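A minimal, untested sketch of applying this LoRA on top of the FLUX.1-dev base model with diffusers follows; the pipeline class, dtype, offloading, and inference settings are assumptions rather than instructions from the author.
```python
# Hypothetical usage sketch: load FLUX.1-dev, apply this LoRA, and prompt with the
# trigger word `man`. Pipeline class, dtype and sampling settings are assumptions.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("LHRuig/adddetails")
pipe.enable_model_cpu_offload()  # or pipe.to("cuda") if enough VRAM is available

image = pipe(
    "photo of a man in a suit",  # prompt includes the trigger word `man`
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("suit.png")
```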
|
lesso10/1c6a64b4-e7f1-4e45-9efb-b4b0d937a28a | lesso10 | 2025-01-20T23:25:07Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-13b-hf-flash",
"base_model:adapter:NousResearch/CodeLlama-13b-hf-flash",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T22:41:43Z | ---
library_name: peft
base_model: NousResearch/CodeLlama-13b-hf-flash
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1c6a64b4-e7f1-4e45-9efb-b4b0d937a28a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-13b-hf-flash
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 9ff4e3b24bf3b2a4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9ff4e3b24bf3b2a4_train_data.json
type:
field_input: sentence1
field_instruction: phrase1
field_output: sentence2
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: true
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso10/1c6a64b4-e7f1-4e45-9efb-b4b0d937a28a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/9ff4e3b24bf3b2a4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 05245b1d-e8ff-44bb-a139-f31fd23d5a4a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 05245b1d-e8ff-44bb-a139-f31fd23d5a4a
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 1c6a64b4-e7f1-4e45-9efb-b4b0d937a28a
This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-13b-hf-flash) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0292
## Model description
More information needed
## Intended uses & limitations
More information needed
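As a rough illustration, the adapter can presumably be loaded on top of its base model with PEFT as sketched below; the dtype, device placement, and prompt are assumptions, not part of this card.
```python
# Hypothetical sketch: attach this LoRA adapter to its base model with PEFT.
# dtype/device settings and the prompt are assumptions, not part of this card.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/CodeLlama-13b-hf-flash"
adapter_id = "lesso10/1c6a64b4-e7f1-4e45-9efb-b4b0d937a28a"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("def quicksort(arr):", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```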
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 4.9637 |
| 5.0443 | 0.0014 | 5 | 4.8203 |
| 4.2709 | 0.0029 | 10 | 3.9057 |
| 3.0327 | 0.0043 | 15 | 3.1180 |
| 3.3112 | 0.0057 | 20 | 3.0652 |
| 2.8835 | 0.0071 | 25 | 3.0292 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
John6666/rippillustrious-v10-sdxl | John6666 | 2025-01-20T23:24:26Z | 81 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-01-20T23:18:28Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1163094/rippillustrious?modelVersionId=1308356).
This model was created by [tbets182132](https://civitai.com/user/tbets182132).
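A minimal, untested sketch of running this checkpoint with diffusers is shown below; the dtype, prompt, and sampling settings are assumptions.
```python
# Hypothetical usage sketch with diffusers; dtype, prompt and sampling settings are assumptions.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/rippillustrious-v10-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "1girl, anime style, detailed background",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sample.png")
```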
|
LHRuig/kodakmotion | LHRuig | 2025-01-20T23:24:01Z | 11 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-20T23:23:17Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# kodakmotion
<Gallery />
## Model description
kodakmotion lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/kodakmotion/tree/main) them in the Files & versions tab.
|
lhong4759/bbef3a84-7d4f-465d-914c-e5286aa7e060 | lhong4759 | 2025-01-20T23:23:56Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f",
"base_model:adapter:samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T23:03:00Z | ---
library_name: peft
base_model: samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bbef3a84-7d4f-465d-914c-e5286aa7e060
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
ds_type: json
format: custom
path: /workspace/input_data/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lhong4759/bbef3a84-7d4f-465d-914c-e5286aa7e060
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 075526eb-32e0-4485-aab7-014e4d302171
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 075526eb-32e0-4485-aab7-014e4d302171
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# bbef3a84-7d4f-465d-914c-e5286aa7e060
This model is a fine-tuned version of [samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f](https://huggingface.co/samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2357 | 0.0407 | 200 | 1.1312 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
adamo1139/DeepSeek-R1-Distill-Qwen-1.5B-3bpw-exl2 | adamo1139 | 2025-01-20T23:23:37Z | 5 | 0 | null | [
"qwen2",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"3-bit",
"exl2",
"region:us"
] | null | 2025-01-20T22:53:24Z | ---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
---
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable reasoning performance.
With RL, DeepSeek-R1-Zero naturally developed numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance than the reasoning patterns discovered through RL on small models. The open-source DeepSeek-R1, as well as its API, will help the research community distill better, smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to the [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly changed their configs and tokenizers. Please use our settings when running these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website, [chat.deepseek.com](https://chat.deepseek.com), by switching on the "DeepThink" button.
We also provide an OpenAI-compatible API on the DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/).
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
**NOTE: We recommend setting an appropriate temperature (between 0.5 and 0.7) when running these models, otherwise you may encounter issues with endless repetition or incoherent output.**
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from the [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which is originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and are now fine-tuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under the [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under the [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]). |
mradermacher/LwQ-Reasoner-10B-GGUF | mradermacher | 2025-01-20T23:23:24Z | 345 | 0 | transformers | [
"transformers",
"gguf",
"LlamaWithQuestions",
"CoT",
"Reasoner",
"LWQ",
"en",
"base_model:prithivMLmods/LwQ-Reasoner-10B",
"base_model:quantized:prithivMLmods/LwQ-Reasoner-10B",
"license:llama3.1",
"endpoints_compatible",
"region:us"
] | null | 2025-01-20T16:50:16Z | ---
base_model: prithivMLmods/LwQ-Reasoner-10B
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- LlamaWithQuestions
- CoT
- Reasoner
- LWQ
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/prithivMLmods/LwQ-Reasoner-10B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/LwQ-Reasoner-10B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
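As one possible route, a single-file quant from the table below can be fetched and run with llama-cpp-python as sketched here; the library choice, context size, and prompt are illustrative assumptions, not part of this card.
```python
# Rough sketch of running one of these quants with llama-cpp-python; the library,
# its settings, and the prompt below are assumptions, not instructions from this card.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quant listed in the table below.
model_path = hf_hub_download(
    repo_id="mradermacher/LwQ-Reasoner-10B-GGUF",
    filename="LwQ-Reasoner-10B.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context length chosen arbitrarily
out = llm("Explain what a GGUF quant is in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```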
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-GGUF/resolve/main/LwQ-Reasoner-10B.Q2_K.gguf) | Q2_K | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-GGUF/resolve/main/LwQ-Reasoner-10B.Q3_K_S.gguf) | Q3_K_S | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-GGUF/resolve/main/LwQ-Reasoner-10B.Q3_K_M.gguf) | Q3_K_M | 5.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-GGUF/resolve/main/LwQ-Reasoner-10B.Q3_K_L.gguf) | Q3_K_L | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-GGUF/resolve/main/LwQ-Reasoner-10B.IQ4_XS.gguf) | IQ4_XS | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-GGUF/resolve/main/LwQ-Reasoner-10B.Q4_K_S.gguf) | Q4_K_S | 6.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-GGUF/resolve/main/LwQ-Reasoner-10B.Q4_K_M.gguf) | Q4_K_M | 6.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-GGUF/resolve/main/LwQ-Reasoner-10B.Q5_K_S.gguf) | Q5_K_S | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-GGUF/resolve/main/LwQ-Reasoner-10B.Q5_K_M.gguf) | Q5_K_M | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-GGUF/resolve/main/LwQ-Reasoner-10B.Q6_K.gguf) | Q6_K | 8.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-GGUF/resolve/main/LwQ-Reasoner-10B.Q8_0.gguf) | Q8_0 | 11.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LwQ-Reasoner-10B-GGUF/resolve/main/LwQ-Reasoner-10B.f16.gguf) | f16 | 20.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
LHRuig/hollycine | LHRuig | 2025-01-20T23:23:02Z | 6 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-20T23:22:02Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# hollycine
<Gallery />
## Model description
hollycine lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/hollycine/tree/main) them in the Files & versions tab.
|
adamo1139/DeepSeek-R1-Distill-Qwen-1.5B-5bpw-exl2 | adamo1139 | 2025-01-20T23:22:53Z | 5 | 0 | null | [
"qwen2",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"5-bit",
"exl2",
"region:us"
] | null | 2025-01-20T22:51:52Z | ---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
---
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable reasoning performance.
With RL, DeepSeek-R1-Zero naturally developed numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance than the reasoning patterns discovered through RL on small models. The open-source DeepSeek-R1, as well as its API, will help the research community distill better, smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to the [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly changed their configs and tokenizers. Please use our settings when running these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website, [chat.deepseek.com](https://chat.deepseek.com), by switching on the "DeepThink" button.
We also provide an OpenAI-compatible API on the DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/).
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
**NOTE: We recommend setting an appropriate temperature (between 0.5 and 0.7) when running these models, otherwise you may encounter issues with endless repetition or incoherent output.**
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from the [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which is originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and are now fine-tuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under the [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under the [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]). |
oldiday/625b4c61-4c53-4c35-8346-7973e6e5d4d4 | oldiday | 2025-01-20T23:22:12Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-20T22:55:44Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 625b4c61-4c53-4c35-8346-7973e6e5d4d4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5da4fdb4f9d40cf6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5da4fdb4f9d40cf6_train_data.json
type:
field_input: topic;
field_instruction: message_1
field_output: message_2
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: oldiday/625b4c61-4c53-4c35-8346-7973e6e5d4d4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/5da4fdb4f9d40cf6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 971c59eb-5de8-4a78-8d22-6a7da4c9ee82
wandb_project: Gradients-On-Six
wandb_run: your_name
wandb_runid: 971c59eb-5de8-4a78-8d22-6a7da4c9ee82
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 625b4c61-4c53-4c35-8346-7973e6e5d4d4
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0017 | 1 | 0.6948 |
| 0.6968 | 0.0153 | 9 | 0.6832 |
| 0.6562 | 0.0306 | 18 | 0.6471 |
| 0.6206 | 0.0459 | 27 | 0.6231 |
| 0.6178 | 0.0612 | 36 | 0.6112 |
| 0.625 | 0.0765 | 45 | 0.6054 |
| 0.605 | 0.0918 | 54 | 0.6015 |
| 0.5935 | 0.1071 | 63 | 0.5990 |
| 0.6049 | 0.1224 | 72 | 0.5973 |
| 0.5989 | 0.1378 | 81 | 0.5964 |
| 0.5937 | 0.1531 | 90 | 0.5960 |
| 0.6079 | 0.1684 | 99 | 0.5959 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k18_task5_organization | MayBashendy | 2025-01-20T23:22:05Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-20T23:10:20Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k18_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k18_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2288
- Qwk: 0.0
- Mse: 1.2288
- Rmse: 1.1085
## Model description
More information needed
## Intended uses & limitations
More information needed
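As a rough illustration, the checkpoint can presumably be loaded for inference as sketched below; since the label and score semantics are not documented in this card, treat the output as raw model predictions.
```python
# Hypothetical sketch of loading this checkpoint for inference; the meaning of the
# predicted label/score is not documented in this card, so treat the output as raw.
from transformers import pipeline

scorer = pipeline(
    "text-classification",
    model="MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k18_task5_organization",
)
print(scorer("نص تجريبي قصير لتقييم تنظيم المقال."))  # a short Arabic test sentence
```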
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0465 | 2 | 4.0158 | 0.0034 | 4.0158 | 2.0039 |
| No log | 0.0930 | 4 | 2.2748 | 0.0271 | 2.2748 | 1.5083 |
| No log | 0.1395 | 6 | 1.6688 | 0.0329 | 1.6688 | 1.2918 |
| No log | 0.1860 | 8 | 1.2346 | 0.0967 | 1.2346 | 1.1111 |
| No log | 0.2326 | 10 | 1.1925 | 0.0731 | 1.1925 | 1.0920 |
| No log | 0.2791 | 12 | 1.1109 | 0.2150 | 1.1109 | 1.0540 |
| No log | 0.3256 | 14 | 1.0932 | 0.1884 | 1.0932 | 1.0456 |
| No log | 0.3721 | 16 | 1.2148 | 0.0380 | 1.2148 | 1.1022 |
| No log | 0.4186 | 18 | 1.2750 | 0.0 | 1.2750 | 1.1292 |
| No log | 0.4651 | 20 | 1.1496 | 0.0760 | 1.1496 | 1.0722 |
| No log | 0.5116 | 22 | 1.1357 | 0.1379 | 1.1357 | 1.0657 |
| No log | 0.5581 | 24 | 1.1803 | 0.1333 | 1.1803 | 1.0864 |
| No log | 0.6047 | 26 | 1.1167 | 0.1981 | 1.1167 | 1.0568 |
| No log | 0.6512 | 28 | 1.1559 | 0.0445 | 1.1559 | 1.0751 |
| No log | 0.6977 | 30 | 1.2187 | 0.0445 | 1.2187 | 1.1039 |
| No log | 0.7442 | 32 | 1.2232 | 0.0445 | 1.2232 | 1.1060 |
| No log | 0.7907 | 34 | 1.1709 | 0.1643 | 1.1709 | 1.0821 |
| No log | 0.8372 | 36 | 1.1800 | 0.0792 | 1.1800 | 1.0863 |
| No log | 0.8837 | 38 | 1.3732 | -0.0296 | 1.3732 | 1.1719 |
| No log | 0.9302 | 40 | 1.6117 | 0.0143 | 1.6117 | 1.2695 |
| No log | 0.9767 | 42 | 1.7119 | 0.0143 | 1.7119 | 1.3084 |
| No log | 1.0233 | 44 | 1.5533 | 0.0 | 1.5533 | 1.2463 |
| No log | 1.0698 | 46 | 1.4371 | -0.0148 | 1.4371 | 1.1988 |
| No log | 1.1163 | 48 | 1.1990 | 0.0701 | 1.1990 | 1.0950 |
| No log | 1.1628 | 50 | 1.0921 | 0.2239 | 1.0921 | 1.0450 |
| No log | 1.2093 | 52 | 1.2033 | 0.0102 | 1.2033 | 1.0969 |
| No log | 1.2558 | 54 | 1.2896 | -0.1560 | 1.2896 | 1.1356 |
| No log | 1.3023 | 56 | 1.4626 | -0.0326 | 1.4626 | 1.2094 |
| No log | 1.3488 | 58 | 1.3772 | -0.0022 | 1.3772 | 1.1735 |
| No log | 1.3953 | 60 | 1.2187 | 0.1361 | 1.2187 | 1.1040 |
| No log | 1.4419 | 62 | 1.2426 | 0.0374 | 1.2426 | 1.1147 |
| No log | 1.4884 | 64 | 1.2838 | 0.0999 | 1.2838 | 1.1330 |
| No log | 1.5349 | 66 | 1.2161 | 0.0700 | 1.2161 | 1.1028 |
| No log | 1.5814 | 68 | 1.1278 | 0.2692 | 1.1278 | 1.0620 |
| No log | 1.6279 | 70 | 1.1283 | 0.1725 | 1.1283 | 1.0622 |
| No log | 1.6744 | 72 | 1.0934 | 0.2366 | 1.0934 | 1.0456 |
| No log | 1.7209 | 74 | 1.0742 | 0.1783 | 1.0742 | 1.0364 |
| No log | 1.7674 | 76 | 1.1562 | 0.1389 | 1.1562 | 1.0752 |
| No log | 1.8140 | 78 | 1.4371 | -0.0641 | 1.4371 | 1.1988 |
| No log | 1.8605 | 80 | 1.4812 | -0.0641 | 1.4812 | 1.2171 |
| No log | 1.9070 | 82 | 1.4368 | -0.0641 | 1.4368 | 1.1987 |
| No log | 1.9535 | 84 | 1.2691 | -0.0091 | 1.2691 | 1.1266 |
| No log | 2.0 | 86 | 1.2183 | 0.0542 | 1.2183 | 1.1038 |
| No log | 2.0465 | 88 | 1.2777 | 0.0188 | 1.2777 | 1.1304 |
| No log | 2.0930 | 90 | 1.4962 | -0.1067 | 1.4962 | 1.2232 |
| No log | 2.1395 | 92 | 1.5524 | -0.0747 | 1.5524 | 1.2460 |
| No log | 2.1860 | 94 | 1.3420 | 0.0587 | 1.3420 | 1.1585 |
| No log | 2.2326 | 96 | 1.1766 | 0.0164 | 1.1766 | 1.0847 |
| No log | 2.2791 | 98 | 1.1748 | 0.0164 | 1.1748 | 1.0839 |
| No log | 2.3256 | 100 | 1.3960 | 0.0464 | 1.3960 | 1.1815 |
| No log | 2.3721 | 102 | 1.7947 | -0.1843 | 1.7947 | 1.3397 |
| No log | 2.4186 | 104 | 1.9849 | -0.3093 | 1.9849 | 1.4089 |
| No log | 2.4651 | 106 | 1.9655 | -0.1443 | 1.9655 | 1.4020 |
| No log | 2.5116 | 108 | 1.7718 | -0.0788 | 1.7718 | 1.3311 |
| No log | 2.5581 | 110 | 1.5684 | -0.0167 | 1.5684 | 1.2524 |
| No log | 2.6047 | 112 | 1.6681 | -0.0688 | 1.6681 | 1.2916 |
| No log | 2.6512 | 114 | 1.8768 | -0.1179 | 1.8768 | 1.3700 |
| No log | 2.6977 | 116 | 2.0265 | -0.1122 | 2.0265 | 1.4236 |
| No log | 2.7442 | 118 | 2.0327 | -0.1111 | 2.0327 | 1.4257 |
| No log | 2.7907 | 120 | 2.0567 | -0.1154 | 2.0567 | 1.4341 |
| No log | 2.8372 | 122 | 1.8254 | -0.0655 | 1.8254 | 1.3511 |
| No log | 2.8837 | 124 | 1.5249 | -0.0735 | 1.5249 | 1.2349 |
| No log | 2.9302 | 126 | 1.7057 | -0.1078 | 1.7057 | 1.3060 |
| No log | 2.9767 | 128 | 2.0800 | -0.2156 | 2.0800 | 1.4422 |
| No log | 3.0233 | 130 | 2.3606 | -0.2661 | 2.3606 | 1.5364 |
| No log | 3.0698 | 132 | 2.2816 | -0.1571 | 2.2816 | 1.5105 |
| No log | 3.1163 | 134 | 2.0213 | -0.1658 | 2.0213 | 1.4217 |
| No log | 3.1628 | 136 | 1.9379 | -0.1254 | 1.9379 | 1.3921 |
| No log | 3.2093 | 138 | 1.8792 | -0.0892 | 1.8792 | 1.3708 |
| No log | 3.2558 | 140 | 1.8788 | -0.0849 | 1.8788 | 1.3707 |
| No log | 3.3023 | 142 | 1.7651 | -0.0762 | 1.7651 | 1.3286 |
| No log | 3.3488 | 144 | 1.5874 | 0.0294 | 1.5874 | 1.2599 |
| No log | 3.3953 | 146 | 1.5857 | 0.0294 | 1.5857 | 1.2593 |
| No log | 3.4419 | 148 | 1.6411 | 0.0279 | 1.6411 | 1.2810 |
| No log | 3.4884 | 150 | 1.7297 | -0.0267 | 1.7297 | 1.3152 |
| No log | 3.5349 | 152 | 1.7574 | -0.0757 | 1.7574 | 1.3257 |
| No log | 3.5814 | 154 | 1.7628 | -0.0397 | 1.7628 | 1.3277 |
| No log | 3.6279 | 156 | 1.7589 | -0.0508 | 1.7589 | 1.3262 |
| No log | 3.6744 | 158 | 1.7224 | -0.0378 | 1.7224 | 1.3124 |
| No log | 3.7209 | 160 | 1.7568 | -0.0514 | 1.7568 | 1.3254 |
| No log | 3.7674 | 162 | 1.8652 | 0.0726 | 1.8652 | 1.3657 |
| No log | 3.8140 | 164 | 1.9175 | 0.0402 | 1.9175 | 1.3847 |
| No log | 3.8605 | 166 | 1.8288 | 0.0400 | 1.8288 | 1.3523 |
| No log | 3.9070 | 168 | 1.8386 | 0.0562 | 1.8386 | 1.3559 |
| No log | 3.9535 | 170 | 1.8316 | 0.1342 | 1.8316 | 1.3534 |
| No log | 4.0 | 172 | 1.6490 | 0.2105 | 1.6490 | 1.2842 |
| No log | 4.0465 | 174 | 1.5040 | 0.1282 | 1.5040 | 1.2264 |
| No log | 4.0930 | 176 | 1.4472 | 0.1198 | 1.4472 | 1.2030 |
| No log | 4.1395 | 178 | 1.4452 | 0.0270 | 1.4452 | 1.2022 |
| No log | 4.1860 | 180 | 1.5266 | 0.0809 | 1.5266 | 1.2356 |
| No log | 4.2326 | 182 | 1.7969 | 0.1525 | 1.7969 | 1.3405 |
| No log | 4.2791 | 184 | 1.8888 | 0.2098 | 1.8888 | 1.3744 |
| No log | 4.3256 | 186 | 1.7524 | 0.1663 | 1.7524 | 1.3238 |
| No log | 4.3721 | 188 | 1.5118 | 0.2342 | 1.5118 | 1.2296 |
| No log | 4.4186 | 190 | 1.4063 | 0.0946 | 1.4063 | 1.1859 |
| No log | 4.4651 | 192 | 1.4720 | 0.1911 | 1.4720 | 1.2133 |
| No log | 4.5116 | 194 | 1.4943 | 0.1423 | 1.4943 | 1.2224 |
| No log | 4.5581 | 196 | 1.4577 | 0.1423 | 1.4577 | 1.2073 |
| No log | 4.6047 | 198 | 1.3969 | 0.0602 | 1.3969 | 1.1819 |
| No log | 4.6512 | 200 | 1.3910 | 0.1110 | 1.3910 | 1.1794 |
| No log | 4.6977 | 202 | 1.4709 | 0.1110 | 1.4709 | 1.2128 |
| No log | 4.7442 | 204 | 1.5518 | 0.1423 | 1.5518 | 1.2457 |
| No log | 4.7907 | 206 | 1.5587 | 0.1027 | 1.5587 | 1.2485 |
| No log | 4.8372 | 208 | 1.5567 | 0.1027 | 1.5567 | 1.2477 |
| No log | 4.8837 | 210 | 1.5407 | 0.1423 | 1.5407 | 1.2413 |
| No log | 4.9302 | 212 | 1.4965 | 0.1904 | 1.4965 | 1.2233 |
| No log | 4.9767 | 214 | 1.3997 | 0.1943 | 1.3997 | 1.1831 |
| No log | 5.0233 | 216 | 1.3739 | 0.2126 | 1.3739 | 1.1721 |
| No log | 5.0698 | 218 | 1.4697 | 0.2292 | 1.4697 | 1.2123 |
| No log | 5.1163 | 220 | 1.4313 | 0.2015 | 1.4313 | 1.1964 |
| No log | 5.1628 | 222 | 1.4067 | 0.2203 | 1.4067 | 1.1860 |
| No log | 5.2093 | 224 | 1.4171 | 0.1886 | 1.4171 | 1.1904 |
| No log | 5.2558 | 226 | 1.5312 | 0.2117 | 1.5312 | 1.2374 |
| No log | 5.3023 | 228 | 1.7132 | 0.2058 | 1.7132 | 1.3089 |
| No log | 5.3488 | 230 | 1.7432 | 0.2206 | 1.7432 | 1.3203 |
| No log | 5.3953 | 232 | 1.7312 | 0.1963 | 1.7312 | 1.3158 |
| No log | 5.4419 | 234 | 1.7061 | 0.2389 | 1.7061 | 1.3062 |
| No log | 5.4884 | 236 | 1.5999 | 0.1461 | 1.5999 | 1.2649 |
| No log | 5.5349 | 238 | 1.5428 | 0.1911 | 1.5428 | 1.2421 |
| No log | 5.5814 | 240 | 1.3937 | 0.1142 | 1.3937 | 1.1806 |
| No log | 5.6279 | 242 | 1.3014 | 0.1052 | 1.3014 | 1.1408 |
| No log | 5.6744 | 244 | 1.2485 | 0.1052 | 1.2485 | 1.1174 |
| No log | 5.7209 | 246 | 1.2400 | 0.0401 | 1.2400 | 1.1136 |
| No log | 5.7674 | 248 | 1.2161 | 0.0401 | 1.2161 | 1.1028 |
| No log | 5.8140 | 250 | 1.1670 | 0.0556 | 1.1670 | 1.0803 |
| No log | 5.8605 | 252 | 1.1944 | 0.0155 | 1.1944 | 1.0929 |
| No log | 5.9070 | 254 | 1.3146 | 0.0781 | 1.3146 | 1.1466 |
| No log | 5.9535 | 256 | 1.5336 | 0.1880 | 1.5336 | 1.2384 |
| No log | 6.0 | 258 | 1.6128 | 0.2465 | 1.6128 | 1.2700 |
| No log | 6.0465 | 260 | 1.5854 | 0.2465 | 1.5854 | 1.2591 |
| No log | 6.0930 | 262 | 1.5079 | 0.1966 | 1.5079 | 1.2280 |
| No log | 6.1395 | 264 | 1.4465 | 0.1886 | 1.4465 | 1.2027 |
| No log | 6.1860 | 266 | 1.4232 | 0.2027 | 1.4232 | 1.1930 |
| No log | 6.2326 | 268 | 1.3952 | 0.2027 | 1.3952 | 1.1812 |
| No log | 6.2791 | 270 | 1.3799 | 0.2027 | 1.3799 | 1.1747 |
| No log | 6.3256 | 272 | 1.3777 | 0.2089 | 1.3777 | 1.1738 |
| No log | 6.3721 | 274 | 1.4345 | 0.2203 | 1.4345 | 1.1977 |
| No log | 6.4186 | 276 | 1.4906 | 0.2555 | 1.4906 | 1.2209 |
| No log | 6.4651 | 278 | 1.6231 | 0.2771 | 1.6231 | 1.2740 |
| No log | 6.5116 | 280 | 1.6548 | 0.2270 | 1.6548 | 1.2864 |
| No log | 6.5581 | 282 | 1.4900 | 0.1058 | 1.4900 | 1.2207 |
| No log | 6.6047 | 284 | 1.4021 | 0.0781 | 1.4021 | 1.1841 |
| No log | 6.6512 | 286 | 1.4033 | 0.0781 | 1.4033 | 1.1846 |
| No log | 6.6977 | 288 | 1.4786 | 0.2315 | 1.4786 | 1.2160 |
| No log | 6.7442 | 290 | 1.4991 | 0.2915 | 1.4991 | 1.2244 |
| No log | 6.7907 | 292 | 1.4009 | 0.2455 | 1.4009 | 1.1836 |
| No log | 6.8372 | 294 | 1.3681 | 0.2455 | 1.3681 | 1.1697 |
| No log | 6.8837 | 296 | 1.3522 | 0.2455 | 1.3522 | 1.1628 |
| No log | 6.9302 | 298 | 1.3574 | 0.2506 | 1.3574 | 1.1651 |
| No log | 6.9767 | 300 | 1.4005 | 0.2506 | 1.4005 | 1.1834 |
| No log | 7.0233 | 302 | 1.4022 | 0.2203 | 1.4022 | 1.1841 |
| No log | 7.0698 | 304 | 1.3696 | 0.1886 | 1.3696 | 1.1703 |
| No log | 7.1163 | 306 | 1.3437 | 0.1886 | 1.3437 | 1.1592 |
| No log | 7.1628 | 308 | 1.3611 | 0.1886 | 1.3611 | 1.1667 |
| No log | 7.2093 | 310 | 1.4336 | 0.2260 | 1.4336 | 1.1973 |
| No log | 7.2558 | 312 | 1.4199 | 0.1886 | 1.4199 | 1.1916 |
| No log | 7.3023 | 314 | 1.3951 | 0.1886 | 1.3951 | 1.1811 |
| No log | 7.3488 | 316 | 1.3180 | 0.1316 | 1.3180 | 1.1481 |
| No log | 7.3953 | 318 | 1.3180 | 0.1548 | 1.3180 | 1.1480 |
| No log | 7.4419 | 320 | 1.4264 | 0.1980 | 1.4264 | 1.1943 |
| No log | 7.4884 | 322 | 1.7047 | 0.2315 | 1.7047 | 1.3057 |
| No log | 7.5349 | 324 | 1.7725 | 0.2116 | 1.7725 | 1.3313 |
| No log | 7.5814 | 326 | 1.6272 | 0.2363 | 1.6272 | 1.2756 |
| No log | 7.6279 | 328 | 1.5210 | 0.2395 | 1.5210 | 1.2333 |
| No log | 7.6744 | 330 | 1.4116 | 0.2027 | 1.4116 | 1.1881 |
| No log | 7.7209 | 332 | 1.3019 | 0.1622 | 1.3019 | 1.1410 |
| No log | 7.7674 | 334 | 1.2722 | 0.1622 | 1.2722 | 1.1279 |
| No log | 7.8140 | 336 | 1.3118 | 0.1473 | 1.3118 | 1.1453 |
| No log | 7.8605 | 338 | 1.3846 | 0.1886 | 1.3846 | 1.1767 |
| No log | 7.9070 | 340 | 1.3764 | 0.1886 | 1.3764 | 1.1732 |
| No log | 7.9535 | 342 | 1.3974 | 0.1886 | 1.3974 | 1.1821 |
| No log | 8.0 | 344 | 1.4388 | 0.2506 | 1.4388 | 1.1995 |
| No log | 8.0465 | 346 | 1.4310 | 0.2126 | 1.4310 | 1.1962 |
| No log | 8.0930 | 348 | 1.4688 | 0.2424 | 1.4688 | 1.2119 |
| No log | 8.1395 | 350 | 1.4365 | 0.2424 | 1.4365 | 1.1986 |
| No log | 8.1860 | 352 | 1.3914 | 0.1814 | 1.3914 | 1.1796 |
| No log | 8.2326 | 354 | 1.3864 | 0.1814 | 1.3864 | 1.1775 |
| No log | 8.2791 | 356 | 1.3512 | 0.1552 | 1.3512 | 1.1624 |
| No log | 8.3256 | 358 | 1.3229 | 0.1473 | 1.3229 | 1.1502 |
| No log | 8.3721 | 360 | 1.3194 | 0.1552 | 1.3194 | 1.1486 |
| No log | 8.4186 | 362 | 1.3151 | 0.1552 | 1.3151 | 1.1468 |
| No log | 8.4651 | 364 | 1.4077 | 0.2640 | 1.4077 | 1.1865 |
| No log | 8.5116 | 366 | 1.5515 | 0.2223 | 1.5515 | 1.2456 |
| No log | 8.5581 | 368 | 1.6437 | 0.1978 | 1.6437 | 1.2821 |
| No log | 8.6047 | 370 | 1.7347 | 0.2414 | 1.7347 | 1.3171 |
| No log | 8.6512 | 372 | 1.8221 | 0.2224 | 1.8221 | 1.3499 |
| No log | 8.6977 | 374 | 1.6633 | 0.2481 | 1.6633 | 1.2897 |
| No log | 8.7442 | 376 | 1.4077 | 0.1898 | 1.4077 | 1.1865 |
| No log | 8.7907 | 378 | 1.3061 | 0.0931 | 1.3061 | 1.1428 |
| No log | 8.8372 | 380 | 1.2222 | 0.0160 | 1.2222 | 1.1055 |
| No log | 8.8837 | 382 | 1.2155 | 0.0160 | 1.2155 | 1.1025 |
| No log | 8.9302 | 384 | 1.2623 | 0.0445 | 1.2623 | 1.1235 |
| No log | 8.9767 | 386 | 1.3885 | 0.1552 | 1.3885 | 1.1783 |
| No log | 9.0233 | 388 | 1.4957 | 0.2795 | 1.4957 | 1.2230 |
| No log | 9.0698 | 390 | 1.3858 | 0.2640 | 1.3858 | 1.1772 |
| No log | 9.1163 | 392 | 1.2752 | 0.1961 | 1.2752 | 1.1293 |
| No log | 9.1628 | 394 | 1.2226 | 0.1961 | 1.2226 | 1.1057 |
| No log | 9.2093 | 396 | 1.1909 | 0.1697 | 1.1909 | 1.0913 |
| No log | 9.2558 | 398 | 1.1800 | 0.1552 | 1.1800 | 1.0863 |
| No log | 9.3023 | 400 | 1.1418 | 0.1202 | 1.1418 | 1.0686 |
| No log | 9.3488 | 402 | 1.1663 | 0.1552 | 1.1663 | 1.0800 |
| No log | 9.3953 | 404 | 1.1681 | 0.1552 | 1.1681 | 1.0808 |
| No log | 9.4419 | 406 | 1.2341 | 0.1552 | 1.2341 | 1.1109 |
| No log | 9.4884 | 408 | 1.3670 | 0.2647 | 1.3670 | 1.1692 |
| No log | 9.5349 | 410 | 1.4327 | 0.2465 | 1.4327 | 1.1970 |
| No log | 9.5814 | 412 | 1.4405 | 0.2465 | 1.4405 | 1.2002 |
| No log | 9.6279 | 414 | 1.4134 | 0.2602 | 1.4134 | 1.1889 |
| No log | 9.6744 | 416 | 1.3585 | 0.2506 | 1.3585 | 1.1656 |
| No log | 9.7209 | 418 | 1.3508 | 0.1886 | 1.3508 | 1.1623 |
| No log | 9.7674 | 420 | 1.3591 | 0.1952 | 1.3591 | 1.1658 |
| No log | 9.8140 | 422 | 1.3131 | 0.1952 | 1.3131 | 1.1459 |
| No log | 9.8605 | 424 | 1.3215 | 0.1886 | 1.3215 | 1.1496 |
| No log | 9.9070 | 426 | 1.4130 | 0.2506 | 1.4130 | 1.1887 |
| No log | 9.9535 | 428 | 1.3922 | 0.1310 | 1.3922 | 1.1799 |
| No log | 10.0 | 430 | 1.4431 | 0.1428 | 1.4431 | 1.2013 |
| No log | 10.0465 | 432 | 1.5592 | 0.2117 | 1.5592 | 1.2487 |
| No log | 10.0930 | 434 | 1.5122 | 0.2292 | 1.5122 | 1.2297 |
| No log | 10.1395 | 436 | 1.4766 | 0.2647 | 1.4766 | 1.2151 |
| No log | 10.1860 | 438 | 1.3397 | 0.2315 | 1.3397 | 1.1574 |
| No log | 10.2326 | 440 | 1.2819 | 0.1886 | 1.2819 | 1.1322 |
| No log | 10.2791 | 442 | 1.2954 | 0.1700 | 1.2954 | 1.1381 |
| No log | 10.3256 | 444 | 1.3657 | 0.2315 | 1.3657 | 1.1686 |
| No log | 10.3721 | 446 | 1.4421 | 0.2315 | 1.4421 | 1.2009 |
| No log | 10.4186 | 448 | 1.4492 | 0.2015 | 1.4492 | 1.2038 |
| No log | 10.4651 | 450 | 1.3858 | 0.1202 | 1.3858 | 1.1772 |
| No log | 10.5116 | 452 | 1.3957 | 0.0833 | 1.3957 | 1.1814 |
| No log | 10.5581 | 454 | 1.4324 | 0.0833 | 1.4324 | 1.1968 |
| No log | 10.6047 | 456 | 1.5332 | 0.1552 | 1.5332 | 1.2382 |
| No log | 10.6512 | 458 | 1.5961 | 0.1952 | 1.5961 | 1.2634 |
| No log | 10.6977 | 460 | 1.6255 | 0.1595 | 1.6255 | 1.2750 |
| No log | 10.7442 | 462 | 1.5482 | 0.1634 | 1.5482 | 1.2443 |
| No log | 10.7907 | 464 | 1.4990 | 0.1407 | 1.4990 | 1.2243 |
| No log | 10.8372 | 466 | 1.4643 | 0.1473 | 1.4643 | 1.2101 |
| No log | 10.8837 | 468 | 1.4725 | 0.1407 | 1.4725 | 1.2135 |
| No log | 10.9302 | 470 | 1.4109 | 0.0781 | 1.4109 | 1.1878 |
| No log | 10.9767 | 472 | 1.3841 | 0.1142 | 1.3841 | 1.1765 |
| No log | 11.0233 | 474 | 1.4789 | 0.2239 | 1.4789 | 1.2161 |
| No log | 11.0698 | 476 | 1.5078 | 0.2126 | 1.5078 | 1.2279 |
| No log | 11.1163 | 478 | 1.5220 | 0.1486 | 1.5220 | 1.2337 |
| No log | 11.1628 | 480 | 1.4774 | 0.1552 | 1.4774 | 1.2155 |
| No log | 11.2093 | 482 | 1.4310 | 0.1486 | 1.4310 | 1.1962 |
| No log | 11.2558 | 484 | 1.4349 | 0.2424 | 1.4349 | 1.1979 |
| No log | 11.3023 | 486 | 1.3930 | 0.2424 | 1.3930 | 1.1803 |
| No log | 11.3488 | 488 | 1.3830 | 0.2126 | 1.3830 | 1.1760 |
| No log | 11.3953 | 490 | 1.3882 | 0.1407 | 1.3882 | 1.1782 |
| No log | 11.4419 | 492 | 1.3933 | 0.1473 | 1.3933 | 1.1804 |
| No log | 11.4884 | 494 | 1.3616 | 0.1473 | 1.3616 | 1.1669 |
| No log | 11.5349 | 496 | 1.3209 | 0.1473 | 1.3209 | 1.1493 |
| No log | 11.5814 | 498 | 1.2735 | 0.0401 | 1.2735 | 1.1285 |
| 0.3529 | 11.6279 | 500 | 1.2606 | 0.0401 | 1.2606 | 1.1228 |
| 0.3529 | 11.6744 | 502 | 1.2457 | 0.0401 | 1.2457 | 1.1161 |
| 0.3529 | 11.7209 | 504 | 1.2800 | 0.0401 | 1.2800 | 1.1314 |
| 0.3529 | 11.7674 | 506 | 1.3705 | 0.1052 | 1.3705 | 1.1707 |
| 0.3529 | 11.8140 | 508 | 1.4245 | 0.1351 | 1.4245 | 1.1935 |
| 0.3529 | 11.8605 | 510 | 1.4236 | 0.1351 | 1.4236 | 1.1932 |
| 0.3529 | 11.9070 | 512 | 1.4686 | 0.1142 | 1.4686 | 1.2118 |
| 0.3529 | 11.9535 | 514 | 1.5206 | 0.2126 | 1.5206 | 1.2331 |
| 0.3529 | 12.0 | 516 | 1.4949 | 0.2126 | 1.4949 | 1.2226 |
| 0.3529 | 12.0465 | 518 | 1.4517 | 0.1562 | 1.4517 | 1.2048 |
| 0.3529 | 12.0930 | 520 | 1.4807 | 0.2126 | 1.4807 | 1.2169 |
| 0.3529 | 12.1395 | 522 | 1.5059 | 0.2239 | 1.5059 | 1.2272 |
| 0.3529 | 12.1860 | 524 | 1.6043 | 0.2391 | 1.6043 | 1.2666 |
| 0.3529 | 12.2326 | 526 | 1.5695 | 0.2391 | 1.5695 | 1.2528 |
| 0.3529 | 12.2791 | 528 | 1.4159 | 0.2424 | 1.4159 | 1.1899 |
| 0.3529 | 12.3256 | 530 | 1.3219 | 0.1486 | 1.3219 | 1.1497 |
| 0.3529 | 12.3721 | 532 | 1.3401 | 0.2065 | 1.3401 | 1.1576 |
| 0.3529 | 12.4186 | 534 | 1.3882 | 0.2424 | 1.3882 | 1.1782 |
| 0.3529 | 12.4651 | 536 | 1.3911 | 0.2126 | 1.3911 | 1.1795 |
| 0.3529 | 12.5116 | 538 | 1.3736 | 0.2506 | 1.3736 | 1.1720 |
| 0.3529 | 12.5581 | 540 | 1.3963 | 0.2424 | 1.3963 | 1.1816 |
| 0.3529 | 12.6047 | 542 | 1.4377 | 0.2709 | 1.4377 | 1.1990 |
| 0.3529 | 12.6512 | 544 | 1.3851 | 0.2126 | 1.3851 | 1.1769 |
| 0.3529 | 12.6977 | 546 | 1.3814 | 0.2126 | 1.3814 | 1.1753 |
| 0.3529 | 12.7442 | 548 | 1.4161 | 0.2424 | 1.4161 | 1.1900 |
| 0.3529 | 12.7907 | 550 | 1.4834 | 0.2690 | 1.4834 | 1.2180 |
| 0.3529 | 12.8372 | 552 | 1.5133 | 0.2731 | 1.5133 | 1.2302 |
| 0.3529 | 12.8837 | 554 | 1.4658 | 0.2690 | 1.4658 | 1.2107 |
| 0.3529 | 12.9302 | 556 | 1.3921 | 0.2315 | 1.3921 | 1.1799 |
| 0.3529 | 12.9767 | 558 | 1.3701 | 0.2506 | 1.3701 | 1.1705 |
| 0.3529 | 13.0233 | 560 | 1.4052 | 0.2795 | 1.4052 | 1.1854 |
| 0.3529 | 13.0698 | 562 | 1.4930 | 0.2602 | 1.4930 | 1.2219 |
| 0.3529 | 13.1163 | 564 | 1.5950 | 0.2731 | 1.5950 | 1.2629 |
| 0.3529 | 13.1628 | 566 | 1.6690 | 0.3232 | 1.6690 | 1.2919 |
| 0.3529 | 13.2093 | 568 | 1.6193 | 0.3172 | 1.6193 | 1.2725 |
| 0.3529 | 13.2558 | 570 | 1.5363 | 0.2752 | 1.5363 | 1.2395 |
| 0.3529 | 13.3023 | 572 | 1.4229 | 0.0833 | 1.4229 | 1.1928 |
| 0.3529 | 13.3488 | 574 | 1.3506 | 0.0833 | 1.3506 | 1.1622 |
| 0.3529 | 13.3953 | 576 | 1.3492 | 0.0833 | 1.3492 | 1.1616 |
| 0.3529 | 13.4419 | 578 | 1.3896 | 0.0833 | 1.3896 | 1.1788 |
| 0.3529 | 13.4884 | 580 | 1.4283 | 0.0781 | 1.4283 | 1.1951 |
| 0.3529 | 13.5349 | 582 | 1.4724 | 0.1142 | 1.4724 | 1.2134 |
| 0.3529 | 13.5814 | 584 | 1.4633 | 0.1142 | 1.4633 | 1.2097 |
| 0.3529 | 13.6279 | 586 | 1.4614 | 0.1142 | 1.4614 | 1.2089 |
| 0.3529 | 13.6744 | 588 | 1.4658 | 0.1486 | 1.4658 | 1.2107 |
| 0.3529 | 13.7209 | 590 | 1.4604 | 0.1228 | 1.4604 | 1.2085 |
| 0.3529 | 13.7674 | 592 | 1.3979 | 0.1228 | 1.3979 | 1.1823 |
| 0.3529 | 13.8140 | 594 | 1.3791 | 0.1228 | 1.3791 | 1.1744 |
| 0.3529 | 13.8605 | 596 | 1.4188 | 0.2126 | 1.4188 | 1.1911 |
| 0.3529 | 13.9070 | 598 | 1.4513 | 0.2709 | 1.4513 | 1.2047 |
| 0.3529 | 13.9535 | 600 | 1.4620 | 0.2709 | 1.4620 | 1.2092 |
| 0.3529 | 14.0 | 602 | 1.3948 | 0.2424 | 1.3948 | 1.1810 |
| 0.3529 | 14.0465 | 604 | 1.3774 | 0.2126 | 1.3774 | 1.1736 |
| 0.3529 | 14.0930 | 606 | 1.3843 | 0.2424 | 1.3843 | 1.1765 |
| 0.3529 | 14.1395 | 608 | 1.3777 | 0.1814 | 1.3777 | 1.1738 |
| 0.3529 | 14.1860 | 610 | 1.3095 | 0.0781 | 1.3095 | 1.1443 |
| 0.3529 | 14.2326 | 612 | 1.2472 | 0.0401 | 1.2472 | 1.1168 |
| 0.3529 | 14.2791 | 614 | 1.2122 | 0.0401 | 1.2122 | 1.1010 |
| 0.3529 | 14.3256 | 616 | 1.1979 | 0.0 | 1.1979 | 1.0945 |
| 0.3529 | 14.3721 | 618 | 1.2415 | 0.0401 | 1.2415 | 1.1142 |
| 0.3529 | 14.4186 | 620 | 1.2917 | 0.0781 | 1.2917 | 1.1365 |
| 0.3529 | 14.4651 | 622 | 1.3478 | 0.1310 | 1.3478 | 1.1609 |
| 0.3529 | 14.5116 | 624 | 1.4031 | 0.1634 | 1.4031 | 1.1845 |
| 0.3529 | 14.5581 | 626 | 1.4545 | 0.2522 | 1.4545 | 1.2060 |
| 0.3529 | 14.6047 | 628 | 1.4968 | 0.2522 | 1.4968 | 1.2234 |
| 0.3529 | 14.6512 | 630 | 1.5350 | 0.2522 | 1.5350 | 1.2389 |
| 0.3529 | 14.6977 | 632 | 1.5023 | 0.2239 | 1.5023 | 1.2257 |
| 0.3529 | 14.7442 | 634 | 1.3888 | 0.0401 | 1.3888 | 1.1785 |
| 0.3529 | 14.7907 | 636 | 1.2818 | 0.0 | 1.2818 | 1.1322 |
| 0.3529 | 14.8372 | 638 | 1.2042 | 0.0 | 1.2042 | 1.0974 |
| 0.3529 | 14.8837 | 640 | 1.1860 | 0.0 | 1.1860 | 1.0890 |
| 0.3529 | 14.9302 | 642 | 1.2288 | 0.0 | 1.2288 | 1.1085 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
datlaaaaaaa/e8cccbe1-c136-409a-a725-9f09127c1a3f | datlaaaaaaa | 2025-01-20T23:21:58Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T22:54:13Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e8cccbe1-c136-409a-a725-9f09127c1a3f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 854bca96bed40197_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/854bca96bed40197_train_data.json
type:
field_input: state_before
field_instruction: tactic
field_output: state_after
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/e8cccbe1-c136-409a-a725-9f09127c1a3f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/854bca96bed40197_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cff9d1c5-a847-4707-b347-d0451baf6b24
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cff9d1c5-a847-4707-b347-d0451baf6b24
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e8cccbe1-c136-409a-a725-9f09127c1a3f
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3004
## Model description
More information needed
## Intended uses & limitations
More information needed
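Pending the authors' write-up, below is a minimal usage sketch, not an official snippet: it assumes the LoRA adapter in this repo loads cleanly on top of the base model with the `peft` and `transformers` versions listed under "Framework versions"; the prompt and generation settings are illustrative.
```python
# Hedged sketch: attach this repo's LoRA adapter to Qwen/Qwen2.5-0.5B-Instruct.
# Repo ids come from this card; generation settings are illustrative assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-0.5B-Instruct"
adapter_id = "datlaaaaaaa/e8cccbe1-c136-409a-a725-9f09127c1a3f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # load the adapter weights

# The adapter was trained on (tactic, state_before) -> state_after pairs, so the prompt
# below loosely mirrors the '{instruction} {input}' format from the axolotl config.
prompt = "intro h ⊢ P → P"  # illustrative only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```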
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0849 | 0.0077 | 200 | 0.3004 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
adamo1139/DeepSeek-R1-Distill-Qwen-1.5B-6bpw-exl2 | adamo1139 | 2025-01-20T23:21:56Z | 8 | 0 | null | [
"qwen2",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"6-bit",
"exl2",
"region:us"
] | null | 2025-01-20T22:51:16Z | ---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
---
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open-source DeepSeek-R1, as well as its API, will help the research community distill better smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to the [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly changed their configs and tokenizers. Please use our settings to run these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
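As a reading aid (this note and snippet are not from the original card): under this protocol, pass@1 for a benchmark is simply the per-query fraction of correct responses among the 64 samples, averaged over queries. A tiny sketch:
```python
# Illustration of the pass@1 estimate described above (not DeepSeek's evaluation code).
def pass_at_1(correct_flags_per_query):
    """correct_flags_per_query: one list of 0/1 flags per query (64 samples each here)."""
    per_query = [sum(flags) / len(flags) for flags in correct_flags_per_query]
    return sum(per_query) / len(per_query)

print(pass_at_1([[1, 0, 1, 1], [0, 0, 1, 0]]))  # toy example with 4 samples per query -> 0.5
```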
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the "DeepThink" button.
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
**NOTE: We recommend setting an appropriate temperature (between 0.5 and 0.7) when running these models, otherwise you may encounter issues with endless repetition or incoherent output.**
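Once the server above is running, it exposes an OpenAI-compatible endpoint that can be queried as sketched below; the host, port, and prompt are assumptions (vLLM defaults), while the model name matches the serve command.
```python
# Sketch: query the vLLM server started above via its OpenAI-compatible API.
# Assumes vLLM's default host/port; the API key is ignored by vLLM but required by the client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "How many prime numbers are there below 100?"}],
    temperature=0.6,  # within the 0.5-0.7 range recommended above
    top_p=0.95,
    max_tokens=2048,
)
print(resp.choices[0].message.content)
```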
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]). |
adamo1139/DeepSeek-R1-Distill-Qwen-1.5B-8bpw-exl2 | adamo1139 | 2025-01-20T23:21:21Z | 27 | 0 | null | [
"qwen2",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"8-bit",
"exl2",
"region:us"
] | null | 2025-01-20T22:50:28Z | ---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
---
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open-source DeepSeek-R1, as well as its API, will help the research community distill better smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to the [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly changed their configs and tokenizers. Please use our settings to run these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the "DeepThink" button.
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
**NOTE: We recommend setting an appropriate temperature (between 0.5 and 0.7) when running these models, otherwise you may encounter issues with endless repetition or incoherent output.**
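The exl2 weights in this repo target exllamav2-based runtimes; as an alternative reference, the upstream (unquantized) distill checkpoint can be run directly with `transformers`. The sketch below uses that upstream checkpoint with the sampling settings recommended above; it is an illustration, not this repo's loader.
```python
# Sketch: run the upstream DeepSeek-R1-Distill-Qwen-1.5B checkpoint with transformers.
# (This repo's exl2 files are for exllamav2-based loaders; here we use the original weights.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What is 17 * 24? Think step by step."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=1024, do_sample=True, temperature=0.6, top_p=0.95)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```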
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]). |
LHRuig/cinestyle | LHRuig | 2025-01-20T23:20:52Z | 5 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-20T23:20:41Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# cinestyle
<Gallery />
## Model description
cinestyle lora
## Trigger words
You should use `man` to trigger the image generation.
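A hedged loading sketch follows; the repo id, base model, and `man` trigger come from this card, while the prompt wording and sampler settings are illustrative assumptions (requires a recent `diffusers` with Flux support and a large-memory GPU).
```python
# Sketch: load the cinestyle LoRA on top of FLUX.1-dev with diffusers (recent version assumed).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("LHRuig/cinestyle")  # LoRA weights from this repository
pipe.to("cuda")

# Include the trigger word `man` in the prompt, as noted above.
image = pipe("man in a suit, cinematic lighting", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("cinestyle_man.png")
```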
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/cinestyle/tree/main) them in the Files & versions tab.
|
LHRuig/cinedrama | LHRuig | 2025-01-20T23:20:46Z | 8 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-20T23:19:20Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# cinedrama
<Gallery />
## Model description
cinedrama lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/cinedrama/tree/main) them in the Files & versions tab.
|
kaizen9/phi-1_5_HQ_3000_20k | kaizen9 | 2025-01-20T23:20:32Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-20T23:16:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
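Until the authors fill this in, here is a generic, hedged sketch for loading the checkpoint as a standard causal LM; only the repo id comes from this card, everything else is an assumption.
```python
# Hedged sketch: generic causal-LM loading for this checkpoint; not an official snippet.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaizen9/phi-1_5_HQ_3000_20k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```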
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso08/7bc9e54e-f755-40b7-a740-c391d742641d | lesso08 | 2025-01-20T23:20:20Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T22:22:58Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7bc9e54e-f755-40b7-a740-c391d742641d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
- 6e60a538f672529c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6e60a538f672529c_train_data.json
type:
field_input: communityName
field_instruction: label
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso08/7bc9e54e-f755-40b7-a740-c391d742641d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/6e60a538f672529c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3eb92360-4e77-4cc9-9ffa-0e03d7ea7423
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3eb92360-4e77-4cc9-9ffa-0e03d7ea7423
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7bc9e54e-f755-40b7-a740-c391d742641d
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 5 | nan |
| 0.0 | 0.0008 | 10 | nan |
| 0.0 | 0.0012 | 15 | nan |
| 0.0 | 0.0016 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrHungddddh/9b1deaac-4565-48f2-a511-89a1ab96e3e3 | mrHungddddh | 2025-01-20T23:19:50Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T23:04:02Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9b1deaac-4565-48f2-a511-89a1ab96e3e3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a7412d8c8f805ddf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a7412d8c8f805ddf_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHungddddh/9b1deaac-4565-48f2-a511-89a1ab96e3e3
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a7412d8c8f805ddf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ba28db81-f399-44e5-bdef-7af8dcf5a4ca
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ba28db81-f399-44e5-bdef-7af8dcf5a4ca
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9b1deaac-4565-48f2-a511-89a1ab96e3e3
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2632
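
This repository holds only the LoRA adapter produced by the Axolotl run above, so it has to be attached to the base model at inference time. The following is a minimal, untested sketch of how such an adapter is commonly loaded with `peft` and `transformers`; the prompt is a placeholder and may need to match the training template.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-0.5B-Instruct"
adapter_id = "mrHungddddh/9b1deaac-4565-48f2-a511-89a1ab96e3e3"

# Load the base model, then attach the LoRA adapter weights from this repo.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Placeholder premise; the adapter was trained to generate a hypothesis from a premise.
prompt = "A man is playing a guitar on stage."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```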
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7427 | 0.0304 | 200 | 1.2632 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rawsh/q1-3B-PRIME | rawsh | 2025-01-20T23:19:31Z | 523 | 1 | null | [
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:PRIME-RL/Eurus-2-RL-Data",
"base_model:PowerInfer/SmallThinker-3B-Preview",
"base_model:finetune:PowerInfer/SmallThinker-3B-Preview",
"region:us"
] | text-generation | 2025-01-16T03:26:54Z | ---
base_model:
- Qwen/Qwen2.5-3B-Instruct
- PowerInfer/SmallThinker-3B-Preview
datasets:
- PRIME-RL/Eurus-2-RL-Data
language:
- en
pipeline_tag: text-generation
---
# q1-3B-PRIME
**q1-3B-PRIME** is a small reasoning model trained with reinforcement learning.
It was trained using SmallThinker-3B-Preview as the base model (Qwen2.5-3B-Instruct fully fine-tuned on QwQ reasoning traces), yielding roughly a 22.5% improvement on the test set in 120 training steps. (Note: substantial performance is likely left on the table, since PRIME saturates only after 300+ steps.)
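For reference, here is a minimal inference sketch with `transformers`; it is not part of the original card, and the chat prompt is only an illustrative placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rawsh/q1-3B-PRIME"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Illustrative math prompt; the model is tuned for step-by-step reasoning.
messages = [{"role": "user", "content": "What is the sum of the first 20 positive odd integers?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```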
# Benchmark Performance
## Math
| Model | AIME24 | AMC23 | MATH-500 |
|---------|--------|-------|-------|
| Qwen2.5-3B-Instruct | 6.67 | 45 | - |
| SmallThinker-3B-Preview| 16.667 | 57.5 | - |
| **q1-3B-PRIME** | **26.667** | **67.5** | 64.8 |
| Eurus-7B-PRIME | **26.667** | 57.8 | **79.2** |
| GPT-4o | 9.3 | 45.8 | 76.4 |
## Coding
| Model | HumanEval | Leetcode |
|---------|--------|-------|
| Qwen2.5-3B-Instruct | 74.4 | - |
| **q1-3B-PRIME** | 71.95 | 20.55 |
| GPT-4o | 90.2 | - | |
laquythang/efe15045-778f-4dfa-8d03-17ab758f41b0 | laquythang | 2025-01-20T23:18:35Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T23:04:13Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: efe15045-778f-4dfa-8d03-17ab758f41b0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a7412d8c8f805ddf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a7412d8c8f805ddf_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: laquythang/efe15045-778f-4dfa-8d03-17ab758f41b0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a7412d8c8f805ddf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ba28db81-f399-44e5-bdef-7af8dcf5a4ca
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ba28db81-f399-44e5-bdef-7af8dcf5a4ca
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# efe15045-778f-4dfa-8d03-17ab758f41b0
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7354 | 0.0304 | 200 | 1.2649 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nadejdatarabukina/7623d9ae-b2cb-4a91-801e-e6ab04be6251 | nadejdatarabukina | 2025-01-20T23:18:27Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:llama3.1",
"region:us"
] | null | 2025-01-20T22:59:19Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7623d9ae-b2cb-4a91-801e-e6ab04be6251
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 91e193d3dca1611f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/91e193d3dca1611f_train_data.json
type:
field_input: parent_id
field_instruction: role
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: nadejdatarabukina/7623d9ae-b2cb-4a91-801e-e6ab04be6251
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/91e193d3dca1611f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 856a9aac-189f-40f7-b27c-c5616995b0d1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 856a9aac-189f-40f7-b27c-c5616995b0d1
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7623d9ae-b2cb-4a91-801e-e6ab04be6251
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | nan |
| 0.0 | 0.0018 | 5 | nan |
| 0.0 | 0.0035 | 10 | nan |
| 0.0 | 0.0053 | 15 | nan |
| 0.0 | 0.0071 | 20 | nan |
| 0.0 | 0.0088 | 25 | nan |
| 0.0 | 0.0106 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
John6666/real-dream-sdxlpony14-sdxl | John6666 | 2025-01-20T23:18:26Z | 659 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"photo",
"pony",
"en",
"base_model:luisrguerra/real-dream-xl-pony-releases",
"base_model:finetune:luisrguerra/real-dream-xl-pony-releases",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-01-20T23:13:33Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- photo
- pony
base_model: luisrguerra/real-dream-xl-pony-releases
---
Original model is [here](https://civitai.com/models/153568/real-dream?modelVersionId=1308507).
The author is [here](https://huggingface.co/luisrguerra).
This model was created by [sinatra](https://civitai.com/user/sinatra).
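As a rough usage sketch (not from the original card), the checkpoint can presumably be loaded with the `diffusers` SDXL pipeline; the prompt and sampler settings below are placeholders.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/real-dream-sdxlpony14-sdxl", torch_dtype=torch.float16
).to("cuda")

# Placeholder prompt; pony-derived checkpoints often expect score/quality tags.
image = pipe(
    "score_9, score_8_up, photo of a woman in a rainy street, realistic",
    num_inference_steps=28,
    guidance_scale=5.0,
).images[0]
image.save("real_dream_sample.png")
```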
|
nblinh/94055e20-ee90-4d03-9da3-41f4e7349a5a | nblinh | 2025-01-20T23:17:56Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T21:52:01Z | ---
library_name: peft
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 94055e20-ee90-4d03-9da3-41f4e7349a5a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1983306ea4f53c9d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1983306ea4f53c9d_train_data.json
type:
field_input: prompt
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh/94055e20-ee90-4d03-9da3-41f4e7349a5a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/1983306ea4f53c9d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 75177f7d-3059-4092-918d-8e9c49bae6b5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 75177f7d-3059-4092-918d-8e9c49bae6b5
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 94055e20-ee90-4d03-9da3-41f4e7349a5a
This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3520
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.333 | 0.0334 | 200 | 0.3520 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Monday-Someday/mobilenet_v2_1.0_224-finetuned-ISIC-dec2024test | Monday-Someday | 2025-01-20T23:17:03Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mobilenet_v2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/mobilenet_v2_1.0_224",
"base_model:finetune:google/mobilenet_v2_1.0_224",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-01-20T02:12:01Z | ---
library_name: transformers
license: other
base_model: google/mobilenet_v2_1.0_224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: mobilenet_v2_1.0_224-finetuned-ISIC-dec2024test
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9276220745449292
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilenet_v2_1.0_224-finetuned-ISIC-dec2024test
This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1763
- Accuracy: 0.9276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.9055 | 0.9985 | 486 | 0.1955 | 0.9195 |
| 0.8797 | 1.9985 | 972 | 0.2074 | 0.9138 |
| 0.8144 | 2.9985 | 1458 | 0.1797 | 0.9263 |
| 0.9243 | 3.9985 | 1944 | 0.1862 | 0.9233 |
| 0.8199 | 4.9985 | 2430 | 0.1763 | 0.9276 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cpu
- Datasets 3.2.0
- Tokenizers 0.21.0
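
Below is a minimal inference sketch (not part of the generated card) using the `transformers` image-classification pipeline; the image path is a placeholder and the label set depends on the ISIC-derived image folder used for training.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Monday-Someday/mobilenet_v2_1.0_224-finetuned-ISIC-dec2024test",
)

# Placeholder path to a dermoscopic image; a URL also works.
predictions = classifier("skin_lesion.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```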
|
lesso15/858b3fe8-837d-4d0b-908d-af0eb85b1273 | lesso15 | 2025-01-20T23:15:30Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-13b-hf-flash",
"base_model:adapter:NousResearch/CodeLlama-13b-hf-flash",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T22:43:29Z | ---
library_name: peft
base_model: NousResearch/CodeLlama-13b-hf-flash
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 858b3fe8-837d-4d0b-908d-af0eb85b1273
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-13b-hf-flash
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 9ff4e3b24bf3b2a4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9ff4e3b24bf3b2a4_train_data.json
type:
field_input: sentence1
field_instruction: phrase1
field_output: sentence2
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: true
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso15/858b3fe8-837d-4d0b-908d-af0eb85b1273
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/9ff4e3b24bf3b2a4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 05245b1d-e8ff-44bb-a139-f31fd23d5a4a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 05245b1d-e8ff-44bb-a139-f31fd23d5a4a
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 858b3fe8-837d-4d0b-908d-af0eb85b1273
This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-13b-hf-flash) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0180
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 4.9637 |
| 5.0445 | 0.0014 | 5 | 4.8259 |
| 4.2696 | 0.0029 | 10 | 3.8969 |
| 3.0239 | 0.0043 | 15 | 3.1104 |
| 3.3092 | 0.0057 | 20 | 3.0548 |
| 2.8685 | 0.0071 | 25 | 3.0180 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kokovova/e5efb00d-850a-4f79-b6b3-19433c4b5d28 | kokovova | 2025-01-20T23:14:22Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/Sheared-LLaMA-1.3B",
"base_model:adapter:princeton-nlp/Sheared-LLaMA-1.3B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-20T22:15:38Z | ---
library_name: peft
license: apache-2.0
base_model: princeton-nlp/Sheared-LLaMA-1.3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e5efb00d-850a-4f79-b6b3-19433c4b5d28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: princeton-nlp/Sheared-LLaMA-1.3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6d346ae45cb7310f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6d346ae45cb7310f_train_data.json
type:
field_input: problem
field_instruction: prompt
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: kokovova/e5efb00d-850a-4f79-b6b3-19433c4b5d28
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/6d346ae45cb7310f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 293f4171-e6c9-4854-a803-09018c88d137
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 293f4171-e6c9-4854-a803-09018c88d137
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# e5efb00d-850a-4f79-b6b3-19433c4b5d28
This model is a fine-tuned version of [princeton-nlp/Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08, and optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 5 | nan |
| 0.0 | 0.0002 | 10 | nan |
| 0.0 | 0.0003 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhungphammmmm/d9757073-d7e1-40b9-be8d-0c20a4559179 | nhungphammmmm | 2025-01-20T23:13:54Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f",
"base_model:adapter:samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T23:02:39Z | ---
library_name: peft
base_model: samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d9757073-d7e1-40b9-be8d-0c20a4559179
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
ds_type: json
format: custom
path: /workspace/input_data/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhungphammmmm/d9757073-d7e1-40b9-be8d-0c20a4559179
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 075526eb-32e0-4485-aab7-014e4d302171
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 075526eb-32e0-4485-aab7-014e4d302171
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d9757073-d7e1-40b9-be8d-0c20a4559179
This model is a fine-tuned version of [samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f](https://huggingface.co/samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2385 | 0.0407 | 200 | 1.1318 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhunglaaaaaaa/133de99e-2f53-4b9b-b8d2-edea6793573f | nhunglaaaaaaa | 2025-01-20T23:13:15Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T23:03:55Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 133de99e-2f53-4b9b-b8d2-edea6793573f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a7412d8c8f805ddf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a7412d8c8f805ddf_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhunglaaaaaaa/133de99e-2f53-4b9b-b8d2-edea6793573f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a7412d8c8f805ddf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ba28db81-f399-44e5-bdef-7af8dcf5a4ca
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ba28db81-f399-44e5-bdef-7af8dcf5a4ca
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 133de99e-2f53-4b9b-b8d2-edea6793573f
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7557 | 0.0304 | 200 | 1.2642 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sergioalves/94dd5542-37d0-4a0a-b042-711c0d084791 | sergioalves | 2025-01-20T23:12:29Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f",
"base_model:adapter:samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f",
"region:us"
] | null | 2025-01-20T23:02:39Z | ---
library_name: peft
base_model: samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 94dd5542-37d0-4a0a-b042-711c0d084791
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
ds_type: json
format: custom
path: /workspace/input_data/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: sergioalves/94dd5542-37d0-4a0a-b042-711c0d084791
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 075526eb-32e0-4485-aab7-014e4d302171
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 075526eb-32e0-4485-aab7-014e4d302171
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 94dd5542-37d0-4a0a-b042-711c0d084791
This model is a fine-tuned version of [samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f](https://huggingface.co/samoline/7d183bf9-ed95-443c-94dc-1cad850bf23f) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (Hugging Face implementation) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 0.0 | 0.0010 | 5 | nan |
| 0.0 | 0.0020 | 10 | nan |
| 0.0 | 0.0031 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LHRuig/hyperreal | LHRuig | 2025-01-20T23:10:45Z | 7 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-01-20T22:56:28Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# hyperreal
<Gallery />
## Model description
A hyperreal LoRA adapter for FLUX.1-dev.
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/hyperreal/tree/main) them in the Files & versions tab.
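
A minimal, untested sketch of loading this LoRA on top of FLUX.1-dev with `diffusers` follows; it assumes the LoRA safetensors file can be auto-discovered at the repository root and that the `man` trigger word is included in the prompt.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("LHRuig/hyperreal")  # assumes a single LoRA .safetensors file in the repo
pipe.to("cuda")

image = pipe(
    "man in a suit, hyperreal portrait photo",  # includes the `man` trigger word
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("hyperreal_man.png")
```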
|
MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k17_task5_organization | MayBashendy | 2025-01-20T23:09:52Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-20T23:00:25Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k17_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k17_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2639
- Qwk: 0.1814
- Mse: 1.2639
- Rmse: 1.1242
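
Given the regression-style metrics (MSE, RMSE, QWK), the checkpoint appears to be a sequence-scoring head on AraBERT. The sketch below is a guess (not from the generated card) at how it might be queried through the `transformers` text-classification pipeline; the example sentence is a placeholder.

```python
from transformers import pipeline

scorer = pipeline(
    "text-classification",
    model="MayBashendy/ArabicNewSplits7_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k17_task5_organization",
)

# Placeholder Arabic essay excerpt; the returned score presumably reflects the "organization" trait.
print(scorer("هذا نص تجريبي لمقال قصير لتقييم تنظيم الأفكار."))
```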
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0488 | 2 | 3.9152 | -0.0319 | 3.9152 | 1.9787 |
| No log | 0.0976 | 4 | 2.3161 | 0.0372 | 2.3161 | 1.5219 |
| No log | 0.1463 | 6 | 2.0454 | 0.0260 | 2.0454 | 1.4302 |
| No log | 0.1951 | 8 | 1.6301 | 0.0100 | 1.6301 | 1.2768 |
| No log | 0.2439 | 10 | 1.2168 | 0.1091 | 1.2168 | 1.1031 |
| No log | 0.2927 | 12 | 1.1692 | 0.1901 | 1.1692 | 1.0813 |
| No log | 0.3415 | 14 | 1.2690 | 0.1205 | 1.2690 | 1.1265 |
| No log | 0.3902 | 16 | 1.3094 | 0.0613 | 1.3094 | 1.1443 |
| No log | 0.4390 | 18 | 1.2486 | 0.0374 | 1.2486 | 1.1174 |
| No log | 0.4878 | 20 | 1.1393 | 0.1304 | 1.1393 | 1.0674 |
| No log | 0.5366 | 22 | 1.0717 | 0.1167 | 1.0717 | 1.0352 |
| No log | 0.5854 | 24 | 1.0785 | 0.0888 | 1.0785 | 1.0385 |
| No log | 0.6341 | 26 | 1.1062 | 0.0445 | 1.1062 | 1.0518 |
| No log | 0.6829 | 28 | 1.0674 | 0.1203 | 1.0674 | 1.0331 |
| No log | 0.7317 | 30 | 0.9890 | 0.1107 | 0.9890 | 0.9945 |
| No log | 0.7805 | 32 | 0.9718 | 0.3221 | 0.9718 | 0.9858 |
| No log | 0.8293 | 34 | 1.1094 | 0.1576 | 1.1094 | 1.0533 |
| No log | 0.8780 | 36 | 1.0090 | 0.2392 | 1.0090 | 1.0045 |
| No log | 0.9268 | 38 | 0.9661 | 0.3935 | 0.9661 | 0.9829 |
| No log | 0.9756 | 40 | 0.9632 | 0.2834 | 0.9632 | 0.9814 |
| No log | 1.0244 | 42 | 1.0046 | 0.2217 | 1.0046 | 1.0023 |
| No log | 1.0732 | 44 | 1.0990 | 0.1189 | 1.0990 | 1.0483 |
| No log | 1.1220 | 46 | 1.1539 | 0.0761 | 1.1539 | 1.0742 |
| No log | 1.1707 | 48 | 1.1020 | 0.0823 | 1.1020 | 1.0498 |
| No log | 1.2195 | 50 | 1.0902 | 0.1350 | 1.0902 | 1.0441 |
| No log | 1.2683 | 52 | 1.1294 | 0.1003 | 1.1294 | 1.0627 |
| No log | 1.3171 | 54 | 1.1120 | 0.2145 | 1.1120 | 1.0545 |
| No log | 1.3659 | 56 | 1.0491 | 0.2912 | 1.0491 | 1.0243 |
| No log | 1.4146 | 58 | 0.9670 | 0.3048 | 0.9670 | 0.9834 |
| No log | 1.4634 | 60 | 0.9772 | 0.1330 | 0.9772 | 0.9885 |
| No log | 1.5122 | 62 | 1.0123 | 0.0888 | 1.0123 | 1.0061 |
| No log | 1.5610 | 64 | 1.0275 | 0.0888 | 1.0275 | 1.0136 |
| No log | 1.6098 | 66 | 1.0126 | 0.0888 | 1.0126 | 1.0063 |
| No log | 1.6585 | 68 | 0.9315 | 0.2672 | 0.9315 | 0.9651 |
| No log | 1.7073 | 70 | 0.9216 | 0.2895 | 0.9216 | 0.9600 |
| No log | 1.7561 | 72 | 1.0374 | 0.2392 | 1.0374 | 1.0186 |
| No log | 1.8049 | 74 | 1.0876 | 0.3283 | 1.0876 | 1.0429 |
| No log | 1.8537 | 76 | 1.1482 | 0.3283 | 1.1482 | 1.0716 |
| No log | 1.9024 | 78 | 1.2706 | 0.2926 | 1.2706 | 1.1272 |
| No log | 1.9512 | 80 | 1.4393 | 0.2588 | 1.4393 | 1.1997 |
| No log | 2.0 | 82 | 1.6033 | 0.2016 | 1.6033 | 1.2662 |
| No log | 2.0488 | 84 | 1.7020 | 0.2026 | 1.7020 | 1.3046 |
| No log | 2.0976 | 86 | 1.6749 | 0.1914 | 1.6749 | 1.2942 |
| No log | 2.1463 | 88 | 1.5513 | 0.0973 | 1.5513 | 1.2455 |
| No log | 2.1951 | 90 | 1.4070 | 0.2149 | 1.4070 | 1.1862 |
| No log | 2.2439 | 92 | 1.4139 | 0.1362 | 1.4139 | 1.1891 |
| No log | 2.2927 | 94 | 1.4569 | 0.1297 | 1.4569 | 1.2070 |
| No log | 2.3415 | 96 | 1.3699 | 0.0976 | 1.3699 | 1.1704 |
| No log | 2.3902 | 98 | 1.1940 | 0.1434 | 1.1940 | 1.0927 |
| No log | 2.4390 | 100 | 1.0175 | 0.2263 | 1.0175 | 1.0087 |
| No log | 2.4878 | 102 | 0.9743 | 0.1857 | 0.9743 | 0.9871 |
| No log | 2.5366 | 104 | 0.9971 | 0.1707 | 0.9971 | 0.9986 |
| No log | 2.5854 | 106 | 1.0930 | 0.1970 | 1.0930 | 1.0454 |
| No log | 2.6341 | 108 | 1.1941 | 0.0571 | 1.1941 | 1.0928 |
| No log | 2.6829 | 110 | 1.3024 | -0.0297 | 1.3024 | 1.1412 |
| No log | 2.7317 | 112 | 1.3887 | -0.0460 | 1.3887 | 1.1784 |
| No log | 2.7805 | 114 | 1.4502 | 0.1194 | 1.4502 | 1.2043 |
| No log | 2.8293 | 116 | 1.4877 | 0.2313 | 1.4877 | 1.2197 |
| No log | 2.8780 | 118 | 1.5875 | 0.2638 | 1.5875 | 1.2599 |
| No log | 2.9268 | 120 | 1.6965 | 0.2252 | 1.6965 | 1.3025 |
| No log | 2.9756 | 122 | 1.7232 | 0.1078 | 1.7232 | 1.3127 |
| No log | 3.0244 | 124 | 1.6639 | 0.1207 | 1.6639 | 1.2899 |
| No log | 3.0732 | 126 | 1.5461 | 0.2004 | 1.5461 | 1.2434 |
| No log | 3.1220 | 128 | 1.4334 | 0.1438 | 1.4334 | 1.1973 |
| No log | 3.1707 | 130 | 1.4259 | 0.0712 | 1.4259 | 1.1941 |
| No log | 3.2195 | 132 | 1.4356 | -0.0541 | 1.4356 | 1.1982 |
| No log | 3.2683 | 134 | 1.4579 | -0.0939 | 1.4579 | 1.2074 |
| No log | 3.3171 | 136 | 1.4491 | 0.0147 | 1.4491 | 1.2038 |
| No log | 3.3659 | 138 | 1.5209 | 0.2752 | 1.5209 | 1.2332 |
| No log | 3.4146 | 140 | 1.5553 | 0.2110 | 1.5553 | 1.2471 |
| No log | 3.4634 | 142 | 1.5332 | 0.2317 | 1.5332 | 1.2382 |
| No log | 3.5122 | 144 | 1.4563 | 0.2437 | 1.4563 | 1.2068 |
| No log | 3.5610 | 146 | 1.4090 | 0.2694 | 1.4090 | 1.1870 |
| No log | 3.6098 | 148 | 1.4642 | 0.2694 | 1.4642 | 1.2101 |
| No log | 3.6585 | 150 | 1.6547 | 0.2644 | 1.6547 | 1.2864 |
| No log | 3.7073 | 152 | 1.9515 | 0.2247 | 1.9515 | 1.3970 |
| No log | 3.7561 | 154 | 2.0626 | 0.1896 | 2.0626 | 1.4362 |
| No log | 3.8049 | 156 | 1.9816 | 0.2127 | 1.9816 | 1.4077 |
| No log | 3.8537 | 158 | 1.7125 | 0.2252 | 1.7125 | 1.3086 |
| No log | 3.9024 | 160 | 1.4900 | 0.2221 | 1.4900 | 1.2206 |
| No log | 3.9512 | 162 | 1.3005 | 0.2424 | 1.3005 | 1.1404 |
| No log | 4.0 | 164 | 1.3190 | 0.2709 | 1.3190 | 1.1485 |
| No log | 4.0488 | 166 | 1.3899 | 0.2752 | 1.3899 | 1.1789 |
| No log | 4.0976 | 168 | 1.5366 | 0.2363 | 1.5366 | 1.2396 |
| No log | 4.1463 | 170 | 1.6811 | 0.2448 | 1.6811 | 1.2966 |
| No log | 4.1951 | 172 | 1.6823 | 0.2406 | 1.6823 | 1.2971 |
| No log | 4.2439 | 174 | 1.6922 | 0.2406 | 1.6922 | 1.3009 |
| No log | 4.2927 | 176 | 1.6000 | 0.2869 | 1.6000 | 1.2649 |
| No log | 4.3415 | 178 | 1.5276 | 0.2869 | 1.5276 | 1.2360 |
| No log | 4.3902 | 180 | 1.5637 | 0.2869 | 1.5637 | 1.2505 |
| No log | 4.4390 | 182 | 1.5011 | 0.2869 | 1.5011 | 1.2252 |
| No log | 4.4878 | 184 | 1.3543 | 0.2982 | 1.3543 | 1.1637 |
| No log | 4.5366 | 186 | 1.1786 | 0.2709 | 1.1786 | 1.0856 |
| No log | 4.5854 | 188 | 1.1007 | 0.2795 | 1.1007 | 1.0492 |
| No log | 4.6341 | 190 | 1.1784 | 0.2709 | 1.1784 | 1.0855 |
| No log | 4.6829 | 192 | 1.3811 | 0.2982 | 1.3811 | 1.1752 |
| No log | 4.7317 | 194 | 1.5727 | 0.2832 | 1.5727 | 1.2541 |
| No log | 4.7805 | 196 | 1.6856 | 0.2270 | 1.6856 | 1.2983 |
| No log | 4.8293 | 198 | 1.6794 | 0.2437 | 1.6794 | 1.2959 |
| No log | 4.8780 | 200 | 1.5514 | 0.2522 | 1.5514 | 1.2456 |
| No log | 4.9268 | 202 | 1.4239 | 0.2709 | 1.4239 | 1.1933 |
| No log | 4.9756 | 204 | 1.2668 | 0.1142 | 1.2668 | 1.1255 |
| No log | 5.0244 | 206 | 1.2415 | 0.1142 | 1.2415 | 1.1142 |
| No log | 5.0732 | 208 | 1.3106 | 0.2065 | 1.3106 | 1.1448 |
| No log | 5.1220 | 210 | 1.4415 | 0.2424 | 1.4415 | 1.2006 |
| No log | 5.1707 | 212 | 1.6183 | 0.1832 | 1.6183 | 1.2721 |
| No log | 5.2195 | 214 | 1.7023 | 0.1533 | 1.7023 | 1.3047 |
| No log | 5.2683 | 216 | 1.6420 | 0.1142 | 1.6420 | 1.2814 |
| No log | 5.3171 | 218 | 1.5048 | 0.1562 | 1.5048 | 1.2267 |
| No log | 5.3659 | 220 | 1.4030 | 0.1744 | 1.4030 | 1.1845 |
| No log | 5.4146 | 222 | 1.3956 | 0.1744 | 1.3956 | 1.1814 |
| No log | 5.4634 | 224 | 1.4510 | 0.2065 | 1.4510 | 1.2046 |
| No log | 5.5122 | 226 | 1.5505 | 0.2522 | 1.5505 | 1.2452 |
| No log | 5.5610 | 228 | 1.5876 | 0.2568 | 1.5876 | 1.2600 |
| No log | 5.6098 | 230 | 1.4723 | 0.1880 | 1.4723 | 1.2134 |
| No log | 5.6585 | 232 | 1.4053 | 0.1562 | 1.4053 | 1.1854 |
| No log | 5.7073 | 234 | 1.4550 | 0.1943 | 1.4550 | 1.2063 |
| No log | 5.7561 | 236 | 1.5212 | 0.1943 | 1.5212 | 1.2334 |
| No log | 5.8049 | 238 | 1.5631 | 0.1634 | 1.5631 | 1.2502 |
| No log | 5.8537 | 240 | 1.6752 | 0.1141 | 1.6752 | 1.2943 |
| No log | 5.9024 | 242 | 1.6585 | 0.1634 | 1.6585 | 1.2878 |
| No log | 5.9512 | 244 | 1.5822 | 0.2004 | 1.5822 | 1.2578 |
| No log | 6.0 | 246 | 1.4101 | 0.2522 | 1.4101 | 1.1875 |
| No log | 6.0488 | 248 | 1.2845 | 0.2184 | 1.2845 | 1.1333 |
| No log | 6.0976 | 250 | 1.3726 | 0.3052 | 1.3726 | 1.1716 |
| No log | 6.1463 | 252 | 1.5895 | 0.2527 | 1.5895 | 1.2608 |
| No log | 6.1951 | 254 | 1.7285 | 0.2419 | 1.7285 | 1.3147 |
| No log | 6.2439 | 256 | 1.7297 | 0.2419 | 1.7297 | 1.3152 |
| No log | 6.2927 | 258 | 1.5794 | 0.2296 | 1.5794 | 1.2567 |
| No log | 6.3415 | 260 | 1.5724 | 0.2296 | 1.5724 | 1.2539 |
| No log | 6.3902 | 262 | 1.5028 | 0.3052 | 1.5028 | 1.2259 |
| No log | 6.4390 | 264 | 1.4208 | 0.2474 | 1.4208 | 1.1920 |
| No log | 6.4878 | 266 | 1.3507 | 0.2126 | 1.3507 | 1.1622 |
| No log | 6.5366 | 268 | 1.3940 | 0.2424 | 1.3940 | 1.1807 |
| No log | 6.5854 | 270 | 1.5438 | 0.2611 | 1.5438 | 1.2425 |
| No log | 6.6341 | 272 | 1.6434 | 0.2110 | 1.6434 | 1.2819 |
| No log | 6.6829 | 274 | 1.6874 | 0.1752 | 1.6874 | 1.2990 |
| No log | 6.7317 | 276 | 1.5767 | 0.2117 | 1.5767 | 1.2557 |
| No log | 6.7805 | 278 | 1.4146 | 0.1486 | 1.4146 | 1.1893 |
| No log | 6.8293 | 280 | 1.2838 | 0.1142 | 1.2838 | 1.1330 |
| No log | 6.8780 | 282 | 1.2129 | 0.0781 | 1.2129 | 1.1013 |
| No log | 6.9268 | 284 | 1.2424 | 0.1142 | 1.2424 | 1.1146 |
| No log | 6.9756 | 286 | 1.3719 | 0.2424 | 1.3719 | 1.1713 |
| No log | 7.0244 | 288 | 1.5464 | 0.2437 | 1.5464 | 1.2435 |
| No log | 7.0732 | 290 | 1.7671 | 0.2007 | 1.7671 | 1.3293 |
| No log | 7.1220 | 292 | 1.8037 | 0.1688 | 1.8037 | 1.3430 |
| No log | 7.1707 | 294 | 1.7056 | 0.2058 | 1.7056 | 1.3060 |
| No log | 7.2195 | 296 | 1.5468 | 0.2239 | 1.5468 | 1.2437 |
| No log | 7.2683 | 298 | 1.3254 | 0.2126 | 1.3254 | 1.1513 |
| No log | 7.3171 | 300 | 1.2248 | 0.1142 | 1.2248 | 1.1067 |
| No log | 7.3659 | 302 | 1.2181 | 0.1024 | 1.2181 | 1.1037 |
| No log | 7.4146 | 304 | 1.3199 | 0.2752 | 1.3199 | 1.1489 |
| No log | 7.4634 | 306 | 1.4566 | 0.2869 | 1.4566 | 1.2069 |
| No log | 7.5122 | 308 | 1.5103 | 0.2906 | 1.5103 | 1.2289 |
| No log | 7.5610 | 310 | 1.5202 | 0.2733 | 1.5202 | 1.2330 |
| No log | 7.6098 | 312 | 1.4770 | 0.3177 | 1.4770 | 1.2153 |
| No log | 7.6585 | 314 | 1.4998 | 0.3205 | 1.4998 | 1.2247 |
| No log | 7.7073 | 316 | 1.5688 | 0.2874 | 1.5688 | 1.2525 |
| No log | 7.7561 | 318 | 1.6757 | 0.2681 | 1.6757 | 1.2945 |
| No log | 7.8049 | 320 | 1.6044 | 0.2482 | 1.6044 | 1.2667 |
| No log | 7.8537 | 322 | 1.4328 | 0.2522 | 1.4328 | 1.1970 |
| No log | 7.9024 | 324 | 1.2480 | 0.1744 | 1.2480 | 1.1171 |
| No log | 7.9512 | 326 | 1.1255 | 0.0401 | 1.1255 | 1.0609 |
| No log | 8.0 | 328 | 1.0764 | 0.0781 | 1.0764 | 1.0375 |
| No log | 8.0488 | 330 | 1.1043 | 0.1744 | 1.1043 | 1.0508 |
| No log | 8.0976 | 332 | 1.2622 | 0.3018 | 1.2622 | 1.1235 |
| No log | 8.1463 | 334 | 1.5303 | 0.2606 | 1.5303 | 1.2370 |
| No log | 8.1951 | 336 | 1.7740 | 0.3024 | 1.7740 | 1.3319 |
| No log | 8.2439 | 338 | 1.8534 | 0.2988 | 1.8534 | 1.3614 |
| No log | 8.2927 | 340 | 1.7466 | 0.2967 | 1.7466 | 1.3216 |
| No log | 8.3415 | 342 | 1.5761 | 0.2940 | 1.5761 | 1.2554 |
| No log | 8.3902 | 344 | 1.4872 | 0.2793 | 1.4872 | 1.2195 |
| No log | 8.4390 | 346 | 1.4299 | 0.2665 | 1.4299 | 1.1958 |
| No log | 8.4878 | 348 | 1.4240 | 0.2665 | 1.4240 | 1.1933 |
| No log | 8.5366 | 350 | 1.4802 | 0.2474 | 1.4802 | 1.2166 |
| No log | 8.5854 | 352 | 1.4316 | 0.2709 | 1.4316 | 1.1965 |
| No log | 8.6341 | 354 | 1.3518 | 0.2665 | 1.3518 | 1.1627 |
| No log | 8.6829 | 356 | 1.4224 | 0.2568 | 1.4224 | 1.1926 |
| No log | 8.7317 | 358 | 1.4891 | 0.3117 | 1.4891 | 1.2203 |
| No log | 8.7805 | 360 | 1.4446 | 0.2709 | 1.4446 | 1.2019 |
| No log | 8.8293 | 362 | 1.3826 | 0.2665 | 1.3826 | 1.1759 |
| No log | 8.8780 | 364 | 1.3559 | 0.2665 | 1.3559 | 1.1644 |
| No log | 8.9268 | 366 | 1.3007 | 0.1814 | 1.3007 | 1.1405 |
| No log | 8.9756 | 368 | 1.3466 | 0.2372 | 1.3466 | 1.1604 |
| No log | 9.0244 | 370 | 1.4908 | 0.2793 | 1.4908 | 1.2210 |
| No log | 9.0732 | 372 | 1.5425 | 0.3052 | 1.5425 | 1.2420 |
| No log | 9.1220 | 374 | 1.4719 | 0.2752 | 1.4719 | 1.2132 |
| No log | 9.1707 | 376 | 1.3776 | 0.2126 | 1.3776 | 1.1737 |
| No log | 9.2195 | 378 | 1.3444 | 0.1814 | 1.3444 | 1.1595 |
| No log | 9.2683 | 380 | 1.3261 | 0.1814 | 1.3261 | 1.1516 |
| No log | 9.3171 | 382 | 1.3477 | 0.2126 | 1.3477 | 1.1609 |
| No log | 9.3659 | 384 | 1.3544 | 0.2126 | 1.3544 | 1.1638 |
| No log | 9.4146 | 386 | 1.3364 | 0.2126 | 1.3364 | 1.1560 |
| No log | 9.4634 | 388 | 1.3280 | 0.2126 | 1.3280 | 1.1524 |
| No log | 9.5122 | 390 | 1.3023 | 0.2424 | 1.3023 | 1.1412 |
| No log | 9.5610 | 392 | 1.3131 | 0.2944 | 1.3131 | 1.1459 |
| No log | 9.6098 | 394 | 1.3688 | 0.2982 | 1.3688 | 1.1700 |
| No log | 9.6585 | 396 | 1.4065 | 0.3018 | 1.4065 | 1.1860 |
| No log | 9.7073 | 398 | 1.4039 | 0.3018 | 1.4039 | 1.1849 |
| No log | 9.7561 | 400 | 1.3519 | 0.2982 | 1.3519 | 1.1627 |
| No log | 9.8049 | 402 | 1.3142 | 0.2665 | 1.3142 | 1.1464 |
| No log | 9.8537 | 404 | 1.3158 | 0.2665 | 1.3158 | 1.1471 |
| No log | 9.9024 | 406 | 1.3620 | 0.2665 | 1.3620 | 1.1671 |
| No log | 9.9512 | 408 | 1.4741 | 0.2832 | 1.4741 | 1.2141 |
| No log | 10.0 | 410 | 1.4998 | 0.2832 | 1.4998 | 1.2247 |
| No log | 10.0488 | 412 | 1.5107 | 0.2832 | 1.5107 | 1.2291 |
| No log | 10.0976 | 414 | 1.5279 | 0.2832 | 1.5279 | 1.2361 |
| No log | 10.1463 | 416 | 1.5454 | 0.2793 | 1.5454 | 1.2431 |
| No log | 10.1951 | 418 | 1.5071 | 0.2793 | 1.5071 | 1.2276 |
| No log | 10.2439 | 420 | 1.4512 | 0.2982 | 1.4512 | 1.2047 |
| No log | 10.2927 | 422 | 1.3641 | 0.2982 | 1.3641 | 1.1680 |
| No log | 10.3415 | 424 | 1.2933 | 0.2709 | 1.2933 | 1.1372 |
| No log | 10.3902 | 426 | 1.2873 | 0.2424 | 1.2873 | 1.1346 |
| No log | 10.4390 | 428 | 1.3117 | 0.2424 | 1.3117 | 1.1453 |
| No log | 10.4878 | 430 | 1.3510 | 0.2424 | 1.3510 | 1.1623 |
| No log | 10.5366 | 432 | 1.4179 | 0.2239 | 1.4179 | 1.1907 |
| No log | 10.5854 | 434 | 1.4578 | 0.2832 | 1.4578 | 1.2074 |
| No log | 10.6341 | 436 | 1.4536 | 0.2568 | 1.4536 | 1.2057 |
| No log | 10.6829 | 438 | 1.4186 | 0.2424 | 1.4186 | 1.1911 |
| No log | 10.7317 | 440 | 1.4318 | 0.2126 | 1.4318 | 1.1966 |
| No log | 10.7805 | 442 | 1.4254 | 0.2126 | 1.4254 | 1.1939 |
| No log | 10.8293 | 444 | 1.4542 | 0.2126 | 1.4542 | 1.2059 |
| No log | 10.8780 | 446 | 1.4625 | 0.2239 | 1.4625 | 1.2094 |
| No log | 10.9268 | 448 | 1.5026 | 0.1880 | 1.5026 | 1.2258 |
| No log | 10.9756 | 450 | 1.4913 | 0.1562 | 1.4913 | 1.2212 |
| No log | 11.0244 | 452 | 1.5060 | 0.1024 | 1.5060 | 1.2272 |
| No log | 11.0732 | 454 | 1.5108 | 0.1744 | 1.5108 | 1.2291 |
| No log | 11.1220 | 456 | 1.5158 | 0.2004 | 1.5158 | 1.2312 |
| No log | 11.1707 | 458 | 1.5287 | 0.2522 | 1.5287 | 1.2364 |
| No log | 11.2195 | 460 | 1.4799 | 0.2832 | 1.4799 | 1.2165 |
| No log | 11.2683 | 462 | 1.3876 | 0.2982 | 1.3876 | 1.1780 |
| No log | 11.3171 | 464 | 1.2905 | 0.3072 | 1.2905 | 1.1360 |
| No log | 11.3659 | 466 | 1.2601 | 0.3072 | 1.2601 | 1.1226 |
| No log | 11.4146 | 468 | 1.3082 | 0.2709 | 1.3082 | 1.1438 |
| No log | 11.4634 | 470 | 1.3823 | 0.2982 | 1.3823 | 1.1757 |
| No log | 11.5122 | 472 | 1.4097 | 0.2832 | 1.4097 | 1.1873 |
| No log | 11.5610 | 474 | 1.3280 | 0.2982 | 1.3280 | 1.1524 |
| No log | 11.6098 | 476 | 1.2140 | 0.3460 | 1.2140 | 1.1018 |
| No log | 11.6585 | 478 | 1.1924 | 0.2730 | 1.1924 | 1.0920 |
| No log | 11.7073 | 480 | 1.2291 | 0.2837 | 1.2291 | 1.1086 |
| No log | 11.7561 | 482 | 1.3147 | 0.2869 | 1.3147 | 1.1466 |
| No log | 11.8049 | 484 | 1.3998 | 0.2694 | 1.3998 | 1.1831 |
| No log | 11.8537 | 486 | 1.4945 | 0.3429 | 1.4945 | 1.2225 |
| No log | 11.9024 | 488 | 1.4541 | 0.3429 | 1.4540 | 1.2058 |
| No log | 11.9512 | 490 | 1.3182 | 0.3086 | 1.3182 | 1.1481 |
| No log | 12.0 | 492 | 1.1752 | 0.2877 | 1.1752 | 1.0841 |
| No log | 12.0488 | 494 | 1.1071 | 0.3355 | 1.1071 | 1.0522 |
| No log | 12.0976 | 496 | 1.1069 | 0.3231 | 1.1069 | 1.0521 |
| No log | 12.1463 | 498 | 1.1869 | 0.3052 | 1.1869 | 1.0895 |
| 0.3045 | 12.1951 | 500 | 1.3081 | 0.3329 | 1.3081 | 1.1437 |
| 0.3045 | 12.2439 | 502 | 1.4234 | 0.3429 | 1.4234 | 1.1931 |
| 0.3045 | 12.2927 | 504 | 1.4174 | 0.3329 | 1.4174 | 1.1906 |
| 0.3045 | 12.3415 | 506 | 1.3559 | 0.3086 | 1.3559 | 1.1644 |
| 0.3045 | 12.3902 | 508 | 1.2393 | 0.2126 | 1.2393 | 1.1132 |
| 0.3045 | 12.4390 | 510 | 1.2199 | 0.1814 | 1.2199 | 1.1045 |
| 0.3045 | 12.4878 | 512 | 1.2639 | 0.1814 | 1.2639 | 1.1242 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
VERSIL91/13889a03-e443-4bcb-a2ab-46cfe5ea650a | VERSIL91 | 2025-01-20T23:08:56Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-20T23:08:51Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 370ef635-02c6-4a8f-be9e-f46f2205d9d9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dd06633aceb12410_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dd06633aceb12410_train_data.json
type:
field_instruction: tests
field_output: prompt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: null
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/dd06633aceb12410_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 370ef635-02c6-4a8f-be9e-f46f2205d9d9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 370ef635-02c6-4a8f-be9e-f46f2205d9d9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 370ef635-02c6-4a8f-be9e-f46f2205d9d9
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
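(For reference, the total train batch size above follows from the other values: train_batch_size × gradient_accumulation_steps = 2 × 16 = 32.)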
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.7619 | 1 | nan |
| 0.0 | 1.5238 | 2 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
John6666/pancake-mix-illustrious-sdxl | John6666 | 2025-01-20T23:08:37Z | 67 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"characters",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-01-20T23:02:05Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- characters
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/896658?modelVersionId=1308210).
This model was created by [Sukizou](https://civitai.com/user/Sukizou).
|
andrewmalk/kitsman | andrewmalk | 2025-01-20T23:08:11Z | 65 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-17T05:27:16Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: okitsman
---
# Kitsman
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `okitsman` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline, then attach this LoRA adapter
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('andrewmalk/kitsman', weight_name='lora.safetensors')

# Include the trigger word `okitsman` in your prompt
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
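As a minimal sketch of the fusing workflow mentioned above (the `lora_scale` value and the example prompt are illustrative assumptions, not part of this card):
```py
# Optionally fuse the LoRA weights into the base model for faster repeated inference
pipeline.fuse_lora(lora_scale=1.0)
image = pipeline('okitsman portrait photo').images[0]

# Undo the fusion if you later want to swap or remove adapters
pipeline.unfuse_lora()
```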
|
lesso02/1a5098f7-cebd-4eeb-a0f1-0753bde57ace | lesso02 | 2025-01-20T23:07:05Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-13b-hf-flash",
"base_model:adapter:NousResearch/CodeLlama-13b-hf-flash",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T22:52:47Z | ---
library_name: peft
base_model: NousResearch/CodeLlama-13b-hf-flash
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1a5098f7-cebd-4eeb-a0f1-0753bde57ace
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-13b-hf-flash
bf16: true
chat_template: llama3
datasets:
- data_files:
- 9ff4e3b24bf3b2a4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9ff4e3b24bf3b2a4_train_data.json
type:
field_input: sentence1
field_instruction: phrase1
field_output: sentence2
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso02/1a5098f7-cebd-4eeb-a0f1-0753bde57ace
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/9ff4e3b24bf3b2a4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 05245b1d-e8ff-44bb-a139-f31fd23d5a4a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 05245b1d-e8ff-44bb-a139-f31fd23d5a4a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1a5098f7-cebd-4eeb-a0f1-0753bde57ace
This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-13b-hf-flash) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 2.4527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 16.5546 | 0.0011 | 1 | 4.0087 |
| 14.6587 | 0.0057 | 5 | 3.9323 |
| 11.4912 | 0.0114 | 10 | 2.9548 |
| 11.1484 | 0.0171 | 15 | 2.5602 |
| 9.782 | 0.0229 | 20 | 2.4786 |
| 8.6338 | 0.0286 | 25 | 2.4527 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrHunghddddd/71740281-efde-40d2-bc98-1a144c2a49c5 | mrHunghddddd | 2025-01-20T23:06:46Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Vikhrmodels/Vikhr-7B-instruct_0.4",
"base_model:adapter:Vikhrmodels/Vikhr-7B-instruct_0.4",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T22:22:29Z | ---
library_name: peft
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 71740281-efde-40d2-bc98-1a144c2a49c5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cae8a8291b672052_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cae8a8291b672052_train_data.json
type:
field_input: poem_meter
field_instruction: poem_title
field_output: poem_verses
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHunghddddd/71740281-efde-40d2-bc98-1a144c2a49c5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/cae8a8291b672052_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d2d62167-223e-438b-b1e7-02a477624b1a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d2d62167-223e-438b-b1e7-02a477624b1a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 71740281-efde-40d2-bc98-1a144c2a49c5
This model is a fine-tuned version of [Vikhrmodels/Vikhr-7B-instruct_0.4](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.4) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 1.7143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7516 | 0.1941 | 200 | 1.7143 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MayBashendy/ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k14_task5_organization | MayBashendy | 2025-01-20T23:05:46Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-20T15:09:27Z | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k14_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k14_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6862
- Qwk: 0.5472
- Mse: 0.6862
- Rmse: 0.8284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0435 | 2 | 3.8865 | -0.0294 | 3.8865 | 1.9714 |
| No log | 0.0870 | 4 | 2.0120 | 0.0142 | 2.0120 | 1.4184 |
| No log | 0.1304 | 6 | 1.7353 | -0.0458 | 1.7353 | 1.3173 |
| No log | 0.1739 | 8 | 1.2631 | 0.0380 | 1.2631 | 1.1239 |
| No log | 0.2174 | 10 | 1.1664 | -0.0032 | 1.1664 | 1.0800 |
| No log | 0.2609 | 12 | 1.1818 | 0.0380 | 1.1818 | 1.0871 |
| No log | 0.3043 | 14 | 1.2235 | 0.0380 | 1.2235 | 1.1061 |
| No log | 0.3478 | 16 | 1.2637 | 0.0380 | 1.2637 | 1.1242 |
| No log | 0.3913 | 18 | 1.2607 | 0.0760 | 1.2607 | 1.1228 |
| No log | 0.4348 | 20 | 1.2646 | 0.1142 | 1.2646 | 1.1246 |
| No log | 0.4783 | 22 | 1.1968 | 0.1910 | 1.1968 | 1.0940 |
| No log | 0.5217 | 24 | 1.1106 | 0.1805 | 1.1106 | 1.0539 |
| No log | 0.5652 | 26 | 1.1018 | 0.1493 | 1.1018 | 1.0497 |
| No log | 0.6087 | 28 | 1.1995 | 0.0999 | 1.1995 | 1.0952 |
| No log | 0.6522 | 30 | 1.1395 | 0.1832 | 1.1395 | 1.0675 |
| No log | 0.6957 | 32 | 1.1605 | 0.0436 | 1.1605 | 1.0772 |
| No log | 0.7391 | 34 | 1.2550 | 0.0883 | 1.2550 | 1.1203 |
| No log | 0.7826 | 36 | 1.3800 | 0.0639 | 1.3800 | 1.1747 |
| No log | 0.8261 | 38 | 1.2540 | 0.0998 | 1.2540 | 1.1198 |
| No log | 0.8696 | 40 | 1.1479 | 0.1389 | 1.1479 | 1.0714 |
| No log | 0.9130 | 42 | 1.2217 | 0.1028 | 1.2217 | 1.1053 |
| No log | 0.9565 | 44 | 1.5665 | 0.0389 | 1.5665 | 1.2516 |
| No log | 1.0 | 46 | 1.6655 | 0.0516 | 1.6655 | 1.2905 |
| No log | 1.0435 | 48 | 1.4103 | 0.0598 | 1.4103 | 1.1876 |
| No log | 1.0870 | 50 | 1.0909 | 0.1821 | 1.0909 | 1.0444 |
| No log | 1.1304 | 52 | 0.9701 | 0.3117 | 0.9701 | 0.9849 |
| No log | 1.1739 | 54 | 0.9537 | 0.3414 | 0.9537 | 0.9766 |
| No log | 1.2174 | 56 | 0.9618 | 0.3562 | 0.9618 | 0.9807 |
| No log | 1.2609 | 58 | 0.9734 | 0.3557 | 0.9734 | 0.9866 |
| No log | 1.3043 | 60 | 0.9510 | 0.3562 | 0.9510 | 0.9752 |
| No log | 1.3478 | 62 | 0.9381 | 0.3733 | 0.9381 | 0.9686 |
| No log | 1.3913 | 64 | 0.9339 | 0.3014 | 0.9339 | 0.9664 |
| No log | 1.4348 | 66 | 0.9494 | 0.2935 | 0.9494 | 0.9743 |
| No log | 1.4783 | 68 | 0.9506 | 0.2391 | 0.9506 | 0.9750 |
| No log | 1.5217 | 70 | 0.9589 | 0.2114 | 0.9589 | 0.9792 |
| No log | 1.5652 | 72 | 0.9002 | 0.3014 | 0.9002 | 0.9488 |
| No log | 1.6087 | 74 | 0.8838 | 0.3414 | 0.8838 | 0.9401 |
| No log | 1.6522 | 76 | 0.8590 | 0.3817 | 0.8590 | 0.9268 |
| No log | 1.6957 | 78 | 0.8336 | 0.3519 | 0.8336 | 0.9130 |
| No log | 1.7391 | 80 | 0.8431 | 0.4 | 0.8431 | 0.9182 |
| No log | 1.7826 | 82 | 0.8430 | 0.3981 | 0.8430 | 0.9181 |
| No log | 1.8261 | 84 | 0.8405 | 0.3537 | 0.8405 | 0.9168 |
| No log | 1.8696 | 86 | 0.8561 | 0.5155 | 0.8561 | 0.9252 |
| No log | 1.9130 | 88 | 0.8821 | 0.5174 | 0.8821 | 0.9392 |
| No log | 1.9565 | 90 | 0.8903 | 0.4459 | 0.8903 | 0.9435 |
| No log | 2.0 | 92 | 1.0073 | 0.3972 | 1.0073 | 1.0036 |
| No log | 2.0435 | 94 | 1.0262 | 0.3734 | 1.0262 | 1.0130 |
| No log | 2.0870 | 96 | 0.8925 | 0.5107 | 0.8925 | 0.9447 |
| No log | 2.1304 | 98 | 0.8793 | 0.4979 | 0.8793 | 0.9377 |
| No log | 2.1739 | 100 | 1.0192 | 0.4416 | 1.0192 | 1.0096 |
| No log | 2.2174 | 102 | 0.9693 | 0.4318 | 0.9693 | 0.9845 |
| No log | 2.2609 | 104 | 0.8455 | 0.5363 | 0.8455 | 0.9195 |
| No log | 2.3043 | 106 | 0.8263 | 0.5450 | 0.8263 | 0.9090 |
| No log | 2.3478 | 108 | 0.7896 | 0.4813 | 0.7896 | 0.8886 |
| No log | 2.3913 | 110 | 0.7701 | 0.4733 | 0.7701 | 0.8776 |
| No log | 2.4348 | 112 | 0.7638 | 0.4984 | 0.7638 | 0.8740 |
| No log | 2.4783 | 114 | 0.7713 | 0.5128 | 0.7713 | 0.8782 |
| No log | 2.5217 | 116 | 0.7180 | 0.5635 | 0.7180 | 0.8474 |
| No log | 2.5652 | 118 | 0.7116 | 0.4787 | 0.7116 | 0.8436 |
| No log | 2.6087 | 120 | 0.7141 | 0.5044 | 0.7141 | 0.8450 |
| No log | 2.6522 | 122 | 0.7626 | 0.6386 | 0.7626 | 0.8733 |
| No log | 2.6957 | 124 | 0.7743 | 0.6435 | 0.7743 | 0.8800 |
| No log | 2.7391 | 126 | 0.7699 | 0.5796 | 0.7699 | 0.8774 |
| No log | 2.7826 | 128 | 0.8402 | 0.4283 | 0.8402 | 0.9166 |
| No log | 2.8261 | 130 | 1.0326 | 0.3363 | 1.0326 | 1.0161 |
| No log | 2.8696 | 132 | 1.0287 | 0.3761 | 1.0287 | 1.0143 |
| No log | 2.9130 | 134 | 0.8748 | 0.4470 | 0.8748 | 0.9353 |
| No log | 2.9565 | 136 | 0.7884 | 0.5009 | 0.7884 | 0.8879 |
| No log | 3.0 | 138 | 0.7676 | 0.5570 | 0.7676 | 0.8761 |
| No log | 3.0435 | 140 | 0.7662 | 0.5797 | 0.7662 | 0.8753 |
| No log | 3.0870 | 142 | 0.7939 | 0.5103 | 0.7939 | 0.8910 |
| No log | 3.1304 | 144 | 0.8795 | 0.4455 | 0.8795 | 0.9378 |
| No log | 3.1739 | 146 | 0.8895 | 0.4935 | 0.8895 | 0.9431 |
| No log | 3.2174 | 148 | 0.8184 | 0.5528 | 0.8184 | 0.9046 |
| No log | 3.2609 | 150 | 0.8132 | 0.4819 | 0.8132 | 0.9018 |
| No log | 3.3043 | 152 | 0.8196 | 0.4615 | 0.8196 | 0.9053 |
| No log | 3.3478 | 154 | 0.8178 | 0.4419 | 0.8178 | 0.9043 |
| No log | 3.3913 | 156 | 0.8523 | 0.4570 | 0.8523 | 0.9232 |
| No log | 3.4348 | 158 | 0.8042 | 0.4158 | 0.8042 | 0.8968 |
| No log | 3.4783 | 160 | 0.8430 | 0.3160 | 0.8430 | 0.9182 |
| No log | 3.5217 | 162 | 0.9556 | 0.3401 | 0.9556 | 0.9776 |
| No log | 3.5652 | 164 | 0.8477 | 0.3160 | 0.8477 | 0.9207 |
| No log | 3.6087 | 166 | 0.7804 | 0.5345 | 0.7804 | 0.8834 |
| No log | 3.6522 | 168 | 0.8543 | 0.4806 | 0.8543 | 0.9243 |
| No log | 3.6957 | 170 | 0.9146 | 0.2865 | 0.9146 | 0.9563 |
| No log | 3.7391 | 172 | 0.9504 | 0.2291 | 0.9504 | 0.9749 |
| No log | 3.7826 | 174 | 0.9597 | 0.2591 | 0.9597 | 0.9796 |
| No log | 3.8261 | 176 | 0.9069 | 0.3445 | 0.9069 | 0.9523 |
| No log | 3.8696 | 178 | 0.9335 | 0.3811 | 0.9335 | 0.9662 |
| No log | 3.9130 | 180 | 0.8494 | 0.3804 | 0.8494 | 0.9216 |
| No log | 3.9565 | 182 | 0.8333 | 0.4576 | 0.8333 | 0.9129 |
| No log | 4.0 | 184 | 0.8048 | 0.4730 | 0.8048 | 0.8971 |
| No log | 4.0435 | 186 | 0.7876 | 0.4461 | 0.7876 | 0.8875 |
| No log | 4.0870 | 188 | 0.8073 | 0.3719 | 0.8073 | 0.8985 |
| No log | 4.1304 | 190 | 0.7829 | 0.4128 | 0.7829 | 0.8848 |
| No log | 4.1739 | 192 | 0.7779 | 0.3996 | 0.7779 | 0.8820 |
| No log | 4.2174 | 194 | 0.7958 | 0.3184 | 0.7958 | 0.8921 |
| No log | 4.2609 | 196 | 0.7889 | 0.3676 | 0.7889 | 0.8882 |
| No log | 4.3043 | 198 | 0.7770 | 0.5163 | 0.7770 | 0.8815 |
| No log | 4.3478 | 200 | 0.7898 | 0.5766 | 0.7898 | 0.8887 |
| No log | 4.3913 | 202 | 0.7020 | 0.5874 | 0.7020 | 0.8378 |
| No log | 4.4348 | 204 | 0.7554 | 0.5618 | 0.7554 | 0.8692 |
| No log | 4.4783 | 206 | 0.7159 | 0.5379 | 0.7159 | 0.8461 |
| No log | 4.5217 | 208 | 0.7211 | 0.5330 | 0.7211 | 0.8492 |
| No log | 4.5652 | 210 | 0.8239 | 0.4902 | 0.8239 | 0.9077 |
| No log | 4.6087 | 212 | 0.7487 | 0.5498 | 0.7487 | 0.8653 |
| No log | 4.6522 | 214 | 0.7070 | 0.4898 | 0.7070 | 0.8409 |
| No log | 4.6957 | 216 | 0.7224 | 0.5213 | 0.7224 | 0.8500 |
| No log | 4.7391 | 218 | 0.8452 | 0.5414 | 0.8452 | 0.9194 |
| No log | 4.7826 | 220 | 0.9413 | 0.4574 | 0.9413 | 0.9702 |
| No log | 4.8261 | 222 | 0.8087 | 0.5231 | 0.8087 | 0.8993 |
| No log | 4.8696 | 224 | 0.7436 | 0.6007 | 0.7436 | 0.8623 |
| No log | 4.9130 | 226 | 0.7462 | 0.5902 | 0.7462 | 0.8638 |
| No log | 4.9565 | 228 | 0.7683 | 0.5763 | 0.7683 | 0.8765 |
| No log | 5.0 | 230 | 0.8623 | 0.5020 | 0.8623 | 0.9286 |
| No log | 5.0435 | 232 | 0.8875 | 0.4681 | 0.8875 | 0.9421 |
| No log | 5.0870 | 234 | 0.8004 | 0.5234 | 0.8004 | 0.8947 |
| No log | 5.1304 | 236 | 0.7856 | 0.4691 | 0.7856 | 0.8864 |
| No log | 5.1739 | 238 | 0.7886 | 0.4918 | 0.7886 | 0.8880 |
| No log | 5.2174 | 240 | 0.7940 | 0.5117 | 0.7940 | 0.8911 |
| No log | 5.2609 | 242 | 0.8967 | 0.4560 | 0.8967 | 0.9469 |
| No log | 5.3043 | 244 | 0.9099 | 0.4987 | 0.9099 | 0.9539 |
| No log | 5.3478 | 246 | 0.7790 | 0.4410 | 0.7790 | 0.8826 |
| No log | 5.3913 | 248 | 0.7415 | 0.5248 | 0.7415 | 0.8611 |
| No log | 5.4348 | 250 | 0.7559 | 0.4565 | 0.7559 | 0.8694 |
| No log | 5.4783 | 252 | 0.7639 | 0.4261 | 0.7639 | 0.8740 |
| No log | 5.5217 | 254 | 0.7792 | 0.4494 | 0.7792 | 0.8827 |
| No log | 5.5652 | 256 | 0.7704 | 0.4251 | 0.7704 | 0.8777 |
| No log | 5.6087 | 258 | 0.7568 | 0.4269 | 0.7568 | 0.8700 |
| No log | 5.6522 | 260 | 0.7523 | 0.4313 | 0.7523 | 0.8674 |
| No log | 5.6957 | 262 | 0.7222 | 0.4787 | 0.7222 | 0.8498 |
| No log | 5.7391 | 264 | 0.7149 | 0.5463 | 0.7149 | 0.8455 |
| No log | 5.7826 | 266 | 0.7193 | 0.6160 | 0.7193 | 0.8481 |
| No log | 5.8261 | 268 | 0.7691 | 0.5439 | 0.7691 | 0.8770 |
| No log | 5.8696 | 270 | 0.7113 | 0.6617 | 0.7113 | 0.8434 |
| No log | 5.9130 | 272 | 0.6821 | 0.6476 | 0.6821 | 0.8259 |
| No log | 5.9565 | 274 | 0.6982 | 0.6528 | 0.6982 | 0.8356 |
| No log | 6.0 | 276 | 0.7793 | 0.5318 | 0.7793 | 0.8828 |
| No log | 6.0435 | 278 | 0.7484 | 0.5439 | 0.7484 | 0.8651 |
| No log | 6.0870 | 280 | 0.7456 | 0.5470 | 0.7456 | 0.8635 |
| No log | 6.1304 | 282 | 0.7213 | 0.5740 | 0.7213 | 0.8493 |
| No log | 6.1739 | 284 | 0.7244 | 0.4873 | 0.7244 | 0.8511 |
| No log | 6.2174 | 286 | 0.7118 | 0.5644 | 0.7118 | 0.8437 |
| No log | 6.2609 | 288 | 0.7316 | 0.4135 | 0.7316 | 0.8553 |
| No log | 6.3043 | 290 | 0.7192 | 0.4984 | 0.7192 | 0.8481 |
| No log | 6.3478 | 292 | 0.6941 | 0.5163 | 0.6941 | 0.8331 |
| No log | 6.3913 | 294 | 0.7154 | 0.6128 | 0.7154 | 0.8458 |
| No log | 6.4348 | 296 | 0.7172 | 0.5618 | 0.7172 | 0.8469 |
| No log | 6.4783 | 298 | 0.6939 | 0.5680 | 0.6939 | 0.8330 |
| No log | 6.5217 | 300 | 0.6980 | 0.5877 | 0.6980 | 0.8354 |
| No log | 6.5652 | 302 | 0.7417 | 0.5052 | 0.7417 | 0.8612 |
| No log | 6.6087 | 304 | 0.7484 | 0.5067 | 0.7484 | 0.8651 |
| No log | 6.6522 | 306 | 0.7230 | 0.4879 | 0.7230 | 0.8503 |
| No log | 6.6957 | 308 | 0.7159 | 0.5002 | 0.7159 | 0.8461 |
| No log | 6.7391 | 310 | 0.7043 | 0.5357 | 0.7043 | 0.8392 |
| No log | 6.7826 | 312 | 0.6960 | 0.5060 | 0.6960 | 0.8343 |
| No log | 6.8261 | 314 | 0.6992 | 0.5066 | 0.6992 | 0.8362 |
| No log | 6.8696 | 316 | 0.6768 | 0.5809 | 0.6768 | 0.8227 |
| No log | 6.9130 | 318 | 0.6808 | 0.5329 | 0.6808 | 0.8251 |
| No log | 6.9565 | 320 | 0.6908 | 0.5671 | 0.6908 | 0.8312 |
| No log | 7.0 | 322 | 0.7242 | 0.5263 | 0.7242 | 0.8510 |
| No log | 7.0435 | 324 | 0.7106 | 0.5459 | 0.7106 | 0.8430 |
| No log | 7.0870 | 326 | 0.8062 | 0.4708 | 0.8062 | 0.8979 |
| No log | 7.1304 | 328 | 0.8057 | 0.4708 | 0.8057 | 0.8976 |
| No log | 7.1739 | 330 | 0.7553 | 0.5046 | 0.7553 | 0.8691 |
| No log | 7.2174 | 332 | 0.7268 | 0.5094 | 0.7268 | 0.8525 |
| No log | 7.2609 | 334 | 0.7145 | 0.5582 | 0.7145 | 0.8453 |
| No log | 7.3043 | 336 | 0.7106 | 0.5475 | 0.7106 | 0.8430 |
| No log | 7.3478 | 338 | 0.7219 | 0.5446 | 0.7219 | 0.8497 |
| No log | 7.3913 | 340 | 0.7267 | 0.5129 | 0.7267 | 0.8525 |
| No log | 7.4348 | 342 | 0.7339 | 0.4461 | 0.7339 | 0.8567 |
| No log | 7.4783 | 344 | 0.7684 | 0.4641 | 0.7684 | 0.8766 |
| No log | 7.5217 | 346 | 0.7471 | 0.4641 | 0.7471 | 0.8644 |
| No log | 7.5652 | 348 | 0.7191 | 0.5352 | 0.7191 | 0.8480 |
| No log | 7.6087 | 350 | 0.7407 | 0.5953 | 0.7407 | 0.8606 |
| No log | 7.6522 | 352 | 0.7185 | 0.5683 | 0.7185 | 0.8477 |
| No log | 7.6957 | 354 | 0.7478 | 0.5476 | 0.7478 | 0.8647 |
| No log | 7.7391 | 356 | 0.7586 | 0.5197 | 0.7586 | 0.8710 |
| No log | 7.7826 | 358 | 0.7467 | 0.4996 | 0.7467 | 0.8641 |
| No log | 7.8261 | 360 | 0.8009 | 0.4388 | 0.8009 | 0.8949 |
| No log | 7.8696 | 362 | 0.8052 | 0.3269 | 0.8052 | 0.8973 |
| No log | 7.9130 | 364 | 0.7747 | 0.4279 | 0.7747 | 0.8802 |
| No log | 7.9565 | 366 | 0.7503 | 0.4660 | 0.7503 | 0.8662 |
| No log | 8.0 | 368 | 0.7182 | 0.4918 | 0.7182 | 0.8475 |
| No log | 8.0435 | 370 | 0.6882 | 0.5120 | 0.6882 | 0.8296 |
| No log | 8.0870 | 372 | 0.6722 | 0.6187 | 0.6722 | 0.8199 |
| No log | 8.1304 | 374 | 0.7175 | 0.6081 | 0.7175 | 0.8470 |
| No log | 8.1739 | 376 | 0.7358 | 0.6071 | 0.7358 | 0.8578 |
| No log | 8.2174 | 378 | 0.7982 | 0.5398 | 0.7982 | 0.8934 |
| No log | 8.2609 | 380 | 0.7700 | 0.6218 | 0.7700 | 0.8775 |
| No log | 8.3043 | 382 | 0.6887 | 0.5463 | 0.6887 | 0.8299 |
| No log | 8.3478 | 384 | 0.8329 | 0.4508 | 0.8329 | 0.9127 |
| No log | 8.3913 | 386 | 0.9849 | 0.5184 | 0.9849 | 0.9924 |
| No log | 8.4348 | 388 | 0.9149 | 0.4854 | 0.9149 | 0.9565 |
| No log | 8.4783 | 390 | 0.7384 | 0.5012 | 0.7384 | 0.8593 |
| No log | 8.5217 | 392 | 0.7104 | 0.5473 | 0.7104 | 0.8428 |
| No log | 8.5652 | 394 | 0.8558 | 0.4894 | 0.8558 | 0.9251 |
| No log | 8.6087 | 396 | 0.8252 | 0.4894 | 0.8252 | 0.9084 |
| No log | 8.6522 | 398 | 0.7126 | 0.5186 | 0.7126 | 0.8441 |
| No log | 8.6957 | 400 | 0.6932 | 0.5432 | 0.6932 | 0.8326 |
| No log | 8.7391 | 402 | 0.7075 | 0.4968 | 0.7075 | 0.8412 |
| No log | 8.7826 | 404 | 0.7033 | 0.5432 | 0.7033 | 0.8387 |
| No log | 8.8261 | 406 | 0.7082 | 0.5536 | 0.7082 | 0.8415 |
| No log | 8.8696 | 408 | 0.7080 | 0.4760 | 0.7080 | 0.8414 |
| No log | 8.9130 | 410 | 0.6967 | 0.4760 | 0.6967 | 0.8347 |
| No log | 8.9565 | 412 | 0.6681 | 0.5822 | 0.6681 | 0.8174 |
| No log | 9.0 | 414 | 0.6506 | 0.6046 | 0.6506 | 0.8066 |
| No log | 9.0435 | 416 | 0.6532 | 0.6219 | 0.6532 | 0.8082 |
| No log | 9.0870 | 418 | 0.6514 | 0.6219 | 0.6514 | 0.8071 |
| No log | 9.1304 | 420 | 0.6505 | 0.5644 | 0.6505 | 0.8066 |
| No log | 9.1739 | 422 | 0.6544 | 0.5886 | 0.6544 | 0.8090 |
| No log | 9.2174 | 424 | 0.6797 | 0.5597 | 0.6797 | 0.8245 |
| No log | 9.2609 | 426 | 0.7490 | 0.5137 | 0.7490 | 0.8655 |
| No log | 9.3043 | 428 | 0.7794 | 0.4898 | 0.7794 | 0.8828 |
| No log | 9.3478 | 430 | 0.7201 | 0.5400 | 0.7201 | 0.8486 |
| No log | 9.3913 | 432 | 0.6806 | 0.5432 | 0.6806 | 0.8250 |
| No log | 9.4348 | 434 | 0.6772 | 0.5432 | 0.6772 | 0.8229 |
| No log | 9.4783 | 436 | 0.6891 | 0.5089 | 0.6891 | 0.8301 |
| No log | 9.5217 | 438 | 0.7677 | 0.5428 | 0.7677 | 0.8762 |
| No log | 9.5652 | 440 | 0.7609 | 0.5451 | 0.7609 | 0.8723 |
| No log | 9.6087 | 442 | 0.6901 | 0.5570 | 0.6901 | 0.8307 |
| No log | 9.6522 | 444 | 0.6739 | 0.5188 | 0.6739 | 0.8209 |
| No log | 9.6957 | 446 | 0.6775 | 0.5074 | 0.6775 | 0.8231 |
| No log | 9.7391 | 448 | 0.6750 | 0.5516 | 0.6750 | 0.8216 |
| No log | 9.7826 | 450 | 0.7319 | 0.5279 | 0.7319 | 0.8555 |
| No log | 9.8261 | 452 | 0.7593 | 0.5331 | 0.7593 | 0.8714 |
| No log | 9.8696 | 454 | 0.6942 | 0.5098 | 0.6942 | 0.8332 |
| No log | 9.9130 | 456 | 0.6676 | 0.5771 | 0.6676 | 0.8171 |
| No log | 9.9565 | 458 | 0.6685 | 0.5783 | 0.6685 | 0.8176 |
| No log | 10.0 | 460 | 0.6652 | 0.6307 | 0.6652 | 0.8156 |
| No log | 10.0435 | 462 | 0.7046 | 0.5395 | 0.7046 | 0.8394 |
| No log | 10.0870 | 464 | 0.7331 | 0.5470 | 0.7331 | 0.8562 |
| No log | 10.1304 | 466 | 0.7124 | 0.4494 | 0.7124 | 0.8440 |
| No log | 10.1739 | 468 | 0.7124 | 0.4893 | 0.7124 | 0.8441 |
| No log | 10.2174 | 470 | 0.7132 | 0.4923 | 0.7132 | 0.8445 |
| No log | 10.2609 | 472 | 0.7016 | 0.5415 | 0.7016 | 0.8376 |
| No log | 10.3043 | 474 | 0.6915 | 0.5554 | 0.6915 | 0.8316 |
| No log | 10.3478 | 476 | 0.6783 | 0.6177 | 0.6783 | 0.8236 |
| No log | 10.3913 | 478 | 0.7239 | 0.5862 | 0.7239 | 0.8508 |
| No log | 10.4348 | 480 | 0.7312 | 0.5958 | 0.7312 | 0.8551 |
| No log | 10.4783 | 482 | 0.6749 | 0.6198 | 0.6749 | 0.8215 |
| No log | 10.5217 | 484 | 0.6417 | 0.6335 | 0.6417 | 0.8011 |
| No log | 10.5652 | 486 | 0.6504 | 0.5084 | 0.6504 | 0.8065 |
| No log | 10.6087 | 488 | 0.6547 | 0.4968 | 0.6547 | 0.8091 |
| No log | 10.6522 | 490 | 0.6450 | 0.5529 | 0.6450 | 0.8031 |
| No log | 10.6957 | 492 | 0.6673 | 0.6073 | 0.6673 | 0.8169 |
| No log | 10.7391 | 494 | 0.6924 | 0.5973 | 0.6924 | 0.8321 |
| No log | 10.7826 | 496 | 0.6905 | 0.6109 | 0.6905 | 0.8310 |
| No log | 10.8261 | 498 | 0.6965 | 0.5833 | 0.6965 | 0.8346 |
| 0.2902 | 10.8696 | 500 | 0.7077 | 0.6209 | 0.7077 | 0.8412 |
| 0.2902 | 10.9130 | 502 | 0.6775 | 0.6147 | 0.6775 | 0.8231 |
| 0.2902 | 10.9565 | 504 | 0.6453 | 0.6500 | 0.6453 | 0.8033 |
| 0.2902 | 11.0 | 506 | 0.6393 | 0.6175 | 0.6393 | 0.7996 |
| 0.2902 | 11.0435 | 508 | 0.6427 | 0.6057 | 0.6427 | 0.8017 |
| 0.2902 | 11.0870 | 510 | 0.6740 | 0.5932 | 0.6740 | 0.8209 |
| 0.2902 | 11.1304 | 512 | 0.7058 | 0.5654 | 0.7058 | 0.8401 |
| 0.2902 | 11.1739 | 514 | 0.6862 | 0.5472 | 0.6862 | 0.8284 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mrhunghd/3ec0da79-0eab-4cb1-a6c3-1edba33530b8 | mrhunghd | 2025-01-20T23:02:42Z | 6 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T22:58:34Z | ---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3ec0da79-0eab-4cb1-a6c3-1edba33530b8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1b096fb12091d0a7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1b096fb12091d0a7_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrhunghd/3ec0da79-0eab-4cb1-a6c3-1edba33530b8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/1b096fb12091d0a7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2594ef14-2fe6-455c-8347-c1d0fb26863f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2594ef14-2fe6-455c-8347-c1d0fb26863f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 3ec0da79-0eab-4cb1-a6c3-1edba33530b8
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 2.1761
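Because this repository holds a LoRA adapter rather than full model weights, a minimal loading sketch (an illustration based on the PEFT and base-model metadata above; the example prompt is an assumption) could look like:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and attach this adapter on top of it
base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = PeftModel.from_pretrained(base, "mrhunghd/3ec0da79-0eab-4cb1-a6c3-1edba33530b8")

# Illustrative prompt only; the training data pairs a "problem" field with a "solution" field
inputs = tokenizer("Problem: compute 12 * 7.\nSolution:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```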
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 9.3655 | 0.2159 | 200 | 2.1761 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nblinh63/f1b226a4-cdd1-4308-ac2d-1fa4ebe01843 | nblinh63 | 2025-01-20T23:02:37Z | 6 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T22:58:31Z | ---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f1b226a4-cdd1-4308-ac2d-1fa4ebe01843
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1b096fb12091d0a7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1b096fb12091d0a7_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh63/f1b226a4-cdd1-4308-ac2d-1fa4ebe01843
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/1b096fb12091d0a7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2594ef14-2fe6-455c-8347-c1d0fb26863f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2594ef14-2fe6-455c-8347-c1d0fb26863f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f1b226a4-cdd1-4308-ac2d-1fa4ebe01843
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 2.1806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 9.3858 | 0.2159 | 200 | 2.1806 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dzanbek/f58f03f7-8fc0-41ef-9d07-5565794dc71c | dzanbek | 2025-01-20T23:02:31Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-20T22:22:46Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f58f03f7-8fc0-41ef-9d07-5565794dc71c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6e60a538f672529c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6e60a538f672529c_train_data.json
type:
field_input: communityName
field_instruction: label
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: dzanbek/f58f03f7-8fc0-41ef-9d07-5565794dc71c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/6e60a538f672529c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3eb92360-4e77-4cc9-9ffa-0e03d7ea7423
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3eb92360-4e77-4cc9-9ffa-0e03d7ea7423
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f58f03f7-8fc0-41ef-9d07-5565794dc71c
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch implementation) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 5 | nan |
| 0.0 | 0.0008 | 10 | nan |
| 0.0 | 0.0012 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrHungddddh/0ff89366-d444-4953-ad52-fc6a503d92cb | mrHungddddh | 2025-01-20T23:01:12Z | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T21:58:35Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0ff89366-d444-4953-ad52-fc6a503d92cb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5f5f73d0b6f6fe1d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5f5f73d0b6f6fe1d_train_data.json
type:
field_input: messages
field_instruction: system
field_output: reference
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHungddddh/0ff89366-d444-4953-ad52-fc6a503d92cb
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/5f5f73d0b6f6fe1d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6a440f8b-4ef2-40c5-aaae-529ca715837e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6a440f8b-4ef2-40c5-aaae-529ca715837e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0ff89366-d444-4953-ad52-fc6a503d92cb
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 1.4651
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.1264 | 0.0263 | 200 | 1.4651 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
VERSIL91/b9a1e459-4362-4834-8b97-f53708a182cd | VERSIL91 | 2025-01-20T23:00:32Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"region:us"
] | null | 2025-01-20T22:53:20Z | ---
library_name: peft
license: other
base_model: huggyllama/llama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b9a1e459-4362-4834-8b97-f53708a182cd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: huggyllama/llama-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 804c65db1d2351c5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/804c65db1d2351c5_train_data.json
type:
field_instruction: sentence1
field_output: sentence2
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/b9a1e459-4362-4834-8b97-f53708a182cd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/804c65db1d2351c5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b9a1e459-4362-4834-8b97-f53708a182cd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b9a1e459-4362-4834-8b97-f53708a182cd
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b9a1e459-4362-4834-8b97-f53708a182cd
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0010 | 1 | nan |
| 0.0 | 0.0126 | 13 | nan |
| 0.0 | 0.0252 | 26 | nan |
| 0.0 | 0.0377 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/FastApply-32B-Instruct-GGUF | mradermacher | 2025-01-20T23:00:06Z | 302 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:tabnine/FastApply-32B-Instruct",
"base_model:quantized:tabnine/FastApply-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-20T20:47:56Z | ---
base_model: tabnine/FastApply-32B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/tabnine/FastApply-32B-Instruct
<!-- provided-files -->
weighted/imatrix quants do not appear to be available (from me) at this time. If they have not shown up a week or so after the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
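As a concrete illustration, here is a minimal, hedged sketch of loading one of the quants below with llama-cpp-python; the local filename, context size, and prompt are assumptions for the example, not part of this repository.
```python
# Minimal sketch (assumes llama-cpp-python is installed and that one of the
# GGUF files listed below has been downloaded locally).
from llama_cpp import Llama

llm = Llama(
    model_path="FastApply-32B-Instruct.Q4_K_M.gguf",  # example local file
    n_ctx=4096,        # context window for this session
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],  # placeholder prompt
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```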
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/FastApply-32B-Instruct-GGUF/resolve/main/FastApply-32B-Instruct.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/FastApply-32B-Instruct-GGUF/resolve/main/FastApply-32B-Instruct.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/FastApply-32B-Instruct-GGUF/resolve/main/FastApply-32B-Instruct.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/FastApply-32B-Instruct-GGUF/resolve/main/FastApply-32B-Instruct.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/FastApply-32B-Instruct-GGUF/resolve/main/FastApply-32B-Instruct.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/FastApply-32B-Instruct-GGUF/resolve/main/FastApply-32B-Instruct.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FastApply-32B-Instruct-GGUF/resolve/main/FastApply-32B-Instruct.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FastApply-32B-Instruct-GGUF/resolve/main/FastApply-32B-Instruct.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/FastApply-32B-Instruct-GGUF/resolve/main/FastApply-32B-Instruct.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/FastApply-32B-Instruct-GGUF/resolve/main/FastApply-32B-Instruct.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/FastApply-32B-Instruct-GGUF/resolve/main/FastApply-32B-Instruct.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nhunglaaaaaaa/31738ab5-8e3e-4c3d-a223-8d795b64609e | nhunglaaaaaaa | 2025-01-20T22:58:35Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna-open-llama-3b-v2",
"base_model:adapter:heegyu/WizardVicuna-open-llama-3b-v2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-20T22:05:54Z | ---
library_name: peft
license: apache-2.0
base_model: heegyu/WizardVicuna-open-llama-3b-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 31738ab5-8e3e-4c3d-a223-8d795b64609e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna-open-llama-3b-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 212d5e0168a48c19_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/212d5e0168a48c19_train_data.json
type:
field_instruction: context_en
field_output: question_en
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhunglaaaaaaa/31738ab5-8e3e-4c3d-a223-8d795b64609e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/212d5e0168a48c19_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1e1fd096-ba6e-478c-9e4b-c08c22fc3c74
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1e1fd096-ba6e-478c-9e4b-c08c22fc3c74
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 31738ab5-8e3e-4c3d-a223-8d795b64609e
This model is a fine-tuned version of [heegyu/WizardVicuna-open-llama-3b-v2](https://huggingface.co/heegyu/WizardVicuna-open-llama-3b-v2) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.8120
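For illustration, below is a hedged sketch of merging this LoRA adapter into the base model so it can be used without PEFT at inference time; the output directory name is a hypothetical example, not something published by this repository.
```python
# Minimal sketch (assumes transformers and peft are installed): attach this
# adapter to its base model and fold the LoRA weights in for standalone use.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("heegyu/WizardVicuna-open-llama-3b-v2")
tokenizer = AutoTokenizer.from_pretrained("heegyu/WizardVicuna-open-llama-3b-v2")

model = PeftModel.from_pretrained(base, "nhunglaaaaaaa/31738ab5-8e3e-4c3d-a223-8d795b64609e")
merged = model.merge_and_unload()  # merges the LoRA deltas into the base weights

merged.save_pretrained("wizardvicuna-3b-merged")   # hypothetical output directory
tokenizer.save_pretrained("wizardvicuna-3b-merged")
```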
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0691 | 0.0026 | 200 | 0.8120 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik1987/0b3322d1-4a39-4579-a195-f5358131b723 | dimasik1987 | 2025-01-20T22:58:05Z | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"license:apache-2.0",
"region:us"
] | null | 2025-01-20T21:58:35Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0b3322d1-4a39-4579-a195-f5358131b723
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5f5f73d0b6f6fe1d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5f5f73d0b6f6fe1d_train_data.json
type:
field_input: messages
field_instruction: system
field_output: reference
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik1987/0b3322d1-4a39-4579-a195-f5358131b723
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/5f5f73d0b6f6fe1d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6a440f8b-4ef2-40c5-aaae-529ca715837e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6a440f8b-4ef2-40c5-aaae-529ca715837e
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 0b3322d1-4a39-4579-a195-f5358131b723
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.8223
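A hedged sketch of running inference with this adapter on top of the base Yarn-Mistral model is shown below; the prompt is a placeholder, and `trust_remote_code=True` mirrors the setting in the config above.
```python
# Minimal sketch (assumes transformers and peft are installed, plus a GPU):
# load the base model, attach this adapter, and generate from a placeholder prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Yarn-Mistral-7b-64k"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # mirrors trust_remote_code in the config above
)

model = PeftModel.from_pretrained(base, "dimasik1987/0b3322d1-4a39-4579-a195-f5358131b723")

inputs = tokenizer("Placeholder prompt", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```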
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 1.2169 |
| 3.8291 | 0.0013 | 5 | 1.0084 |
| 3.1617 | 0.0026 | 10 | 0.8943 |
| 3.1678 | 0.0039 | 15 | 0.8525 |
| 3.2285 | 0.0053 | 20 | 0.8367 |
| 3.3153 | 0.0066 | 25 | 0.8247 |
| 3.5174 | 0.0079 | 30 | 0.8223 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lilmeaty/xfsfsfsf-4bit | lilmeaty | 2025-01-20T22:57:54Z | 20 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-20T07:50:13Z | ---
license: apache-2.0
library_name: transformers
--- |
prxy5607/feb3717a-696e-4e00-8c82-54646ee762f9 | prxy5607 | 2025-01-20T22:56:12Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:other",
"region:us"
] | null | 2025-01-20T21:36:32Z | ---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: feb3717a-696e-4e00-8c82-54646ee762f9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- aaf4cd02348b6ba9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aaf4cd02348b6ba9_train_data.json
type:
field_instruction: code
field_output: docstring
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5607/feb3717a-696e-4e00-8c82-54646ee762f9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/aaf4cd02348b6ba9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6b80479d-4e03-4f5b-b68e-f811da024a88
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6b80479d-4e03-4f5b-b68e-f811da024a88
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# feb3717a-696e-4e00-8c82-54646ee762f9
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 1.2623
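Since the training pairs map a `code` field to a `docstring` field using the plain `{instruction}` prompt format shown above, below is a hedged sketch of querying the adapter for a docstring; the example function is hypothetical.
```python
# Minimal sketch (assumes transformers and peft are installed, plus a GPU):
# attach the adapter and prompt it with a raw code snippet, following the
# plain '{instruction}' format used during training.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/deepseek-coder-6.7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, "prxy5607/feb3717a-696e-4e00-8c82-54646ee762f9")

code = "def add(a, b):\n    return a + b"  # hypothetical input snippet
inputs = tokenizer(code, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens (the candidate docstring).
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```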
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6902 | 0.0002 | 1 | 3.2009 |
| 1.0728 | 0.0092 | 50 | 1.3635 |
| 1.2867 | 0.0185 | 100 | 1.2835 |
| 1.3301 | 0.0277 | 150 | 1.2678 |
| 1.3324 | 0.0369 | 200 | 1.2623 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |