modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
marialvsantiago/b1e948f3-befb-411f-b79e-487e570c1f0b | marialvsantiago | 2025-01-23T09:17:39Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"region:us"
] | null | 2025-01-23T08:14:57Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b1e948f3-befb-411f-b79e-487e570c1f0b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 17a4766fa4748b36_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/17a4766fa4748b36_train_data.json
  type:
    field_input: text
    field_instruction: leadin
    field_output: heading
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: marialvsantiago/b1e948f3-befb-411f-b79e-487e570c1f0b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
  0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/17a4766fa4748b36_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 464732d7-8f75-4034-bba8-31e12a8da780
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 464732d7-8f75-4034-bba8-31e12a8da780
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
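For readers unfamiliar with axolotl's custom prompt format, the `type` block in the config above maps each JSON record onto a training prompt roughly as sketched below. This is an illustrative approximation with a made-up record, not axolotl's actual implementation:

```python
# Hypothetical record with the fields named in the config (leadin, text, heading).
record = {"leadin": "Markets rallied on Friday", "text": "Full article body ..."}

instruction = record.get("leadin", "")
inp = record.get("text", "")
# `format` applies when the input field is present; `no_input_format` otherwise.
prompt = (
    "{instruction} {input}".format(instruction=instruction, input=inp)
    if inp
    else "{instruction}".format(instruction=instruction)
)
completion = record.get("heading", "")  # the field the model is trained to emit
```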
# b1e948f3-befb-411f-b79e-487e570c1f0b
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7273
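Because this repository holds a LoRA adapter rather than full model weights, the adapter has to be attached to its base model at load time. A minimal sketch using the standard PEFT API; the prompt string is illustrative only:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in the config above, then attach this adapter.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")
model = PeftModel.from_pretrained(base, "marialvsantiago/b1e948f3-befb-411f-b79e-487e570c1f0b")

inputs = tokenizer("Write a headline for this lead-in: ...", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```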
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
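For reference, the total train batch size is derived rather than set directly: train_batch_size × gradient_accumulation_steps = 4 × 4 = 16 (assuming a single GPU; with more devices it scales by the world size).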
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.9885 |
| 3.7298 | 0.0005 | 5 | 3.9294 |
| 3.753 | 0.0009 | 10 | 3.8421 |
| 3.6013 | 0.0014 | 15 | 3.7885 |
| 3.6215 | 0.0018 | 20 | 3.7505 |
| 3.6442 | 0.0023 | 25 | 3.7311 |
| 3.8128 | 0.0027 | 30 | 3.7273 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
trangtrannnnn/8da8fcb6-c85e-41aa-80c3-6ef745c2de3b | trangtrannnnn | 2025-01-23T09:16:57Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM-1.7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T09:06:18Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8da8fcb6-c85e-41aa-80c3-6ef745c2de3b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - f23f65f18453ce63_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/f23f65f18453ce63_train_data.json
  type:
    field_input: input
    field_instruction: instruction
    field_output: original_instruction
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: trangtrannnnn/8da8fcb6-c85e-41aa-80c3-6ef745c2de3b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f23f65f18453ce63_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3ed4752c-a781-4211-a0af-d8ccce51292d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3ed4752c-a781-4211-a0af-d8ccce51292d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8da8fcb6-c85e-41aa-80c3-6ef745c2de3b
This model is a fine-tuned version of [unsloth/SmolLM-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM-1.7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0528
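Because the config sets `load_in_8bit: true`, inference that mirrors training loads the base model through bitsandbytes before attaching the adapter. A minimal sketch; everything beyond that flag is an assumption:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 8-bit base-model load, mirroring `load_in_8bit: true` in the config above.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/SmolLM-1.7B-Instruct",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM-1.7B-Instruct")
model = PeftModel.from_pretrained(base, "trangtrannnnn/8da8fcb6-c85e-41aa-80c3-6ef745c2de3b")
```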
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0304 | 0.0786 | 200 | 0.0528 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ClarenceDan/8aa98ece-8d0e-4caf-8c62-223dcde8038b | ClarenceDan | 2025-01-23T09:16:37Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T09:12:36Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8aa98ece-8d0e-4caf-8c62-223dcde8038b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - c0c789f5fa3834ea_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/c0c789f5fa3834ea_train_data.json
  type:
    field_instruction: prompt
    field_output: reference_response
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/8aa98ece-8d0e-4caf-8c62-223dcde8038b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/c0c789f5fa3834ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ef6e27c9-9bb9-486c-9803-a4459d5ec01c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ef6e27c9-9bb9-486c-9803-a4459d5ec01c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8aa98ece-8d0e-4caf-8c62-223dcde8038b
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2992 | 0.0002 | 1 | 1.2072 |
| 0.8898 | 0.0005 | 3 | 1.2056 |
| 1.0227 | 0.0010 | 6 | 1.1918 |
| 1.0777 | 0.0015 | 9 | 1.1649 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
adammandic87/2c81b2dc-012e-48d8-8664-c6a2fc419042 | adammandic87 | 2025-01-23T09:16:33Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:peft-internal-testing/tiny-dummy-qwen2",
"base_model:adapter:peft-internal-testing/tiny-dummy-qwen2",
"region:us"
] | null | 2025-01-23T09:10:27Z | ---
library_name: peft
base_model: peft-internal-testing/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2c81b2dc-012e-48d8-8664-c6a2fc419042
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: peft-internal-testing/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 48a327932f2bcac8_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/48a327932f2bcac8_train_data.json
  type:
    field_instruction: title
    field_output: text
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/2c81b2dc-012e-48d8-8664-c6a2fc419042
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/48a327932f2bcac8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1b46ba65-c297-42f8-b3b7-ea08b72dc3f6
wandb_project: Birthday-SN56-13-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1b46ba65-c297-42f8-b3b7-ea08b72dc3f6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2c81b2dc-012e-48d8-8664-c6a2fc419042
This model is a fine-tuned version of [peft-internal-testing/tiny-dummy-qwen2](https://huggingface.co/peft-internal-testing/tiny-dummy-qwen2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9314
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.9302 | 0.0000 | 1 | 11.9314 |
| 11.9314 | 0.0001 | 3 | 11.9314 |
| 11.9318 | 0.0001 | 6 | 11.9314 |
| 11.9339 | 0.0002 | 9 | 11.9314 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrhunghd/b0c3d002-9877-44e9-ac95-879f42303c6a | mrhunghd | 2025-01-23T09:13:18Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:58:36Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b0c3d002-9877-44e9-ac95-879f42303c6a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - b634ec435872dc54_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/b634ec435872dc54_train_data.json
  type:
    field_input: answers
    field_instruction: question
    field_output: gpt_answer_sentence
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrhunghd/b0c3d002-9877-44e9-ac95-879f42303c6a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b634ec435872dc54_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dcfe088e-1bb0-47a3-a371-1d236eecf8c5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: dcfe088e-1bb0-47a3-a371-1d236eecf8c5
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b0c3d002-9877-44e9-ac95-879f42303c6a
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4100
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 95
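Training stopped at step 95 rather than the configured `max_steps: 200` because `num_epochs` is 1 and, at this effective batch size, one pass over the training split takes 95 optimizer steps (the table below reaches epoch 1.0 at step 95).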
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1825 | 1.0 | 95 | 0.4100 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrHungddddh/1bdbe1da-ca45-4558-9b24-c18ed248c6c0 | mrHungddddh | 2025-01-23T09:13:05Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:58:36Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1bdbe1da-ca45-4558-9b24-c18ed248c6c0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - b634ec435872dc54_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/b634ec435872dc54_train_data.json
  type:
    field_input: answers
    field_instruction: question
    field_output: gpt_answer_sentence
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHungddddh/1bdbe1da-ca45-4558-9b24-c18ed248c6c0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b634ec435872dc54_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dcfe088e-1bb0-47a3-a371-1d236eecf8c5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: dcfe088e-1bb0-47a3-a371-1d236eecf8c5
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1bdbe1da-ca45-4558-9b24-c18ed248c6c0
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 95
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2147 | 1.0 | 95 | 0.4109 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
iach/judgegguf2 | iach | 2025-01-23T09:12:40Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-23T09:12:04Z | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** iach
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
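A minimal sketch of loading the GGUF export with `llama-cpp-python`; the filename glob is a placeholder, so check the repository's file list for the concrete `.gguf` variant:

```python
from llama_cpp import Llama

# Downloads a GGUF file from the repo; replace the glob with the actual
# filename/quantization published in the repository.
llm = Llama.from_pretrained(repo_id="iach/judgegguf2", filename="*.gguf", n_ctx=4096)
out = llm("Judge the following answer for correctness: ...", max_tokens=64)
print(out["choices"][0]["text"])
```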
|
kostiantynk/1e9c3cac-b5d2-468d-bde3-a09a359627e6 | kostiantynk | 2025-01-23T09:11:29Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM-1.7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T09:09:24Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1e9c3cac-b5d2-468d-bde3-a09a359627e6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - f23f65f18453ce63_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/f23f65f18453ce63_train_data.json
  type:
    field_input: input
    field_instruction: instruction
    field_output: original_instruction
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/1e9c3cac-b5d2-468d-bde3-a09a359627e6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f23f65f18453ce63_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3ed4752c-a781-4211-a0af-d8ccce51292d
wandb_project: Birthday-SN56-7-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3ed4752c-a781-4211-a0af-d8ccce51292d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1e9c3cac-b5d2-468d-bde3-a09a359627e6
This model is a fine-tuned version of [unsloth/SmolLM-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM-1.7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0512 | 0.0004 | 1 | nan |
| 0.2165 | 0.0012 | 3 | nan |
| 0.2621 | 0.0024 | 6 | nan |
| 0.0 | 0.0035 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
daniel40/f6714636-8185-41c3-a4e6-d4d7fbab4c42 | daniel40 | 2025-01-23T09:09:09Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-random-GemmaForCausalLM",
"base_model:adapter:fxmarty/tiny-random-GemmaForCausalLM",
"license:mit",
"region:us"
] | null | 2025-01-23T09:06:54Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-random-GemmaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f6714636-8185-41c3-a4e6-d4d7fbab4c42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-random-GemmaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 8570af5c23ba879e_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/8570af5c23ba879e_train_data.json
  type:
    field_input: text_so_far
    field_instruction: user_question
    field_output: proposition
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/f6714636-8185-41c3-a4e6-d4d7fbab4c42
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/8570af5c23ba879e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0f8a5a83-8801-4be0-bff4-5bdb7d8ba6e7
wandb_project: Birthday-SN56-28-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0f8a5a83-8801-4be0-bff4-5bdb7d8ba6e7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f6714636-8185-41c3-a4e6-d4d7fbab4c42
This model is a fine-tuned version of [fxmarty/tiny-random-GemmaForCausalLM](https://huggingface.co/fxmarty/tiny-random-GemmaForCausalLM) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0003 | 3 | nan |
| 0.0 | 0.0005 | 6 | nan |
| 0.0 | 0.0008 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
laquythang/5f27d691-9327-4043-bdeb-3d370b42cea8 | laquythang | 2025-01-23T09:09:04Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:58:49Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5f27d691-9327-4043-bdeb-3d370b42cea8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - b634ec435872dc54_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/b634ec435872dc54_train_data.json
  type:
    field_input: answers
    field_instruction: question
    field_output: gpt_answer_sentence
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: laquythang/5f27d691-9327-4043-bdeb-3d370b42cea8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b634ec435872dc54_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dcfe088e-1bb0-47a3-a371-1d236eecf8c5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: dcfe088e-1bb0-47a3-a371-1d236eecf8c5
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5f27d691-9327-4043-bdeb-3d370b42cea8
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4094
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 95
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2268 | 1.0 | 95 | 0.4094 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
adammandic87/0b61fbd9-2bde-494a-8124-6299d564c602 | adammandic87 | 2025-01-23T09:08:59Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-random-GemmaForCausalLM",
"base_model:adapter:fxmarty/tiny-random-GemmaForCausalLM",
"license:mit",
"region:us"
] | null | 2025-01-23T09:06:40Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-random-GemmaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0b61fbd9-2bde-494a-8124-6299d564c602
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-random-GemmaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 8570af5c23ba879e_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/8570af5c23ba879e_train_data.json
  type:
    field_input: text_so_far
    field_instruction: user_question
    field_output: proposition
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/0b61fbd9-2bde-494a-8124-6299d564c602
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/8570af5c23ba879e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0f8a5a83-8801-4be0-bff4-5bdb7d8ba6e7
wandb_project: Birthday-SN56-13-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0f8a5a83-8801-4be0-bff4-5bdb7d8ba6e7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0b61fbd9-2bde-494a-8124-6299d564c602
This model is a fine-tuned version of [fxmarty/tiny-random-GemmaForCausalLM](https://huggingface.co/fxmarty/tiny-random-GemmaForCausalLM) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0003 | 3 | nan |
| 0.0 | 0.0005 | 6 | nan |
| 0.0 | 0.0008 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nathanialhunt/aa8f7bfd-3808-44de-9ace-1e9c624d1351 | nathanialhunt | 2025-01-23T09:08:33Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM-1.7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T09:06:30Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aa8f7bfd-3808-44de-9ace-1e9c624d1351
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - f23f65f18453ce63_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/f23f65f18453ce63_train_data.json
  type:
    field_input: input
    field_instruction: instruction
    field_output: original_instruction
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/aa8f7bfd-3808-44de-9ace-1e9c624d1351
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f23f65f18453ce63_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3ed4752c-a781-4211-a0af-d8ccce51292d
wandb_project: Birthday-SN56-24-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3ed4752c-a781-4211-a0af-d8ccce51292d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# aa8f7bfd-3808-44de-9ace-1e9c624d1351
This model is a fine-tuned version of [unsloth/SmolLM-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM-1.7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0512 | 0.0004 | 1 | nan |
| 0.2165 | 0.0012 | 3 | nan |
| 0.2621 | 0.0024 | 6 | nan |
| 0.0 | 0.0035 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
datlaaaaaaa/a753ec66-e22a-44b8-b669-375cceaf4cbc | datlaaaaaaa | 2025-01-23T09:07:29Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:57:36Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a753ec66-e22a-44b8-b669-375cceaf4cbc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - b634ec435872dc54_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/b634ec435872dc54_train_data.json
  type:
    field_input: answers
    field_instruction: question
    field_output: gpt_answer_sentence
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/a753ec66-e22a-44b8-b669-375cceaf4cbc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b634ec435872dc54_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dcfe088e-1bb0-47a3-a371-1d236eecf8c5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: dcfe088e-1bb0-47a3-a371-1d236eecf8c5
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a753ec66-e22a-44b8-b669-375cceaf4cbc
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 95
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2132 | 1.0 | 95 | 0.4156 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Jsevisal/ModernEMO-base-multilabel | Jsevisal | 2025-01-23T09:07:28Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"dataset:Jsevisal/go_emotions_ekman",
"dataset:google-research-datasets/go_emotions",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-15T11:07:06Z | ---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
- roc_auc
model-index:
- name: ModernEMO-base
results: []
datasets:
- Jsevisal/go_emotions_ekman
- google-research-datasets/go_emotions
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ModernEMO-base
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the [Jsevisal/go_emotions_ekman](https://huggingface.co/datasets/Jsevisal/go_emotions_ekman) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2224
- F1: 0.7037
- Roc Auc: 0.8143
- Accuracy: 0.6226
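Since this is a multilabel classifier, inference applies a per-label sigmoid rather than a softmax. A minimal sketch; the 0.5 threshold is an assumption, not part of the card:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "Jsevisal/ModernEMO-base-multilabel"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("I can't believe this worked, amazing!", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
# Keep every emotion whose probability clears the (assumed) 0.5 threshold.
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```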
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.98) and epsilon=1e-06; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.2162 | 1.0 | 2714 | 0.2049 | 0.6920 | 0.7979 | 0.6010 |
| 0.1553 | 2.0 | 5428 | 0.2224 | 0.7037 | 0.8143 | 0.6226 |
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
nat-hunt/21ca9765-1099-48f9-8bca-6ed390dbbed0 | nat-hunt | 2025-01-23T09:07:03Z | 14 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"region:us"
] | null | 2025-01-23T08:59:19Z | ---
library_name: peft
license: other
base_model: huggyllama/llama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 21ca9765-1099-48f9-8bca-6ed390dbbed0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: huggyllama/llama-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - aed51b8e2c089967_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/aed51b8e2c089967_train_data.json
  type:
    field_input: instance_id
    field_instruction: prompt_msg
    field_output: truth
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nat-hunt/21ca9765-1099-48f9-8bca-6ed390dbbed0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/aed51b8e2c089967_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
  pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6a8f76dd-7262-490a-905c-7b83c0f56891
wandb_project: Birthday-SN56-4-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6a8f76dd-7262-490a-905c-7b83c0f56891
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 21ca9765-1099-48f9-8bca-6ed390dbbed0
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 3 | nan |
| 0.0 | 0.0007 | 6 | nan |
| 0.0 | 0.0011 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik87/4476dffd-3057-45fa-82b5-d2d95337b4e1 | dimasik87 | 2025-01-23T09:04:19Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T08:52:17Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4476dffd-3057-45fa-82b5-d2d95337b4e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f105d22f7d07b492_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f105d22f7d07b492_train_data.json
type:
field_input: Topic1
field_instruction: Topic2
field_output: Text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik87/4476dffd-3057-45fa-82b5-d2d95337b4e1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/f105d22f7d07b492_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7e4edfac-6e94-4d57-9fbf-2a43fc8a9526
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7e4edfac-6e94-4d57-9fbf-2a43fc8a9526
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 4476dffd-3057-45fa-82b5-d2d95337b4e1
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6303
## Model description
More information needed
## Intended uses & limitations
More information needed
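A minimal loading sketch (assuming the LoRA weights sit at the repository root, as axolotl typically exports them; not verified against this repository):
```python
# Sketch: attach the LoRA adapter to the base model it was trained from
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = PeftModel.from_pretrained(base, "dimasik87/4476dffd-3057-45fa-82b5-d2d95337b4e1")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
```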
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with default betas=(0.9,0.999) and epsilon=1e-08, overridden by optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 2.8703 |
| 2.6082 | 0.0008 | 5 | 2.8087 |
| 2.4777 | 0.0016 | 10 | 2.7376 |
| 2.486 | 0.0023 | 15 | 2.6829 |
| 2.4957 | 0.0031 | 20 | 2.6511 |
| 2.5525 | 0.0039 | 25 | 2.6337 |
| 2.6802 | 0.0047 | 30 | 2.6303 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kk-aivio/77c554e8-7900-4e04-ad02-46521cc16103 | kk-aivio | 2025-01-23T09:04:06Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T08:56:25Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 77c554e8-7900-4e04-ad02-46521cc16103
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f105d22f7d07b492_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f105d22f7d07b492_train_data.json
type:
field_input: Topic1
field_instruction: Topic2
field_output: Text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/77c554e8-7900-4e04-ad02-46521cc16103
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f105d22f7d07b492_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7e4edfac-6e94-4d57-9fbf-2a43fc8a9526
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7e4edfac-6e94-4d57-9fbf-2a43fc8a9526
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 77c554e8-7900-4e04-ad02-46521cc16103
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4142 | 0.0001 | 1 | 2.6656 |
| 2.5214 | 0.0002 | 3 | 2.6643 |
| 2.6031 | 0.0005 | 6 | 2.6514 |
| 2.5663 | 0.0007 | 9 | 2.6268 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kevinbazira/aya-expanse-32b-gptq-4bit | kevinbazira | 2025-01-23T09:02:52Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"pytorch",
"gptq",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"arxiv:2210.17323",
"base_model:CohereForAI/aya-expanse-32b",
"base_model:quantized:CohereForAI/aya-expanse-32b",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | text-generation | 2025-01-23T05:27:10Z | ---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
library_name: transformers
tags:
- cohere
- pytorch
- gptq
model_name: aya-expanse-32b-gptq-4bit
base_model: CohereForAI/aya-expanse-32b
inference: false
model_creator: Cohere For AI
pipeline_tag: text-generation
quantized_by: kevinbazira
---
# aya-expanse-32b-gptq-4bit
This repository contains a quantized version of the `CohereForAI/aya-expanse-32b` model using the [GPTQ](https://huggingface.co/docs/transformers/en/quantization/gptq) method in 4-bit precision.
## Model Summary
- **Quantized Model**: [kevinbazira/aya-expanse-32b-gptq-4bit](https://huggingface.co/kevinbazira/aya-expanse-32b-gptq-4bit)
- **Quantization Method**: [GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers](https://arxiv.org/pdf/2210.17323)
- **Dataset**: [c4](https://huggingface.co/datasets/legacy-datasets/c4)
- **Precision**: 4-bit
- **Original Model**: [CohereForAI/aya-expanse-32b](https://huggingface.co/CohereForAI/aya-expanse-32b)
## How to Use the Quantized Model
### 1. Install the necessary packages
Before using the quantized model, please ensure your environment has:
- [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ)
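A typical setup might look like the following (a sketch; pin versions per the AutoGPTQ README, since the wheels must match your CUDA/PyTorch build):
```bash
# Assumed install command; transformers' GPTQ loading path also relies on optimum
pip install auto-gptq optimum transformers
```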
### 2. Run inference
Load and use the quantized model as shown below in Python:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Set up device
device = torch.device('cuda:1')  # adjust the index to match an available GPU on your machine
# Load model and tokenizer
model_name = "kevinbazira/aya-expanse-32b-gptq-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map={"": device.index}
)
# Prepare input
# https://huggingface.co/docs/transformers/en/pad_truncation
input_text = "Add your prompt here."
inputs = tokenizer(input_text, return_tensors="pt", truncation=True, padding="max_length", max_length=64)
inputs = {key: value.to(device) for key, value in inputs.items()}
# Perform text generation
# https://huggingface.co/docs/transformers/en/main_classes/text_generation
outputs = model.generate(
**inputs,
num_return_sequences=1,
min_new_tokens=64,
max_new_tokens=64,
do_sample=False,
use_cache=True,
num_beams=1
)
# Decode and print the output
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
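With `do_sample=False` and `num_beams=1`, generation is greedy and therefore deterministic for a given prompt; raise `max_new_tokens` or enable sampling if you want longer or more varied outputs.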
## More Information
- **Original Model**: For details about the original model's architecture, training dataset, and performance, please visit the CohereForAI [aya-expanse-32b model card](https://huggingface.co/CohereForAI/aya-expanse-32b).
- **Support or inquiries**: If you run into any issues or have questions about the quantized model, feel free to reach me via email: `[email protected]`. I'll be happy to help!
|
openfree/pepe | openfree | 2025-01-23T09:02:00Z | 1,105 | 49 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-23T03:19:08Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: A person in a bustling cafe pepe
output:
url: samples/1737602345329__000001000_0.jpg
- text: A pepe
output:
url: samples/pepe1.webp
- text: A pepe
output:
url: samples/pepe2.webp
- text: A pepe
output:
url: samples/pepe3.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: pepe
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# pepe
<Gallery />
## Trigger words
You should use `pepe` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/openfree/pepe/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('openfree/pepe', weight_name='pepe.safetensors')
image = pipeline('A person in a bustling cafe pepe').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
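As one example of adjusting the LoRA's influence (a sketch; the exact API depends on your diffusers version, so check the docs linked above), the adapter can be fused into the base weights at a chosen scale:
```py
# Sketch: fuse the LoRA at reduced strength, then generate as usual
pipeline.fuse_lora(lora_scale=0.9)
image = pipeline('A person in a bustling cafe pepe').images[0]
```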
|
mrferr3t/abdda181-c62a-4c7b-b90d-13b20566569f | mrferr3t | 2025-01-23T09:00:24Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-13b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-13b-128k",
"region:us"
] | null | 2025-01-23T08:46:20Z | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-13b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: abdda181-c62a-4c7b-b90d-13b20566569f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-13b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2357a9464c66e908_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2357a9464c66e908_train_data.json
type:
field_input: examples
field_instruction: prompt
field_output: statement
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/abdda181-c62a-4c7b-b90d-13b20566569f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/2357a9464c66e908_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bcad01e7-1b2a-4250-8b8b-0be1c30000d4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bcad01e7-1b2a-4250-8b8b-0be1c30000d4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# abdda181-c62a-4c7b-b90d-13b20566569f
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-13b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-13b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 17.8209 | 0.0003 | 1 | 4.1906 |
| 15.6407 | 0.0008 | 3 | 4.1830 |
| 15.704 | 0.0016 | 6 | 4.0431 |
| 13.637 | 0.0023 | 9 | 3.4428 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jimi0209/Yinka-Q5_K_M-GGUF | jimi0209 | 2025-01-23T08:58:02Z | 17 | 0 | null | [
"gguf",
"mteb",
"llama-cpp",
"gguf-my-repo",
"base_model:Classical/Yinka",
"base_model:quantized:Classical/Yinka",
"model-index",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-01-23T08:57:57Z | ---
tags:
- mteb
- llama-cpp
- gguf-my-repo
base_model: Classical/Yinka
model-index:
- name: checkpoint-1431
results:
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 56.306314279047875
- type: cos_sim_spearman
value: 61.020227685004016
- type: euclidean_pearson
value: 58.61821670933433
- type: euclidean_spearman
value: 60.131457106640674
- type: manhattan_pearson
value: 58.6189460369694
- type: manhattan_spearman
value: 60.126350618526224
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 55.8612958476143
- type: cos_sim_spearman
value: 59.01977664864512
- type: euclidean_pearson
value: 62.028094897243655
- type: euclidean_spearman
value: 58.6046814257705
- type: manhattan_pearson
value: 62.02580042431887
- type: manhattan_spearman
value: 58.60626890004892
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.496
- type: f1
value: 46.673963383873065
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 70.73971622592535
- type: cos_sim_spearman
value: 72.76102992060764
- type: euclidean_pearson
value: 71.04525865868672
- type: euclidean_spearman
value: 72.4032852155075
- type: manhattan_pearson
value: 71.03693009336658
- type: manhattan_spearman
value: 72.39635701224252
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 56.34751074520767
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 48.4856662121073
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: None
metrics:
- type: map
value: 89.26384109024997
- type: mrr
value: 91.27261904761905
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: None
metrics:
- type: map
value: 90.0464058154547
- type: mrr
value: 92.06480158730159
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 27.236
- type: map_at_10
value: 40.778
- type: map_at_100
value: 42.692
- type: map_at_1000
value: 42.787
- type: map_at_3
value: 36.362
- type: map_at_5
value: 38.839
- type: mrr_at_1
value: 41.335
- type: mrr_at_10
value: 49.867
- type: mrr_at_100
value: 50.812999999999995
- type: mrr_at_1000
value: 50.848000000000006
- type: mrr_at_3
value: 47.354
- type: mrr_at_5
value: 48.718
- type: ndcg_at_1
value: 41.335
- type: ndcg_at_10
value: 47.642
- type: ndcg_at_100
value: 54.855
- type: ndcg_at_1000
value: 56.449000000000005
- type: ndcg_at_3
value: 42.203
- type: ndcg_at_5
value: 44.416
- type: precision_at_1
value: 41.335
- type: precision_at_10
value: 10.568
- type: precision_at_100
value: 1.6400000000000001
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 23.998
- type: precision_at_5
value: 17.389
- type: recall_at_1
value: 27.236
- type: recall_at_10
value: 58.80800000000001
- type: recall_at_100
value: 88.411
- type: recall_at_1000
value: 99.032
- type: recall_at_3
value: 42.253
- type: recall_at_5
value: 49.118
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.03728202044498
- type: cos_sim_ap
value: 92.49469583272597
- type: cos_sim_f1
value: 86.74095974528088
- type: cos_sim_precision
value: 84.43657294664601
- type: cos_sim_recall
value: 89.17465513210195
- type: dot_accuracy
value: 72.21888153938664
- type: dot_ap
value: 80.59377163340332
- type: dot_f1
value: 74.96686040583258
- type: dot_precision
value: 66.4737793851718
- type: dot_recall
value: 85.94809445873275
- type: euclidean_accuracy
value: 85.47203848466627
- type: euclidean_ap
value: 91.89152584749868
- type: euclidean_f1
value: 86.38105975197294
- type: euclidean_precision
value: 83.40953625081646
- type: euclidean_recall
value: 89.5721299976619
- type: manhattan_accuracy
value: 85.3758268190018
- type: manhattan_ap
value: 91.88989707722311
- type: manhattan_f1
value: 86.39767519839052
- type: manhattan_precision
value: 82.76231263383298
- type: manhattan_recall
value: 90.36707972878185
- type: max_accuracy
value: 86.03728202044498
- type: max_ap
value: 92.49469583272597
- type: max_f1
value: 86.74095974528088
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 74.34100000000001
- type: map_at_10
value: 82.49499999999999
- type: map_at_100
value: 82.64200000000001
- type: map_at_1000
value: 82.643
- type: map_at_3
value: 81.142
- type: map_at_5
value: 81.95400000000001
- type: mrr_at_1
value: 74.71
- type: mrr_at_10
value: 82.553
- type: mrr_at_100
value: 82.699
- type: mrr_at_1000
value: 82.70100000000001
- type: mrr_at_3
value: 81.279
- type: mrr_at_5
value: 82.069
- type: ndcg_at_1
value: 74.605
- type: ndcg_at_10
value: 85.946
- type: ndcg_at_100
value: 86.607
- type: ndcg_at_1000
value: 86.669
- type: ndcg_at_3
value: 83.263
- type: ndcg_at_5
value: 84.71600000000001
- type: precision_at_1
value: 74.605
- type: precision_at_10
value: 9.758
- type: precision_at_100
value: 1.005
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 29.996000000000002
- type: precision_at_5
value: 18.736
- type: recall_at_1
value: 74.34100000000001
- type: recall_at_10
value: 96.523
- type: recall_at_100
value: 99.473
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 89.278
- type: recall_at_5
value: 92.83500000000001
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.950000000000003
- type: map_at_10
value: 82.408
- type: map_at_100
value: 85.057
- type: map_at_1000
value: 85.09100000000001
- type: map_at_3
value: 57.635999999999996
- type: map_at_5
value: 72.48
- type: mrr_at_1
value: 92.15
- type: mrr_at_10
value: 94.554
- type: mrr_at_100
value: 94.608
- type: mrr_at_1000
value: 94.61
- type: mrr_at_3
value: 94.292
- type: mrr_at_5
value: 94.459
- type: ndcg_at_1
value: 92.15
- type: ndcg_at_10
value: 89.108
- type: ndcg_at_100
value: 91.525
- type: ndcg_at_1000
value: 91.82900000000001
- type: ndcg_at_3
value: 88.44
- type: ndcg_at_5
value: 87.271
- type: precision_at_1
value: 92.15
- type: precision_at_10
value: 42.29
- type: precision_at_100
value: 4.812
- type: precision_at_1000
value: 0.48900000000000005
- type: precision_at_3
value: 79.14999999999999
- type: precision_at_5
value: 66.64
- type: recall_at_1
value: 26.950000000000003
- type: recall_at_10
value: 89.832
- type: recall_at_100
value: 97.921
- type: recall_at_1000
value: 99.471
- type: recall_at_3
value: 59.562000000000005
- type: recall_at_5
value: 76.533
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 53.5
- type: map_at_10
value: 63.105999999999995
- type: map_at_100
value: 63.63100000000001
- type: map_at_1000
value: 63.641999999999996
- type: map_at_3
value: 60.617
- type: map_at_5
value: 62.132
- type: mrr_at_1
value: 53.5
- type: mrr_at_10
value: 63.105999999999995
- type: mrr_at_100
value: 63.63100000000001
- type: mrr_at_1000
value: 63.641999999999996
- type: mrr_at_3
value: 60.617
- type: mrr_at_5
value: 62.132
- type: ndcg_at_1
value: 53.5
- type: ndcg_at_10
value: 67.92200000000001
- type: ndcg_at_100
value: 70.486
- type: ndcg_at_1000
value: 70.777
- type: ndcg_at_3
value: 62.853
- type: ndcg_at_5
value: 65.59899999999999
- type: precision_at_1
value: 53.5
- type: precision_at_10
value: 8.309999999999999
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 23.1
- type: precision_at_5
value: 15.2
- type: recall_at_1
value: 53.5
- type: recall_at_10
value: 83.1
- type: recall_at_100
value: 95.1
- type: recall_at_1000
value: 97.39999999999999
- type: recall_at_3
value: 69.3
- type: recall_at_5
value: 76.0
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 51.773759138130046
- type: f1
value: 40.38600802756481
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 88.48030018761726
- type: ap
value: 59.2732541555627
- type: f1
value: 83.58836007358619
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 73.67511194245922
- type: cos_sim_spearman
value: 79.43347759067298
- type: euclidean_pearson
value: 79.04491504318766
- type: euclidean_spearman
value: 79.14478545356785
- type: manhattan_pearson
value: 79.03365022867428
- type: manhattan_spearman
value: 79.13172717619908
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 67.184
- type: map_at_10
value: 76.24600000000001
- type: map_at_100
value: 76.563
- type: map_at_1000
value: 76.575
- type: map_at_3
value: 74.522
- type: map_at_5
value: 75.598
- type: mrr_at_1
value: 69.47
- type: mrr_at_10
value: 76.8
- type: mrr_at_100
value: 77.082
- type: mrr_at_1000
value: 77.093
- type: mrr_at_3
value: 75.29400000000001
- type: mrr_at_5
value: 76.24
- type: ndcg_at_1
value: 69.47
- type: ndcg_at_10
value: 79.81099999999999
- type: ndcg_at_100
value: 81.187
- type: ndcg_at_1000
value: 81.492
- type: ndcg_at_3
value: 76.536
- type: ndcg_at_5
value: 78.367
- type: precision_at_1
value: 69.47
- type: precision_at_10
value: 9.599
- type: precision_at_100
value: 1.026
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 28.777
- type: precision_at_5
value: 18.232
- type: recall_at_1
value: 67.184
- type: recall_at_10
value: 90.211
- type: recall_at_100
value: 96.322
- type: recall_at_1000
value: 98.699
- type: recall_at_3
value: 81.556
- type: recall_at_5
value: 85.931
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.96032279757901
- type: f1
value: 73.48052314033545
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 84.64357767316744
- type: f1
value: 83.58250539497922
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 56.00000000000001
- type: map_at_10
value: 62.066
- type: map_at_100
value: 62.553000000000004
- type: map_at_1000
value: 62.598
- type: map_at_3
value: 60.4
- type: map_at_5
value: 61.370000000000005
- type: mrr_at_1
value: 56.2
- type: mrr_at_10
value: 62.166
- type: mrr_at_100
value: 62.653000000000006
- type: mrr_at_1000
value: 62.699000000000005
- type: mrr_at_3
value: 60.5
- type: mrr_at_5
value: 61.47
- type: ndcg_at_1
value: 56.00000000000001
- type: ndcg_at_10
value: 65.199
- type: ndcg_at_100
value: 67.79899999999999
- type: ndcg_at_1000
value: 69.056
- type: ndcg_at_3
value: 61.814
- type: ndcg_at_5
value: 63.553000000000004
- type: precision_at_1
value: 56.00000000000001
- type: precision_at_10
value: 7.51
- type: precision_at_100
value: 0.878
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 21.967
- type: precision_at_5
value: 14.02
- type: recall_at_1
value: 56.00000000000001
- type: recall_at_10
value: 75.1
- type: recall_at_100
value: 87.8
- type: recall_at_1000
value: 97.7
- type: recall_at_3
value: 65.9
- type: recall_at_5
value: 70.1
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 32.74158258279793
- type: mrr
value: 31.56071428571428
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 78.96666666666667
- type: f1
value: 78.82528563818045
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 83.54087709799674
- type: cos_sim_ap
value: 87.26170197077586
- type: cos_sim_f1
value: 84.7609561752988
- type: cos_sim_precision
value: 80.20735155513667
- type: cos_sim_recall
value: 89.86272439281943
- type: dot_accuracy
value: 72.22523010286952
- type: dot_ap
value: 79.51975358187732
- type: dot_f1
value: 76.32183908045977
- type: dot_precision
value: 67.58957654723126
- type: dot_recall
value: 87.64519535374869
- type: euclidean_accuracy
value: 82.0249052517596
- type: euclidean_ap
value: 85.32829948726406
- type: euclidean_f1
value: 83.24924318869829
- type: euclidean_precision
value: 79.71014492753623
- type: euclidean_recall
value: 87.11721224920802
- type: manhattan_accuracy
value: 82.13318895506227
- type: manhattan_ap
value: 85.28856869288006
- type: manhattan_f1
value: 83.34946757018393
- type: manhattan_precision
value: 76.94369973190348
- type: manhattan_recall
value: 90.91869060190075
- type: max_accuracy
value: 83.54087709799674
- type: max_ap
value: 87.26170197077586
- type: max_f1
value: 84.7609561752988
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 94.56
- type: ap
value: 92.80848436710805
- type: f1
value: 94.54951966576111
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 39.0866558287863
- type: cos_sim_spearman
value: 45.9211126233312
- type: euclidean_pearson
value: 44.86568743222145
- type: euclidean_spearman
value: 45.63882757207507
- type: manhattan_pearson
value: 44.89480036909126
- type: manhattan_spearman
value: 45.65929449046206
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 43.04701793979569
- type: cos_sim_spearman
value: 44.87491033760315
- type: euclidean_pearson
value: 36.2004061032567
- type: euclidean_spearman
value: 41.44823909683865
- type: manhattan_pearson
value: 36.136113427955095
- type: manhattan_spearman
value: 41.39225495993949
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 61.65611315777857
- type: cos_sim_spearman
value: 64.4067673105648
- type: euclidean_pearson
value: 61.814977248797184
- type: euclidean_spearman
value: 63.99473350700169
- type: manhattan_pearson
value: 61.684304629588624
- type: manhattan_spearman
value: 63.97831213239316
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 76.57324933064379
- type: cos_sim_spearman
value: 79.23602286949782
- type: euclidean_pearson
value: 80.28226284310948
- type: euclidean_spearman
value: 80.32210477608423
- type: manhattan_pearson
value: 80.27262188617811
- type: manhattan_spearman
value: 80.31619185039723
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 67.05266891356277
- type: mrr
value: 77.1906333623497
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 28.212
- type: map_at_10
value: 78.932
- type: map_at_100
value: 82.51899999999999
- type: map_at_1000
value: 82.575
- type: map_at_3
value: 55.614
- type: map_at_5
value: 68.304
- type: mrr_at_1
value: 91.211
- type: mrr_at_10
value: 93.589
- type: mrr_at_100
value: 93.659
- type: mrr_at_1000
value: 93.662
- type: mrr_at_3
value: 93.218
- type: mrr_at_5
value: 93.453
- type: ndcg_at_1
value: 91.211
- type: ndcg_at_10
value: 86.24000000000001
- type: ndcg_at_100
value: 89.614
- type: ndcg_at_1000
value: 90.14
- type: ndcg_at_3
value: 87.589
- type: ndcg_at_5
value: 86.265
- type: precision_at_1
value: 91.211
- type: precision_at_10
value: 42.626
- type: precision_at_100
value: 5.043
- type: precision_at_1000
value: 0.517
- type: precision_at_3
value: 76.42
- type: precision_at_5
value: 64.045
- type: recall_at_1
value: 28.212
- type: recall_at_10
value: 85.223
- type: recall_at_100
value: 96.229
- type: recall_at_1000
value: 98.849
- type: recall_at_3
value: 57.30800000000001
- type: recall_at_5
value: 71.661
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 54.385000000000005
- type: f1
value: 52.38762400903556
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 74.55283855942916
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 68.55115316700493
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 58.8
- type: map_at_10
value: 69.035
- type: map_at_100
value: 69.52000000000001
- type: map_at_1000
value: 69.529
- type: map_at_3
value: 67.417
- type: map_at_5
value: 68.407
- type: mrr_at_1
value: 58.8
- type: mrr_at_10
value: 69.035
- type: mrr_at_100
value: 69.52000000000001
- type: mrr_at_1000
value: 69.529
- type: mrr_at_3
value: 67.417
- type: mrr_at_5
value: 68.407
- type: ndcg_at_1
value: 58.8
- type: ndcg_at_10
value: 73.395
- type: ndcg_at_100
value: 75.62
- type: ndcg_at_1000
value: 75.90299999999999
- type: ndcg_at_3
value: 70.11800000000001
- type: ndcg_at_5
value: 71.87400000000001
- type: precision_at_1
value: 58.8
- type: precision_at_10
value: 8.68
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 25.967000000000002
- type: precision_at_5
value: 16.42
- type: recall_at_1
value: 58.8
- type: recall_at_10
value: 86.8
- type: recall_at_100
value: 96.89999999999999
- type: recall_at_1000
value: 99.2
- type: recall_at_3
value: 77.9
- type: recall_at_5
value: 82.1
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.42
- type: ap
value: 75.35978503182068
- type: f1
value: 88.01006394348263
---
# jimi0209/Yinka-Q5_K_M-GGUF
This model was converted to GGUF format from [`Classical/Yinka`](https://huggingface.co/Classical/Yinka) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Classical/Yinka) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jimi0209/Yinka-Q5_K_M-GGUF --hf-file yinka-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jimi0209/Yinka-Q5_K_M-GGUF --hf-file yinka-q5_k_m.gguf -c 2048
```
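Once the server is running it exposes an HTTP API (recent llama.cpp builds listen on port 8080 by default); a quick smoke test might look like this, assuming you did not override the port:
```bash
# Assumed default llama-server endpoint; adjust the port if you passed --port
curl http://localhost:8080/completion -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```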
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jimi0209/Yinka-Q5_K_M-GGUF --hf-file yinka-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jimi0209/Yinka-Q5_K_M-GGUF --hf-file yinka-q5_k_m.gguf -c 2048
```
|
FatihC06/wav2vec2-large-xlsr-53-common_voice-turkish | FatihC06 | 2025-01-23T08:57:35Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-01-22T09:19:26Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-cv7-istech
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-cv7-istech
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7978
- Wer: 0.7496
## Model description
More information needed
## Intended uses & limitations
More information needed
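A minimal transcription sketch (assuming 16 kHz mono audio and that this checkpoint loads as a standard wav2vec2 CTC model; not verified against this repository):
```python
# Sketch: transcribe an audio file with the fine-tuned checkpoint
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="FatihC06/wav2vec2-large-xlsr-53-common_voice-turkish",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```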
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 11.1946 | 0.0803 | 20 | 6.7455 | 1.0 |
| 5.1957 | 0.1606 | 40 | 3.7715 | 1.0 |
| 3.5889 | 0.2410 | 60 | 3.4781 | 1.0 |
| 3.3656 | 0.3213 | 80 | 3.3373 | 1.0 |
| 3.5262 | 0.4016 | 100 | 3.3093 | 1.0 |
| 3.1816 | 0.4819 | 120 | 3.2444 | 1.0 |
| 3.3344 | 0.5622 | 140 | 3.2887 | 1.0 |
| 3.2522 | 0.6426 | 160 | 3.2321 | 1.0 |
| 3.1819 | 0.7229 | 180 | 3.2538 | 1.0 |
| 3.3665 | 0.8032 | 200 | 3.2018 | 1.0 |
| 3.1257 | 0.8835 | 220 | 3.1751 | 1.0 |
| 3.2982 | 0.9639 | 240 | 3.1929 | 1.0 |
| 3.2181 | 1.0442 | 260 | 3.1781 | 1.0 |
| 3.1455 | 1.1245 | 280 | 3.3508 | 1.0 |
| 3.2722 | 1.2048 | 300 | 3.2343 | 1.0 |
| 3.1217 | 1.2851 | 320 | 3.1378 | 0.9999 |
| 3.2419 | 1.3655 | 340 | 3.2108 | 0.9999 |
| 3.1825 | 1.4458 | 360 | 3.1363 | 0.9999 |
| 3.1316 | 1.5261 | 380 | 3.1685 | 0.9999 |
| 3.2373 | 1.6064 | 400 | 3.1731 | 1.0 |
| 3.0634 | 1.6867 | 420 | 3.1083 | 0.9999 |
| 3.1367 | 1.7671 | 440 | 3.1582 | 0.9999 |
| 3.0831 | 1.8474 | 460 | 3.0410 | 0.9999 |
| 2.9524 | 1.9277 | 480 | 2.9578 | 0.9999 |
| 3.1076 | 2.0080 | 500 | 2.9558 | 0.9999 |
| 2.8085 | 2.0884 | 520 | 2.8155 | 0.9999 |
| 2.7768 | 2.1687 | 540 | 2.8155 | 0.9998 |
| 2.6186 | 2.2490 | 560 | 2.5386 | 1.0 |
| 2.3452 | 2.3293 | 580 | 2.3387 | 0.9994 |
| 2.4213 | 2.4096 | 600 | 2.2745 | 0.9997 |
| 1.8149 | 2.4900 | 620 | 1.8076 | 0.9906 |
| 1.747 | 2.5703 | 640 | 1.8517 | 0.9962 |
| 1.6369 | 2.6506 | 660 | 1.4838 | 0.9662 |
| 1.3732 | 2.7309 | 680 | 1.4990 | 0.9852 |
| 1.6638 | 2.8112 | 700 | 1.3854 | 0.9383 |
| 1.1034 | 2.8916 | 720 | 1.2258 | 0.9112 |
| 1.3003 | 2.9719 | 740 | 1.3696 | 0.9174 |
| 1.2128 | 3.0522 | 760 | 1.1576 | 0.8966 |
| 0.9759 | 3.1325 | 780 | 1.1325 | 0.8838 |
| 1.2551 | 3.2129 | 800 | 1.1489 | 0.8831 |
| 0.8371 | 3.2932 | 820 | 1.0224 | 0.8460 |
| 1.0434 | 3.3735 | 840 | 1.0927 | 0.8680 |
| 1.0386 | 3.4538 | 860 | 0.9731 | 0.8383 |
| 0.879 | 3.5341 | 880 | 0.9916 | 0.8428 |
| 1.1771 | 3.6145 | 900 | 0.9857 | 0.8406 |
| 0.6948 | 3.6948 | 920 | 0.9493 | 0.8203 |
| 0.9657 | 3.7751 | 940 | 0.9960 | 0.8364 |
| 0.8991 | 3.8554 | 960 | 0.8949 | 0.8013 |
| 0.7712 | 3.9357 | 980 | 0.9039 | 0.8096 |
| 1.0684 | 4.0161 | 1000 | 0.9024 | 0.8012 |
| 0.5559 | 4.0964 | 1020 | 0.8611 | 0.7872 |
| 0.8446 | 4.1767 | 1040 | 0.9017 | 0.8021 |
| 0.7911 | 4.2570 | 1060 | 0.8596 | 0.7848 |
| 0.6178 | 4.3373 | 1080 | 0.8612 | 0.7778 |
| 0.9147 | 4.4177 | 1100 | 0.8654 | 0.7756 |
| 0.5448 | 4.4980 | 1120 | 0.8222 | 0.7649 |
| 0.7812 | 4.5783 | 1140 | 0.8337 | 0.7697 |
| 0.6784 | 4.6586 | 1160 | 0.8146 | 0.7588 |
| 0.6022 | 4.7390 | 1180 | 0.8077 | 0.7538 |
| 0.8592 | 4.8193 | 1200 | 0.8121 | 0.7621 |
| 0.4884 | 4.8996 | 1220 | 0.7982 | 0.7487 |
| 0.7429 | 4.9799 | 1240 | 0.7978 | 0.7496 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1
- Datasets 2.19.1
- Tokenizers 0.20.1
|
trenden/f09dac8b-4d58-4cb1-a019-a071c9b822ac | trenden | 2025-01-23T08:57:20Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"region:us"
] | null | 2025-01-23T08:49:35Z | ---
library_name: peft
license: other
base_model: huggyllama/llama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f09dac8b-4d58-4cb1-a019-a071c9b822ac
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: huggyllama/llama-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- aed51b8e2c089967_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aed51b8e2c089967_train_data.json
type:
field_input: instance_id
field_instruction: prompt_msg
field_output: truth
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/f09dac8b-4d58-4cb1-a019-a071c9b822ac
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/aed51b8e2c089967_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6a8f76dd-7262-490a-905c-7b83c0f56891
wandb_project: Birthday-SN56-3-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6a8f76dd-7262-490a-905c-7b83c0f56891
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f09dac8b-4d58-4cb1-a019-a071c9b822ac
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 3 | nan |
| 0.0 | 0.0007 | 6 | nan |
| 0.0 | 0.0011 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Best000/321b91f0-9f11-4b82-99c3-5b2c4c60a3d6 | Best000 | 2025-01-23T08:55:33Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-13b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-13b-128k",
"region:us"
] | null | 2025-01-23T08:44:41Z | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-13b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 321b91f0-9f11-4b82-99c3-5b2c4c60a3d6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-13b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2357a9464c66e908_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2357a9464c66e908_train_data.json
type:
field_input: examples
field_instruction: prompt
field_output: statement
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/321b91f0-9f11-4b82-99c3-5b2c4c60a3d6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/2357a9464c66e908_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bcad01e7-1b2a-4250-8b8b-0be1c30000d4
wandb_project: Birthday-SN56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: bcad01e7-1b2a-4250-8b8b-0be1c30000d4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 321b91f0-9f11-4b82-99c3-5b2c4c60a3d6
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-13b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-13b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 17.8209 | 0.0003 | 1 | 4.1906 |
| 15.645 | 0.0008 | 3 | 4.1847 |
| 15.7295 | 0.0016 | 6 | 4.0535 |
| 13.6724 | 0.0023 | 9 | 3.4595 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kostiantynk1205/2a5785f0-76a4-4861-a692-a563d9cc6599 | kostiantynk1205 | 2025-01-23T08:55:01Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"region:us"
] | null | 2025-01-23T08:47:18Z | ---
library_name: peft
license: other
base_model: huggyllama/llama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2a5785f0-76a4-4861-a692-a563d9cc6599
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: huggyllama/llama-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- aed51b8e2c089967_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aed51b8e2c089967_train_data.json
type:
field_input: instance_id
field_instruction: prompt_msg
field_output: truth
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/2a5785f0-76a4-4861-a692-a563d9cc6599
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/aed51b8e2c089967_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6a8f76dd-7262-490a-905c-7b83c0f56891
wandb_project: Birthday-SN56-23-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6a8f76dd-7262-490a-905c-7b83c0f56891
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2a5785f0-76a4-4861-a692-a563d9cc6599
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 3 | nan |
| 0.0 | 0.0007 | 6 | nan |
| 0.0 | 0.0011 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
fbvids/sophie | fbvids | 2025-01-23T08:54:49Z | 11 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-23T08:09:05Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: sophie
---
# Sophie
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `sophie` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('fbvids/sophie', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
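When writing prompts, include the trigger word `sophie` (for example, `pipeline('a portrait photo of sophie')`) so the LoRA is actually activated.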
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
joboffer/cf85ffc4-2140-4e4b-ac20-53b6dfa97f6f | joboffer | 2025-01-23T08:52:00Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"region:us"
] | null | 2025-01-23T08:15:56Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cf85ffc4-2140-4e4b-ac20-53b6dfa97f6f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 17a4766fa4748b36_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/17a4766fa4748b36_train_data.json
type:
field_input: text
field_instruction: leadin
field_output: heading
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: joboffer/cf85ffc4-2140-4e4b-ac20-53b6dfa97f6f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/17a4766fa4748b36_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 464732d7-8f75-4034-bba8-31e12a8da780
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 464732d7-8f75-4034-bba8-31e12a8da780
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# cf85ffc4-2140-4e4b-ac20-53b6dfa97f6f
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7276
## Model description
More information needed
## Intended uses & limitations
More information needed
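That said, because this repository contains a LoRA adapter for Qwen/Qwen1.5-0.5B, it can presumably be loaded with peft along these lines (a sketch, untested against this checkpoint; the prompt is a placeholder):
```py
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the base model, then attach the adapter from this repository
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "joboffer/cf85ffc4-2140-4e4b-ac20-53b6dfa97f6f")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")

inputs = tokenizer("Write a heading for this text:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```
`model.merge_and_unload()` can then fold the adapter into the base weights if a standalone model is preferred.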
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9,0.999) and epsilon=1e-08; optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.9885 |
| 3.7299 | 0.0005 | 5 | 3.9300 |
| 3.753 | 0.0009 | 10 | 3.8417 |
| 3.6004 | 0.0014 | 15 | 3.7883 |
| 3.6217 | 0.0018 | 20 | 3.7506 |
| 3.6446 | 0.0023 | 25 | 3.7313 |
| 3.8136 | 0.0027 | 30 | 3.7276 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso14/a366f008-254d-4e0d-b4a6-0ba254c2c486 | lesso14 | 2025-01-23T08:51:49Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-koNqa-test-v1",
"base_model:adapter:oopsung/llama2-7b-koNqa-test-v1",
"region:us"
] | null | 2025-01-23T07:19:45Z | ---
library_name: peft
base_model: oopsung/llama2-7b-koNqa-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a366f008-254d-4e0d-b4a6-0ba254c2c486
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: oopsung/llama2-7b-koNqa-test-v1
bf16: true
chat_template: llama3
datasets:
- data_files:
- 0470cc49f434ca45_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0470cc49f434ca45_train_data.json
type:
field_input: ''
field_instruction: prompt
field_output: responseA
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso14/a366f008-254d-4e0d-b4a6-0ba254c2c486
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/0470cc49f434ca45_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: db1a33bc-9f36-4a09-a66d-2395320ddb3b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: db1a33bc-9f36-4a09-a66d-2395320ddb3b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a366f008-254d-4e0d-b4a6-0ba254c2c486
This model is a fine-tuned version of [oopsung/llama2-7b-koNqa-test-v1](https://huggingface.co/oopsung/llama2-7b-koNqa-test-v1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (`OptimizerNames.ADAMW_HF`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0005 | 10 | nan |
| 0.0 | 0.0007 | 15 | nan |
| 0.0 | 0.0009 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
great0001/6cd9ed86-39a0-4255-a844-5c12fb812b39 | great0001 | 2025-01-23T08:49:44Z | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"region:us"
] | null | 2025-01-23T08:47:26Z | ---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6cd9ed86-39a0-4255-a844-5c12fb812b39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e7031e972306f161_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e7031e972306f161_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/6cd9ed86-39a0-4255-a844-5c12fb812b39
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/e7031e972306f161_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 74aeda5e-e0f5-4ba1-aafa-46b426ae9a0b
wandb_project: Birthday-SN56-14-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 74aeda5e-e0f5-4ba1-aafa-46b426ae9a0b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6cd9ed86-39a0-4255-a844-5c12fb812b39
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (`OptimizerNames.ADAMW_BNB`, 8-bit bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.2919 | 0.0014 | 1 | 1.6530 |
| 6.6954 | 0.0043 | 3 | 1.6506 |
| 6.024 | 0.0087 | 6 | 1.6230 |
| 5.4921 | 0.0130 | 9 | 1.5476 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/SJT-4B-v1.1-i1-GGUF | mradermacher | 2025-01-23T08:46:51Z | 366 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"ja",
"base_model:Sakalti/SJT-4B-v1.1",
"base_model:quantized:Sakalti/SJT-4B-v1.1",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-23T07:38:54Z | ---
base_model: Sakalti/SJT-4B-v1.1
language:
- en
- ja
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Sakalti/SJT-4B-v1.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SJT-4B-v1.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
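None of the quants in this repo are split, but when a repo does ship multi-part files, the parts are simply concatenated in order before use (a sketch; the part names below are hypothetical):
```sh
# join split GGUF parts into a single usable file (names are illustrative)
cat model.i1-Q6_K.gguf.part1of2 model.i1-Q6_K.gguf.part2of2 > model.i1-Q6_K.gguf
```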
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-IQ3_S.gguf) | i1-IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-IQ3_M.gguf) | i1-IQ3_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-Q4_0.gguf) | i1-Q4_0 | 2.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-Q4_1.gguf) | i1-Q4_1 | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF/resolve/main/SJT-4B-v1.1.i1-Q6_K.gguf) | i1-Q6_K | 3.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nat-hunt/74b32592-698b-46b8-9cb5-eb9cb124b48f | nat-hunt | 2025-01-23T08:45:06Z | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
] | null | 2025-01-23T08:06:32Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-128k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 74b32592-698b-46b8-9cb5-eb9cb124b48f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-128k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 41fd924b32233ba5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/41fd924b32233ba5_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nat-hunt/74b32592-698b-46b8-9cb5-eb9cb124b48f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/41fd924b32233ba5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 67b10ffb-1db7-4fd6-a9cc-dda913632150
wandb_project: Birthday-SN56-25-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 67b10ffb-1db7-4fd6-a9cc-dda913632150
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 74b32592-698b-46b8-9cb5-eb9cb124b48f
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3880
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (`OptimizerNames.ADAMW_BNB`, 8-bit bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 14.9609 | 0.0000 | 1 | 2.6664 |
| 8.6378 | 0.0001 | 3 | 2.6524 |
| 14.6893 | 0.0001 | 6 | 2.5499 |
| 7.2402 | 0.0002 | 9 | 2.3880 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bigband/MightyLakshmi | bigband | 2025-01-23T08:44:50Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"text-to-speech",
"annotation",
"en",
"dataset:parler-tts/mls_eng",
"dataset:parler-tts/libritts_r_filtered",
"dataset:parler-tts/libritts-r-filtered-speaker-descriptions",
"dataset:parler-tts/mls-eng-speaker-descriptions",
"arxiv:2402.01912",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2025-01-23T08:41:33Z | ---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
pipeline_tag: text-to-speech
inference: false
datasets:
- parler-tts/mls_eng
- parler-tts/libritts_r_filtered
- parler-tts/libritts-r-filtered-speaker-descriptions
- parler-tts/mls-eng-speaker-descriptions
---
<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Parler-TTS Mini v1
<a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
**Parler-TTS Mini v1** is a lightweight text-to-speech (TTS) model, trained on 45K hours of audio data, that can generate high-quality, natural-sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).
Together with [Parler-TTS Large v1](https://huggingface.co/parler-tts/parler-tts-large-v1), this is the second set of models published as part of the [Parler-TTS](https://github.com/huggingface/parler-tts) project, which aims to provide the community with TTS training resources and dataset pre-processing code.
## 📖 Quick Index
* [👨💻 Installation](#👨💻-installation)
* [🎲 Using a random voice](#🎲-random-voice)
* [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)
## 🛠️ Usage
### 👨💻 Installation
Using Parler-TTS is as simple as "bonjour". Simply install the library once:
```sh
pip install git+https://github.com/huggingface/parler-tts.git
```
### 🎲 Random voice
**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
### 🎯 Using a specific speaker
To ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name (e.g. Jon, Lea, Gary, Jenna, Mike, Laura).
To take advantage of this, simply adapt your text description to specify which speaker to use: `Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming!
* Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt
## Motivation
Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
Unlike other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.
Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.
## Citation
If you found this repository useful, please consider citing this work and also the original Stability AI paper:
```
@misc{lacombe-etal-2024-parler-tts,
author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
title = {Parler-TTS},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```
```
@misc{lyth2024natural,
title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
author={Dan Lyth and Simon King},
year={2024},
eprint={2402.01912},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
## License
This model is permissively licensed under the Apache 2.0 license. |
denbeo/640e9db9-39a5-4636-87d6-8ec31984668b | denbeo | 2025-01-23T08:44:37Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:14:17Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 640e9db9-39a5-4636-87d6-8ec31984668b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 17a4766fa4748b36_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/17a4766fa4748b36_train_data.json
type:
field_input: text
field_instruction: leadin
field_output: heading
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: denbeo/640e9db9-39a5-4636-87d6-8ec31984668b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/17a4766fa4748b36_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 464732d7-8f75-4034-bba8-31e12a8da780
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 464732d7-8f75-4034-bba8-31e12a8da780
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 640e9db9-39a5-4636-87d6-8ec31984668b
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6144
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (`OptimizerNames.ADAMW_BNB`, 8-bit bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.013 | 0.0090 | 200 | 2.6144 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
prithivMLmods/Phi-4-Super | prithivMLmods | 2025-01-23T08:43:55Z | 150 | 8 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:LightningRodLabs/Flashlight-v1.0",
"base_model:merge:LightningRodLabs/Flashlight-v1.0",
"base_model:Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ",
"base_model:merge:Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ",
"base_model:bunnycore/Phi-4-RP-V0.2",
"base_model:merge:bunnycore/Phi-4-RP-V0.2",
"base_model:mudler/LocalAI-functioncall-phi-4-v0.3",
"base_model:merge:mudler/LocalAI-functioncall-phi-4-v0.3",
"base_model:prithivMLmods/Phi-4-Empathetic",
"base_model:merge:prithivMLmods/Phi-4-Empathetic",
"base_model:prithivMLmods/Phi-4-Math-IO",
"base_model:merge:prithivMLmods/Phi-4-Math-IO",
"base_model:prithivMLmods/Phi-4-QwQ",
"base_model:merge:prithivMLmods/Phi-4-QwQ",
"base_model:prithivMLmods/Phi-4-o1",
"base_model:merge:prithivMLmods/Phi-4-o1",
"base_model:unsloth/phi-4",
"base_model:merge:unsloth/phi-4",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-23T08:03:52Z | ---
base_model:
- prithivMLmods/Phi-4-QwQ
- prithivMLmods/Phi-4-Math-IO
- Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ
- prithivMLmods/Phi-4-o1
- bunnycore/Phi-4-RP-V0.2
- prithivMLmods/Phi-4-Empathetic
- LightningRodLabs/Flashlight-v1.0
- mudler/LocalAI-functioncall-phi-4-v0.3
- unsloth/phi-4
library_name: transformers
tags:
- mergekit
- merge
---
# **Phi4-Super**
Phi-4-Super is fine-tuned from Microsoft's Phi-4, a state-of-the-art open model developed with a focus on responsible problem solving and advanced reasoning capabilities. Built upon a diverse blend of synthetic datasets, carefully filtered public-domain websites, and high-quality academic books and Q&A datasets, Phi-4-Super ensures that small, capable models are trained with data of exceptional depth and precision.
Phi-4-Super adopts a robust safety post-training approach using open-source and in-house synthetic datasets. This involves a combination of SFT (Supervised Fine-Tuning) and iterative DPO (Direct Preference Optimization) techniques, ensuring helpful and harmless outputs across various safety categories.
# Merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged with the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, using [unsloth/phi-4](https://huggingface.co/unsloth/phi-4) as the base.
### Models Merged
The following models were included in the merge:
* [prithivMLmods/Phi-4-QwQ](https://huggingface.co/prithivMLmods/Phi-4-QwQ)
* [prithivMLmods/Phi-4-Math-IO](https://huggingface.co/prithivMLmods/Phi-4-Math-IO)
* [Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ](https://huggingface.co/Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ)
* [prithivMLmods/Phi-4-o1](https://huggingface.co/prithivMLmods/Phi-4-o1)
* [bunnycore/Phi-4-RP-V0.2](https://huggingface.co/bunnycore/Phi-4-RP-V0.2)
* [prithivMLmods/Phi-4-Empathetic](https://huggingface.co/prithivMLmods/Phi-4-Empathetic)
* [LightningRodLabs/Flashlight-v1.0](https://huggingface.co/LightningRodLabs/Flashlight-v1.0)
* [mudler/LocalAI-functioncall-phi-4-v0.3](https://huggingface.co/mudler/LocalAI-functioncall-phi-4-v0.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: prithivMLmods/Phi-4-o1
- model: prithivMLmods/Phi-4-Empathetic
- model: prithivMLmods/Phi-4-Math-IO
- model: prithivMLmods/Phi-4-QwQ
- model: LightningRodLabs/Flashlight-v1.0
- model: Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ
- model: mudler/LocalAI-functioncall-phi-4-v0.3
- model: bunnycore/Phi-4-RP-V0.2
- model: unsloth/phi-4
merge_method: model_stock
base_model: unsloth/phi-4
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
tokenizer_source: "unsloth/phi-4"
```
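For reference, a config like this is typically materialized with the mergekit CLI along these lines (a sketch; the config path, output directory, and `--cuda` flag are assumptions about the setup):
```sh
pip install mergekit
# render the YAML above into a merged model directory
mergekit-yaml phi4-super.yml ./Phi-4-Super --cuda
```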
|
JacksonBrune/30b5e6b9-e1a9-4c34-a60c-873760c83a2b | JacksonBrune | 2025-01-23T08:43:50Z | 7 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"region:us"
] | null | 2025-01-23T08:41:37Z | ---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 30b5e6b9-e1a9-4c34-a60c-873760c83a2b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e7031e972306f161_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e7031e972306f161_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/30b5e6b9-e1a9-4c34-a60c-873760c83a2b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/e7031e972306f161_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 74aeda5e-e0f5-4ba1-aafa-46b426ae9a0b
wandb_project: Birthday-SN56-12-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 74aeda5e-e0f5-4ba1-aafa-46b426ae9a0b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 30b5e6b9-e1a9-4c34-a60c-873760c83a2b
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (`OptimizerNames.ADAMW_BNB`, 8-bit bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.2919 | 0.0014 | 1 | 1.6530 |
| 6.6899 | 0.0043 | 3 | 1.6504 |
| 6.0129 | 0.0087 | 6 | 1.6232 |
| 5.5219 | 0.0130 | 9 | 1.5512 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Nexspear/23ee6ebf-505d-4260-a699-baa6331ce709 | Nexspear | 2025-01-23T08:43:28Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2025-01-23T06:49:50Z | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 23ee6ebf-505d-4260-a699-baa6331ce709
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c80c7d78c247d894_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c80c7d78c247d894_train_data.json
type:
field_instruction: text
field_output: target
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: Nexspear/23ee6ebf-505d-4260-a699-baa6331ce709
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/c80c7d78c247d894_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: a0542c04-33bc-424f-bb1e-c73bc012f9b8
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: a0542c04-33bc-424f-bb1e-c73bc012f9b8
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 23ee6ebf-505d-4260-a699-baa6331ce709
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (`OptimizerNames.ADAMW_BNB`, 8-bit bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.4496 |
| 2.4529 | 0.0011 | 9 | 2.3235 |
| 2.0048 | 0.0021 | 18 | 1.9787 |
| 1.7957 | 0.0032 | 27 | 1.8434 |
| 1.7441 | 0.0042 | 36 | 1.7975 |
| 1.859 | 0.0053 | 45 | 1.7744 |
| 1.7775 | 0.0063 | 54 | 1.7572 |
| 1.6574 | 0.0074 | 63 | 1.7478 |
| 1.8685 | 0.0085 | 72 | 1.7420 |
| 1.7749 | 0.0095 | 81 | 1.7390 |
| 1.6201 | 0.0106 | 90 | 1.7375 |
| 1.7058 | 0.0116 | 99 | 1.7373 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Stopwolf/whisper-small-sr | Stopwolf | 2025-01-23T08:43:01Z | 79 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"sr",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-09-27T18:55:12Z | ---
language:
- sr
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Serbian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: sr
split: test
args: sr
metrics:
- name: Wer
type: wer
value: 17.41963509991312
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Serbian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4671
- Wer Ortho: 27.4565
- Wer: 17.4196
## Model description
More information needed
## Intended uses & limitations
More information needed
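At minimum, the checkpoint can presumably be used for Serbian transcription through the transformers ASR pipeline (a sketch; `sample.wav` is a placeholder for a local audio file):
```py
from transformers import pipeline

# load the fine-tuned checkpoint and transcribe a local recording
asr = pipeline("automatic-speech-recognition", model="Stopwolf/whisper-small-sr")
print(asr("sample.wav")["text"])
```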
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1403 | 1.44 | 250 | 0.2809 | 28.8913 | 19.2224 |
| 0.0664 | 2.87 | 500 | 0.2858 | 27.3696 | 17.9626 |
| 0.0315 | 4.31 | 750 | 0.3152 | 27.9348 | 17.4631 |
| 0.0174 | 5.75 | 1000 | 0.3578 | 28.1522 | 17.9844 |
| 0.0067 | 7.18 | 1250 | 0.4018 | 27.9130 | 17.9626 |
| 0.0015 | 8.62 | 1500 | 0.4535 | 28.6739 | 17.5717 |
| 0.0008 | 10.06 | 1750 | 0.4558 | 27.2174 | 17.1807 |
| 0.0005 | 11.49 | 2000 | 0.4585 | 27.4348 | 17.4848 |
| 0.0005 | 12.93 | 2250 | 0.4651 | 27.3478 | 17.3979 |
| 0.0005 | 14.37 | 2500 | 0.4671 | 27.4565 | 17.4196 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
brixeus/e2dca2f4-686c-4951-85d8-9eaa25e7c7f4 | brixeus | 2025-01-23T08:42:07Z | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"region:us"
] | null | 2025-01-23T08:31:09Z | ---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e2dca2f4-686c-4951-85d8-9eaa25e7c7f4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e7031e972306f161_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e7031e972306f161_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: brixeus/e2dca2f4-686c-4951-85d8-9eaa25e7c7f4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/e7031e972306f161_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 74aeda5e-e0f5-4ba1-aafa-46b426ae9a0b
wandb_project: Gradients-On-Three
wandb_run: your_name
wandb_runid: 74aeda5e-e0f5-4ba1-aafa-46b426ae9a0b
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# e2dca2f4-686c-4951-85d8-9eaa25e7c7f4
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1074
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (`OptimizerNames.ADAMW_BNB`, 8-bit bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0058 | 1 | 1.6373 |
| 6.5117 | 0.0520 | 9 | 1.5940 |
| 5.6145 | 0.1040 | 18 | 1.4199 |
| 5.1259 | 0.1561 | 27 | 1.3104 |
| 4.6888 | 0.2081 | 36 | 1.2404 |
| 4.609 | 0.2601 | 45 | 1.1902 |
| 4.6249 | 0.3121 | 54 | 1.1553 |
| 4.4169 | 0.3642 | 63 | 1.1320 |
| 4.6411 | 0.4162 | 72 | 1.1186 |
| 4.4663 | 0.4682 | 81 | 1.1105 |
| 4.6312 | 0.5202 | 90 | 1.1077 |
| 4.1999 | 0.5723 | 99 | 1.1074 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Experiment26Neuralsirkrishna_Experiment29Experiment24-GGUF | mradermacher | 2025-01-23T08:40:44Z | 189 | 0 | transformers | [
"transformers",
"gguf",
"Safetensors",
"text-generation-inference",
"merge",
"en",
"base_model:MaziyarPanahi/Experiment26Neuralsirkrishna_Experiment29Experiment24",
"base_model:quantized:MaziyarPanahi/Experiment26Neuralsirkrishna_Experiment29Experiment24",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-01-23T08:10:21Z | ---
base_model: MaziyarPanahi/Experiment26Neuralsirkrishna_Experiment29Experiment24
language:
- en
library_name: transformers
license: apache-2.0
model_creator: MaziyarPanahi
model_name: Experiment26Neuralsirkrishna_Experiment29Experiment24
quantized_by: mradermacher
tags:
- Safetensors
- text-generation-inference
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/MaziyarPanahi/Experiment26Neuralsirkrishna_Experiment29Experiment24
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
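Once a quant is downloaded, it can be run directly with llama.cpp, for example (a sketch; the binary name follows current llama.cpp releases, and the file name and flags are illustrative):
```sh
# short generation from the recommended Q4_K_M quant
./llama-cli -m Experiment26Neuralsirkrishna_Experiment29Experiment24.Q4_K_M.gguf -p "Hello," -n 64
```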
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Experiment29Experiment24-GGUF/resolve/main/Experiment26Neuralsirkrishna_Experiment29Experiment24.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Experiment29Experiment24-GGUF/resolve/main/Experiment26Neuralsirkrishna_Experiment29Experiment24.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Experiment29Experiment24-GGUF/resolve/main/Experiment26Neuralsirkrishna_Experiment29Experiment24.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Experiment29Experiment24-GGUF/resolve/main/Experiment26Neuralsirkrishna_Experiment29Experiment24.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Experiment29Experiment24-GGUF/resolve/main/Experiment26Neuralsirkrishna_Experiment29Experiment24.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Experiment29Experiment24-GGUF/resolve/main/Experiment26Neuralsirkrishna_Experiment29Experiment24.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Experiment29Experiment24-GGUF/resolve/main/Experiment26Neuralsirkrishna_Experiment29Experiment24.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Experiment29Experiment24-GGUF/resolve/main/Experiment26Neuralsirkrishna_Experiment29Experiment24.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Experiment29Experiment24-GGUF/resolve/main/Experiment26Neuralsirkrishna_Experiment29Experiment24.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Experiment29Experiment24-GGUF/resolve/main/Experiment26Neuralsirkrishna_Experiment29Experiment24.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Experiment29Experiment24-GGUF/resolve/main/Experiment26Neuralsirkrishna_Experiment29Experiment24.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Experiment26Neuralsirkrishna_Experiment29Experiment24-GGUF/resolve/main/Experiment26Neuralsirkrishna_Experiment29Experiment24.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
dimasik87/938e68e3-f682-4b41-b3a1-ee0d4b500d37 | dimasik87 | 2025-01-23T08:38:48Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"region:us"
] | null | 2025-01-23T08:16:01Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 938e68e3-f682-4b41-b3a1-ee0d4b500d37
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 17a4766fa4748b36_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/17a4766fa4748b36_train_data.json
type:
field_input: text
field_instruction: leadin
field_output: heading
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik87/938e68e3-f682-4b41-b3a1-ee0d4b500d37
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/17a4766fa4748b36_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 464732d7-8f75-4034-bba8-31e12a8da780
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 464732d7-8f75-4034-bba8-31e12a8da780
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 938e68e3-f682-4b41-b3a1-ee0d4b500d37
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: OptimizerNames.ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08; optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
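(For reference, the total train batch size above is derived as train_batch_size 4 × gradient_accumulation_steps 4 = 16.)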
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.9885 |
| 3.7301 | 0.0005 | 5 | 3.9296 |
| 3.7533 | 0.0009 | 10 | 3.8417 |
| 3.6008 | 0.0014 | 15 | 3.7884 |
| 3.6217 | 0.0018 | 20 | 3.7505 |
| 3.6454 | 0.0023 | 25 | 3.7312 |
| 3.8127 | 0.0027 | 30 | 3.7274 |
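As a minimal usage sketch (not part of the auto-generated card), the adapter can be attached to its base model with `peft`; both repo ids are taken from this card:
```python
# Hedged sketch: load this LoRA adapter on top of Qwen/Qwen1.5-0.5B.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")
model = PeftModel.from_pretrained(base, "dimasik87/938e68e3-f682-4b41-b3a1-ee0d4b500d37")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```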
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gavrilstep/2e521cb6-774d-49a6-897c-103cfc24d014 | gavrilstep | 2025-01-23T08:38:26Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-14B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T03:45:51Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-14B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2e521cb6-774d-49a6-897c-103cfc24d014
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-14B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 466324cc3cdc8c11_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/466324cc3cdc8c11_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: gavrilstep/2e521cb6-774d-49a6-897c-103cfc24d014
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/466324cc3cdc8c11_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 676d9f91-4116-4f6e-8ff1-694522a1ba61
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 676d9f91-4116-4f6e-8ff1-694522a1ba61
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2e521cb6-774d-49a6-897c-103cfc24d014
This model is a fine-tuned version of [unsloth/Qwen2.5-14B-Instruct](https://huggingface.co/unsloth/Qwen2.5-14B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0003 | 5 | nan |
| 0.0 | 0.0006 | 10 | nan |
| 0.0 | 0.0008 | 15 | nan |
| 0.0 | 0.0011 | 20 | nan |
| 0.0 | 0.0014 | 25 | nan |
| 0.0 | 0.0017 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thakkkkkk/27f857fd-f3e5-449b-9bc1-87648b5d32f7 | thakkkkkk | 2025-01-23T08:38:16Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:14:47Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 27f857fd-f3e5-449b-9bc1-87648b5d32f7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 17a4766fa4748b36_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/17a4766fa4748b36_train_data.json
type:
field_input: text
field_instruction: leadin
field_output: heading
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thakkkkkk/27f857fd-f3e5-449b-9bc1-87648b5d32f7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/17a4766fa4748b36_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 464732d7-8f75-4034-bba8-31e12a8da780
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 464732d7-8f75-4034-bba8-31e12a8da780
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 27f857fd-f3e5-449b-9bc1-87648b5d32f7
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0465 | 0.0180 | 200 | 2.5670 |
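Since the config above sets `load_in_8bit: true`, a hedged loading sketch (not from the original card; assumes `bitsandbytes`, `accelerate`, and a CUDA GPU):
```python
# Hedged sketch: load the base model in 8-bit, then attach this adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-0.5B",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "thakkkkkk/27f857fd-f3e5-449b-9bc1-87648b5d32f7")
```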
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
JacksonBrune/131bc2f3-4557-427f-9c4f-d07d9475ec62 | JacksonBrune | 2025-01-23T08:38:02Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"region:us"
] | null | 2025-01-23T08:25:01Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 131bc2f3-4557-427f-9c4f-d07d9475ec62
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 17a4766fa4748b36_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/17a4766fa4748b36_train_data.json
type:
field_input: text
field_instruction: leadin
field_output: heading
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/131bc2f3-4557-427f-9c4f-d07d9475ec62
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/17a4766fa4748b36_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 464732d7-8f75-4034-bba8-31e12a8da780
wandb_project: Birthday-SN56-12-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 464732d7-8f75-4034-bba8-31e12a8da780
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 131bc2f3-4557-427f-9c4f-d07d9475ec62
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.5719 | 0.0000 | 1 | 3.7405 |
| 2.883 | 0.0001 | 3 | 3.7268 |
| 3.6954 | 0.0003 | 6 | 3.5764 |
| 3.3368 | 0.0004 | 9 | 3.3030 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nat-hunt/4918b36b-6d20-4eda-ba0a-aaa70c87e434 | nat-hunt | 2025-01-23T08:37:25Z | 11 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T08:36:31Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4918b36b-6d20-4eda-ba0a-aaa70c87e434
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b1f09a8c91516b26_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1f09a8c91516b26_train_data.json
type:
field_instruction: input_field
field_output: output_field
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nat-hunt/4918b36b-6d20-4eda-ba0a-aaa70c87e434
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/b1f09a8c91516b26_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 925de7c1-8903-4426-93e7-8a873f15c09b
wandb_project: Birthday-SN56-4-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 925de7c1-8903-4426-93e7-8a873f15c09b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4918b36b-6d20-4eda-ba0a-aaa70c87e434
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.0434 | 0.0253 | 1 | 1.9323 |
| 7.3627 | 0.0759 | 3 | 1.8602 |
| 5.6978 | 0.1519 | 6 | 1.3925 |
| 4.7539 | 0.2278 | 9 | 1.1210 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ryanlu522/Qwen2-VL-7B-Instruct-IQ3_M-GGUF | ryanlu522 | 2025-01-23T08:36:16Z | 39 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:quantized:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | image-text-to-text | 2025-01-23T08:28:36Z | ---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- llama-cpp
- gguf-my-repo
library_name: transformers
base_model: Qwen/Qwen2-VL-7B-Instruct
---
# ryanlu522/Qwen2-VL-7B-Instruct-IQ3_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2-VL-7B-Instruct`](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ryanlu522/Qwen2-VL-7B-Instruct-IQ3_M-GGUF --hf-file qwen2-vl-7b-instruct-iq3_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ryanlu522/Qwen2-VL-7B-Instruct-IQ3_M-GGUF --hf-file qwen2-vl-7b-instruct-iq3_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ryanlu522/Qwen2-VL-7B-Instruct-IQ3_M-GGUF --hf-file qwen2-vl-7b-instruct-iq3_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ryanlu522/Qwen2-VL-7B-Instruct-IQ3_M-GGUF --hf-file qwen2-vl-7b-instruct-iq3_m-imat.gguf -c 2048
```
|
nttx/1c1b04c5-41b8-4dfa-967a-9c512ab5c617 | nttx | 2025-01-23T08:35:45Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"region:us"
] | null | 2025-01-23T08:12:54Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1c1b04c5-41b8-4dfa-967a-9c512ab5c617
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 17a4766fa4748b36_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/17a4766fa4748b36_train_data.json
type:
field_input: text
field_instruction: leadin
field_output: heading
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/1c1b04c5-41b8-4dfa-967a-9c512ab5c617
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/17a4766fa4748b36_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 464732d7-8f75-4034-bba8-31e12a8da780
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 464732d7-8f75-4034-bba8-31e12a8da780
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1c1b04c5-41b8-4dfa-967a-9c512ab5c617
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08; optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4014 | 0.0002 | 1 | 3.6039 |
| 2.2961 | 0.0090 | 50 | 2.4793 |
| 2.3538 | 0.0180 | 100 | 2.3410 |
| 2.0825 | 0.0270 | 150 | 2.2582 |
| 2.2056 | 0.0360 | 200 | 2.2381 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso03/2cc4ba89-4f17-48fc-9768-fff4f872d76c | lesso03 | 2025-01-23T08:35:13Z | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:31:26Z | ---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2cc4ba89-4f17-48fc-9768-fff4f872d76c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: true
chat_template: llama3
datasets:
- data_files:
- e7031e972306f161_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e7031e972306f161_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso03/2cc4ba89-4f17-48fc-9768-fff4f872d76c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/e7031e972306f161_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 74aeda5e-e0f5-4ba1-aafa-46b426ae9a0b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 74aeda5e-e0f5-4ba1-aafa-46b426ae9a0b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2cc4ba89-4f17-48fc-9768-fff4f872d76c
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.4145 | 0.0014 | 1 | 1.6608 |
| 6.101 | 0.0072 | 5 | 1.6496 |
| 6.0041 | 0.0145 | 10 | 1.5422 |
| 6.3448 | 0.0217 | 15 | 1.4009 |
| 4.9334 | 0.0289 | 20 | 1.3542 |
| 6.175 | 0.0361 | 25 | 1.3442 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thalllsssss/c29024ca-7f52-4eea-bdf0-f809b9b20df5 | thalllsssss | 2025-01-23T08:34:33Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-68m",
"base_model:adapter:JackFram/llama-68m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:32:46Z | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c29024ca-7f52-4eea-bdf0-f809b9b20df5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-68m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 104fb3eeae33f2bb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/104fb3eeae33f2bb_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thalllsssss/c29024ca-7f52-4eea-bdf0-f809b9b20df5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/104fb3eeae33f2bb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 31c3f894-5134-4c0c-9c0e-e098c4bedb2f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 31c3f894-5134-4c0c-9c0e-e098c4bedb2f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c29024ca-7f52-4eea-bdf0-f809b9b20df5
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.0029 | 0.3380 | 200 | 5.1674 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lhong4759/d9897b09-9b55-4be6-8e91-388973802f82 | lhong4759 | 2025-01-23T08:33:58Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-68m",
"base_model:adapter:JackFram/llama-68m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:32:46Z | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d9897b09-9b55-4be6-8e91-388973802f82
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-68m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 104fb3eeae33f2bb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/104fb3eeae33f2bb_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lhong4759/d9897b09-9b55-4be6-8e91-388973802f82
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/104fb3eeae33f2bb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 31c3f894-5134-4c0c-9c0e-e098c4bedb2f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 31c3f894-5134-4c0c-9c0e-e098c4bedb2f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d9897b09-9b55-4be6-8e91-388973802f82
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.9947 | 0.3380 | 200 | 5.1711 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
cunghoctienganh/599559c4-85fc-4bbd-9407-9bdc7c1b1204 | cunghoctienganh | 2025-01-23T08:33:39Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-68m",
"base_model:adapter:JackFram/llama-68m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:32:41Z | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 599559c4-85fc-4bbd-9407-9bdc7c1b1204
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-68m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 104fb3eeae33f2bb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/104fb3eeae33f2bb_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/599559c4-85fc-4bbd-9407-9bdc7c1b1204
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/104fb3eeae33f2bb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 31c3f894-5134-4c0c-9c0e-e098c4bedb2f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 31c3f894-5134-4c0c-9c0e-e098c4bedb2f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 599559c4-85fc-4bbd-9407-9bdc7c1b1204
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.9935 | 0.3380 | 200 | 5.1586 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhungphammmmm/764efec6-3d46-4e49-9ca8-c707f33b8ac0 | nhungphammmmm | 2025-01-23T08:33:15Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-68m",
"base_model:adapter:JackFram/llama-68m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:32:35Z | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 764efec6-3d46-4e49-9ca8-c707f33b8ac0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-68m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 104fb3eeae33f2bb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/104fb3eeae33f2bb_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhungphammmmm/764efec6-3d46-4e49-9ca8-c707f33b8ac0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/104fb3eeae33f2bb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 31c3f894-5134-4c0c-9c0e-e098c4bedb2f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 31c3f894-5134-4c0c-9c0e-e098c4bedb2f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 764efec6-3d46-4e49-9ca8-c707f33b8ac0
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.9889 | 0.3380 | 200 | 5.1566 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dzanbek/6c08fd7b-91a5-41bb-b885-55b567a71e38 | dzanbek | 2025-01-23T08:33:06Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-68m",
"base_model:adapter:JackFram/llama-68m",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T08:32:49Z | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6c08fd7b-91a5-41bb-b885-55b567a71e38
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-68m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 104fb3eeae33f2bb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/104fb3eeae33f2bb_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: dzanbek/6c08fd7b-91a5-41bb-b885-55b567a71e38
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/104fb3eeae33f2bb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 31c3f894-5134-4c0c-9c0e-e098c4bedb2f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 31c3f894-5134-4c0c-9c0e-e098c4bedb2f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6c08fd7b-91a5-41bb-b885-55b567a71e38
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0017 | 1 | nan |
| 0.0 | 0.0084 | 5 | nan |
| 0.0 | 0.0169 | 10 | nan |
| 0.0 | 0.0253 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso01/dc36d136-6b69-4076-b012-14535ab4a0b1 | lesso01 | 2025-01-23T08:33:05Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-koNqa-test-v1",
"base_model:adapter:oopsung/llama2-7b-koNqa-test-v1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T07:19:19Z | ---
library_name: peft
base_model: oopsung/llama2-7b-koNqa-test-v1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dc36d136-6b69-4076-b012-14535ab4a0b1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: oopsung/llama2-7b-koNqa-test-v1
bf16: true
chat_template: llama3
datasets:
- data_files:
- 0470cc49f434ca45_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0470cc49f434ca45_train_data.json
type:
field_input: ''
field_instruction: prompt
field_output: responseA
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso01/dc36d136-6b69-4076-b012-14535ab4a0b1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/0470cc49f434ca45_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: db1a33bc-9f36-4a09-a66d-2395320ddb3b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: db1a33bc-9f36-4a09-a66d-2395320ddb3b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dc36d136-6b69-4076-b012-14535ab4a0b1
This model is a fine-tuned version of [oopsung/llama2-7b-koNqa-test-v1](https://huggingface.co/oopsung/llama2-7b-koNqa-test-v1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0005 | 10 | nan |
| 0.0 | 0.0007 | 15 | nan |
| 0.0 | 0.0009 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nadejdatarabukina/e2501263-1134-4276-9b63-3383368faf56 | nadejdatarabukina | 2025-01-23T08:33:04Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-68m",
"base_model:adapter:JackFram/llama-68m",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T08:32:44Z | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e2501263-1134-4276-9b63-3383368faf56
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-68m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 104fb3eeae33f2bb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/104fb3eeae33f2bb_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: nadejdatarabukina/e2501263-1134-4276-9b63-3383368faf56
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/104fb3eeae33f2bb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 31c3f894-5134-4c0c-9c0e-e098c4bedb2f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 31c3f894-5134-4c0c-9c0e-e098c4bedb2f
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e2501263-1134-4276-9b63-3383368faf56
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0017 | 1 | nan |
| 0.0 | 0.0084 | 5 | nan |
| 0.0 | 0.0169 | 10 | nan |
| 0.0 | 0.0253 | 15 | nan |
| 0.0 | 0.0338 | 20 | nan |
| 0.0 | 0.0422 | 25 | nan |
| 0.0 | 0.0507 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
utter-project/mHuBERT-147-base-2nd-iter | utter-project | 2025-01-23T08:32:43Z | 556 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"hubert",
"feature-extraction",
"ab",
"af",
"am",
"ar",
"as",
"az",
"ba",
"be",
"bn",
"bo",
"bs",
"br",
"bg",
"ca",
"cs",
"cv",
"cy",
"da",
"de",
"dv",
"el",
"en",
"eo",
"et",
"eu",
"ee",
"fo",
"fa",
"tl",
"fi",
"fr",
"fy",
"ga",
"gl",
"gv",
"gn",
"gu",
"ht",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"ig",
"ia",
"id",
"is",
"it",
"jv",
"ja",
"kn",
"ka",
"kk",
"km",
"rw",
"ky",
"ku",
"ko",
"lo",
"la",
"lv",
"ln",
"lt",
"lb",
"lg",
"ml",
"mr",
"mk",
"mg",
"mt",
"mn",
"mi",
"ms",
"my",
"ne",
"nl",
"nn",
"no",
"oc",
"or",
"pa",
"pl",
"pt",
"ps",
"ro",
"ru",
"sa",
"si",
"sl",
"sk",
"sn",
"sd",
"so",
"st",
"es",
"sq",
"sc",
"sr",
"su",
"sw",
"sv",
"ta",
"tt",
"te",
"tg",
"th",
"tn",
"tk",
"tr",
"tw",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"arxiv:2406.06371",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-02-21T14:11:13Z | ---
license: cc-by-nc-sa-4.0
language:
- ab
- af
- am
- ar
- as
- az
- ba
- be
- bn
- bo
- bs
- br
- bg
- ca
- cs
- cv
- cy
- da
- de
- dv
- el
- en
- eo
- et
- eu
- ee
- fo
- fa
- tl
- fi
- fr
- fy
- ga
- gl
- gv
- gn
- gu
- ht
- ha
- he
- hi
- hr
- hu
- hy
- ig
- ia
- id
- is
- it
- jv
- ja
- kn
- ka
- kk
- km
- rw
- ky
- ku
- ko
- lo
- la
- lv
- ln
- lt
- lb
- lg
- ml
- mr
- mk
- mg
- mt
- mn
- mi
- ms
- my
- ne
- nl
- nn
- no
- oc
- or
- pa
- pl
- pt
- ps
- ro
- ru
- sa
- si
- sl
- sk
- sn
- sd
- so
- st
- es
- sq
- sc
- sr
- su
- sw
- sv
- ta
- tt
- te
- tg
- th
- tn
- tk
- tr
- tw
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- yo
- zh
---
**This repository contains the SECOND ITERATION mHuBERT-147 model.**
**The best mHuBERT-147 model is available [here](https://huggingface.co/utter-project/mHuBERT-147).**
**MODEL DETAILS:** 2nd iteration, K=1000, HuBERT base architecture (95M parameters), 147 languages.
# Table of Contents:
1. [Summary](https://huggingface.co/utter-project/mHuBERT-147#mhubert-147-models)
2. [Training Data and Code](https://huggingface.co/utter-project/mHuBERT-147#training)
3. [ML-SUPERB Scores](https://huggingface.co/utter-project/mHuBERT-147#ml-superb-scores)
4. [Languages and Datasets](https://huggingface.co/utter-project/mHuBERT-147#languages-and-datasets)
5. [Citing and Funding Information](https://huggingface.co/utter-project/mHuBERT-147#citing-and-funding-information)
# mHuBERT-147 models
mHuBERT-147 is a family of compact and competitive multilingual HuBERT models trained on 90K hours of open-license data in 147 languages.
Different from *traditional* HuBERTs, mHuBERT-147 models are trained using faiss IVF discrete speech units.
Training employs a two-level up-sampling strategy over languages and data sources. See [our paper](https://arxiv.org/pdf/2406.06371) for more information.
**This repository contains:**
* Fairseq checkpoint (original);
* HuggingFace checkpoint (converted with the transformers library; see the loading sketch below);
* Faiss index for continuous pre-training (OPQ16_64,IVF1000_HNSW32,PQ16x4fsr).
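As a quick-start illustration (not part of the original card), the sketch below loads the HuggingFace checkpoint for feature extraction. It assumes the repository ships a standard transformers preprocessor config; the calls are the stock transformers HuBERT interface.
```py
# Minimal feature-extraction sketch for the 2nd-iteration checkpoint.
import numpy as np
import torch
from transformers import AutoFeatureExtractor, HubertModel

model_id = "utter-project/mHuBERT-147-base-2nd-iter"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = HubertModel.from_pretrained(model_id)

# HuBERT expects 16 kHz mono audio; one second of silence as a placeholder.
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# (batch, frames, hidden_size) features for downstream tasks such as ASR or LID.
print(outputs.last_hidden_state.shape)
```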
**Related Models:**
* [3rd Iteration mHuBERT-147](https://huggingface.co/utter-project/mHuBERT-147) (best)
* [1st Iteration mHuBERT-147](https://huggingface.co/utter-project/mHuBERT-147-base-1st-iter)
* [HUTTER-12 CommonVoice Prototype (12 languages)](https://huggingface.co/utter-project/hutter-12-3rd-base)
# Training
* **[Manifest list available here.](https://huggingface.co/utter-project/mHuBERT-147-base-3rd-iter/tree/main/manifest)** Please note that CommonVoice removal requests received since training mean that some of the listed files are no longer available.
* **[Fairseq fork](https://github.com/utter-project/fairseq)** contains the scripts for training with multilingual batching with two-level up-sampling.
* **[Scripts for pre-processing/faiss clustering available here.](https://github.com/utter-project/mHuBERT-147-scripts)**
# ML-SUPERB Scores
mHuBERT-147 reaches second place on the 10min leaderboard and first place on the 1h leaderboard, and achieves new SOTA scores on three LID tasks.
See more information in [our paper](https://arxiv.org/pdf/2406.06371).

# Languages and Datasets
**Datasets:** For ASR/ST/TTS datasets, only the train set is used.
* [Aishell](https://www.openslr.org/33/) and [AISHELL-3](https://www.openslr.org/93/)
* [BibleTTS](https://www.openslr.org/129/)
* [ClovaCall](https://github.com/clovaai/ClovaCall)
* [CommonVoice v11](https://commonvoice.mozilla.org/en/datasets)
* Google TTS data: [Javanese](https://www.openslr.org/41/), [Khmer](https://www.openslr.org/42/), [Nepali](https://www.openslr.org/43/), [Sundanese](https://www.openslr.org/44/), [South African Languages](https://www.openslr.org/32/), [Bengali Languages](https://www.openslr.org/37/)
* IISc-MILE: [Tamil](https://www.openslr.org/127/), [Kannada](https://www.openslr.org/126/)
* [Japanese Versatile Speech](https://sites.google.com/site/shinnosuketakamichi/research-topics/jvs_corpus)
* [Kokoro](https://github.com/kaiidams/Kokoro-Speech-Dataset)
* [Kosp2e](https://github.com/warnikchow/kosp2e)
* Media Speech: [Turkish Only](https://www.openslr.org/108/)
* [Multilingual LibriSpeech](https://www.openslr.org/94/)
* [Samrómur](https://www.openslr.org/128/)
* [THCHS-30](https://www.openslr.org/18/) and [THUYG-20](https://www.openslr.org/22/)
* [VoxLingua107](https://bark.phon.ioc.ee/voxlingua107/)
* [VoxPopuli](https://github.com/facebookresearch/voxpopuli/)
**Languages present that are not indexed by Hugging Face:** Asturian (ast), Basaa (bas), Cebuano (ceb), Central Kurdish/Sorani (ckb), Hakha Chin (cnh), Hawaiian (haw), Upper Sorbian (hsb), Kabyle (kab), Moksha (mdf), Meadow Mari (mhr), Hill Mari (mrj), Erzya (myv), Taiwanese Hokkien (nan-tw), Sursilvan (rm-sursilv), Vallader (rm-vallader), Sakha (sah), Santali (sat), Scots (sco), Saraiki (skr), Tigre (tig), Tok Pisin (tpi), Akuapem Twi (tw-akuapem), Asante Twi (tw-asante), Votic (vot), Waray (war), Cantonese (yue).
# Citing and Funding Information
```
@inproceedings{boito2024mhubert,
author={Marcely Zanon Boito and Vivek Iyer and Nikolaos Lagos and Laurent Besacier and Ioan Calapodescu},
title={{mHuBERT-147: A Compact Multilingual HuBERT Model}},
year={2024},
booktitle={Interspeech 2024},
}
```
<img src="https://cdn-uploads.huggingface.co/production/uploads/62262e19d36494a6f743a28d/HbzC1C-uHe25ewTy2wyoK.png" width=7% height=7%>
This is an output of the European Project UTTER (Unified Transcription and Translation for Extended Reality) funded by the European Union’s Horizon Europe Research and Innovation programme under grant agreement number 101070631.
For more information, please visit https://he-utter.eu/
NAVER LABS Europe: https://europe.naverlabs.com/ |
mrhunghd/41c28e2c-0cff-4303-957b-f763c2764195 | mrhunghd | 2025-01-23T08:32:23Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:25:01Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 41c28e2c-0cff-4303-957b-f763c2764195
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b1f09a8c91516b26_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1f09a8c91516b26_train_data.json
type:
field_instruction: input_field
field_output: output_field
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrhunghd/41c28e2c-0cff-4303-957b-f763c2764195
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b1f09a8c91516b26_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 925de7c1-8903-4426-93e7-8a873f15c09b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 925de7c1-8903-4426-93e7-8a873f15c09b
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 41c28e2c-0cff-4303-957b-f763c2764195
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.6543 | 0.9873 | 39 | 1.0232 |
| 3.9941 | 1.0127 | 40 | 1.0261 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
prxy5606/b1ba74b4-b098-4fba-8013-414a2ec3deb2 | prxy5606 | 2025-01-23T08:31:14Z | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T08:25:37Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b1ba74b4-b098-4fba-8013-414a2ec3deb2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- b1f09a8c91516b26_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1f09a8c91516b26_train_data.json
type:
field_instruction: input_field
field_output: output_field
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5606/b1ba74b4-b098-4fba-8013-414a2ec3deb2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/b1f09a8c91516b26_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 925de7c1-8903-4426-93e7-8a873f15c09b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 925de7c1-8903-4426-93e7-8a873f15c09b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b1ba74b4-b098-4fba-8013-414a2ec3deb2
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with defaults betas=(0.9, 0.999), epsilon=1e-08 and optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.6465 | 0.1 | 1 | 1.9418 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
chunminglim/trial | chunminglim | 2025-01-23T08:29:55Z | 21 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-23T08:27:52Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** chunminglim
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Finnnsansna/sofiaFLUXv3 | Finnnsansna | 2025-01-23T08:28:45Z | 12 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-23T07:46:13Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: sofiaFLUX
---
# Sofiafluxv3
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `sofiaFLUX` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline on the GPU.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Attach this LoRA on top of the base weights.
pipeline.load_lora_weights('Finnnsansna/sofiaFLUXv3', weight_name='lora.safetensors')
# Include the trigger word `sofiaFLUX` in your prompt.
image = pipeline('sofiaFLUX, your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
aleegis10/6a8ea6a9-612b-4150-b569-27168d41652c | aleegis10 | 2025-01-23T08:27:58Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:elyza/Llama-3-ELYZA-JP-8B",
"base_model:adapter:elyza/Llama-3-ELYZA-JP-8B",
"license:llama3",
"region:us"
] | null | 2025-01-23T07:49:05Z | ---
library_name: peft
license: llama3
base_model: elyza/Llama-3-ELYZA-JP-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6a8ea6a9-612b-4150-b569-27168d41652c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: elyza/Llama-3-ELYZA-JP-8B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 4e24cfc495d92a70_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4e24cfc495d92a70_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis10/6a8ea6a9-612b-4150-b569-27168d41652c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/4e24cfc495d92a70_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eeeb50de-a3fb-4016-8801-49021fd6c6b9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: eeeb50de-a3fb-4016-8801-49021fd6c6b9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6a8ea6a9-612b-4150-b569-27168d41652c
This model is a fine-tuned version of [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2197
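Because this artifact is a LoRA adapter rather than a full model, here is a minimal loading sketch (an illustration, not from the original card; it assumes the standard peft/transformers API and access to the base model):
```py
# Load the base model, then attach this LoRA adapter on top of it.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "elyza/Llama-3-ELYZA-JP-8B"
adapter_id = "aleegis10/6a8ea6a9-612b-4150-b569-27168d41652c"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Hypothetical prompt; the adapter was trained on question/context/answer triples.
inputs = tokenizer("質問に答えてください: 日本の首都はどこですか?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```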
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with defaults betas=(0.9, 0.999), epsilon=1e-08 and optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9928 | 0.0009 | 1 | 1.0830 |
| 0.2248 | 0.0440 | 50 | 0.3456 |
| 0.232 | 0.0880 | 100 | 0.2754 |
| 0.1364 | 0.1320 | 150 | 0.2401 |
| 0.2074 | 0.1761 | 200 | 0.2197 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso09/344aae17-8f2a-4d84-8922-406e07dd82bf | lesso09 | 2025-01-23T08:26:35Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:24:32Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 344aae17-8f2a-4d84-8922-406e07dd82bf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: true
chat_template: llama3
datasets:
- data_files:
- b1f09a8c91516b26_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1f09a8c91516b26_train_data.json
type:
field_instruction: input_field
field_output: output_field
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso09/344aae17-8f2a-4d84-8922-406e07dd82bf
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/b1f09a8c91516b26_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 925de7c1-8903-4426-93e7-8a873f15c09b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 925de7c1-8903-4426-93e7-8a873f15c09b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 344aae17-8f2a-4d84-8922-406e07dd82bf
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.1044 | 0.0253 | 1 | 1.9238 |
| 6.628 | 0.1266 | 5 | 1.5685 |
| 4.2088 | 0.2532 | 10 | 1.0954 |
| 3.9094 | 0.3797 | 15 | 1.0696 |
| 3.607 | 0.5063 | 20 | 1.0187 |
| 3.5662 | 0.6329 | 25 | 1.0137 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
albertmartinez/distilbert-multilingual-sdg-classification | albertmartinez | 2025-01-23T08:25:53Z | 42 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:albertmartinez/OSDG",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:quantized:distilbert/distilbert-base-multilingual-cased",
"doi:10.57967/hf/2737",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-17T06:45:50Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-multilingual-sdg-classification
results: []
datasets:
- albertmartinez/OSDG
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-multilingual-sdg-classification
This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on the [albertmartinez/OSDG](https://huggingface.co/datasets/albertmartinez/OSDG) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8076
- F1: 0.7706
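As a usage illustration (not part of the original card), a minimal inference sketch with the transformers pipeline API; the input sentence is a made-up example:
```py
from transformers import pipeline

# Loads the fine-tuned checkpoint and its tokenizer from the Hub.
classifier = pipeline(
    "text-classification",
    model="albertmartinez/distilbert-multilingual-sdg-classification",
)

# Returns the predicted SDG label together with its confidence score.
print(classifier("Expanding access to clean drinking water in rural communities."))
```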
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1669 | 1.0 | 538 | 1.2066 | 0.6552 |
| 1.0784 | 2.0 | 1076 | 0.9131 | 0.7414 |
| 0.8756 | 3.0 | 1614 | 0.8408 | 0.7614 |
| 0.7817 | 4.0 | 2152 | 0.8136 | 0.7688 |
| 0.7337 | 5.0 | 2690 | 0.8076 | 0.7706 |
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.1.2.post304
- Datasets 3.2.0
- Tokenizers 0.21.0 |
albertmartinez/bert-sdg-classification | albertmartinez | 2025-01-23T08:25:39Z | 39 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:albertmartinez/OSDG",
"base_model:google-bert/bert-base-uncased",
"base_model:quantized:google-bert/bert-base-uncased",
"doi:10.57967/hf/2732",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-17T18:34:09Z | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-sdg-classification
results: []
datasets:
- albertmartinez/OSDG
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-sdg-classification
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the [albertmartinez/OSDG](https://huggingface.co/datasets/albertmartinez/OSDG) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7055
- F1: 0.7980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.2299 | 1.0 | 538 | 1.0520 | 0.7118 |
| 0.9383 | 2.0 | 1076 | 0.7800 | 0.7794 |
| 0.7379 | 3.0 | 1614 | 0.7253 | 0.7947 |
| 0.6362 | 4.0 | 2152 | 0.7107 | 0.7965 |
| 0.5779 | 5.0 | 2690 | 0.7055 | 0.7980 |
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.1.2.post304
- Datasets 3.2.0
- Tokenizers 0.21.0 |
kk-aivio/dc96f283-b0c0-4d92-9267-9b01890824b6 | kk-aivio | 2025-01-23T08:24:28Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"region:us"
] | null | 2025-01-23T08:16:48Z | ---
library_name: peft
license: other
base_model: huggyllama/llama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dc96f283-b0c0-4d92-9267-9b01890824b6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: huggyllama/llama-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- aed51b8e2c089967_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aed51b8e2c089967_train_data.json
type:
field_input: instance_id
field_instruction: prompt_msg
field_output: truth
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/dc96f283-b0c0-4d92-9267-9b01890824b6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/aed51b8e2c089967_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6a8f76dd-7262-490a-905c-7b83c0f56891
wandb_project: Birthday-SN56-11-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6a8f76dd-7262-490a-905c-7b83c0f56891
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dc96f283-b0c0-4d92-9267-9b01890824b6
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 3 | nan |
| 0.0 | 0.0007 | 6 | nan |
| 0.0 | 0.0011 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/SJT-4B-v1.1-GGUF | mradermacher | 2025-01-23T08:22:49Z | 172 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"ja",
"base_model:Sakalti/SJT-4B-v1.1",
"base_model:quantized:Sakalti/SJT-4B-v1.1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-23T07:32:24Z | ---
base_model: Sakalti/SJT-4B-v1.1
language:
- en
- ja
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sakalti/SJT-4B-v1.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SJT-4B-v1.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
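For a concrete starting point (a sketch, not from the original README), the llama-cpp-python bindings can fetch a quant straight from this repository; this assumes a recent llama-cpp-python with huggingface-hub installed, and the filename matches the Q4_K_M row in the table below:
```py
# Minimal GGUF inference sketch with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/SJT-4B-v1.1-GGUF",
    filename="SJT-4B-v1.1.Q4_K_M.gguf",  # the "fast, recommended" quant below
)

out = llm("Q: Summarize what a static quant is. A:", max_tokens=64)
print(out["choices"][0]["text"])
```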
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-GGUF/resolve/main/SJT-4B-v1.1.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-GGUF/resolve/main/SJT-4B-v1.1.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-GGUF/resolve/main/SJT-4B-v1.1.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-GGUF/resolve/main/SJT-4B-v1.1.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-GGUF/resolve/main/SJT-4B-v1.1.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-GGUF/resolve/main/SJT-4B-v1.1.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-GGUF/resolve/main/SJT-4B-v1.1.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-GGUF/resolve/main/SJT-4B-v1.1.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-GGUF/resolve/main/SJT-4B-v1.1.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-GGUF/resolve/main/SJT-4B-v1.1.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-GGUF/resolve/main/SJT-4B-v1.1.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SJT-4B-v1.1-GGUF/resolve/main/SJT-4B-v1.1.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
questions you might have, or to request quantization of another model.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and for providing upgrades to my workstation, which enable
this work in my free time.
<!-- end -->
|
ketchup123/llama-2-7b-chat-pubmedqa-safeinstruct-num-samples-500-HF | ketchup123 | 2025-01-23T08:22:18Z | 113 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | null | 2025-01-23T08:21:48Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama-2-7b-chat-pubmedqa-safeinstruct-num-samples-500-HF
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-2-7b-chat-pubmedqa-safeinstruct-num-samples-500-HF
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 3407
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3 |
lesso12/8a716a62-4947-470e-8dac-916e84b2a1a2 | lesso12 | 2025-01-23T08:21:47Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codellama-7b",
"base_model:adapter:unsloth/codellama-7b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T07:30:35Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codellama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8a716a62-4947-470e-8dac-916e84b2a1a2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codellama-7b
bf16: true
chat_template: llama3
datasets:
- data_files:
- 7e8c233e95996edb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7e8c233e95996edb_train_data.json
type:
field_input: label
field_instruction: text
field_output: text-english
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso12/8a716a62-4947-470e-8dac-916e84b2a1a2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 2.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/7e8c233e95996edb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eb3b8dbf-21b2-4796-bedc-d035bdf3d717
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: eb3b8dbf-21b2-4796-bedc-d035bdf3d717
warmup_steps: 10
weight_decay: 0.0
xformers_attention: true
```
</details><br>
# 8a716a62-4947-470e-8dac-916e84b2a1a2
This model is a fine-tuned version of [unsloth/codellama-7b](https://huggingface.co/unsloth/codellama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0008 | 5 | nan |
| 0.0 | 0.0017 | 10 | nan |
| 0.0 | 0.0025 | 15 | nan |
| 0.0 | 0.0034 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhung01/75b5f503-a100-4211-b1b6-1ba9f1c5038a | nhung01 | 2025-01-23T08:21:09Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:adapter:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:18:34Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 75b5f503-a100-4211-b1b6-1ba9f1c5038a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m-deduped
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e3c1653b647a00a0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e3c1653b647a00a0_train_data.json
type:
field_input: context
field_instruction: question
field_output: justification
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/75b5f503-a100-4211-b1b6-1ba9f1c5038a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e3c1653b647a00a0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ed221ead-97e8-4057-b485-f8f04d02c1df
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ed221ead-97e8-4057-b485-f8f04d02c1df
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 75b5f503-a100-4211-b1b6-1ba9f1c5038a
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 31.0221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 132.2982 | 0.7346 | 200 | 31.0221 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
taopanda-1/8ee10f5e-4364-443a-a422-76e16c32ba9a | taopanda-1 | 2025-01-23T08:20:53Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:adapter:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:20:14Z | ---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: EleutherAI/pythia-70m-deduped
model-index:
- name: 8ee10f5e-4364-443a-a422-76e16c32ba9a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m-deduped
bf16: auto
dataset_prepared_path: null
datasets:
- data_files:
- e3c1653b647a00a0_train_data.json
ds_type: json
format: custom
path: e3c1653b647a00a0_train_data.json
type:
field: null
field_input: context
field_instruction: question
field_output: justification
field_system: null
format: null
no_input_format: null
system_format: '{system}'
system_prompt: ''
early_stopping_patience: null
evals_per_epoch: 2
gradient_accumulation_steps: 1
group_by_length: false
hub_model_id: taopanda-1/8ee10f5e-4364-443a-a422-76e16c32ba9a
learning_rate: 1.0e-05
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: true
lora_model_dir: null
lora_r: 16
lora_target_linear: null
lora_target_modules:
- query_key_value
micro_batch_size: 4
num_epochs: 1
output_dir: ./outputs/lora-alpaca-pythia/taopanda-1_ed221ead-97e8-4057-b485-f8f04d02c1df
resume_from_checkpoint: null
seed: 5096
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
tf32: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: fatcat87-taopanda
wandb_log_model: null
wandb_mode: online
wandb_name: taopanda-1_ed221ead-97e8-4057-b485-f8f04d02c1df
wandb_project: subnet56
wandb_runid: taopanda-1_ed221ead-97e8-4057-b485-f8f04d02c1df
wandb_watch: null
weight_decay: 0.1
```
</details><br>
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/fatcat87-taopanda/subnet56/runs/xkin534u)
# 8ee10f5e-4364-443a-a422-76e16c32ba9a
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 32.1315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 5096
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 33.3329 | 0.0114 | 1 | 32.1791 |
| 33.7261 | 0.5 | 44 | 32.1460 |
| 33.2233 | 1.0 | 88 | 32.1315 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
lhong4759/fa06b87e-3eb3-4b36-b00e-5dfdf87a5de5 | lhong4759 | 2025-01-23T08:20:31Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:adapter:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:18:38Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fa06b87e-3eb3-4b36-b00e-5dfdf87a5de5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m-deduped
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e3c1653b647a00a0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e3c1653b647a00a0_train_data.json
type:
field_input: context
field_instruction: question
field_output: justification
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lhong4759/fa06b87e-3eb3-4b36-b00e-5dfdf87a5de5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e3c1653b647a00a0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ed221ead-97e8-4057-b485-f8f04d02c1df
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ed221ead-97e8-4057-b485-f8f04d02c1df
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fa06b87e-3eb3-4b36-b00e-5dfdf87a5de5
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 30.7479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 132.4387 | 0.7346 | 200 | 30.7479 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhoxinh/d0c29fe1-504f-48a9-b54d-be8d6f3ed86f | nhoxinh | 2025-01-23T08:20:12Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:adapter:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:18:26Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d0c29fe1-504f-48a9-b54d-be8d6f3ed86f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m-deduped
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e3c1653b647a00a0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e3c1653b647a00a0_train_data.json
type:
field_input: context
field_instruction: question
field_output: justification
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/d0c29fe1-504f-48a9-b54d-be8d6f3ed86f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e3c1653b647a00a0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ed221ead-97e8-4057-b485-f8f04d02c1df
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ed221ead-97e8-4057-b485-f8f04d02c1df
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d0c29fe1-504f-48a9-b54d-be8d6f3ed86f
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 31.0420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 132.8683 | 0.7346 | 200 | 31.0420 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
datlaaaaaaa/3b645475-fcb9-43f7-b9fd-548b60f527c6 | datlaaaaaaa | 2025-01-23T08:19:55Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:adapter:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T08:18:27Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3b645475-fcb9-43f7-b9fd-548b60f527c6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m-deduped
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e3c1653b647a00a0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e3c1653b647a00a0_train_data.json
type:
field_input: context
field_instruction: question
field_output: justification
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/3b645475-fcb9-43f7-b9fd-548b60f527c6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e3c1653b647a00a0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ed221ead-97e8-4057-b485-f8f04d02c1df
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ed221ead-97e8-4057-b485-f8f04d02c1df
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 3b645475-fcb9-43f7-b9fd-548b60f527c6
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 30.8942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 132.5058 | 0.7346 | 200 | 30.8942 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shahiryar/crimson-agent | shahiryar | 2025-01-23T08:19:49Z | 64 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-08T13:18:46Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: shahiryar/crimson-agent
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# shahiryar/crimson-agent
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7756
- Train Accuracy: 0.5357
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
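A hedged inference sketch, assuming the TensorFlow weights load via `TFAutoModelForSequenceClassification`; the input sentence is hypothetical, and the label mapping is not documented in this card, so the output is a raw class index.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("shahiryar/crimson-agent")
model = TFAutoModelForSequenceClassification.from_pretrained("shahiryar/crimson-agent")

inputs = tokenizer("Example utterance to classify", return_tensors="tf")  # hypothetical input
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # raw class index; label names are not documented
```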
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 120, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 1.7756 | 0.5357 | 0 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.16.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
prxy5606/afa2f532-6fe9-4a3f-a92b-1d1ea42d6909 | prxy5606 | 2025-01-23T08:19:26Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:adapter:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T08:18:28Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: afa2f532-6fe9-4a3f-a92b-1d1ea42d6909
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m-deduped
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- e3c1653b647a00a0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e3c1653b647a00a0_train_data.json
type:
field_input: context
field_instruction: question
field_output: justification
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5606/afa2f532-6fe9-4a3f-a92b-1d1ea42d6909
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/e3c1653b647a00a0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ed221ead-97e8-4057-b485-f8f04d02c1df
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ed221ead-97e8-4057-b485-f8f04d02c1df
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# afa2f532-6fe9-4a3f-a92b-1d1ea42d6909
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3234
## Model description
More information needed
## Intended uses & limitations
More information needed
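A hedged sketch for producing a standalone checkpoint by merging the LoRA weights into the base model; `merge_and_unload` is a standard `peft` API, and the output directory name below is hypothetical.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m-deduped")
merged = PeftModel.from_pretrained(
    base, "prxy5606/afa2f532-6fe9-4a3f-a92b-1d1ea42d6909"
).merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("afa2f532-merged")  # hypothetical output directory
```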
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 122.6097 | 0.0147 | 1 | 31.9985 |
| 30.4 | 0.7326 | 50 | 8.0000 |
| 22.6874 | 1.4652 | 100 | 6.0734 |
| 20.4517 | 2.1978 | 150 | 5.5288 |
| 20.0385 | 2.9304 | 200 | 5.3234 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ryanlu522/Qwen2-VL-7B-Instruct-IQ4_NL-GGUF | ryanlu522 | 2025-01-23T08:19:03Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:quantized:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | image-text-to-text | 2025-01-23T08:18:41Z | ---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- llama-cpp
- gguf-my-repo
library_name: transformers
base_model: Qwen/Qwen2-VL-7B-Instruct
---
# ryanlu522/Qwen2-VL-7B-Instruct-IQ4_NL-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2-VL-7B-Instruct`](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ryanlu522/Qwen2-VL-7B-Instruct-IQ4_NL-GGUF --hf-file qwen2-vl-7b-instruct-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ryanlu522/Qwen2-VL-7B-Instruct-IQ4_NL-GGUF --hf-file qwen2-vl-7b-instruct-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ryanlu522/Qwen2-VL-7B-Instruct-IQ4_NL-GGUF --hf-file qwen2-vl-7b-instruct-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ryanlu522/Qwen2-VL-7B-Instruct-IQ4_NL-GGUF --hf-file qwen2-vl-7b-instruct-iq4_nl-imat.gguf -c 2048
```
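For Python users, a hedged equivalent via llama-cpp-python is sketched below; it assumes a recent release that provides `Llama.from_pretrained` (plus `huggingface-hub`), and it covers text-only usage of this GGUF file.

```python
from llama_cpp import Llama

# Downloads the GGUF file from the Hub and loads it with a 2048-token context.
llm = Llama.from_pretrained(
    repo_id="ryanlu522/Qwen2-VL-7B-Instruct-IQ4_NL-GGUF",
    filename="qwen2-vl-7b-instruct-iq4_nl-imat.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=32)
print(out["choices"][0]["text"])
```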
|
cvoffer/fd97ba81-368e-46cf-8a1c-2b0f854bbbf0 | cvoffer | 2025-01-23T08:19:00Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:adapter:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T08:18:34Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fd97ba81-368e-46cf-8a1c-2b0f854bbbf0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m-deduped
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e3c1653b647a00a0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e3c1653b647a00a0_train_data.json
type:
field_input: context
field_instruction: question
field_output: justification
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: cvoffer/fd97ba81-368e-46cf-8a1c-2b0f854bbbf0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/e3c1653b647a00a0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ed221ead-97e8-4057-b485-f8f04d02c1df
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ed221ead-97e8-4057-b485-f8f04d02c1df
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# fd97ba81-368e-46cf-8a1c-2b0f854bbbf0
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 9.1492
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0037 | 1 | 9.2899 |
| 27.1393 | 0.0184 | 5 | 9.2502 |
| 26.023 | 0.0367 | 10 | 9.2137 |
| 28.8907 | 0.0551 | 15 | 9.1622 |
| 29.285 | 0.0735 | 20 | 9.1845 |
| 32.8603 | 0.0918 | 25 | 9.0770 |
| 35.885 | 0.1102 | 30 | 9.1492 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thalllsssss/3697b3ef-cde9-4290-b701-9c74705548da | thalllsssss | 2025-01-23T08:16:44Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:jingyeom/seal3.1.6n_7b",
"base_model:adapter:jingyeom/seal3.1.6n_7b",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T07:54:50Z | ---
library_name: peft
base_model: jingyeom/seal3.1.6n_7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3697b3ef-cde9-4290-b701-9c74705548da
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: jingyeom/seal3.1.6n_7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4fb5aa4ebc7d0064_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4fb5aa4ebc7d0064_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thalllsssss/3697b3ef-cde9-4290-b701-9c74705548da
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/4fb5aa4ebc7d0064_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0cf7ab13-7fb6-4938-b313-c87703196b3e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0cf7ab13-7fb6-4938-b313-c87703196b3e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 3697b3ef-cde9-4290-b701-9c74705548da
This model is a fine-tuned version of [jingyeom/seal3.1.6n_7b](https://huggingface.co/jingyeom/seal3.1.6n_7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9808
## Model description
More information needed
## Intended uses & limitations
More information needed
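A small, hedged prompt-construction sketch: training samples were formatted as `'{instruction} {input}'` (or `'{instruction}'` when no input is present, per the config above), so inference prompts should follow the same shape; the example strings are hypothetical.

```python
def build_prompt(instruction: str, context: str = "") -> str:
    # Mirrors format: '{instruction} {input}' and no_input_format: '{instruction}'
    return f"{instruction} {context}".strip()

print(build_prompt("Summarize the passage.", "Some context text."))  # hypothetical example
```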
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8187 | 0.0390 | 200 | 1.9808 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sercetexam9/UIT-NO-PRExlnet-large-cased-finetuned | sercetexam9 | 2025-01-23T08:14:40Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"generated_from_trainer",
"base_model:xlnet/xlnet-large-cased",
"base_model:finetune:xlnet/xlnet-large-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-23T08:13:55Z | ---
library_name: transformers
license: mit
base_model: xlnet/xlnet-large-cased
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: UIT-NO-PRExlnet-large-cased-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UIT-NO-PRExlnet-large-cased-finetuned
This model is a fine-tuned version of [xlnet/xlnet-large-cased](https://huggingface.co/xlnet/xlnet-large-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6311
- F1: 0.7534
- Roc Auc: 0.8047
- Accuracy: 0.5018
## Model description
More information needed
## Intended uses & limitations
More information needed
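The reported F1/ROC-AUC/accuracy triplet suggests multi-label classification; a hedged inference sketch with sigmoid scores follows, where the 0.5 threshold is an assumption rather than something documented in this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "sercetexam9/UIT-NO-PRExlnet-large-cased-finetuned"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example sentence to tag", return_tensors="pt")  # hypothetical input
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)
print((probs > 0.5).nonzero(as_tuple=True)[1].tolist())  # assumed 0.5 cutoff
```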
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.6008 | 1.0 | 139 | 0.5907 | 0.0994 | 0.5061 | 0.1227 |
| 0.5594 | 2.0 | 278 | 0.5834 | 0.1435 | 0.5 | 0.1300 |
| 0.4458 | 3.0 | 417 | 0.3967 | 0.6474 | 0.7336 | 0.4007 |
| 0.3153 | 4.0 | 556 | 0.3647 | 0.7128 | 0.7775 | 0.4495 |
| 0.2474 | 5.0 | 695 | 0.3392 | 0.7382 | 0.7952 | 0.4693 |
| 0.1915 | 6.0 | 834 | 0.3702 | 0.7346 | 0.7980 | 0.5054 |
| 0.1194 | 7.0 | 973 | 0.4083 | 0.7340 | 0.7994 | 0.4982 |
| 0.0953 | 8.0 | 1112 | 0.4656 | 0.7507 | 0.8101 | 0.4910 |
| 0.0482 | 9.0 | 1251 | 0.5682 | 0.7438 | 0.7934 | 0.4838 |
| 0.0504 | 10.0 | 1390 | 0.5374 | 0.7419 | 0.8069 | 0.4729 |
| 0.0265 | 11.0 | 1529 | 0.6019 | 0.7408 | 0.8011 | 0.4838 |
| 0.0082 | 12.0 | 1668 | 0.6136 | 0.7429 | 0.8015 | 0.4874 |
| 0.0077 | 13.0 | 1807 | 0.6212 | 0.7461 | 0.8020 | 0.4982 |
| 0.0117 | 14.0 | 1946 | 0.6089 | 0.7519 | 0.8086 | 0.4928 |
| 0.0044 | 15.0 | 2085 | 0.6246 | 0.7508 | 0.8050 | 0.5 |
| 0.0041 | 16.0 | 2224 | 0.6382 | 0.7460 | 0.8005 | 0.4946 |
| 0.0024 | 17.0 | 2363 | 0.6333 | 0.7467 | 0.8011 | 0.5 |
| 0.0053 | 18.0 | 2502 | 0.6311 | 0.7534 | 0.8047 | 0.5018 |
| 0.0027 | 19.0 | 2641 | 0.6311 | 0.7508 | 0.8032 | 0.5036 |
| 0.0029 | 20.0 | 2780 | 0.6318 | 0.7520 | 0.8037 | 0.5054 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.21.0
|
Best000/767cb4c6-3ad3-46f4-b827-ca227a9648b9 | Best000 | 2025-01-23T08:14:37Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"region:us"
] | null | 2025-01-23T08:12:33Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 767cb4c6-3ad3-46f4-b827-ca227a9648b9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- aba4cbb8799260b0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aba4cbb8799260b0_train_data.json
type:
field_instruction: questions
field_output: context
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/767cb4c6-3ad3-46f4-b827-ca227a9648b9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/aba4cbb8799260b0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2f455866-9ba0-47c4-9984-af49b419b951
wandb_project: Birthday-SN56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2f455866-9ba0-47c4-9984-af49b419b951
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 767cb4c6-3ad3-46f4-b827-ca227a9648b9
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5717 | 0.0008 | 1 | nan |
| 2.1874 | 0.0024 | 3 | nan |
| 4.6303 | 0.0049 | 6 | nan |
| 3.7152 | 0.0073 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nadejdatarabukina/408e9676-bb62-4f66-a9c1-b070598bcea5 | nadejdatarabukina | 2025-01-23T08:09:06Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"license:gemma",
"region:us"
] | null | 2025-01-23T03:34:01Z | ---
library_name: peft
license: gemma
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 408e9676-bb62-4f66-a9c1-b070598bcea5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 170c6834dc7ec4fa_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/170c6834dc7ec4fa_train_data.json
type:
field_input: title
field_instruction: content
field_output: summary1
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: nadejdatarabukina/408e9676-bb62-4f66-a9c1-b070598bcea5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/170c6834dc7ec4fa_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ca8ff29d-9d37-4866-b211-3cbcc242f321
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ca8ff29d-9d37-4866-b211-3cbcc242f321
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 408e9676-bb62-4f66-a9c1-b070598bcea5
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5846
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 4.4164 |
| 4.4222 | 0.0001 | 5 | 4.0538 |
| 3.9655 | 0.0001 | 10 | 3.8064 |
| 3.6699 | 0.0002 | 15 | 3.6741 |
| 3.7247 | 0.0003 | 20 | 3.6176 |
| 3.655 | 0.0003 | 25 | 3.5898 |
| 3.6614 | 0.0004 | 30 | 3.5846 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
vmpsergio/a1508d1f-ab7b-4525-812b-3865a2d8a41e | vmpsergio | 2025-01-23T08:08:53Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"license:gemma",
"region:us"
] | null | 2025-01-23T03:33:31Z | ---
library_name: peft
license: gemma
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a1508d1f-ab7b-4525-812b-3865a2d8a41e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 170c6834dc7ec4fa_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/170c6834dc7ec4fa_train_data.json
type:
field_input: title
field_instruction: content
field_output: summary1
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: vmpsergio/a1508d1f-ab7b-4525-812b-3865a2d8a41e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/170c6834dc7ec4fa_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ca8ff29d-9d37-4866-b211-3cbcc242f321
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ca8ff29d-9d37-4866-b211-3cbcc242f321
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a1508d1f-ab7b-4525-812b-3865a2d8a41e
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 4.4164 |
| 4.423 | 0.0001 | 5 | 4.0566 |
| 3.9645 | 0.0001 | 10 | 3.8057 |
| 3.6697 | 0.0002 | 15 | 3.6749 |
| 3.7253 | 0.0003 | 20 | 3.6183 |
| 3.6549 | 0.0003 | 25 | 3.5909 |
| 3.6623 | 0.0004 | 30 | 3.5856 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik2987/a04c8b26-84ce-4163-b06a-fda53afae0bd | dimasik2987 | 2025-01-23T08:07:22Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T07:44:31Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a04c8b26-84ce-4163-b06a-fda53afae0bd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c54c4cfeb1403ba8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c54c4cfeb1403ba8_train_data.json
type:
field_instruction: hieroglyphs
field_output: translation
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: dimasik2987/a04c8b26-84ce-4163-b06a-fda53afae0bd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/c54c4cfeb1403ba8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6813de76-c54d-49f6-88c5-cfc3d6c7ec03
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6813de76-c54d-49f6-88c5-cfc3d6c7ec03
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# a04c8b26-84ce-4163-b06a-fda53afae0bd
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0013 | 1 | 3.2015 |
| 8.191 | 0.0067 | 5 | 2.4169 |
| 6.9339 | 0.0134 | 10 | 1.9922 |
| 6.4111 | 0.0201 | 15 | 1.8906 |
| 6.2029 | 0.0268 | 20 | 1.8688 |
| 6.4598 | 0.0335 | 25 | 1.8550 |
| 6.6 | 0.0402 | 30 | 1.8521 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
pylu5229/conditional-detr-resnet-50-uLED-obj-detect-test | pylu5229 | 2025-01-23T08:05:35Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"conditional_detr",
"object-detection",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/conditional-detr-resnet-50",
"base_model:finetune:microsoft/conditional-detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-01-23T07:22:13Z | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/conditional-detr-resnet-50
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: conditional-detr-resnet-50-uLED-obj-detect-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# conditional-detr-resnet-50-uLED-obj-detect-test
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0912
- Map: 0.9334
- Map 50: 0.9684
- Map 75: 0.9684
- Map Small: -1.0
- Map Medium: 0.9334
- Map Large: -1.0
- Mar 1: 0.0125
- Mar 10: 0.1259
- Mar 100: 0.9777
- Mar Small: -1.0
- Mar Medium: 0.9777
- Mar Large: -1.0
- Map Uled: 0.9334
- Mar 100 Uled: 0.9777
- Map Trash: -1.0
- Mar 100 Trash: -1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
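A hedged detection sketch via the generic `object-detection` pipeline; the image path is a placeholder, and the only class visible in this card's metrics is "uled".

```python
from transformers import pipeline

detector = pipeline(
    "object-detection",
    model="pylu5229/conditional-detr-resnet-50-uLED-obj-detect-test",
)
for det in detector("example.jpg"):  # placeholder image path
    print(det["label"], round(det["score"], 3), det["box"])
```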
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Uled | Mar 100 Uled | Map Trash | Mar 100 Trash |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:--------:|:------------:|:---------:|:-------------:|
| No log | 1.0 | 41 | 0.2460 | 0.7925 | 0.9619 | 0.9382 | -1.0 | 0.7925 | -1.0 | 0.0115 | 0.1133 | 0.8652 | -1.0 | 0.8652 | -1.0 | 0.7925 | 0.8652 | -1.0 | -1.0 |
| No log | 2.0 | 82 | 0.2123 | 0.8121 | 0.9671 | 0.9527 | -1.0 | 0.8121 | -1.0 | 0.0111 | 0.1125 | 0.8797 | -1.0 | 0.8797 | -1.0 | 0.8121 | 0.8797 | -1.0 | -1.0 |
| No log | 3.0 | 123 | 0.1597 | 0.8576 | 0.9645 | 0.963 | -1.0 | 0.8576 | -1.0 | 0.0118 | 0.1181 | 0.9217 | -1.0 | 0.9217 | -1.0 | 0.8576 | 0.9217 | -1.0 | -1.0 |
| No log | 4.0 | 164 | 0.1645 | 0.8532 | 0.9644 | 0.9606 | -1.0 | 0.8532 | -1.0 | 0.0118 | 0.1184 | 0.9174 | -1.0 | 0.9174 | -1.0 | 0.8532 | 0.9174 | -1.0 | -1.0 |
| No log | 5.0 | 205 | 0.2037 | 0.824 | 0.9632 | 0.9614 | -1.0 | 0.824 | -1.0 | 0.0115 | 0.1142 | 0.8826 | -1.0 | 0.8826 | -1.0 | 0.824 | 0.8826 | -1.0 | -1.0 |
| No log | 6.0 | 246 | 0.1342 | 0.8864 | 0.9672 | 0.9665 | -1.0 | 0.8864 | -1.0 | 0.0119 | 0.1213 | 0.9429 | -1.0 | 0.9429 | -1.0 | 0.8864 | 0.9429 | -1.0 | -1.0 |
| No log | 7.0 | 287 | 0.1365 | 0.8821 | 0.9677 | 0.9672 | -1.0 | 0.8821 | -1.0 | 0.0121 | 0.1218 | 0.9362 | -1.0 | 0.9362 | -1.0 | 0.8821 | 0.9362 | -1.0 | -1.0 |
| No log | 8.0 | 328 | 0.1470 | 0.872 | 0.9666 | 0.9662 | -1.0 | 0.872 | -1.0 | 0.0119 | 0.12 | 0.9326 | -1.0 | 0.9326 | -1.0 | 0.872 | 0.9326 | -1.0 | -1.0 |
| No log | 9.0 | 369 | 0.1783 | 0.8495 | 0.9678 | 0.9673 | -1.0 | 0.8495 | -1.0 | 0.0118 | 0.118 | 0.9017 | -1.0 | 0.9017 | -1.0 | 0.8495 | 0.9017 | -1.0 | -1.0 |
| No log | 10.0 | 410 | 0.1563 | 0.8676 | 0.9662 | 0.9643 | -1.0 | 0.8676 | -1.0 | 0.012 | 0.1203 | 0.9225 | -1.0 | 0.9225 | -1.0 | 0.8676 | 0.9225 | -1.0 | -1.0 |
| No log | 11.0 | 451 | 0.1458 | 0.8783 | 0.966 | 0.9658 | -1.0 | 0.8783 | -1.0 | 0.012 | 0.121 | 0.9321 | -1.0 | 0.9321 | -1.0 | 0.8783 | 0.9321 | -1.0 | -1.0 |
| No log | 12.0 | 492 | 0.1273 | 0.8939 | 0.9669 | 0.9667 | -1.0 | 0.8939 | -1.0 | 0.0123 | 0.1234 | 0.9462 | -1.0 | 0.9462 | -1.0 | 0.8939 | 0.9462 | -1.0 | -1.0 |
| 0.2348 | 13.0 | 533 | 0.1376 | 0.8862 | 0.9683 | 0.968 | -1.0 | 0.8862 | -1.0 | 0.0121 | 0.1217 | 0.9404 | -1.0 | 0.9404 | -1.0 | 0.8862 | 0.9404 | -1.0 | -1.0 |
| 0.2348 | 14.0 | 574 | 0.1338 | 0.8865 | 0.9669 | 0.9668 | -1.0 | 0.8865 | -1.0 | 0.0122 | 0.1222 | 0.9422 | -1.0 | 0.9422 | -1.0 | 0.8865 | 0.9422 | -1.0 | -1.0 |
| 0.2348 | 15.0 | 615 | 0.1258 | 0.8917 | 0.9685 | 0.9685 | -1.0 | 0.8917 | -1.0 | 0.012 | 0.1221 | 0.9454 | -1.0 | 0.9454 | -1.0 | 0.8917 | 0.9454 | -1.0 | -1.0 |
| 0.2348 | 16.0 | 656 | 0.1206 | 0.8998 | 0.9689 | 0.9689 | -1.0 | 0.8998 | -1.0 | 0.0123 | 0.1233 | 0.9524 | -1.0 | 0.9524 | -1.0 | 0.8998 | 0.9524 | -1.0 | -1.0 |
| 0.2348 | 17.0 | 697 | 0.1075 | 0.911 | 0.969 | 0.969 | -1.0 | 0.911 | -1.0 | 0.0123 | 0.1238 | 0.9612 | -1.0 | 0.9612 | -1.0 | 0.911 | 0.9612 | -1.0 | -1.0 |
| 0.2348 | 18.0 | 738 | 0.1084 | 0.9113 | 0.9692 | 0.9691 | -1.0 | 0.9113 | -1.0 | 0.0123 | 0.1237 | 0.9628 | -1.0 | 0.9628 | -1.0 | 0.9113 | 0.9628 | -1.0 | -1.0 |
| 0.2348 | 19.0 | 779 | 0.1104 | 0.91 | 0.9688 | 0.9688 | -1.0 | 0.91 | -1.0 | 0.0123 | 0.1236 | 0.9602 | -1.0 | 0.9602 | -1.0 | 0.91 | 0.9602 | -1.0 | -1.0 |
| 0.2348 | 20.0 | 820 | 0.1097 | 0.9103 | 0.9693 | 0.9693 | -1.0 | 0.9103 | -1.0 | 0.0123 | 0.1241 | 0.9616 | -1.0 | 0.9616 | -1.0 | 0.9103 | 0.9616 | -1.0 | -1.0 |
| 0.2348 | 21.0 | 861 | 0.1111 | 0.9106 | 0.9666 | 0.9665 | -1.0 | 0.9106 | -1.0 | 0.0123 | 0.1242 | 0.9624 | -1.0 | 0.9624 | -1.0 | 0.9106 | 0.9624 | -1.0 | -1.0 |
| 0.2348 | 22.0 | 902 | 0.1007 | 0.923 | 0.9667 | 0.9666 | -1.0 | 0.923 | -1.0 | 0.0125 | 0.1251 | 0.972 | -1.0 | 0.972 | -1.0 | 0.923 | 0.972 | -1.0 | -1.0 |
| 0.2348 | 23.0 | 943 | 0.1080 | 0.9103 | 0.9671 | 0.9671 | -1.0 | 0.9103 | -1.0 | 0.0123 | 0.1242 | 0.9612 | -1.0 | 0.9612 | -1.0 | 0.9103 | 0.9612 | -1.0 | -1.0 |
| 0.2348 | 24.0 | 984 | 0.0987 | 0.9197 | 0.967 | 0.967 | -1.0 | 0.9197 | -1.0 | 0.0124 | 0.1253 | 0.9697 | -1.0 | 0.9697 | -1.0 | 0.9197 | 0.9697 | -1.0 | -1.0 |
| 0.1648 | 25.0 | 1025 | 0.0979 | 0.9226 | 0.9675 | 0.9675 | -1.0 | 0.9226 | -1.0 | 0.0125 | 0.1253 | 0.9715 | -1.0 | 0.9715 | -1.0 | 0.9226 | 0.9715 | -1.0 | -1.0 |
| 0.1648 | 26.0 | 1066 | 0.0912 | 0.9334 | 0.9684 | 0.9684 | -1.0 | 0.9334 | -1.0 | 0.0125 | 0.1259 | 0.9777 | -1.0 | 0.9777 | -1.0 | 0.9334 | 0.9777 | -1.0 | -1.0 |
| 0.1648 | 27.0 | 1107 | 0.0926 | 0.9311 | 0.9682 | 0.9682 | -1.0 | 0.9311 | -1.0 | 0.0125 | 0.1258 | 0.9763 | -1.0 | 0.9763 | -1.0 | 0.9311 | 0.9763 | -1.0 | -1.0 |
| 0.1648 | 28.0 | 1148 | 0.0933 | 0.9301 | 0.9682 | 0.9681 | -1.0 | 0.9301 | -1.0 | 0.0125 | 0.1258 | 0.9756 | -1.0 | 0.9756 | -1.0 | 0.9301 | 0.9756 | -1.0 | -1.0 |
| 0.1648 | 29.0 | 1189 | 0.0937 | 0.9301 | 0.9682 | 0.9681 | -1.0 | 0.9301 | -1.0 | 0.0125 | 0.1259 | 0.9758 | -1.0 | 0.9758 | -1.0 | 0.9301 | 0.9758 | -1.0 | -1.0 |
| 0.1648 | 30.0 | 1230 | 0.0932 | 0.9311 | 0.9682 | 0.9681 | -1.0 | 0.9311 | -1.0 | 0.0125 | 0.126 | 0.9763 | -1.0 | 0.9763 | -1.0 | 0.9311 | 0.9763 | -1.0 | -1.0 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
sercetexam9/UIT-NO-PREPROCESSING-deberta-v3-large-finetuned | sercetexam9 | 2025-01-23T08:04:41Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-23T08:03:36Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: UIT-NO-PREPROCESSING-deberta-v3-large-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UIT-NO-PREPROCESSING-deberta-v3-large-finetuned
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5263
- F1: 0.7688
- Roc Auc: 0.8207
- Accuracy: 0.5199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.5077 | 1.0 | 139 | 0.4600 | 0.4594 | 0.6568 | 0.3357 |
| 0.3888 | 2.0 | 278 | 0.3953 | 0.6320 | 0.7310 | 0.4007 |
| 0.3306 | 3.0 | 417 | 0.3528 | 0.7181 | 0.7846 | 0.4838 |
| 0.1688 | 4.0 | 556 | 0.3831 | 0.7490 | 0.8098 | 0.4603 |
| 0.127 | 5.0 | 695 | 0.4009 | 0.7598 | 0.8160 | 0.5090 |
| 0.0984 | 6.0 | 834 | 0.4668 | 0.7282 | 0.7928 | 0.4892 |
| 0.0477 | 7.0 | 973 | 0.4952 | 0.7547 | 0.8093 | 0.5018 |
| 0.0293 | 8.0 | 1112 | 0.5263 | 0.7688 | 0.8207 | 0.5199 |
| 0.0205 | 9.0 | 1251 | 0.6005 | 0.7445 | 0.8044 | 0.4856 |
| 0.0202 | 10.0 | 1390 | 0.6518 | 0.7581 | 0.8079 | 0.4892 |
| 0.01 | 11.0 | 1529 | 0.6087 | 0.7662 | 0.8228 | 0.5162 |
| 0.0021 | 12.0 | 1668 | 0.6349 | 0.7584 | 0.8118 | 0.5090 |
| 0.0019 | 13.0 | 1807 | 0.6584 | 0.7567 | 0.8089 | 0.5126 |
| 0.0014 | 14.0 | 1946 | 0.6690 | 0.7608 | 0.8127 | 0.5072 |
| 0.0024 | 15.0 | 2085 | 0.6591 | 0.7637 | 0.8165 | 0.5108 |
| 0.0014 | 16.0 | 2224 | 0.6727 | 0.7632 | 0.8157 | 0.5162 |
| 0.0015 | 17.0 | 2363 | 0.6736 | 0.7619 | 0.8144 | 0.5144 |
| 0.0015 | 18.0 | 2502 | 0.6753 | 0.7641 | 0.8158 | 0.5199 |
| 0.002 | 19.0 | 2641 | 0.6768 | 0.7631 | 0.8151 | 0.5181 |
| 0.0014 | 20.0 | 2780 | 0.6769 | 0.7631 | 0.8151 | 0.5181 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.21.0
|
lesso15/75eae7cb-0736-4d0b-8711-07586d611dcc | lesso15 | 2025-01-23T08:03:42Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"base_model:adapter:EleutherAI/pythia-1b",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T07:59:02Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 75eae7cb-0736-4d0b-8711-07586d611dcc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-1b
bf16: true
chat_template: llama3
datasets:
- data_files:
- 295c95d886899e42_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/295c95d886899e42_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso15/75eae7cb-0736-4d0b-8711-07586d611dcc
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/295c95d886899e42_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2265eb03-3c11-4dde-ab58-14e90d80cd0e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2265eb03-3c11-4dde-ab58-14e90d80cd0e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 75eae7cb-0736-4d0b-8711-07586d611dcc
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
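These values mirror the axolotl config above. To reproduce a run like this, one minimal sketch (my assumption, not part of the original card; it presumes axolotl 0.4.1 and accelerate are installed and the YAML above is saved as `config.yaml`) is:
```py
# Hypothetical launch sketch (not from the original card): save the YAML
# config above as config.yaml, then start training via axolotl's CLI module.
import subprocess

subprocess.run(
    ["accelerate", "launch", "-m", "axolotl.cli.train", "config.yaml"],
    check=True,  # raise CalledProcessError if training exits non-zero
)
```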
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.1264 | 0.0017 | 1 | 2.5462 |
| 8.297 | 0.0086 | 5 | 2.5022 |
| 9.5278 | 0.0172 | 10 | 2.3152 |
| 9.1482 | 0.0257 | 15 | 2.2642 |
| 8.6922 | 0.0343 | 20 | 2.2074 |
| 8.9574 | 0.0429 | 25 | 2.1957 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
0x1202/335a5533-6330-45c7-81c3-6f914201490b | 0x1202 | 2025-01-23T08:03:29Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"base_model:adapter:EleutherAI/pythia-1b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T07:58:29Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 335a5533-6330-45c7-81c3-6f914201490b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-1b
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 295c95d886899e42_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/295c95d886899e42_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: 0x1202/335a5533-6330-45c7-81c3-6f914201490b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/295c95d886899e42_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2265eb03-3c11-4dde-ab58-14e90d80cd0e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2265eb03-3c11-4dde-ab58-14e90d80cd0e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 335a5533-6330-45c7-81c3-6f914201490b
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7466
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5 (defaults: betas=(0.9, 0.999), epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.2815 | 0.0069 | 1 | 2.6706 |
| 7.8787 | 0.3431 | 50 | 1.9610 |
| 7.0153 | 0.6861 | 100 | 1.8373 |
| 6.5346 | 1.0292 | 150 | 1.7711 |
| 5.8487 | 1.3722 | 200 | 1.7466 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Kawon/llama3.1-food-finetune_v13_r8 | Kawon | 2025-01-23T08:01:31Z | 9 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] | null | 2025-01-23T07:16:21Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
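Until the authors add official instructions, here is a minimal sketch based on the metadata above; it assumes this repository is a standard PEFT LoRA adapter for `meta-llama/Llama-3.1-8B-Instruct` (a gated model requiring Hub access), and the prompt is purely illustrative:
```py
# Minimal sketch (not from the model authors): load the base model and
# attach this repo's LoRA adapter with PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"
adapter_id = "Kawon/llama3.1-food-finetune_v13_r8"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Suggest a quick dinner recipe.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```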
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
douchebag/lora_model | douchebag | 2025-01-23T08:01:09Z | 58 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-23T07:31:04Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** douchebag
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Nerva1228/yingtao5 | Nerva1228 | 2025-01-23T08:00:50Z | 16 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-23T08:00:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: yingtao
---
# Yingtao5
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `yingtao` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/yingtao5', weight_name='lora.safetensors')
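# Include the trigger word `yingtao` in the prompt (see Trigger words above).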
image = pipeline('your prompt').images[0]
```
For more details, including how to weight, merge, and fuse LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
lesso11/ed646f55-8ec2-4f6d-b133-6a1e9c3ca9db | lesso11 | 2025-01-23T08:00:41Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T07:46:18Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ed646f55-8ec2-4f6d-b133-6a1e9c3ca9db
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: true
chat_template: llama3
datasets:
- data_files:
- c54c4cfeb1403ba8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c54c4cfeb1403ba8_train_data.json
type:
field_instruction: hieroglyphs
field_output: translation
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso11/ed646f55-8ec2-4f6d-b133-6a1e9c3ca9db
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/c54c4cfeb1403ba8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6813de76-c54d-49f6-88c5-cfc3d6c7ec03
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6813de76-c54d-49f6-88c5-cfc3d6c7ec03
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ed646f55-8ec2-4f6d-b133-6a1e9c3ca9db
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 17.2692 | 0.0007 | 1 | 4.8858 |
| 17.3642 | 0.0033 | 5 | 4.0327 |
| 12.288 | 0.0067 | 10 | 3.0785 |
| 11.3571 | 0.0100 | 15 | 2.9143 |
| 11.9615 | 0.0134 | 20 | 2.7820 |
| 12.4712 | 0.0167 | 25 | 2.7535 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik1987/6bdebade-e274-40b3-a62f-7caed136657a | dimasik1987 | 2025-01-23T08:00:27Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"base_model:adapter:EleutherAI/pythia-1b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T07:58:47Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6bdebade-e274-40b3-a62f-7caed136657a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 295c95d886899e42_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/295c95d886899e42_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik1987/6bdebade-e274-40b3-a62f-7caed136657a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/295c95d886899e42_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2265eb03-3c11-4dde-ab58-14e90d80cd0e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2265eb03-3c11-4dde-ab58-14e90d80cd0e
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 6bdebade-e274-40b3-a62f-7caed136657a
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5 (defaults: betas=(0.9, 0.999), epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0034 | 1 | 3.0316 |
| 9.6746 | 0.0172 | 5 | 2.8420 |
| 9.3766 | 0.0343 | 10 | 2.5671 |
| 9.0653 | 0.0515 | 15 | 2.4688 |
| 9.1153 | 0.0686 | 20 | 2.4100 |
| 8.6397 | 0.0858 | 25 | 2.3839 |
| 8.8737 | 0.1029 | 30 | 2.3847 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/llama-3-8b-Instruct-lora-hinglish-GGUF | mradermacher | 2025-01-23T08:00:05Z | 351 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:sudhir2016/llama-3-8b-Instruct-lora-hinglish",
"base_model:quantized:sudhir2016/llama-3-8b-Instruct-lora-hinglish",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-23T07:47:19Z | ---
base_model: sudhir2016/llama-3-8b-Instruct-lora-hinglish
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/sudhir2016/llama-3-8b-Instruct-lora-hinglish
<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static quants, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
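For a quick local test, one option (my suggestion, not part of the original card) is the `llama-cpp-python` bindings; the sketch below assumes you have downloaded one of the files listed in the table, e.g. the Q4_K_M quant:
```py
# Sketch using llama-cpp-python (pip install llama-cpp-python); assumes the
# Q4_K_M quant from this repo has been downloaded to the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-8b-Instruct-lora-hinglish.Q4_K_M.gguf",
    n_ctx=4096,  # context window
)
out = llm("Translate to Hinglish: How are you today?", max_tokens=64)
print(out["choices"][0]["text"])
```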
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-Instruct-lora-hinglish-GGUF/resolve/main/llama-3-8b-Instruct-lora-hinglish.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-Instruct-lora-hinglish-GGUF/resolve/main/llama-3-8b-Instruct-lora-hinglish.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-Instruct-lora-hinglish-GGUF/resolve/main/llama-3-8b-Instruct-lora-hinglish.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-Instruct-lora-hinglish-GGUF/resolve/main/llama-3-8b-Instruct-lora-hinglish.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-Instruct-lora-hinglish-GGUF/resolve/main/llama-3-8b-Instruct-lora-hinglish.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-Instruct-lora-hinglish-GGUF/resolve/main/llama-3-8b-Instruct-lora-hinglish.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-Instruct-lora-hinglish-GGUF/resolve/main/llama-3-8b-Instruct-lora-hinglish.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-Instruct-lora-hinglish-GGUF/resolve/main/llama-3-8b-Instruct-lora-hinglish.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-Instruct-lora-hinglish-GGUF/resolve/main/llama-3-8b-Instruct-lora-hinglish.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-Instruct-lora-hinglish-GGUF/resolve/main/llama-3-8b-Instruct-lora-hinglish.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-Instruct-lora-hinglish-GGUF/resolve/main/llama-3-8b-Instruct-lora-hinglish.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-Instruct-lora-hinglish-GGUF/resolve/main/llama-3-8b-Instruct-lora-hinglish.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Devyanshi3/en_pipeline | Devyanshi3 | 2025-01-23T07:59:45Z | 11 | 0 | spacy | [
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] | token-classification | 2024-04-22T05:49:41Z | ---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9783877242
- name: NER Recall
type: recall
value: 0.9648337596
- name: NER F Score
type: f_score
value: 0.9715634725
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.7.5,<3.8.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |
### Label Scheme
<details>
<summary>View label scheme (15 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `AWB`, `COMMODITY`, `DESTINATION`, `DIMENSIONS`, `GROSSWEIGHT`, `HSNCODE`, `INCOTERMS`, `INVOICE`, `MODE`, `ORIGIN`, `QUANTITY`, `SHIPMENTDATE`, `TEMPERATURE`, `VOLUMEWEIGHT`, `WEIGHT` |
</details>
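A minimal usage sketch (my addition; it assumes the pipeline has been installed as a package, or that a local path to it is passed to `spacy.load`, and the example sentence is invented):
```py
# Sketch: run the shipping-domain NER pipeline on an example sentence.
import spacy

nlp = spacy.load("en_pipeline")
doc = nlp("Ship 120 cartons of mangoes from Mumbai to Rotterdam, gross weight 950 kg.")
for ent in doc.ents:
    print(ent.text, "->", ent.label_)  # e.g. QUANTITY, ORIGIN, DESTINATION, GROSSWEIGHT
```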
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 97.16 |
| `ENTS_P` | 97.84 |
| `ENTS_R` | 96.48 |
| `TOK2VEC_LOSS` | 31298.39 |
| `NER_LOSS` | 137951.75 | |
mradermacher/Hercules-phi-2-GGUF | mradermacher | 2025-01-23T07:57:33Z | 231 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:Locutusque/hercules-v4.5",
"base_model:M4-ai/Hercules-phi-2",
"base_model:quantized:M4-ai/Hercules-phi-2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-01-23T07:47:19Z | ---
base_model: M4-ai/Hercules-phi-2
datasets:
- Locutusque/hercules-v4.5
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/M4-ai/Hercules-phi-2
<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static quants, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
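A quant can also be fetched programmatically; this sketch (my addition, not from the original card) pulls the Q4_K_M file named in the table below and loads it with `llama-cpp-python`:
```py
# Sketch: download a quant from the Hub, then load it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Hercules-phi-2-GGUF",
    filename="Hercules-phi-2.Q4_K_M.gguf",  # file name taken from the table below
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Explain gravity in one sentence.", max_tokens=48)["choices"][0]["text"])
```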
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hercules-phi-2-GGUF/resolve/main/Hercules-phi-2.Q2_K.gguf) | Q2_K | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Hercules-phi-2-GGUF/resolve/main/Hercules-phi-2.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hercules-phi-2-GGUF/resolve/main/Hercules-phi-2.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hercules-phi-2-GGUF/resolve/main/Hercules-phi-2.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hercules-phi-2-GGUF/resolve/main/Hercules-phi-2.Q3_K_L.gguf) | Q3_K_L | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hercules-phi-2-GGUF/resolve/main/Hercules-phi-2.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hercules-phi-2-GGUF/resolve/main/Hercules-phi-2.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hercules-phi-2-GGUF/resolve/main/Hercules-phi-2.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hercules-phi-2-GGUF/resolve/main/Hercules-phi-2.Q5_K_M.gguf) | Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Hercules-phi-2-GGUF/resolve/main/Hercules-phi-2.Q6_K.gguf) | Q6_K | 2.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hercules-phi-2-GGUF/resolve/main/Hercules-phi-2.Q8_0.gguf) | Q8_0 | 3.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Hercules-phi-2-GGUF/resolve/main/Hercules-phi-2.f16.gguf) | f16 | 5.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kk-aivio/dce6becb-65ef-40d9-bca9-6f431bf69ec7 | kk-aivio | 2025-01-23T07:57:10Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T07:55:10Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dce6becb-65ef-40d9-bca9-6f431bf69ec7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c54c4cfeb1403ba8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c54c4cfeb1403ba8_train_data.json
type:
field_instruction: hieroglyphs
field_output: translation
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/dce6becb-65ef-40d9-bca9-6f431bf69ec7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/c54c4cfeb1403ba8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6813de76-c54d-49f6-88c5-cfc3d6c7ec03
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6813de76-c54d-49f6-88c5-cfc3d6c7ec03
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dce6becb-65ef-40d9-bca9-6f431bf69ec7
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 17.4663 | 0.0007 | 1 | 4.9188 |
| 20.3848 | 0.0020 | 3 | 4.7695 |
| 17.3843 | 0.0040 | 6 | 3.7011 |
| 14.3025 | 0.0060 | 9 | 3.1414 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/AdamKasumovic_-_phi3-mini-4k-instruct-bactrian-x-af-100-percent-low-med-perplexity-8bits | RichardErkhov | 2025-01-23T07:55:41Z | 6 | 0 | null | [
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T07:53:31Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi3-mini-4k-instruct-bactrian-x-af-100-percent-low-med-perplexity - bnb 8bits
- Model creator: https://huggingface.co/AdamKasumovic/
- Original model: https://huggingface.co/AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-af-100-percent-low-med-perplexity/
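Since the weights were saved in bitsandbytes 8-bit form (per the repo tags), the checkpoint should load directly with transformers once `bitsandbytes` is installed; this is my assumption, not guidance from the quantizer:
```py
# Sketch: load the pre-quantized 8-bit checkpoint; from_pretrained picks up
# the saved quantization config automatically (CUDA GPU required).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/AdamKasumovic_-_phi3-mini-4k-instruct-bactrian-x-af-100-percent-low-med-perplexity-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```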
Original model description:
---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vermoney/f4a24b4c-e2e4-4c58-b7b8-5f743fe7666c | vermoney | 2025-01-23T07:55:17Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codellama-7b",
"base_model:adapter:unsloth/codellama-7b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-23T07:31:10Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codellama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f4a24b4c-e2e4-4c58-b7b8-5f743fe7666c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codellama-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7e8c233e95996edb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7e8c233e95996edb_train_data.json
type:
field_input: label
field_instruction: text
field_output: text-english
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: vermoney/f4a24b4c-e2e4-4c58-b7b8-5f743fe7666c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/7e8c233e95996edb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eb3b8dbf-21b2-4796-bedc-d035bdf3d717
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: eb3b8dbf-21b2-4796-bedc-d035bdf3d717
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# f4a24b4c-e2e4-4c58-b7b8-5f743fe7666c
This model is a fine-tuned version of [unsloth/codellama-7b](https://huggingface.co/unsloth/codellama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5 (defaults: betas=(0.9, 0.999), epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 0.0 | 0.0008 | 5 | nan |
| 0.0 | 0.0017 | 10 | nan |
| 0.0 | 0.0025 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mikekubi/task-1-Qwen-Qwen2-7B-Instruct | mikekubi | 2025-01-23T07:54:40Z | 280 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-7B-Instruct",
"region:us"
] | null | 2025-01-10T07:15:19Z | ---
base_model: Qwen/Qwen2-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
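One possible starting point, inferred from the metadata (base model `Qwen/Qwen2-7B-Instruct`, PEFT library) rather than provided by the authors: `AutoPeftModelForCausalLM` loads the base model and attaches this adapter in a single call.
```py
# Sketch, assuming this repo is a standard PEFT adapter for Qwen/Qwen2-7B-Instruct.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

repo = "mikekubi/task-1-Qwen-Qwen2-7B-Instruct"
model = AutoPeftModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct")
```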
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
mikekubi/task-1-google-gemma-7b-it | mikekubi | 2025-01-23T07:54:26Z | 2,165 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-7b-it",
"base_model:adapter:google/gemma-7b-it",
"region:us"
] | null | 2025-01-07T06:58:56Z | ---
base_model: google/gemma-7b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
RichardErkhov/alexrodpas_-_phi3-mini-4k-lora-pycode-18k-4bits | RichardErkhov | 2025-01-23T07:52:18Z | 8 | 0 | null | [
"safetensors",
"phi3",
"custom_code",
"arxiv:1910.09700",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-23T07:50:13Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
phi3-mini-4k-lora-pycode-18k - bnb 4bits
- Model creator: https://huggingface.co/alexrodpas/
- Original model: https://huggingface.co/alexrodpas/phi3-mini-4k-lora-pycode-18k/
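This 4-bit checkpoint should likewise load directly with transformers plus `bitsandbytes`; the sketch below is my assumption (including `trust_remote_code=True`, since the repo is tagged `custom_code`):
```py
# Sketch: load the pre-quantized 4-bit checkpoint and generate a short completion.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/alexrodpas_-_phi3-mini-4k-lora-pycode-18k-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", trust_remote_code=True)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```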
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|