modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-27 00:42:13) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 499 classes) | tags (sequence, 1 to 4.05k entries) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-27 00:40:00) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
kostiantynk-out/16019304-562e-4f8e-bc7c-fc09385eed3f | kostiantynk-out | 2025-01-29T06:46:33Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d",
"base_model:adapter:rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d",
"region:us"
] | null | 2025-01-29T06:15:00Z | ---
library_name: peft
base_model: rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 16019304-562e-4f8e-bc7c-fc09385eed3f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 29eee4dbaacb6194_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/29eee4dbaacb6194_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk-out/16019304-562e-4f8e-bc7c-fc09385eed3f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/29eee4dbaacb6194_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0e01ae65-216b-419b-8293-aa58408aef14
wandb_project: Mine-SN56-1-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0e01ae65-216b-419b-8293-aa58408aef14
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 16019304-562e-4f8e-bc7c-fc09385eed3f
This model is a fine-tuned version of [rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d](https://huggingface.co/rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2648
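The card does not include inference code; below is a minimal, hedged sketch of attaching this LoRA adapter to its base model with the `peft` and `transformers` versions listed under Framework versions. The prompt string is an illustrative placeholder following the `'{instruction} {input}'` format from the axolotl config above.
```py
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d"
adapter_id = "kostiantynk-out/16019304-562e-4f8e-bc7c-fc09385eed3f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(model, adapter_id)  # wrap the base model with the LoRA weights

# Placeholder prompt: the adapter was trained on question + context pairs.
prompt = "Does the treatment improve outcomes? <context goes here>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```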
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 13.6439 |
| 7.2237 | 0.0003 | 13 | 1.5586 |
| 1.502 | 0.0005 | 26 | 0.2557 |
| 0.4942 | 0.0008 | 39 | 0.2648 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thalllsssss/8521530e-949b-412d-a088-9b8575ff5f89 | thalllsssss | 2025-01-29T06:46:28Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T06:45:36Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8521530e-949b-412d-a088-9b8575ff5f89
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 47d54f36be91dd39_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/47d54f36be91dd39_train_data.json
type:
field_input: choices
field_instruction: question_eng
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thalllsssss/8521530e-949b-412d-a088-9b8575ff5f89
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/47d54f36be91dd39_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f1df40a9-a29a-4e64-9bf4-df4241b29729
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f1df40a9-a29a-4e64-9bf4-df4241b29729
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8521530e-949b-412d-a088-9b8575ff5f89
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 13
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7952 | 0.96 | 12 | 2.6245 |
| 4.6313 | 1.04 | 13 | 2.6001 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
maulikanalog/pareshv | maulikanalog | 2025-01-29T06:46:16Z | 77 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T06:36:18Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: pareshv in tailored Italian suit
output:
url: images/example_6p265ph4k.png
- text: pareshv in tailored Italian blue suit in office
output:
url: images/example_2cusg7yp8.png
- text: pareshv in tailored Italian suit
output:
url: images/example_zn0vcd9hd.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: pareshv
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# pareshv
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `pareshv` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/maulikanalog/pareshv/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('maulikanalog/pareshv', weight_name='pareshv.safetensors')  # repo id from this card; weight filename assumed
image = pipeline('A person in a bustling cafe pareshv').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
nghiatrannnnnn/9907a2d9-244d-4bc1-a282-1cef43daf6db | nghiatrannnnnn | 2025-01-29T06:45:53Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T06:45:23Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9907a2d9-244d-4bc1-a282-1cef43daf6db
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 47d54f36be91dd39_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/47d54f36be91dd39_train_data.json
type:
field_input: choices
field_instruction: question_eng
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nghiatrannnnnn/9907a2d9-244d-4bc1-a282-1cef43daf6db
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/47d54f36be91dd39_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f1df40a9-a29a-4e64-9bf4-df4241b29729
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f1df40a9-a29a-4e64-9bf4-df4241b29729
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9907a2d9-244d-4bc1-a282-1cef43daf6db
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 13
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7951 | 0.96 | 12 | 2.5965 |
| 4.5822 | 1.04 | 13 | 2.5733 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrferr3t/04e5eaac-b4b8-4b72-945f-311b88a7763e | mrferr3t | 2025-01-29T06:43:42Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T06:39:52Z | ---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 04e5eaac-b4b8-4b72-945f-311b88a7763e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 212cea8e8d3699da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/212cea8e8d3699da_train_data.json
type:
field_input: doc
field_instruction: original_text
field_output: edited_summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/04e5eaac-b4b8-4b72-945f-311b88a7763e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 9
micro_batch_size: 2
mlflow_experiment_name: /tmp/212cea8e8d3699da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b1daaa76-ce91-488a-8876-44fa4641b938
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b1daaa76-ce91-488a-8876-44fa4641b938
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 04e5eaac-b4b8-4b72-945f-311b88a7763e
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co/llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.7639 | 0.0021 | 1 | 11.7636 |
| 11.771 | 0.0063 | 3 | 11.7635 |
| 11.763 | 0.0125 | 6 | 11.7635 |
| 11.7648 | 0.0188 | 9 | 11.7634 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
msyukorai/DeepSeek-R1-Distill-Llama-8B-Q4_0-GGUF | msyukorai | 2025-01-29T06:41:20Z | 295 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-29T05:05:31Z | ---
license: mit
library_name: transformers
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
tags:
- llama-cpp
- gguf-my-repo
---
# msyukorai/DeepSeek-R1-Distill-Llama-8B-Q4_0-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-Distill-Llama-8B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo msyukorai/DeepSeek-R1-Distill-Llama-8B-Q4_0-GGUF --hf-file deepseek-r1-distill-llama-8b-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo msyukorai/DeepSeek-R1-Distill-Llama-8B-Q4_0-GGUF --hf-file deepseek-r1-distill-llama-8b-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo msyukorai/DeepSeek-R1-Distill-Llama-8B-Q4_0-GGUF --hf-file deepseek-r1-distill-llama-8b-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo msyukorai/DeepSeek-R1-Distill-Llama-8B-Q4_0-GGUF --hf-file deepseek-r1-distill-llama-8b-q4_0.gguf -c 2048
```
|
Theros/Qwen2.5-ColdBrew-R1-test4 | Theros | 2025-01-29T06:40:27Z | 46 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Theros/Qwen2.5-ColdBrew-R1-test2",
"base_model:merge:Theros/Qwen2.5-ColdBrew-R1-test2",
"base_model:bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2",
"base_model:merge:bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T06:35:09Z | ---
base_model:
- bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2
- Theros/Qwen2.5-ColdBrew-R1-test2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
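For intuition, the sketch below shows the spherical interpolation applied per weight tensor; it is an illustrative simplification, not mergekit's exact implementation (which also applies the per-layer `t` schedule in the configuration below).
```py
import torch

def slerp(t: float, w1: torch.Tensor, w2: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate two weight tensors (illustrative only)."""
    v1, v2 = w1.flatten().float(), w2.flatten().float()
    # Angle between the two (normalized) weight vectors.
    dot = torch.clamp(torch.dot(v1 / (v1.norm() + eps), v2 / (v2.norm() + eps)), -1.0, 1.0)
    theta = torch.acos(dot)
    if theta < eps:  # nearly colinear: fall back to linear interpolation
        out = (1 - t) * v1 + t * v2
    else:
        out = (torch.sin((1 - t) * theta) * v1 + torch.sin(t * theta) * v2) / torch.sin(theta)
    return out.reshape(w1.shape).to(w1.dtype)
```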
### Models Merged
The following models were included in the merge:
* [bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2](https://huggingface.co/bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2)
* [Theros/Qwen2.5-ColdBrew-R1-test2](https://huggingface.co/Theros/Qwen2.5-ColdBrew-R1-test2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Theros/Qwen2.5-ColdBrew-R1-test2
layer_range: [0, 28]
- model: bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2
layer_range: [0, 28]
merge_method: slerp
base_model: Theros/Qwen2.5-ColdBrew-R1-test2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
tokenizer_source: Theros/Qwen2.5-ColdBrew-R1-test2
```
|
Prisma-Multimodal/imagenet-sweep-vanilla-x64-CLS_8-hook_resid_post-635.018737792969-76 | Prisma-Multimodal | 2025-01-29T06:40:24Z | 19 | 0 | null | [
"region:us"
] | null | 2025-01-29T06:40:14Z | # CLIP Sparse Autoencoder Checkpoint
This model is a sparse autoencoder trained on CLIP's internal representations.
## Model Details
### Architecture
- **Layer**: 8
- **Layer Type**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: True
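For context, a vanilla sparse autoencoder with these dimensions has the structure sketched below. This is an illustrative reimplementation (parameter names are assumptions), not the code needed to load this checkpoint; use the training codebase referenced in the wandb run for that.
```py
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Illustrative vanilla SAE: 768 -> 49152 -> 768 (expansion factor 64)."""

    def __init__(self, d_in: int = 768, expansion_factor: int = 64):
        super().__init__()
        d_sae = d_in * expansion_factor  # dictionary size: 49152
        self.W_enc = nn.Parameter(nn.init.kaiming_uniform_(torch.empty(d_in, d_sae)))
        self.W_dec = nn.Parameter(nn.init.kaiming_uniform_(torch.empty(d_sae, d_in)))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_in))

    def forward(self, x: torch.Tensor):
        # x: CLS-token residual-stream activations, shape (batch, 768)
        acts = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)  # sparse feature activations
        recon = acts @ self.W_dec + self.b_dec                         # reconstruction of x
        return recon, acts

# The L0 metric reported below is the mean number of non-zero entries in `acts` per input.
```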
### Training
- **Training Images**: 1298432
- **Learning Rate**: 0.0028
- **L1 Coefficient**: 0.0000
- **Batch Size**: 4096
- **Context Size**: 1
## Performance Metrics
### Sparsity
- **L0 (Active Features)**: 635.0187
- **Dead Features**: 0
- **Mean Passes Since Fired**: 45.8548
### Reconstruction
- **Explained Variance**: 0.7672
- **Explained Variance Std**: 0.2072
- **MSE Loss**: 0.0015
- **L1 Loss**: 230.6383
- **Overall Loss**: 0.0015
## Training Details
- **Training Duration**: 360 seconds
- **Final Learning Rate**: 0.0000
- **Warm Up Steps**: 200
- **Gradient Clipping**: 1
## Additional Information
- **Original Checkpoint Path**: /network/scratch/p/praneet.suresh/imgnet_checkpoints/c0dcb7e7-tinyclip_sae_16_hyperparam_sweep_lr/n_images_1302528.pt
- **Wandb Run**: https://wandb.ai/perceptual-alignment/vanilla-imagenet-CLS_only-sweep/runs/ii5o7h2h
- **Random Seed**: 42
|
havinash-ai/33956833-75bb-42d8-845e-a9efc4b76978 | havinash-ai | 2025-01-29T06:40:16Z | 8 | 0 | peft | [
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-01-29T06:35:12Z | ---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 33956833-75bb-42d8-845e-a9efc4b76978
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f65209fd2b79f576_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f65209fd2b79f576_train_data.json
type:
field_instruction: text
field_output: code
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/33956833-75bb-42d8-845e-a9efc4b76978
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/f65209fd2b79f576_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7fba0349-cbce-4a47-81c7-be27ce53fcc2
wandb_project: Birthday-SN56-9-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7fba0349-cbce-4a47-81c7-be27ce53fcc2
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 33956833-75bb-42d8-845e-a9efc4b76978
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.7522 |
| 9.7216 | 0.0021 | 13 | 0.6302 |
| 6.0455 | 0.0041 | 26 | 0.4365 |
| 4.0591 | 0.0062 | 39 | 0.3814 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Prisma-Multimodal/imagenet-sweep-vanilla-x64-CLS_7-hook_resid_post-492.959381103516-88 | Prisma-Multimodal | 2025-01-29T06:40:13Z | 12 | 0 | null | [
"region:us"
] | null | 2025-01-29T06:40:05Z | # CLIP Sparse Autoencoder Checkpoint
This model is a sparse autoencoder trained on CLIP's internal representations.
## Model Details
### Architecture
- **Layer**: 7
- **Layer Type**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: True
### Training
- **Training Images**: 1298432
- **Learning Rate**: 0.0036
- **L1 Coefficient**: 0.0000
- **Batch Size**: 4096
- **Context Size**: 1
## Performance Metrics
### Sparsity
- **L0 (Active Features)**: 492.9594
- **Dead Features**: 0
- **Mean Passes Since Fired**: 121.3178
### Reconstruction
- **Explained Variance**: 0.8836
- **Explained Variance Std**: 0.0215
- **MSE Loss**: 0.0005
- **L1 Loss**: 215.2193
- **Overall Loss**: 0.0010
## Training Details
- **Training Duration**: 252 seconds
- **Final Learning Rate**: 0.0000
- **Warm Up Steps**: 200
- **Gradient Clipping**: 1
## Additional Information
- **Original Checkpoint Path**: /network/scratch/p/praneet.suresh/imgnet_checkpoints/21aa4c67-tinyclip_sae_16_hyperparam_sweep_lr/n_images_1302528.pt
- **Wandb Run**: https://wandb.ai/perceptual-alignment/vanilla-imagenet-CLS_only-sweep/runs/5tdstmwv
- **Random Seed**: 42
|
Prisma-Multimodal/imagenet-sweep-vanilla-x64-CLS_6-hook_resid_post-430.556243896484-92 | Prisma-Multimodal | 2025-01-29T06:40:04Z | 13 | 0 | null | [
"region:us"
] | null | 2025-01-29T06:39:53Z | # CLIP Sparse Autoencoder Checkpoint
This model is a sparse autoencoder trained on CLIP's internal representations.
## Model Details
### Architecture
- **Layer**: 6
- **Layer Type**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: True
### Training
- **Training Images**: 1298432
- **Learning Rate**: 0.0061
- **L1 Coefficient**: 0.0000
- **Batch Size**: 4096
- **Context Size**: 1
## Performance Metrics
### Sparsity
- **L0 (Active Features)**: 430.5562
- **Dead Features**: 0
- **Mean Passes Since Fired**: 179.1497
### Reconstruction
- **Explained Variance**: 0.9292
- **Explained Variance Std**: 0.0209
- **MSE Loss**: 0.0003
- **L1 Loss**: 342.2079
- **Overall Loss**: 0.0003
## Training Details
- **Training Duration**: 254 seconds
- **Final Learning Rate**: 0.0000
- **Warm Up Steps**: 200
- **Gradient Clipping**: 1
## Additional Information
- **Original Checkpoint Path**: /network/scratch/p/praneet.suresh/imgnet_checkpoints/a4f2874e-tinyclip_sae_16_hyperparam_sweep_lr/n_images_1302528.pt
- **Wandb Run**: https://wandb.ai/perceptual-alignment/vanilla-imagenet-CLS_only-sweep/runs/lqwere3b
- **Random Seed**: 42
|
Prisma-Multimodal/imagenet-sweep-vanilla-x64-CLS_4-hook_resid_post-682.543762207031-95 | Prisma-Multimodal | 2025-01-29T06:39:43Z | 13 | 0 | null | [
"region:us"
] | null | 2025-01-29T06:39:34Z | # CLIP Sparse Autoencoder Checkpoint
This model is a sparse autoencoder trained on CLIP's internal representations.
## Model Details
### Architecture
- **Layer**: 4
- **Layer Type**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: True
### Training
- **Training Images**: 1298432
- **Learning Rate**: 0.0076
- **L1 Coefficient**: 0.0000
- **Batch Size**: 4096
- **Context Size**: 1
## Performance Metrics
### Sparsity
- **L0 (Active Features)**: 682.5438
- **Dead Features**: 0
- **Mean Passes Since Fired**: 232.3228
### Reconstruction
- **Explained Variance**: 0.9544
- **Explained Variance Std**: 0.0125
- **MSE Loss**: 0.0001
- **L1 Loss**: 318.7141
- **Overall Loss**: 0.0001
## Training Details
- **Training Duration**: 249 seconds
- **Final Learning Rate**: 0.0000
- **Warm Up Steps**: 200
- **Gradient Clipping**: 1
## Additional Information
- **Original Checkpoint Path**: /network/scratch/p/praneet.suresh/imgnet_checkpoints/f2bb5300-tinyclip_sae_16_hyperparam_sweep_lr/n_images_1302528.pt
- **Wandb Run**: https://wandb.ai/perceptual-alignment/vanilla-imagenet-CLS_only-sweep/runs/9qbjy580
- **Random Seed**: 42
|
Prisma-Multimodal/imagenet-sweep-vanilla-x64-CLS_2-hook_resid_post-711.121887207031-96 | Prisma-Multimodal | 2025-01-29T06:39:24Z | 12 | 0 | null | [
"region:us"
] | null | 2025-01-29T06:39:11Z | # CLIP Sparse Autoencoder Checkpoint
This model is a sparse autoencoder trained on CLIP's internal representations.
## Model Details
### Architecture
- **Layer**: 2
- **Layer Type**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: True
### Training
- **Training Images**: 1298432
- **Learning Rate**: 0.0153
- **L1 Coefficient**: 0.0000
- **Batch Size**: 4096
- **Context Size**: 1
## Performance Metrics
### Sparsity
- **L0 (Active Features)**: 711.1219
- **Dead Features**: 0
- **Mean Passes Since Fired**: 297.1984
### Reconstruction
- **Explained Variance**: 0.9622
- **Explained Variance Std**: 0.0124
- **MSE Loss**: 0.0001
- **L1 Loss**: 139.1979
- **Overall Loss**: 0.0001
## Training Details
- **Training Duration**: 243 seconds
- **Final Learning Rate**: 0.0000
- **Warm Up Steps**: 200
- **Gradient Clipping**: 1
## Additional Information
- **Original Checkpoint Path**: /network/scratch/p/praneet.suresh/imgnet_checkpoints/61879000-tinyclip_sae_16_hyperparam_sweep_lr/n_images_1302528.pt
- **Wandb Run**: https://wandb.ai/perceptual-alignment/vanilla-imagenet-CLS_only-sweep/runs/exkzappo
- **Random Seed**: 42
|
Prisma-Multimodal/imagenet-sweep-vanilla-x64-CLS_0-hook_resid_post-936.799987792969-82 | Prisma-Multimodal | 2025-01-29T06:38:59Z | 24 | 0 | null | [
"region:us"
] | null | 2025-01-29T06:38:48Z | # CLIP Sparse Autoencoder Checkpoint
This model is a sparse autoencoder trained on CLIP's internal representations.
## Model Details
### Architecture
- **Layer**: 0
- **Layer Type**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: True
### Training
- **Training Images**: 1298432
- **Learning Rate**: 0.0071
- **L1 Coefficient**: 0.0000
- **Batch Size**: 4096
- **Context Size**: 1
## Performance Metrics
### Sparsity
- **L0 (Active Features)**: 936.8000
- **Dead Features**: 0
- **Mean Passes Since Fired**: 290.6174
### Reconstruction
- **Explained Variance**: 0.8231
- **Explained Variance Std**: 0.1048
- **MSE Loss**: 0.0001
- **L1 Loss**: 223.2854
- **Overall Loss**: 0.0001
## Training Details
- **Training Duration**: 337 seconds
- **Final Learning Rate**: 0.0000
- **Warm Up Steps**: 200
- **Gradient Clipping**: 1
## Additional Information
- **Original Checkpoint Path**: /network/scratch/p/praneet.suresh/imgnet_checkpoints/9f3da34e-tinyclip_sae_16_hyperparam_sweep_lr/n_images_1302528.pt
- **Wandb Run**: https://wandb.ai/perceptual-alignment/vanilla-imagenet-CLS_only-sweep/runs/z7oj0h70
- **Random Seed**: 42
|
prxy5604/b07c7818-1c3f-4b80-a165-6bd56a5f1494 | prxy5604 | 2025-01-29T06:37:56Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T06:25:27Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b07c7818-1c3f-4b80-a165-6bd56a5f1494
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- b08f3dca86f2cb9d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b08f3dca86f2cb9d_train_data.json
type:
field_input: input
field_instruction: task
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/b07c7818-1c3f-4b80-a165-6bd56a5f1494
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/b08f3dca86f2cb9d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2e7e6af3-0874-40bc-9012-038990c5f193
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2e7e6af3-0874-40bc-9012-038990c5f193
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b07c7818-1c3f-4b80-a165-6bd56a5f1494
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3405
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999), epsilon=1e-08, and optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.484 | 0.0018 | 1 | 5.8152 |
| 3.6986 | 0.0908 | 50 | 2.2620 |
| 3.3932 | 0.1816 | 100 | 1.8828 |
| 2.2973 | 0.2724 | 150 | 1.4752 |
| 2.5491 | 0.3631 | 200 | 1.3405 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ymoslem/ModernBERT-base-qe-v1 | ymoslem | 2025-01-29T06:35:38Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"modernbert",
"text-classification",
"quality-estimation",
"regression",
"generated_from_trainer",
"multilingual",
"bn",
"cs",
"de",
"en",
"et",
"fi",
"fr",
"gu",
"ha",
"hi",
"is",
"ja",
"kk",
"km",
"lt",
"lv",
"pl",
"ps",
"ru",
"ta",
"tr",
"uk",
"xh",
"zh",
"zu",
"dataset:ymoslem/tokenized-wmt-da-human-evaluation",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-28T16:51:54Z | ---
library_name: transformers
language:
- multilingual
- bn
- cs
- de
- en
- et
- fi
- fr
- gu
- ha
- hi
- is
- ja
- kk
- km
- lt
- lv
- pl
- ps
- ru
- ta
- tr
- uk
- xh
- zh
- zu
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- quality-estimation
- regression
- generated_from_trainer
datasets:
- ymoslem/tokenized-wmt-da-human-evaluation
model-index:
- name: Quality Estimation for Machine Translation
results:
- task:
type: regression
dataset:
name: ymoslem/wmt-da-human-evaluation-long-context
type: QE
metrics:
- name: Pearson
type: Pearson Correlation
value: 0.4465
- name: MAE
type: Mean Absolute Error
value: 0.126
- name: RMSE
type: Root Mean Squared Error
value: 0.1623
- name: R-R2
type: R-Squared
value: 0.0801
- task:
type: regression
dataset:
name: ymoslem/wmt-da-human-evaluation
type: QE
metrics:
- name: Pearson
type: Pearson Correlation
value:
- name: MAE
type: Mean Absolute Error
value:
- name: RMSE
type: Root Mean Squared Error
value:
- name: R-R2
type: R-Squared
value:
metrics:
- pearsonr
- mae
- r_squared
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Quality Estimation for Machine Translation
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the ymoslem/tokenized-wmt-da-human-evaluation dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0571
## Model description
This model is for reference-free, sentence level quality estimation (QE) of machine translation (MT) systems.
The long-context / document-level model can be found at: [ModernBERT-base-long-context-qe-v1](https://huggingface.co/ymoslem/ModernBERT-base-long-context-qe-v1),
which is trained on a long-context / document-level QE dataset [ymoslem/wmt-da-human-evaluation-long-context](https://huggingface.co/datasets/ymoslem/wmt-da-human-evaluation-long-context)
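A minimal inference sketch follows. Note that the exact way the source sentence and its machine translation are joined into one input is an assumption here (they are encoded as a sentence pair); consult the training dataset's preprocessing if scores look off.
```py
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ymoslem/ModernBERT-base-qe-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

src = "Das Wetter ist heute schön."      # source sentence
mt = "The weather is beautiful today."   # machine translation to score

inputs = tokenizer(src, mt, truncation=True, return_tensors="pt")  # sentence-pair encoding (assumed format)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # regression head: one quality score
print(score)
```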
## Training and evaluation data
This model is trained on the sentence-level quality estimation dataset: [ymoslem/wmt-da-human-evaluation](https://huggingface.co/datasets/ymoslem/wmt-da-human-evaluation)
## Training procedure
This version of the model uses the tokenizer's full maximum length (`tokenizer.model_max_length=8192`),
but it is still trained on the sentence-level QE dataset [ymoslem/wmt-da-human-evaluation](https://huggingface.co/datasets/ymoslem/wmt-da-human-evaluation).
The long-context / document-level model can be found at: [ModernBERT-base-long-context-qe-v1](https://huggingface.co/ymoslem/ModernBERT-base-long-context-qe-v1),
which is trained on a long-context / document-level QE dataset [ymoslem/wmt-da-human-evaluation-long-context](https://huggingface.co/datasets/ymoslem/wmt-da-human-evaluation-long-context)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.0686 | 0.1004 | 1000 | 0.0712 |
| 0.0652 | 0.2007 | 2000 | 0.0687 |
| 0.0648 | 0.3011 | 3000 | 0.0623 |
| 0.0609 | 0.4015 | 4000 | 0.0600 |
| 0.0585 | 0.5019 | 5000 | 0.0603 |
| 0.0588 | 0.6022 | 6000 | 0.0589 |
| 0.0592 | 0.7026 | 7000 | 0.0581 |
| 0.0585 | 0.8030 | 8000 | 0.0574 |
| 0.0588 | 0.9033 | 9000 | 0.0572 |
| 0.0563 | 1.0037 | 10000 | 0.0571 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.4.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
beingbatman/CTMAE-P2-V2-S1 | beingbatman | 2025-01-29T06:33:02Z | 49 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-large-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-large-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-01-29T04:10:18Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-large-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: CTMAE-P2-V2-S1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CTMAE-P2-V2-S1
This model is a fine-tuned version of [MCG-NJU/videomae-large-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-large-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4718
- Accuracy: 0.8261
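No usage example is provided in the card; the snippet below is a hedged sketch of clip-level inference, assuming the image processor configuration was pushed alongside the checkpoint.
```py
import numpy as np
import torch
from transformers import AutoImageProcessor, VideoMAEForVideoClassification

model_id = "beingbatman/CTMAE-P2-V2-S1"
processor = AutoImageProcessor.from_pretrained(model_id)  # assumes the processor config is in the repo
model = VideoMAEForVideoClassification.from_pretrained(model_id)

# Replace with real data: a clip of 16 RGB frames, each of shape (height, width, 3).
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```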
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3250
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.62 | 0.02 | 65 | 0.7161 | 0.5435 |
| 0.5694 | 1.02 | 130 | 0.7819 | 0.5435 |
| 0.546 | 2.02 | 195 | 0.8927 | 0.5435 |
| 0.6022 | 3.02 | 260 | 0.6859 | 0.5435 |
| 0.5779 | 4.02 | 325 | 0.6449 | 0.5435 |
| 0.4662 | 5.02 | 390 | 0.8167 | 0.5435 |
| 0.5101 | 6.02 | 455 | 0.5114 | 0.7826 |
| 0.3779 | 7.02 | 520 | 0.5149 | 0.7391 |
| 0.3656 | 8.02 | 585 | 0.6273 | 0.6304 |
| 0.4837 | 9.02 | 650 | 0.9093 | 0.6522 |
| 0.6897 | 10.02 | 715 | 0.5653 | 0.6739 |
| 0.435 | 11.02 | 780 | 0.4927 | 0.7826 |
| 0.6362 | 12.02 | 845 | 0.5877 | 0.6739 |
| 0.4422 | 13.02 | 910 | 0.5351 | 0.8043 |
| 0.3913 | 14.02 | 975 | 0.7300 | 0.8043 |
| 0.6191 | 15.02 | 1040 | 1.1917 | 0.5652 |
| 0.2704 | 16.02 | 1105 | 0.5930 | 0.7826 |
| 0.3976 | 17.02 | 1170 | 0.5296 | 0.8043 |
| 0.3038 | 18.02 | 1235 | 0.6735 | 0.7609 |
| 0.2974 | 19.02 | 1300 | 0.4718 | 0.8261 |
| 0.2434 | 20.02 | 1365 | 0.5224 | 0.8261 |
| 0.4984 | 21.02 | 1430 | 1.2637 | 0.6957 |
| 0.1256 | 22.02 | 1495 | 0.7204 | 0.8261 |
| 0.448 | 23.02 | 1560 | 0.6897 | 0.7609 |
| 0.2702 | 24.02 | 1625 | 0.6801 | 0.8261 |
| 0.5101 | 25.02 | 1690 | 0.5134 | 0.8261 |
| 0.354 | 26.02 | 1755 | 0.8076 | 0.8043 |
| 0.4218 | 27.02 | 1820 | 0.7551 | 0.7826 |
| 1.1586 | 28.02 | 1885 | 1.1514 | 0.6522 |
| 0.3586 | 29.02 | 1950 | 1.1479 | 0.7391 |
| 0.4746 | 30.02 | 2015 | 0.9521 | 0.7174 |
| 0.6256 | 31.02 | 2080 | 0.8559 | 0.8043 |
| 0.4668 | 32.02 | 2145 | 0.9766 | 0.7826 |
| 0.1502 | 33.02 | 2210 | 0.9262 | 0.7826 |
| 0.5093 | 34.02 | 2275 | 0.9402 | 0.7609 |
| 0.2621 | 35.02 | 2340 | 0.9229 | 0.7609 |
| 0.1456 | 36.02 | 2405 | 0.7937 | 0.8261 |
| 0.1826 | 37.02 | 2470 | 0.9106 | 0.7826 |
| 0.3778 | 38.02 | 2535 | 0.9376 | 0.7826 |
| 0.1763 | 39.02 | 2600 | 0.9300 | 0.7826 |
| 0.1083 | 40.02 | 2665 | 1.1018 | 0.7609 |
| 0.1994 | 41.02 | 2730 | 0.8667 | 0.8261 |
| 0.0111 | 42.02 | 2795 | 0.9896 | 0.8043 |
| 0.0818 | 43.02 | 2860 | 1.0258 | 0.7826 |
| 0.1808 | 44.02 | 2925 | 0.9841 | 0.7826 |
| 0.1371 | 45.02 | 2990 | 0.9337 | 0.8043 |
| 0.0129 | 46.02 | 3055 | 0.8905 | 0.8043 |
| 0.1492 | 47.02 | 3120 | 0.9629 | 0.8261 |
| 0.0184 | 48.02 | 3185 | 1.0828 | 0.7174 |
| 0.1146 | 49.02 | 3250 | 1.0449 | 0.7826 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.0.1+cu117
- Datasets 3.0.1
- Tokenizers 0.20.0
|
Quinametzin/checkpoints | Quinametzin | 2025-01-29T06:32:12Z | 236 | 0 | transformers | [
"transformers",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-01-29T06:31:56Z | ---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0489
- Precision: 0.8842
- Recall: 0.9068
- F1: 0.8953
- Accuracy: 0.9849
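The card gives no inference example; a hedged sketch is shown below, assuming you supply your own OCR words and bounding boxes (normalized to the 0-1000 range LayoutLMv3 expects). The file name, words, and boxes are placeholders.
```py
import torch
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

# apply_ocr=False because we pass our own words and boxes below.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = LayoutLMv3ForTokenClassification.from_pretrained("Quinametzin/checkpoints")

image = Image.open("document.png").convert("RGB")   # placeholder document image
words = ["Invoice", "Total:", "100.00"]             # placeholder OCR words
boxes = [[80, 40, 220, 70], [60, 500, 160, 530], [170, 500, 260, 530]]  # 0-1000 normalized

inputs = processor(image, words, boxes=boxes, return_tensors="pt")
with torch.no_grad():
    predictions = model(**inputs).logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```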
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.2658 | 100 | 0.0575 | 0.8673 | 0.8744 | 0.8708 | 0.9815 |
| No log | 2.5316 | 200 | 0.0490 | 0.8876 | 0.8970 | 0.8923 | 0.9846 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
kartikgupta373/as15664-508913-pastel-green | kartikgupta373 | 2025-01-29T06:31:16Z | 14 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T06:31:15Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# As15664 508913 Pastel Green
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kartikgupta373/as15664-508913-pastel-green', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
lesso04/bfaf0fab-7c88-46d9-b803-81322ae77eb2 | lesso04 | 2025-01-29T06:30:01Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/gemma-2-9b-it-SimPO",
"base_model:adapter:princeton-nlp/gemma-2-9b-it-SimPO",
"license:mit",
"region:us"
] | null | 2025-01-29T06:26:21Z | ---
library_name: peft
license: mit
base_model: princeton-nlp/gemma-2-9b-it-SimPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bfaf0fab-7c88-46d9-b803-81322ae77eb2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: princeton-nlp/gemma-2-9b-it-SimPO
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 349dac9ba163f0a5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/349dac9ba163f0a5_train_data.json
type:
field_instruction: question
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso04/bfaf0fab-7c88-46d9-b803-81322ae77eb2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/349dac9ba163f0a5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0f1b7d9e-507c-4d19-8049-642ebf7e0fb6
wandb_project: multi
wandb_run: your_name
wandb_runid: 0f1b7d9e-507c-4d19-8049-642ebf7e0fb6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# bfaf0fab-7c88-46d9-b803-81322ae77eb2
This model is a fine-tuned version of [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2389
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 35
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2173 | 1.0 | 35 | 1.2389 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mergekit-community/DeepVeo-R1-A | mergekit-community | 2025-01-29T06:29:10Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:Alfitaria/Q25-1.5B-VeoLu",
"base_model:merge:Alfitaria/Q25-1.5B-VeoLu",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:merge:Qwen/Qwen2.5-1.5B",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T06:28:07Z | ---
base_model:
- Alfitaria/Q25-1.5B-VeoLu
- Qwen/Qwen2.5-1.5B
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) as a base.
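For intuition, the sketch below shows the DARE step applied to each model's delta from the base (the `density` values appear in the configuration below); mergekit then performs TIES-style sign election before adding the weighted, rescaled deltas back onto the base weights. This is an illustrative simplification, not mergekit's exact code.
```py
import torch

def dare_delta(finetuned: torch.Tensor, base: torch.Tensor, density: float) -> torch.Tensor:
    """Drop-And-REscale: keep a random `density` fraction of the delta and rescale the survivors."""
    delta = finetuned - base
    mask = torch.bernoulli(torch.full_like(delta, density))
    return mask * delta / density  # rescaling keeps the expected delta unchanged

# Per the YAML below: density=0.56 for Alfitaria/Q25-1.5B-VeoLu,
# density=0.44 for deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B.
```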
### Models Merged
The following models were included in the merge:
* [Alfitaria/Q25-1.5B-VeoLu](https://huggingface.co/Alfitaria/Q25-1.5B-VeoLu)
* [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Qwen/Qwen2.5-1.5B
# No parameters necessary for base model
- model: Alfitaria/Q25-1.5B-VeoLu
parameters:
density: 0.56
weight: 0.6
- model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
parameters:
density: 0.44
weight: 0.4
merge_method: dare_ties
base_model: Qwen/Qwen2.5-1.5B
parameters:
int8_mask: true
dtype: float16
```
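The configuration above is the kind of file consumed by mergekit's `mergekit-yaml` command; the resulting checkpoint in this repo can then be loaded like any other Qwen2-architecture causal LM. A rough sketch, with an illustrative prompt and generation settings:
```python
# Illustrative sketch: load the merged model produced from the YAML configuration above.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "mergekit-community/DeepVeo-R1-A"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

prompt = "Explain in one paragraph what a DARE TIES merge does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```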
|
LockeLamora2077/NiNa_deepseek_testing | LockeLamora2077 | 2025-01-29T06:27:55Z | 70 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T06:22:51Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LockeLamora2077
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
great0001/f1a7a1f6-62ba-4d72-a672-a8a4fe5f9d86 | great0001 | 2025-01-29T06:27:50Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/gemma-2-9b-it-SimPO",
"base_model:adapter:princeton-nlp/gemma-2-9b-it-SimPO",
"license:mit",
"region:us"
] | null | 2025-01-29T06:25:58Z | ---
library_name: peft
license: mit
base_model: princeton-nlp/gemma-2-9b-it-SimPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f1a7a1f6-62ba-4d72-a672-a8a4fe5f9d86
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: princeton-nlp/gemma-2-9b-it-SimPO
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 349dac9ba163f0a5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/349dac9ba163f0a5_train_data.json
type:
field_instruction: question
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/f1a7a1f6-62ba-4d72-a672-a8a4fe5f9d86
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/349dac9ba163f0a5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0f1b7d9e-507c-4d19-8049-642ebf7e0fb6
wandb_project: Birthday-SN56-33-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0f1b7d9e-507c-4d19-8049-642ebf7e0fb6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f1a7a1f6-62ba-4d72-a672-a8a4fe5f9d86
This model is a fine-tuned version of [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.6208 | 0.0036 | 1 | 4.2997 |
| 1.3094 | 0.0466 | 13 | 1.2815 |
| 0.9577 | 0.0933 | 26 | 1.0802 |
| 0.9384 | 0.1399 | 39 | 1.0481 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Dominic2106/llama-3-Legal-Advisor-FineTune | Dominic2106 | 2025-01-29T06:26:52Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T06:24:03Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Dominic2106
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sniperfix/d0cb3c69-e5b2-4769-a1fb-551e634ce51d | sniperfix | 2025-01-29T06:26:00Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T06:21:24Z | ---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d0cb3c69-e5b2-4769-a1fb-551e634ce51d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 212cea8e8d3699da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/212cea8e8d3699da_train_data.json
type:
field_input: doc
field_instruction: original_text
field_output: edited_summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 256
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: false
hub_model_id: sniperfix/d0cb3c69-e5b2-4769-a1fb-551e634ce51d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- down_proj
- up_proj
lr_scheduler: cosine
max_grad_norm: 2
max_steps: 90
micro_batch_size: 2
mlflow_experiment_name: /tmp/212cea8e8d3699da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 2048
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: indexjupri-sniper-country
wandb_mode: online
wandb_name: b1daaa76-ce91-488a-8876-44fa4641b938
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b1daaa76-ce91-488a-8876-44fa4641b938
warmup_steps: 20
weight_decay: 0.02
xformers_attention: false
```
</details><br>
# d0cb3c69-e5b2-4769-a1fb-551e634ce51d
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co/llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 90
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0167 | 1 | 11.7636 |
| 11.7641 | 0.1334 | 8 | 11.7632 |
| 11.7633 | 0.2668 | 16 | 11.7616 |
| 11.7596 | 0.4002 | 24 | 11.7581 |
| 11.7555 | 0.5336 | 32 | 11.7515 |
| 11.7467 | 0.6670 | 40 | 11.7422 |
| 11.7391 | 0.8004 | 48 | 11.7361 |
| 11.7356 | 0.9338 | 56 | 11.7337 |
| 11.7518 | 1.0677 | 64 | 11.7327 |
| 11.7706 | 1.2011 | 72 | 11.7323 |
| 11.7041 | 1.3345 | 80 | 11.7321 |
| 11.8285 | 1.4680 | 88 | 11.7321 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q4_K_M-GGUF | Triangle104 | 2025-01-29T06:24:29Z | 440 | 1 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated",
"base_model:quantized:huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-01-29T06:24:05Z | ---
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
library_name: transformers
---
# Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated`](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q4_k_m.gguf -c 2048
```
|
asr-africa/wav2vec2-xls-r-akan-100-hours | asr-africa | 2025-01-29T06:24:28Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-01-28T11:08:56Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-akan-100-hours
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/asr-africa-research-team/ASR%20Africa/runs/bvnbmsvo)
# wav2vec2-xls-r-akan-100-hours
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7988
- Model Preparation Time: 0.0143
- Wer: 0.2968
- Cer: 0.0937
## Model description
More information needed
## Intended uses & limitations
More information needed
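A minimal transcription sketch follows; the audio file is a placeholder, and XLS-R-based models expect 16 kHz mono speech.
```python
# Sketch: transcribe an Akan speech clip with the fine-tuned checkpoint.
# "akan_sample.wav" is a placeholder path, assumed to be 16 kHz mono audio.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="asr-africa/wav2vec2-xls-r-akan-100-hours",
)
print(asr("akan_sample.wav")["text"])
```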
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Wer | Cer |
|:-------------:|:-------:|:-----:|:---------------:|:----------------------:|:------:|:------:|
| 11.1522 | 1.7331 | 500 | 2.7710 | 0.0143 | 1.0 | 1.0 |
| 2.0881 | 3.4662 | 1000 | 0.3882 | 0.0143 | 0.3401 | 0.1057 |
| 0.8886 | 5.1993 | 1500 | 0.3437 | 0.0143 | 0.2956 | 0.0916 |
| 0.7671 | 6.9324 | 2000 | 0.3246 | 0.0143 | 0.2898 | 0.0891 |
| 0.6983 | 8.6655 | 2500 | 0.3230 | 0.0143 | 0.2810 | 0.0872 |
| 0.6688 | 10.3986 | 3000 | 0.3235 | 0.0143 | 0.2800 | 0.0872 |
| 0.6241 | 12.1317 | 3500 | 0.3273 | 0.0143 | 0.2828 | 0.0879 |
| 0.5917 | 13.8648 | 4000 | 0.3328 | 0.0143 | 0.2836 | 0.0886 |
| 0.5503 | 15.5979 | 4500 | 0.3366 | 0.0143 | 0.2803 | 0.0882 |
| 0.5163 | 17.3310 | 5000 | 0.3568 | 0.0143 | 0.2825 | 0.0889 |
| 0.487 | 19.0641 | 5500 | 0.3597 | 0.0143 | 0.2876 | 0.0899 |
| 0.446 | 20.7972 | 6000 | 0.3719 | 0.0143 | 0.2831 | 0.0895 |
| 0.416 | 22.5303 | 6500 | 0.4071 | 0.0143 | 0.2964 | 0.0928 |
| 0.3844 | 24.2634 | 7000 | 0.4167 | 0.0143 | 0.2928 | 0.0924 |
| 0.3526 | 25.9965 | 7500 | 0.4353 | 0.0143 | 0.2999 | 0.0942 |
| 0.3173 | 27.7296 | 8000 | 0.4568 | 0.0143 | 0.3076 | 0.0968 |
| 0.2892 | 29.4627 | 8500 | 0.4936 | 0.0143 | 0.2990 | 0.0936 |
| 0.265 | 31.1958 | 9000 | 0.5298 | 0.0143 | 0.3044 | 0.0957 |
| 0.2452 | 32.9289 | 9500 | 0.5566 | 0.0143 | 0.2922 | 0.0930 |
| 0.2244 | 34.6620 | 10000 | 0.5921 | 0.0143 | 0.2973 | 0.0943 |
| 0.2064 | 36.3951 | 10500 | 0.6147 | 0.0143 | 0.3169 | 0.0980 |
| 0.1937 | 38.1282 | 11000 | 0.6672 | 0.0143 | 0.3118 | 0.0968 |
| 0.1733 | 39.8614 | 11500 | 0.6968 | 0.0143 | 0.2997 | 0.0938 |
| 0.1644 | 41.5945 | 12000 | 0.7098 | 0.0143 | 0.3010 | 0.0955 |
| 0.1527 | 43.3276 | 12500 | 0.7449 | 0.0143 | 0.2998 | 0.0947 |
| 0.1488 | 45.0607 | 13000 | 0.7555 | 0.0143 | 0.3054 | 0.0955 |
| 0.1341 | 46.7938 | 13500 | 0.7626 | 0.0143 | 0.3010 | 0.0951 |
| 0.1277 | 48.5269 | 14000 | 0.7988 | 0.0143 | 0.2968 | 0.0937 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
eddysang/cbcb9ea7-77b5-40fd-baaf-f3a66d36d225 | eddysang | 2025-01-29T06:23:32Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T06:21:04Z | ---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cbcb9ea7-77b5-40fd-baaf-f3a66d36d225
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 212cea8e8d3699da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/212cea8e8d3699da_train_data.json
type:
field_input: doc
field_instruction: original_text
field_output: edited_summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 256
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: false
hub_model_id: eddysang/cbcb9ea7-77b5-40fd-baaf-f3a66d36d225
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.00015
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
lr_scheduler: cosine
max_grad_norm: 2
max_steps: 100
micro_batch_size: 2
mlflow_experiment_name: /tmp/212cea8e8d3699da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 2048
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: yaudayah0
wandb_mode: online
wandb_name: b1daaa76-ce91-488a-8876-44fa4641b938
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b1daaa76-ce91-488a-8876-44fa4641b938
warmup_steps: 20
weight_decay: 0.02
xformers_attention: false
```
</details><br>
# cbcb9ea7-77b5-40fd-baaf-f3a66d36d225
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co/llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7350
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00015
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0167 | 1 | 11.7636 |
| 11.7637 | 0.1501 | 9 | 11.7632 |
| 11.7614 | 0.3002 | 18 | 11.7618 |
| 11.7598 | 0.4502 | 27 | 11.7591 |
| 11.7557 | 0.6003 | 36 | 11.7550 |
| 11.75 | 0.7504 | 45 | 11.7490 |
| 11.7445 | 0.9005 | 54 | 11.7427 |
| 11.7571 | 1.0511 | 63 | 11.7384 |
| 11.7745 | 1.2011 | 72 | 11.7363 |
| 11.8078 | 1.3512 | 81 | 11.7354 |
| 11.6771 | 1.5013 | 90 | 11.7350 |
| 11.6701 | 1.6514 | 99 | 11.7350 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gavrilstep/5dbfe88e-bbca-4c55-820a-9ef2ec3d77d9 | gavrilstep | 2025-01-29T06:23:06Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T06:18:54Z | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5dbfe88e-bbca-4c55-820a-9ef2ec3d77d9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 75ea8b2b0ce0747b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/75ea8b2b0ce0747b_train_data.json
type:
field_input: Resume_str
field_instruction: Category
field_output: Resume_html
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: gavrilstep/5dbfe88e-bbca-4c55-820a-9ef2ec3d77d9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/75ea8b2b0ce0747b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 09b31402-03d6-4e52-b0bc-a10763cac165
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 09b31402-03d6-4e52-b0bc-a10763cac165
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5dbfe88e-bbca-4c55-820a-9ef2ec3d77d9
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0037 | 1 | nan |
| 14.8385 | 0.0184 | 5 | nan |
| 0.0 | 0.0369 | 10 | nan |
| 0.0 | 0.0553 | 15 | nan |
| 0.0 | 0.0737 | 20 | nan |
| 0.0 | 0.0922 | 25 | nan |
| 0.0 | 0.1106 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Nexspear/91818616-5b33-40e9-a8e0-eaa4ab21ad48 | Nexspear | 2025-01-29T06:22:54Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/zephyr-sft",
"base_model:adapter:unsloth/zephyr-sft",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T05:58:15Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/zephyr-sft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 91818616-5b33-40e9-a8e0-eaa4ab21ad48
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/zephyr-sft
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5f018b4f4c84734e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5f018b4f4c84734e_train_data.json
type:
field_input: fullSectionsTitre
field_instruction: title_main
field_output: texte
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: Nexspear/91818616-5b33-40e9-a8e0-eaa4ab21ad48
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/5f018b4f4c84734e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 6e907a9e-7c14-47ff-9a22-8cd83cba5430
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: 6e907a9e-7c14-47ff-9a22-8cd83cba5430
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 91818616-5b33-40e9-a8e0-eaa4ab21ad48
This model is a fine-tuned version of [unsloth/zephyr-sft](https://huggingface.co/unsloth/zephyr-sft) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0066 | 1 | 1.2190 |
| 4.4518 | 0.0595 | 9 | 1.0525 |
| 3.9299 | 0.1190 | 18 | 0.9747 |
| 3.6303 | 0.1785 | 27 | 0.9424 |
| 3.5778 | 0.2380 | 36 | 0.9245 |
| 3.8691 | 0.2975 | 45 | 0.9066 |
| 3.4806 | 0.3570 | 54 | 0.8934 |
| 3.861 | 0.4165 | 63 | 0.8836 |
| 3.8442 | 0.4760 | 72 | 0.8759 |
| 3.515 | 0.5355 | 81 | 0.8719 |
| 3.7434 | 0.5950 | 90 | 0.8697 |
| 3.6529 | 0.6545 | 99 | 0.8692 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso12/48d909cf-461e-4528-8b37-f9a1134db917 | lesso12 | 2025-01-29T06:21:59Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T06:21:44Z | ---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 48d909cf-461e-4528-8b37-f9a1134db917
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 212cea8e8d3699da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/212cea8e8d3699da_train_data.json
type:
field_input: doc
field_instruction: original_text
field_output: edited_summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso12/48d909cf-461e-4528-8b37-f9a1134db917
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/212cea8e8d3699da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b1daaa76-ce91-488a-8876-44fa4641b938
wandb_project: multi
wandb_run: your_name
wandb_runid: b1daaa76-ce91-488a-8876-44fa4641b938
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 48d909cf-461e-4528-8b37-f9a1134db917
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co/llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 11.7595 | 1.0 | 60 | 11.7631 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tarabukinivan/173248fb-dcd7-4eda-8dd0-46938d5dfd0c | tarabukinivan | 2025-01-29T06:21:48Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T06:21:23Z | ---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 173248fb-dcd7-4eda-8dd0-46938d5dfd0c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 212cea8e8d3699da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/212cea8e8d3699da_train_data.json
type:
field_input: doc
field_instruction: original_text
field_output: edited_summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: tarabukinivan/173248fb-dcd7-4eda-8dd0-46938d5dfd0c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/212cea8e8d3699da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 15
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b1daaa76-ce91-488a-8876-44fa4641b938
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b1daaa76-ce91-488a-8876-44fa4641b938
warmup_steps: 15
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 173248fb-dcd7-4eda-8dd0-46938d5dfd0c
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co/llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0021 | 1 | 11.7633 |
| 11.763 | 0.0104 | 5 | 11.7633 |
| 11.7638 | 0.0208 | 10 | 11.7632 |
| 11.7635 | 0.0313 | 15 | 11.7630 |
| 11.7632 | 0.0417 | 20 | 11.7627 |
| 11.7628 | 0.0521 | 25 | 11.7626 |
| 11.7627 | 0.0625 | 30 | 11.7626 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nttx/a7be932e-4c65-4f7a-8fff-ba2fd83b0e8c | nttx | 2025-01-29T06:21:46Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T06:20:54Z | ---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a7be932e-4c65-4f7a-8fff-ba2fd83b0e8c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: auto
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 212cea8e8d3699da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/212cea8e8d3699da_train_data.json
type:
field_input: doc
field_instruction: original_text
field_output: edited_summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/a7be932e-4c65-4f7a-8fff-ba2fd83b0e8c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/212cea8e8d3699da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b1daaa76-ce91-488a-8876-44fa4641b938
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b1daaa76-ce91-488a-8876-44fa4641b938
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a7be932e-4c65-4f7a-8fff-ba2fd83b0e8c
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co/llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.7481 | 0.8333 | 200 | 11.7588 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nathanialhunt/17e573b9-f4a6-4d2e-8163-a3c8b528d27e | nathanialhunt | 2025-01-29T06:21:26Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T06:19:20Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 17e573b9-f4a6-4d2e-8163-a3c8b528d27e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dcef816926ec2838_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dcef816926ec2838_train_data.json
type:
field_input: activity
field_instruction: topic
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/17e573b9-f4a6-4d2e-8163-a3c8b528d27e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/dcef816926ec2838_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d997858c-edf3-49a2-a1d9-29c48b4b7819
wandb_project: Birthday-SN56-5-Gradients-On-Demand
wandb_run: your_name
wandb_runid: d997858c-edf3-49a2-a1d9-29c48b4b7819
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 17e573b9-f4a6-4d2e-8163-a3c8b528d27e
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 2.1771 |
| 2.0816 | 0.0062 | 13 | 1.8745 |
| 1.8908 | 0.0123 | 26 | 1.7459 |
| 1.7819 | 0.0185 | 39 | 1.7139 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhung03/1848c0e1-831c-47f6-a068-1121d547c37f | nhung03 | 2025-01-29T06:19:06Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/zephyr-sft",
"base_model:adapter:unsloth/zephyr-sft",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:53:15Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/zephyr-sft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1848c0e1-831c-47f6-a068-1121d547c37f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/zephyr-sft
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5f018b4f4c84734e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5f018b4f4c84734e_train_data.json
type:
field_input: fullSectionsTitre
field_instruction: title_main
field_output: texte
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/1848c0e1-831c-47f6-a068-1121d547c37f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/5f018b4f4c84734e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6e907a9e-7c14-47ff-9a22-8cd83cba5430
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6e907a9e-7c14-47ff-9a22-8cd83cba5430
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1848c0e1-831c-47f6-a068-1121d547c37f
This model is a fine-tuned version of [unsloth/zephyr-sft](https://huggingface.co/unsloth/zephyr-sft) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9530
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.092 | 0.3310 | 200 | 0.9530 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sercetexam9/german-sentiment-bert-finetuned-augmentation | sercetexam9 | 2025-01-29T06:18:18Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:oliverguhr/german-sentiment-bert",
"base_model:finetune:oliverguhr/german-sentiment-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-29T06:06:51Z | ---
library_name: transformers
license: mit
base_model: oliverguhr/german-sentiment-bert
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: german-sentiment-bert-finetuned-augmentation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german-sentiment-bert-finetuned-augmentation
This model is a fine-tuned version of [oliverguhr/german-sentiment-bert](https://huggingface.co/oliverguhr/german-sentiment-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5289
- F1: 0.4075
- Roc Auc: 0.6477
- Accuracy: 0.3476
## Model description
More information needed
## Intended uses & limitations
More information needed
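The F1 / ROC AUC / accuracy metrics above suggest a multi-label setup, so the sketch below applies a sigmoid and a 0.5 threshold to the logits; both the multi-label reading and the threshold are assumptions, and the example sentence is illustrative.
```python
# Sketch assuming a multi-label classification head: sigmoid scores thresholded at 0.5.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "sercetexam9/german-sentiment-bert-finetuned-augmentation"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "Der Service war freundlich, aber das Essen kam leider kalt an."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
labels = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(labels)
```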
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4252 | 1.0 | 141 | 0.4303 | 0.2285 | 0.5735 | 0.3244 |
| 0.3796 | 2.0 | 282 | 0.4097 | 0.2975 | 0.6081 | 0.3512 |
| 0.34 | 3.0 | 423 | 0.4223 | 0.2989 | 0.6109 | 0.3494 |
| 0.3083 | 4.0 | 564 | 0.4245 | 0.3105 | 0.6188 | 0.3529 |
| 0.2541 | 5.0 | 705 | 0.4319 | 0.3399 | 0.6179 | 0.3565 |
| 0.2404 | 6.0 | 846 | 0.4361 | 0.3412 | 0.6222 | 0.3583 |
| 0.2196 | 7.0 | 987 | 0.4547 | 0.3744 | 0.6344 | 0.3583 |
| 0.2299 | 8.0 | 1128 | 0.4542 | 0.3709 | 0.6334 | 0.3512 |
| 0.2 | 9.0 | 1269 | 0.4648 | 0.3502 | 0.6229 | 0.3601 |
| 0.1662 | 10.0 | 1410 | 0.4873 | 0.3746 | 0.6345 | 0.3440 |
| 0.1677 | 11.0 | 1551 | 0.4975 | 0.3920 | 0.6454 | 0.3601 |
| 0.1421 | 12.0 | 1692 | 0.5007 | 0.3844 | 0.6401 | 0.3494 |
| 0.1384 | 13.0 | 1833 | 0.5071 | 0.3836 | 0.6395 | 0.3529 |
| 0.1497 | 14.0 | 1974 | 0.5112 | 0.3837 | 0.6388 | 0.3672 |
| 0.1229 | 15.0 | 2115 | 0.5206 | 0.3950 | 0.6441 | 0.3458 |
| 0.1442 | 16.0 | 2256 | 0.5263 | 0.4015 | 0.6467 | 0.3494 |
| 0.1148 | 17.0 | 2397 | 0.5245 | 0.3996 | 0.6435 | 0.3547 |
| 0.1077 | 18.0 | 2538 | 0.5292 | 0.3977 | 0.6433 | 0.3369 |
| 0.1203 | 19.0 | 2679 | 0.5289 | 0.4051 | 0.6462 | 0.3422 |
| 0.1234 | 20.0 | 2820 | 0.5289 | 0.4075 | 0.6477 | 0.3476 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
mergekit-community/mergekit-linear-mocebtg | mergekit-community | 2025-01-29T06:17:40Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:mergekit-community/mergekit-model_stock-czbocwb",
"base_model:merge:mergekit-community/mergekit-model_stock-czbocwb",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T06:05:24Z | ---
base_model:
- mergekit-community/mergekit-model_stock-czbocwb
- deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
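In broad terms, a linear merge forms a weighted combination of the corresponding parameter tensors, roughly θ_merged = Σᵢ wᵢ · θᵢ, optionally rescaled by the sum of the weights when normalization is enabled; with the equal weights of 1.0 used below, both checkpoints contribute equally.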
### Models Merged
The following models were included in the merge:
* [mergekit-community/mergekit-model_stock-czbocwb](https://huggingface.co/mergekit-community/mergekit-model_stock-czbocwb)
* [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
parameters:
weight: 1.0
- model: mergekit-community/mergekit-model_stock-czbocwb
parameters:
weight: 1.0
merge_method: linear
normalize: false
int8_mask: true
dtype: bfloat16
```
|
Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q4_K_S-GGUF | Triangle104 | 2025-01-29T06:17:29Z | 320 | 1 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated",
"base_model:quantized:huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-01-29T06:17:06Z | ---
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
library_name: transformers
---
# Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q4_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated`](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q4_K_S-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q4_K_S-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q4_K_S-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q4_K_S-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q4_k_s.gguf -c 2048
```
|
Triangle104/Set-70b | Triangle104 | 2025-01-29T06:16:36Z | 44 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:TheDrummer/Anubis-70B-v1",
"base_model:merge:TheDrummer/Anubis-70B-v1",
"base_model:TheDrummer/Nautilus-70B-v0.1",
"base_model:merge:TheDrummer/Nautilus-70B-v0.1",
"base_model:codelion/Llama-3.3-70B-o1",
"base_model:merge:codelion/Llama-3.3-70B-o1",
"license:llama3.3",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-19T16:08:47Z | ---
license: llama3.3
library_name: transformers
tags:
- mergekit
- merge
base_model:
- TheDrummer/Anubis-70B-v1
- TheDrummer/Nautilus-70B-v0.1
- codelion/Llama-3.3-70B-o1
model-index:
- name: Set-70b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 76.43
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Set-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 56.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Set-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 36.33
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Set-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 26.17
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Set-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 18.96
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Set-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.36
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Set-70b
name: Open LLM Leaderboard
---
# Merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details

A roleplay (RP) focused merge with some o1-style reasoning inspiration.
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [codelion/Llama-3.3-70B-o1](https://huggingface.co/codelion/Llama-3.3-70B-o1) as a base.
### Models Merged
The following models were included in the merge:
* [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1)
* [TheDrummer/Nautilus-70B-v0.1](https://huggingface.co/TheDrummer/Nautilus-70B-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: codelion/Llama-3.3-70B-o1
- model: TheDrummer/Anubis-70B-v1
- model: TheDrummer/Nautilus-70B-v0.1
base_model: codelion/Llama-3.3-70B-o1
merge_method: model_stock
parameters:
normalize: true
dtype: bfloat16
```
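As a rough usage sketch (not part of the original card), the merged checkpoint can be loaded with transformers; a 70B model in bfloat16 needs on the order of 140 GB of memory, so `device_map="auto"` is used to spread the weights across whatever GPUs/CPU RAM are available:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Triangle104/Set-70b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",           # shard across available devices
)

prompt = "Write a short scene where a detective questions a reluctant witness."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```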
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Triangle104__Set-70b-details)
| Metric |Value|
|-------------------|----:|
|Avg. |44.02|
|IFEval (0-Shot) |76.43|
|BBH (3-Shot) |56.88|
|MATH Lvl 5 (4-Shot)|36.33|
|GPQA (0-shot) |26.17|
|MuSR (0-shot) |18.96|
|MMLU-PRO (5-shot) |49.36|
|
robiulawaldev/a9f3aa97-9e1d-4bb9-b383-3ea958441630 | robiulawaldev | 2025-01-29T06:16:26Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-7b-it",
"base_model:adapter:unsloth/gemma-7b-it",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T05:14:52Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a9f3aa97-9e1d-4bb9-b383-3ea958441630
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5a1549a363bd92b9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5a1549a363bd92b9_train_data.json
type:
field_input: system_prompt
field_instruction: question
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiulawaldev/a9f3aa97-9e1d-4bb9-b383-3ea958441630
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: constant
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/5a1549a363bd92b9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9d3bed81-78f2-4061-9ad2-a87e632c5343
wandb_project: Birthday-SN56-35-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9d3bed81-78f2-4061-9ad2-a87e632c5343
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a9f3aa97-9e1d-4bb9-b383-3ea958441630
This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0703
## Model description
More information needed
## Intended uses & limitations
More information needed
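As a usage sketch (not part of the original card), the LoRA adapter produced by this run can be attached to its base model with peft, roughly as follows:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-7b-it"
adapter_id = "robiulawaldev/a9f3aa97-9e1d-4bb9-b383-3ea958441630"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # load the LoRA weights on top

inputs = tokenizer("Explain what a LoRA adapter is.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```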
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 2.6335 |
| 1.5609 | 0.0001 | 13 | 1.2013 |
| 1.3608 | 0.0002 | 26 | 1.0845 |
| 1.1424 | 0.0003 | 39 | 1.0703 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso15/26f70f17-5b0e-4f5d-b9fa-da55f1560aaa | lesso15 | 2025-01-29T06:15:27Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:12:40Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Base-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 26f70f17-5b0e-4f5d-b9fa-da55f1560aaa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Base-2407
bf16: auto
chat_template: llama3
datasets:
- data_files:
- e11d3af61284289e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e11d3af61284289e_train_data.json
type:
field_input: ''
field_instruction: prompt
field_output: reference_completion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso15/26f70f17-5b0e-4f5d-b9fa-da55f1560aaa
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e11d3af61284289e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 33053983-d2d7-46cd-86bd-33b197e4dd4c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 33053983-d2d7-46cd-86bd-33b197e4dd4c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 26f70f17-5b0e-4f5d-b9fa-da55f1560aaa
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0277 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
krory/GenBook-Deepseek-R1.Llama-8B | krory | 2025-01-29T06:15:22Z | 65 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"es",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T05:33:35Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- es
- en
---

### **About the Model**
This model is designed to be a storytelling AI capable of creating fun, engaging, and well-structured narratives. Its purpose is to serve as an interactive tool for generating and experiencing unique stories in real time, tailored to the user's input and preferences.
### **Key Features**
- **Interactive Narratives:** Produces coherent and entertaining stories based on user prompts, adapting dynamically to maintain engagement.
- **Consistent World-Building:** Ensures logical progression and consistency in characters, settings, and events across long narratives.
- **Optimized for Efficiency:** Built to perform reliably on limited hardware while delivering high-quality outputs.
### **Training Overview**
The model was fine-tuned using datasets focused on narrative construction, character development, and immersive descriptions. Key aspects of the training include:
- **Adaptability:** Special attention was given to creating a system that responds flexibly to varied user inputs while maintaining coherence.
- **Resource Efficiency:** Techniques like LoRA (Low-Rank Adaptation) and 4-bit quantization were employed to optimize memory usage without compromising output quality.
- **Long-Context Support:** Enhanced with methods to handle extended interactions and complex storylines.
### **Purpose**
The primary goal of this model is to create a personal, customizable storytelling AI, allowing users to immerse themselves in unique, AI-driven stories anytime.
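As a rough usage sketch (not described in the card itself), the model can be driven as an interactive storyteller with the standard transformers text-generation pipeline; the prompt is just an example:
```python
from transformers import pipeline

storyteller = pipeline(
    "text-generation",
    model="krory/GenBook-Deepseek-R1.Llama-8B",
    device_map="auto",
)

prompt = ("You wake up in a library where every book describes a life "
          "you never lived. Continue the story:")
result = storyteller(prompt, max_new_tokens=300, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```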
--- |
ohashi56225/pptod-multiwoz | ohashi56225 | 2025-01-29T06:15:09Z | 31 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-01-29T06:10:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
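Since this section is left blank, here is a minimal, hedged sketch of loading the T5-based checkpoint with transformers; the input format is an assumption (PPTOD-style models usually expect a task prefix plus dialogue context), so check the training code for the exact prefixes:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ohashi56225/pptod-multiwoz"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input format -- verify the prefix used during fine-tuning.
text = "translate dialogue to belief state: [user] I need a cheap hotel in the north."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```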
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Pratham109/Durvasa | Pratham109 | 2025-01-29T06:14:54Z | 21 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T06:10:11Z | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Pratham109
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
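As a usage sketch (not in the original card): since the repo ships GGUF weights, it can be run with llama.cpp; the file name below is a hypothetical placeholder, so substitute the actual `.gguf` file from the repo:
```bash
# <durvasa-file>.gguf is a placeholder -- replace it with the real GGUF filename in the repo.
llama-cli --hf-repo Pratham109/Durvasa --hf-file <durvasa-file>.gguf -p "Hello, Durvasa."
```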
|
lesso01/f2509208-0c31-40a2-a734-26e5710b39af | lesso01 | 2025-01-29T06:11:49Z | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:migtissera/Tess-v2.5-Phi-3-medium-128k-14B",
"base_model:adapter:migtissera/Tess-v2.5-Phi-3-medium-128k-14B",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T03:05:08Z | ---
library_name: peft
license: mit
base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f2509208-0c31-40a2-a734-26e5710b39af
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
bf16: true
chat_template: llama3
datasets:
- data_files:
- 28869e035ebaf0bf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/28869e035ebaf0bf_train_data.json
type:
field_input: labels
field_instruction: name
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso01/f2509208-0c31-40a2-a734-26e5710b39af
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/28869e035ebaf0bf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c01e03ea-ac63-445b-b53d-881712c18952
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c01e03ea-ac63-445b-b53d-881712c18952
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f2509208-0c31-40a2-a734-26e5710b39af
This model is a fine-tuned version of [migtissera/Tess-v2.5-Phi-3-medium-128k-14B](https://huggingface.co/migtissera/Tess-v2.5-Phi-3-medium-128k-14B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 9.9041 | 0.0001 | 1 | 2.5982 |
| 10.6185 | 0.0003 | 5 | 2.5946 |
| 10.39 | 0.0006 | 10 | 2.5174 |
| 9.3238 | 0.0008 | 15 | 2.4453 |
| 9.2303 | 0.0011 | 20 | 2.4064 |
| 9.0661 | 0.0014 | 25 | 2.3998 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sercetexam9/geberta-base-finetuned-augmentation | sercetexam9 | 2025-01-29T06:11:34Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:ikim-uk-essen/geberta-base",
"base_model:finetune:ikim-uk-essen/geberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-29T05:45:05Z | ---
library_name: transformers
base_model: ikim-uk-essen/geberta-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: geberta-base-finetuned-augmentation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# geberta-base-finetuned-augmentation
This model is a fine-tuned version of [ikim-uk-essen/geberta-base](https://huggingface.co/ikim-uk-essen/geberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4327
- F1: 0.6097
- Roc Auc: 0.7506
- Accuracy: 0.4563
## Model description
More information needed
## Intended uses & limitations
More information needed
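As an inference sketch (not part of the original card): the combination of F1, ROC-AUC, and subset accuracy above suggests a multi-label setup, so logits are passed through a sigmoid and thresholded rather than argmax-ed; the label names come from whatever the fine-tuning dataset defined:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "sercetexam9/geberta-base-finetuned-augmentation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Ein kurzer Beispieltext."  # the base model is a German DeBERTa
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits)[0]  # multi-label: one independent sigmoid per class
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```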
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3808 | 1.0 | 141 | 0.3997 | 0.2027 | 0.5729 | 0.3119 |
| 0.3278 | 2.0 | 282 | 0.3627 | 0.3647 | 0.6353 | 0.3939 |
| 0.2881 | 3.0 | 423 | 0.3447 | 0.4099 | 0.6583 | 0.4349 |
| 0.2479 | 4.0 | 564 | 0.3317 | 0.4440 | 0.6741 | 0.4456 |
| 0.1888 | 5.0 | 705 | 0.3475 | 0.5081 | 0.6974 | 0.4439 |
| 0.135 | 6.0 | 846 | 0.3659 | 0.5597 | 0.7345 | 0.4332 |
| 0.1031 | 7.0 | 987 | 0.3894 | 0.5817 | 0.7401 | 0.4635 |
| 0.0755 | 8.0 | 1128 | 0.4100 | 0.5799 | 0.7292 | 0.4510 |
| 0.0559 | 9.0 | 1269 | 0.4327 | 0.6097 | 0.7506 | 0.4563 |
| 0.041 | 10.0 | 1410 | 0.4568 | 0.5988 | 0.7464 | 0.4456 |
| 0.0247 | 11.0 | 1551 | 0.4807 | 0.5891 | 0.7399 | 0.4456 |
| 0.0188 | 12.0 | 1692 | 0.5030 | 0.5945 | 0.7443 | 0.4403 |
| 0.0169 | 13.0 | 1833 | 0.5272 | 0.6055 | 0.7508 | 0.4510 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
kostiantynk1205/d881f980-5340-4475-bc25-89fa9f9d98c9 | kostiantynk1205 | 2025-01-29T06:09:54Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-random-GemmaForCausalLM",
"base_model:adapter:fxmarty/tiny-random-GemmaForCausalLM",
"license:mit",
"region:us"
] | null | 2025-01-29T06:06:58Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-random-GemmaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d881f980-5340-4475-bc25-89fa9f9d98c9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-random-GemmaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1b850ae6d01c6c1d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1b850ae6d01c6c1d_train_data.json
type:
field_input: post
field_instruction: query
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/d881f980-5340-4475-bc25-89fa9f9d98c9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/1b850ae6d01c6c1d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 16672098-11fc-47d4-9215-2a127b077006
wandb_project: Birthday-SN56-23-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 16672098-11fc-47d4-9215-2a127b077006
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d881f980-5340-4475-bc25-89fa9f9d98c9
This model is a fine-tuned version of [fxmarty/tiny-random-GemmaForCausalLM](https://huggingface.co/fxmarty/tiny-random-GemmaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0008 | 13 | nan |
| 0.0 | 0.0017 | 26 | nan |
| 0.0 | 0.0025 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso16/86bb2885-e629-4625-9db6-df3a1fc9b2a0 | lesso16 | 2025-01-29T06:08:59Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-random-GemmaForCausalLM",
"base_model:adapter:fxmarty/tiny-random-GemmaForCausalLM",
"license:mit",
"region:us"
] | null | 2025-01-29T06:08:03Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-random-GemmaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 86bb2885-e629-4625-9db6-df3a1fc9b2a0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-random-GemmaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1b850ae6d01c6c1d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1b850ae6d01c6c1d_train_data.json
type:
field_input: post
field_instruction: query
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso16/86bb2885-e629-4625-9db6-df3a1fc9b2a0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/1b850ae6d01c6c1d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 16672098-11fc-47d4-9215-2a127b077006
wandb_project: multi
wandb_run: your_name
wandb_runid: 16672098-11fc-47d4-9215-2a127b077006
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 86bb2885-e629-4625-9db6-df3a1fc9b2a0
This model is a fine-tuned version of [fxmarty/tiny-random-GemmaForCausalLM](https://huggingface.co/fxmarty/tiny-random-GemmaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.1040 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso02/2d84c935-36d4-4f6b-b448-e94ddc2e630a | lesso02 | 2025-01-29T06:07:46Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | 2025-01-29T06:01:12Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2d84c935-36d4-4f6b-b448-e94ddc2e630a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c553ffe9c794c5bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c553ffe9c794c5bd_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso02/2d84c935-36d4-4f6b-b448-e94ddc2e630a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/c553ffe9c794c5bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eb862a1e-b09a-4967-b139-a02f72ec2cc8
wandb_project: multi
wandb_run: your_name
wandb_runid: eb862a1e-b09a-4967-b139-a02f72ec2cc8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2d84c935-36d4-4f6b-b448-e94ddc2e630a
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 111
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9893 | 1.0 | 111 | 0.9203 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sercetexam9/bert-base-swedish-cased-new-finetuned-augmentation | sercetexam9 | 2025-01-29T06:07:21Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:KBLab/bert-base-swedish-cased-new",
"base_model:finetune:KBLab/bert-base-swedish-cased-new",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-29T06:02:06Z | ---
library_name: transformers
base_model: KBLab/bert-base-swedish-cased-new
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-base-swedish-cased-new-finetuned-augmentation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-swedish-cased-new-finetuned-augmentation
This model is a fine-tuned version of [KBLab/bert-base-swedish-cased-new](https://huggingface.co/KBLab/bert-base-swedish-cased-new) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1896
- F1: 0.5215
- Roc Auc: 0.7532
- Accuracy: 0.6931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4332 | 1.0 | 70 | 0.3604 | 0.1123 | 0.5414 | 0.4513 |
| 0.2766 | 2.0 | 140 | 0.2395 | 0.3940 | 0.6846 | 0.6606 |
| 0.2381 | 3.0 | 210 | 0.2095 | 0.3914 | 0.6785 | 0.6606 |
| 0.2096 | 4.0 | 280 | 0.2080 | 0.4761 | 0.7219 | 0.6751 |
| 0.186 | 5.0 | 350 | 0.2017 | 0.4803 | 0.7216 | 0.6570 |
| 0.1801 | 6.0 | 420 | 0.1937 | 0.4888 | 0.7343 | 0.6823 |
| 0.1333 | 7.0 | 490 | 0.1935 | 0.4903 | 0.7354 | 0.6606 |
| 0.1128 | 8.0 | 560 | 0.1962 | 0.4930 | 0.7356 | 0.6823 |
| 0.1107 | 9.0 | 630 | 0.2039 | 0.5069 | 0.7467 | 0.6643 |
| 0.0909 | 10.0 | 700 | 0.1896 | 0.5215 | 0.7532 | 0.6931 |
| 0.0811 | 11.0 | 770 | 0.2059 | 0.5147 | 0.7571 | 0.6679 |
| 0.0762 | 12.0 | 840 | 0.1988 | 0.5052 | 0.7423 | 0.6715 |
| 0.0673 | 13.0 | 910 | 0.1984 | 0.5160 | 0.7416 | 0.6968 |
| 0.0482 | 14.0 | 980 | 0.2044 | 0.5050 | 0.7430 | 0.6679 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
memevis/p8 | memevis | 2025-01-29T06:07:20Z | 26 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T06:01:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
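The card leaves this section empty; a minimal sketch for a conversational llama-architecture checkpoint (assuming the tokenizer ships a chat template) would be:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "memevis/p8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```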
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso14/2b28ea53-5117-41a6-9a3f-4025a3325851 | lesso14 | 2025-01-29T06:07:01Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | 2025-01-29T06:00:51Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2b28ea53-5117-41a6-9a3f-4025a3325851
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c553ffe9c794c5bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c553ffe9c794c5bd_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso14/2b28ea53-5117-41a6-9a3f-4025a3325851
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/c553ffe9c794c5bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eb862a1e-b09a-4967-b139-a02f72ec2cc8
wandb_project: multi
wandb_run: your_name
wandb_runid: eb862a1e-b09a-4967-b139-a02f72ec2cc8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2b28ea53-5117-41a6-9a3f-4025a3325851
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 111
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9865 | 1.0 | 111 | 0.9209 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hongngo/fc533d92-e56d-4bdf-a967-d3489925157c | hongngo | 2025-01-29T06:04:33Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:19:01Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fc533d92-e56d-4bdf-a967-d3489925157c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 71d293a351cdff95_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/71d293a351cdff95_train_data.json
type:
field_input: neg
field_instruction: query
field_output: pos
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/fc533d92-e56d-4bdf-a967-d3489925157c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/71d293a351cdff95_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7681049f-e5d7-4d35-b3c4-7fac246dd4b7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7681049f-e5d7-4d35-b3c4-7fac246dd4b7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fc533d92-e56d-4bdf-a967-d3489925157c
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4860
## Model description
More information needed
## Intended uses & limitations
More information needed
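As a usage sketch (not part of the original card), the adapter can be loaded on an 8-bit quantized copy of the base model, mirroring the 8-bit setting used during training:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "unsloth/Qwen2.5-Math-7B-Instruct"
adapter_id = "hongngo/fc533d92-e56d-4bdf-a967-d3489925157c"

bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # matches load_in_8bit in the config above
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("Compute 12 * 17 and explain the steps.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```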
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.5303 | 0.0309 | 200 | 4.4860 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hongngo/bde368a6-84fe-4cc6-9200-cecd1c4d4fb7 | hongngo | 2025-01-29T06:04:33Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:31:00Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bde368a6-84fe-4cc6-9200-cecd1c4d4fb7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b08f3dca86f2cb9d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b08f3dca86f2cb9d_train_data.json
type:
field_input: input
field_instruction: task
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/bde368a6-84fe-4cc6-9200-cecd1c4d4fb7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b08f3dca86f2cb9d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2e7e6af3-0874-40bc-9012-038990c5f193
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2e7e6af3-0874-40bc-9012-038990c5f193
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# bde368a6-84fe-4cc6-9200-cecd1c4d4fb7
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 2.2733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4075 | 0.0908 | 200 | 2.2733 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nblinh63/c9f3fb6d-b99f-4782-9542-c5d1c690f2e8 | nblinh63 | 2025-01-29T06:03:16Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:18:51Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c9f3fb6d-b99f-4782-9542-c5d1c690f2e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 71d293a351cdff95_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/71d293a351cdff95_train_data.json
type:
field_input: neg
field_instruction: query
field_output: pos
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh63/c9f3fb6d-b99f-4782-9542-c5d1c690f2e8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/71d293a351cdff95_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7681049f-e5d7-4d35-b3c4-7fac246dd4b7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7681049f-e5d7-4d35-b3c4-7fac246dd4b7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c9f3fb6d-b99f-4782-9542-c5d1c690f2e8
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-7B-Instruct) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 4.4793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.5245 | 0.0309 | 200 | 4.4793 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrhunghd/dfcd6ebf-4b24-4345-b180-9cbf245e085f | mrhunghd | 2025-01-29T06:03:06Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:19:09Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dfcd6ebf-4b24-4345-b180-9cbf245e085f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 71d293a351cdff95_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/71d293a351cdff95_train_data.json
type:
field_input: neg
field_instruction: query
field_output: pos
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrhunghd/dfcd6ebf-4b24-4345-b180-9cbf245e085f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/71d293a351cdff95_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7681049f-e5d7-4d35-b3c4-7fac246dd4b7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7681049f-e5d7-4d35-b3c4-7fac246dd4b7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# dfcd6ebf-4b24-4345-b180-9cbf245e085f
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-7B-Instruct) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 4.4820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.5316 | 0.0309 | 200 | 4.4820 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
great0001/3634ab0a-f471-4adc-a9a8-4d6729aadd57 | great0001 | 2025-01-29T06:02:45Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | 2025-01-29T06:00:58Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3634ab0a-f471-4adc-a9a8-4d6729aadd57
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c553ffe9c794c5bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c553ffe9c794c5bd_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/3634ab0a-f471-4adc-a9a8-4d6729aadd57
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/c553ffe9c794c5bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eb862a1e-b09a-4967-b139-a02f72ec2cc8
wandb_project: Birthday-SN56-33-Gradients-On-Demand
wandb_run: your_name
wandb_runid: eb862a1e-b09a-4967-b139-a02f72ec2cc8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3634ab0a-f471-4adc-a9a8-4d6729aadd57
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2575 | 0.0011 | 1 | 2.4325 |
| 1.2341 | 0.0147 | 13 | 1.1619 |
| 1.0006 | 0.0293 | 26 | 1.0318 |
| 1.0134 | 0.0440 | 39 | 0.9837 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aleegis12/bda91e05-c107-40e7-92d0-e42263ea60e8 | aleegis12 | 2025-01-29T06:02:42Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.3",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T05:26:10Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bda91e05-c107-40e7-92d0-e42263ea60e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.3
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- d9f1192b8c58ed2d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d9f1192b8c58ed2d_train_data.json
type:
field_input: schema
field_instruction: query
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/bda91e05-c107-40e7-92d0-e42263ea60e8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/d9f1192b8c58ed2d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2ccfaff7-28e6-4d9e-8b3d-0f91fec12998
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2ccfaff7-28e6-4d9e-8b3d-0f91fec12998
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bda91e05-c107-40e7-92d0-e42263ea60e8
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.3](https://huggingface.co/unsloth/mistral-7b-instruct-v0.3) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 0.2293
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9992 | 0.0016 | 1 | 0.4011 |
| 0.9022 | 0.0824 | 50 | 0.2685 |
| 1.0692 | 0.1647 | 100 | 0.2487 |
| 1.0712 | 0.2471 | 150 | 0.2350 |
| 1.2513 | 0.3295 | 200 | 0.2293 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Best000/27316e0c-7de2-4ce6-8c59-a067cc6c97d4 | Best000 | 2025-01-29T06:02:00Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | 2025-01-29T06:00:26Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 27316e0c-7de2-4ce6-8c59-a067cc6c97d4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c553ffe9c794c5bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c553ffe9c794c5bd_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/27316e0c-7de2-4ce6-8c59-a067cc6c97d4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/c553ffe9c794c5bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eb862a1e-b09a-4967-b139-a02f72ec2cc8
wandb_project: Birthday-SN56-32-Gradients-On-Demand
wandb_run: your_name
wandb_runid: eb862a1e-b09a-4967-b139-a02f72ec2cc8
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 27316e0c-7de2-4ce6-8c59-a067cc6c97d4
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.0212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0011 | 1 | 2.4325 |
| 2.3025 | 0.0147 | 13 | 1.9193 |
| 1.6542 | 0.0293 | 26 | 1.1245 |
| 1.1326 | 0.0440 | 39 | 1.0212 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nghiatrannnnnn/00f413d1-cad5-40c3-8edf-2b77f3f8642e | nghiatrannnnnn | 2025-01-29T06:01:37Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:30:50Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 00f413d1-cad5-40c3-8edf-2b77f3f8642e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b08f3dca86f2cb9d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b08f3dca86f2cb9d_train_data.json
type:
field_input: input
field_instruction: task
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nghiatrannnnnn/00f413d1-cad5-40c3-8edf-2b77f3f8642e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b08f3dca86f2cb9d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2e7e6af3-0874-40bc-9012-038990c5f193
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2e7e6af3-0874-40bc-9012-038990c5f193
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 00f413d1-cad5-40c3-8edf-2b77f3f8642e
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 2.2642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3907 | 0.0908 | 200 | 2.2642 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
adammandic87/ee0bee4a-3448-4463-9143-6c55f7c4e792 | adammandic87 | 2025-01-29T06:01:03Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-14B-Chat",
"base_model:adapter:Qwen/Qwen1.5-14B-Chat",
"license:other",
"region:us"
] | null | 2025-01-29T05:45:22Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-14B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ee0bee4a-3448-4463-9143-6c55f7c4e792
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-14B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ab9f66717531643e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ab9f66717531643e_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/ee0bee4a-3448-4463-9143-6c55f7c4e792
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/ab9f66717531643e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 99226ce4-70ae-47e9-94ba-26f819deda4a
wandb_project: Birthday-SN56-13-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 99226ce4-70ae-47e9-94ba-26f819deda4a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ee0bee4a-3448-4463-9143-6c55f7c4e792
This model is a fine-tuned version of [Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.8456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3233 | 0.0002 | 1 | 2.5930 |
| 2.2476 | 0.0021 | 13 | 2.2252 |
| 2.1282 | 0.0042 | 26 | 1.9763 |
| 1.8158 | 0.0063 | 39 | 1.8456 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso05/443b6a3e-2d1f-4747-aa44-81b7fdf863bd | lesso05 | 2025-01-29T05:58:36Z | 8 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:55:28Z | ---
library_name: peft
license: mit
base_model: microsoft/phi-1_5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 443b6a3e-2d1f-4747-aa44-81b7fdf863bd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/phi-1_5
bf16: true
chat_template: llama3
datasets:
- data_files:
- 8761f2b4c663324e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8761f2b4c663324e_train_data.json
type:
field_input: Article Content
field_instruction: Question
field_output: Answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso05/443b6a3e-2d1f-4747-aa44-81b7fdf863bd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/8761f2b4c663324e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 542ef2a5-6717-4111-9cb4-d9bb7d2c34d1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 542ef2a5-6717-4111-9cb4-d9bb7d2c34d1
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 443b6a3e-2d1f-4747-aa44-81b7fdf863bd
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.1782
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3875 | 0.0020 | 1 | 1.3837 |
| 1.355 | 0.0099 | 5 | 1.3731 |
| 1.129 | 0.0198 | 10 | 1.3047 |
| 1.1948 | 0.0296 | 15 | 1.2178 |
| 1.2318 | 0.0395 | 20 | 1.1867 |
| 1.0716 | 0.0494 | 25 | 1.1782 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso02/84e66dc1-af54-4bbe-8441-a2b9a37ad826 | lesso02 | 2025-01-29T05:58:23Z | 6 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | 2025-01-29T05:55:50Z | ---
library_name: peft
license: mit
base_model: microsoft/phi-1_5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 84e66dc1-af54-4bbe-8441-a2b9a37ad826
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/phi-1_5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8761f2b4c663324e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8761f2b4c663324e_train_data.json
type:
field_input: Article Content
field_instruction: Question
field_output: Answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso02/84e66dc1-af54-4bbe-8441-a2b9a37ad826
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/8761f2b4c663324e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 542ef2a5-6717-4111-9cb4-d9bb7d2c34d1
wandb_project: multi
wandb_run: your_name
wandb_runid: 542ef2a5-6717-4111-9cb4-d9bb7d2c34d1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 84e66dc1-af54-4bbe-8441-a2b9a37ad826
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.1688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 64
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1724 | 0.9960 | 63 | 1.1689 |
| 1.9755 | 1.0119 | 64 | 1.1688 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Keltezaa/perfect_anime_p_flux | Keltezaa | 2025-01-29T05:57:59Z | 129 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-image | 2025-01-29T05:50:17Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/custom.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: cc-by-nc-4.0
---
# perfect_anime_p_flux
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/perfect_anime_p_flux/tree/main) them in the Files & versions tab.
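The card does not name the weight file, so the snippet below is only a hedged sketch of how such a FLUX LoRA is usually loaded with 🧨 diffusers: it assumes the adapter is stored as `lora.safetensors` in this repository (check the actual filename in the Files & versions tab) and that you have access to the FLUX.1-dev base pipeline.
```py
# Hedged sketch: load this LoRA on top of FLUX.1-dev with diffusers.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# NOTE: weight_name is an assumption; replace it with the file actually listed in this repo.
pipeline.load_lora_weights("Keltezaa/perfect_anime_p_flux", weight_name="lora.safetensors")
image = pipeline("your prompt").images[0]
image.save("out.png")
```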
|
Amigoo/chiad-girl | Amigoo | 2025-01-29T05:57:32Z | 285 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T05:29:14Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: chiad-girl
---
# Chiad Girl
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `chiad-girl` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Amigoo/chiad-girl', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
datlaaaaaaa/ccf7432d-0156-4996-a730-6cab7d5af581 | datlaaaaaaa | 2025-01-29T05:56:19Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-it",
"base_model:adapter:unsloth/gemma-2b-it",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:42:27Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ccf7432d-0156-4996-a730-6cab7d5af581
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e7036c1fd7b51bf0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e7036c1fd7b51bf0_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/ccf7432d-0156-4996-a730-6cab7d5af581
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e7036c1fd7b51bf0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ccf7432d-0156-4996-a730-6cab7d5af581
This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 2.4631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5878 | 0.1716 | 200 | 2.4631 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
adammandic87/5ead647f-9cc9-4ce2-9aed-f333bb1d1de2 | adammandic87 | 2025-01-29T05:55:12Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T05:52:50Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5ead647f-9cc9-4ce2-9aed-f333bb1d1de2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 226486ea217cc845_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/226486ea217cc845_train_data.json
type:
field_instruction: prompt
field_output: caption
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/5ead647f-9cc9-4ce2-9aed-f333bb1d1de2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/226486ea217cc845_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d450f3db-bde7-42c0-80c7-58bdc98ab00b
wandb_project: Birthday-SN56-34-Gradients-On-Demand
wandb_run: your_name
wandb_runid: d450f3db-bde7-42c0-80c7-58bdc98ab00b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5ead647f-9cc9-4ce2-9aed-f333bb1d1de2
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.5171
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 2.2178 |
| 2.1666 | 0.0049 | 13 | 1.8687 |
| 1.8313 | 0.0099 | 26 | 1.5912 |
| 1.5756 | 0.0148 | 39 | 1.5171 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
cunghoctienganh/85cd217a-5b84-4e5e-a18a-138fb6d27847 | cunghoctienganh | 2025-01-29T05:55:06Z | 7 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-it",
"base_model:adapter:unsloth/gemma-2b-it",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:42:44Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 85cd217a-5b84-4e5e-a18a-138fb6d27847
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e7036c1fd7b51bf0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e7036c1fd7b51bf0_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/85cd217a-5b84-4e5e-a18a-138fb6d27847
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e7036c1fd7b51bf0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 85cd217a-5b84-4e5e-a18a-138fb6d27847
This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 2.4693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5997 | 0.1716 | 200 | 2.4693 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/ChrisG19_-_Llama-2-7b-kukul-bot-v3-8bits | RichardErkhov | 2025-01-29T05:53:38Z | 6 | 0 | null | [
"safetensors",
"llama",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:49:47Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-2-7b-kukul-bot-v3 - bnb 8bits
- Model creator: https://huggingface.co/ChrisG19/
- Original model: https://huggingface.co/ChrisG19/Llama-2-7b-kukul-bot-v3/
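Since the weights in this repository are stored already quantized to 8-bit with bitsandbytes, they can usually be loaded directly with 🤗 Transformers. The sketch below is a hedged illustration, not an official snippet from the model creator: it assumes a CUDA GPU with bitsandbytes installed and that the quantization config is embedded in the checkpoint, as is typical for these conversions.
```py
# Hedged sketch: load the pre-quantized 8-bit checkpoint and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/ChrisG19_-_Llama-2-7b-kukul-bot-v3-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # picks up the stored 8-bit quantization config

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```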
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
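A minimal sketch, assuming the standard 🤗 Transformers + bitsandbytes loading path for this serialized 8-bit quantized upload (`bitsandbytes` and `accelerate` installed); the snippet is illustrative and not from the original author:

```python
# Minimal sketch: loading this serialized 8-bit bitsandbytes quantization.
# The repo ID refers to this quantized upload, not the original ChrisG19 model.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/ChrisG19_-_Llama-2-7b-kukul-bot-v3-8bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")  # picks up the stored 8-bit quantization config

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```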
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nexspear/8370c40c-6342-44de-9a08-2f18573723a3 | Nexspear | 2025-01-29T05:51:57Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-it",
"base_model:adapter:unsloth/gemma-2b-it",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T05:42:29Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8370c40c-6342-44de-9a08-2f18573723a3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e7036c1fd7b51bf0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e7036c1fd7b51bf0_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: Nexspear/8370c40c-6342-44de-9a08-2f18573723a3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/e7036c1fd7b51bf0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8370c40c-6342-44de-9a08-2f18573723a3
This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3205
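
A minimal inference sketch, assuming the standard 🤗 PEFT loading path for the LoRA adapter produced by this run (repo IDs are taken from the axolotl config above; the prompt is illustrative only):

```python
# Minimal sketch: applying the LoRA adapter from this repository to its base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/gemma-2b-it"
adapter_id = "Nexspear/8370c40c-6342-44de-9a08-2f18573723a3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Question: What is the capital of France?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```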
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0034 | 1 | 3.7973 |
| 3.2948 | 0.0309 | 9 | 3.0690 |
| 2.8413 | 0.0617 | 18 | 2.7606 |
| 2.6077 | 0.0926 | 27 | 2.5916 |
| 2.5384 | 0.1235 | 36 | 2.4937 |
| 2.4458 | 0.1544 | 45 | 2.4309 |
| 2.3587 | 0.1852 | 54 | 2.3902 |
| 2.4441 | 0.2161 | 63 | 2.3579 |
| 2.3751 | 0.2470 | 72 | 2.3372 |
| 2.3437 | 0.2779 | 81 | 2.3266 |
| 2.3477 | 0.3087 | 90 | 2.3215 |
| 2.3269 | 0.3396 | 99 | 2.3205 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhung01/10ec86e3-3cf9-45e0-87c3-f9303357dd14 | nhung01 | 2025-01-29T05:49:52Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:18:55Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 10ec86e3-3cf9-45e0-87c3-f9303357dd14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 71d293a351cdff95_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/71d293a351cdff95_train_data.json
type:
field_input: neg
field_instruction: query
field_output: pos
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/10ec86e3-3cf9-45e0-87c3-f9303357dd14
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/71d293a351cdff95_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7681049f-e5d7-4d35-b3c4-7fac246dd4b7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7681049f-e5d7-4d35-b3c4-7fac246dd4b7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 10ec86e3-3cf9-45e0-87c3-f9303357dd14
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4795
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.5207 | 0.0309 | 200 | 4.4795 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shibajustfor/d0cd8d1c-1752-4d1b-91fc-a41d95183148 | shibajustfor | 2025-01-29T05:49:17Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-it",
"base_model:adapter:unsloth/gemma-2b-it",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T05:48:00Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d0cd8d1c-1752-4d1b-91fc-a41d95183148
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e7036c1fd7b51bf0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e7036c1fd7b51bf0_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/d0cd8d1c-1752-4d1b-91fc-a41d95183148
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/e7036c1fd7b51bf0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
wandb_project: Birthday-SN56-38-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d0cd8d1c-1752-4d1b-91fc-a41d95183148
This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 3.7038 |
| 3.2702 | 0.0112 | 13 | 2.8230 |
| 2.8093 | 0.0223 | 26 | 2.6314 |
| 2.6943 | 0.0335 | 39 | 2.5365 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/Virtuoso-Lite-Q6_K-GGUF | Triangle104 | 2025-01-29T05:48:55Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:arcee-ai/Virtuoso-Lite",
"base_model:quantized:arcee-ai/Virtuoso-Lite",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T05:46:29Z | ---
base_model: arcee-ai/Virtuoso-Lite
library_name: transformers
license: other
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/Virtuoso-Lite-Q6_K-GGUF
This model was converted to GGUF format from [`arcee-ai/Virtuoso-Lite`](https://huggingface.co/arcee-ai/Virtuoso-Lite) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/arcee-ai/Virtuoso-Lite) for more details on the model.
---
Model details:

Virtuoso-Lite (10B) is our next-generation, 10-billion-parameter language model based on the Llama-3 architecture. It is distilled from Deepseek-v3 using ~1.1B tokens/logits, allowing it to achieve robust performance at a significantly reduced parameter count compared to larger models. Despite its compact size, Virtuoso-Lite excels in a variety of tasks, demonstrating advanced reasoning, code generation, and mathematical problem-solving capabilities.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Virtuoso-Lite-Q6_K-GGUF --hf-file virtuoso-lite-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Virtuoso-Lite-Q6_K-GGUF --hf-file virtuoso-lite-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Virtuoso-Lite-Q6_K-GGUF --hf-file virtuoso-lite-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Virtuoso-Lite-Q6_K-GGUF --hf-file virtuoso-lite-q6_k.gguf -c 2048
```
|
lesso11/ccf71468-2cf5-4196-9a1e-98d393ec06e0 | lesso11 | 2025-01-29T05:48:41Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-it",
"base_model:adapter:unsloth/gemma-2b-it",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T05:43:32Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ccf71468-2cf5-4196-9a1e-98d393ec06e0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e7036c1fd7b51bf0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e7036c1fd7b51bf0_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso11/ccf71468-2cf5-4196-9a1e-98d393ec06e0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/e7036c1fd7b51bf0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
wandb_project: multi
wandb_run: your_name
wandb_runid: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ccf71468-2cf5-4196-9a1e-98d393ec06e0
This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3491
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 146
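
For reference, a small sketch of how the effective batch sizes listed above follow from the per-device settings (assuming the standard 🤗 Trainer accounting for multi-GPU runs):

```python
# Effective batch-size accounting for this multi-GPU run.
per_device_train_batch_size = 2
gradient_accumulation_steps = 4
num_devices = 8

total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 64, matching the value reported above

per_device_eval_batch_size = 2
total_eval_batch_size = per_device_eval_batch_size * num_devices
print(total_eval_batch_size)  # 16, matching the value reported above
```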
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3394 | 0.9949 | 145 | 2.3490 |
| 2.9645 | 1.0017 | 146 | 2.3491 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nttx/e5b0629f-792f-4e39-8f05-8c37b30e589a | nttx | 2025-01-29T05:48:31Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-it",
"base_model:adapter:unsloth/gemma-2b-it",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T05:42:17Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e5b0629f-792f-4e39-8f05-8c37b30e589a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2b-it
bf16: auto
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- e7036c1fd7b51bf0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e7036c1fd7b51bf0_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/e5b0629f-792f-4e39-8f05-8c37b30e589a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/e7036c1fd7b51bf0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e5b0629f-792f-4e39-8f05-8c37b30e589a
This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2885
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3391 | 0.3432 | 200 | 2.2885 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thaffggg/a659a1f7-b305-46ed-af94-dd8bee2e8d4d | thaffggg | 2025-01-29T05:47:45Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:18:21Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a659a1f7-b305-46ed-af94-dd8bee2e8d4d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 71d293a351cdff95_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/71d293a351cdff95_train_data.json
type:
field_input: neg
field_instruction: query
field_output: pos
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thaffggg/a659a1f7-b305-46ed-af94-dd8bee2e8d4d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/71d293a351cdff95_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7681049f-e5d7-4d35-b3c4-7fac246dd4b7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7681049f-e5d7-4d35-b3c4-7fac246dd4b7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a659a1f7-b305-46ed-af94-dd8bee2e8d4d
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4813
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.5292 | 0.0309 | 200 | 4.4813 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Jrinky/model3 | Jrinky | 2025-01-29T05:46:55Z | 39 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:11808",
"loss:Infonce",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-01-29T05:41:49Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:11808
- loss:Infonce
base_model: BAAI/bge-m3
widget:
- source_sentence: Who are some notable individuals named Roger Mason
sentences:
- "Rav Kook's writings are extensive, and he is considered one of the most celebrated\
\ and influential rabbis of the 20th century. Some rabbis recommend that students\
\ of his begin studying his writings with Ein Ayah. References\n\nExternal links\n\
\ Ayin Ayah (full text), Hebrew Wikisource\n * Ayn Aya Classes in English\n\n\
Talmud\nAggadic Midrashim\nAbraham Isaac Kook\nHebrew-language religious books"
- 'Roger Mason may refer to:
Roger Mason (baseball) (born 1958), American baseball player
Roger Mason (geologist) (born 1941), discoverer of Ediacaran fossils
Roger Mason Jr. (born 1980), American basketball player
Roger Mason (musician), Australian keyboardist
L. Roger Mason, Jr., former assistant director of National Intelligence for Systems
and Resource Analyses'
- 'Timetabled passenger services on both lines had ceased by the end of February
1959. Shipping
The Bourne-Morton Canal or Bourne Old Eau connected the town to the sea in Roman
times. Until the mid-19th century, the present Bourne Eau was capable of carrying
commercial boat traffic from the Wash coast and Spalding. This resulted from the
investment following the Bourne Navigation Act of 1780. Passage became impossible
once the junction of the Eau and the River Glen was converted from gates to a
sluice in 1860. Media
Local news and television programmes are provided by BBC Yorkshire and Lincolnshire
and ITV Yorkshire. Television signals are received from the Belmont TV transmitter,
the Waltham TV transmitter can also be received which broadcast BBC East Midlands
and ITV Central programmes. Local radio stations are BBC Radio Lincolnshire, Greatest
Hits Radio Lincolnshire and Lincs FM. The town''s local newspapers are Bourne
Local and Stamford Mercury. Sport
Bourne Town Football Club plays football in the United Counties Football League,
whilst Bourne Cricket Club plays in the Lincolnshire ECB Premier League. These
teams play their home games at the Abbey Lawn, a recreation ground privately owned
by the Bourne United Charities. Motor sports
The racing-car marques English Racing Automobiles (ERA) and British Racing Motors
(BRM) were both founded in Bourne by Raymond Mays, an international racing driver
and designer who lived in Bourne. The former ERA and BRM workshops in Spalding
Road are adjacent to Eastgate House, the Mays'' family home in the town''s Eastgate.
Landmarks
There are currently 71 listed buildings in the parish of Bourne, the most important
being Bourne Abbey and the Parish Church of St Peter and St Paul (1138), which
is the only one scheduled Grade I. Notable people
Bourne is reputedly the birthplace of Hereward the Wake (in about 1035), although
the 12th-century source of this information, De Gestis Herwardi Saxonis, refers
only to his father as being "of Bourne" and to the father''s house and retainers
there. Robert Mannyng (1264–1340) is credited with putting the speech of the ordinary
people of his time into recognisable form. He is better known as Robert de Brunne
because of his long period of residence as a canon at Bourne Abbey. There he completed
his life''s work of popularising religious and historical material in a Middle
English dialect that was easily understood at that time. William Cecil (1520–1598)
became the first Lord Burghley after serving Queen Elizabeth I. He was born at
a house in the centre of Bourne that is now the Burghley Arms. Dr William Dodd
(1729–1777), was an Anglican clergyman, man of letters and forger. He was prosecuted,
sentenced to death and publicly hanged at Tyburn in 1777. Charles Frederick Worth
(1825–1895), son of a solicitor, lived at Wake House in North Street. He moved
to Paris and became a renowned designer of women''s fashion and the founder of
haute couture. The French government awarded him the Légion d''honneur. Sir George
White (1840-1912), MP for North West Norfolk, a seat he held for twelve years
until he died in 1912. He was knighted for public service in 1907.'
- source_sentence: What football team does the Japanese player play for
sentences:
- After the meeting, Box summons up the courage to ask Lorraine (Sue Holderness)
on the date. The act ends with Robert's coat getting on fire because of the cigarette,
with "Smoke Gets in Your Eyes" on the background.
- is a Japanese football player. He plays for Honda Lock.
- As followers on Twitter and FB probably well know I’ve been up to more than a
spot of preserving of late. It’s my latest addiction, as if I need any more of
those. My Dad’s the King of Jams, Chutneys and Pickles and I have a feeling he’s
passed his enthusiastic genes for it on to me!. Which is great, but time consuming.
Many an evening has been spent peeling, dicing, de-stoning, chopping, stirring,
testing, sterilising and jarring. And then obviously the tasting. And all the
crackers, bread and cheese to go with it!. I rarely get to bed much before midnight
on my chutneying nights. And to be honest my cupboards are now fit to bursting
with so many goodies, but at least I have christmas presents totally nailed this
year. My Dad’s been making Hedgerow Chutney for years, and it happens to be everyone’s
favourite of all his chutney recipes (and he makes quite a number!). Each autumn
he takes a long walk around the field at the back of his house in Herefordshire
picking all the freebie hedgerow goodies he can find and transforms them into
this marvellously fruitful chutney. There’s always plenty of damsons, bullaces,
sloes, blackberries and a few elderberries. Plus pears or apples for smoothing
and bulking out. We don’t have quite the same fruit in our hedgerows in France
but I thought I’d make my own French version picking the fruit from our garden
and nearby tracks and lanes, managing to find plenty of figs, greengages, plums,
pears, blackberries and sloes just before the season finished a couple of weeks
ago. We’ve elderberries here too but they were way past their best by the time
I got into full chutney mode. There’s no escaping how time consuming and labourious
chutney making can be, especially when using so much fruit that needs hefty preparatory
work. I realise now why it’s a hobby generally taken up by retired folk. But the
results are so worth it, if you can spare it set aside a whole evening in the
kitchen and wile away the hours getting lost in music or the radio or even catching
up on a few programmes on You Tube.
- source_sentence: What is the purpose of Business Intelligence
sentences:
- 'College career
Proctor played as a defensive lineman for the North Carolina Central Eagles from
2008 to 2012. He was redshirted in 2008.'
- The purpose of Business Intelligence is the transformation of raw data into meaningful
information which can be used to make better business decisions. Business Intelligence
grew out of Decision Support systems and is all about collecting data from disparate
sources, conforming and integrating that data into central repositories which
support reporting and analysis activities.
- You have to show the police courtesy, they are only human. No one even WANTS for
the judicial system to work. They are too lazy.
- source_sentence: How does the speaker feel about Battle Symphony
sentences:
- It's a symptomless prearranged fact that when you afford your babe a infant work
you motivate the status system, bolster the infant's stressed system, eat up colic,
and harden your in bondage next to your kid. Now, how satisfying is that
- Piquet passed Laffite to become the race's fifth different leader. Senna reached
second just 1.7 seconds behind Piquet by passing Laffite, who then pitted for
tires. With the two of them in front on their own, and Piquet leading by up to
3.5 seconds, Senna was content for the time being to follow his countryman. After
eight laps in the lead, Piquet pitted for tires. Senna regained first place and
then also pitted. Piquet's 18.4 second stop was even slower than teammate Mansell's
had been, but when he returned to the track, the two-time champion got the bit
between his teeth. Running second behind Senna, Piquet set the fastest lap of
the race on lap 41, but with a pit stop ten seconds quicker than Piquet's, Senna
was able to retain the lead. On the very next lap, the 42nd, Piquet pushed a bit
too much, and crashed hard at the left-hand corner before the last chicane. He
ended up in the tire barrier, unhurt, but with his car in a very precarious position.
The crane, present for just that reason, was unable to move the car. Arnoux, now
16.6 seconds behind in second, took a second a lap off Senna's lead for five laps
while a yellow was displayed in the corner where Piquet had crashed. As soon as
the yellow flag was gone, Arnoux went wide and hit Piquet's abandoned Williams!
The Frenchman decided that his car was not damaged, and attempted to rejoin the
field, but did so right in front of Thierry Boutsen's Arrows-BMW, sidelining both
cars. Very uncharacteristic of a street race, these three – Piquet, Arnoux and
Boutsen – were the only drivers all afternoon to retire due to accidents.
- Like Battle Symphony, it's not bad. It's just extremely boring.
- source_sentence: When did he migrate to New South Wales
sentences:
- 'predict ministry in a sales and special floor being Job to the vulnerability
diver. team: This research will work last for either, often, and also obtaining
spreadsheets in the funny wedding power of the usability time. Physical Demands:
The exclusive transitions was temporarily need perfect of those that must share
developed by an position to badly do the animal objectives of this source. necessary
terabytes may pay acted to increase streets with hearts to address the professional
items. solely, the job will distract, Coordinate and be inbox security fun interdisciplinary
operations that might read in back of 20 updates The service will properly be
to like the detection throughout the use: logging, including, killing, teaching,
leading, preparing, operating, and using.'
- "Shizuka Shirakawa, Scholar of Chinese-language literature. Horin Fukuoji, Nihonga\
\ painter. 2005\n Mitsuko Mori. Actress. Makoto Saitō (1921–2008). Political scientist,\
\ specializing in American diplomatic and political history. Ryuzan Aoki, Ceramic\
\ artist. Toshio Sawada, Civil engineer. Shigeaki Hinohara, Doctor. 2006\n Yoshiaki\
\ Arata. A pioneer of nuclear fusion research. Jakuchō Setouchi. Writer/Buddhist\
\ nun. Hidekazu Yoshida. Music critic. Chusaku Oyama, Nihonga painter. Miyohei\
\ Shinohara, Economist. 2007\n Akira Mikazuki. Former justice minister and professor\
\ emeritus. Shinya Nakamura. Sculptor. Kōji Nakanishi. Organic chemist. Tokindo\
\ Okada, Developmental biologist. Shigeyama Sensaku, Kyogen performer. 2008\n\
\ Hironoshin Furuhashi (1928–2009). Sportsman and sports bureaucrat. Kiyoshi Itō.\
\ A mathematician whose work is now called Itō calculus. Donald Keene."
- He attended Derby Grammar School and Beaufort House in London, and migrated to
New South Wales in 1883. He settled in Newcastle, where he worked as a shipping
agent, eventually partnering with his brothers in a firm. On 6 May 1893 he married
Gertrude Mary Saddington, with whom he had five children.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Jrinky/model3")
# Run inference
sentences = [
'When did he migrate to New South Wales',
'He attended Derby Grammar School and Beaufort House in London, and migrated to New South Wales in 1883. He settled in Newcastle, where he worked as a shipping agent, eventually partnering with his brothers in a firm. On 6 May 1893 he married Gertrude Mary Saddington, with whom he had five children.',
'Shizuka Shirakawa, Scholar of Chinese-language literature. Horin Fukuoji, Nihonga painter. 2005\n Mitsuko Mori. Actress. Makoto Saitō (1921–2008). Political scientist, specializing in American diplomatic and political history. Ryuzan Aoki, Ceramic artist. Toshio Sawada, Civil engineer. Shigeaki Hinohara, Doctor. 2006\n Yoshiaki Arata. A pioneer of nuclear fusion research. Jakuchō Setouchi. Writer/Buddhist nun. Hidekazu Yoshida. Music critic. Chusaku Oyama, Nihonga painter. Miyohei Shinohara, Economist. 2007\n Akira Mikazuki. Former justice minister and professor emeritus. Shinya Nakamura. Sculptor. Kōji Nakanishi. Organic chemist. Tokindo Okada, Developmental biologist. Shigeyama Sensaku, Kyogen performer. 2008\n Hironoshin Furuhashi (1928–2009). Sportsman and sports bureaucrat. Kiyoshi Itō. A mathematician whose work is now called Itō calculus. Donald Keene.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 11,808 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.85 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 186.46 tokens</li><li>max: 1024 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What type of tournament structure was used in this freestyle wrestling competition</code> | <code>This freestyle wrestling competition consisted of a single-elimination tournament, with a repechage used to determine the winners of two bronze medals. Results<br>Legend<br>F — Won by fall<br><br>Final<br><br>Top half<br><br>Bottom half<br><br>Repechage<br><br>References<br>Official website<br><br>Women's freestyle 58 kg<br>World</code> |
| <code>What was the status of Josip Broz Tito under the 1974 Constitution of Yugoslavia regarding his presidency</code> | <code>1 Wednesday, 22 April 1998. 2 (8.30 a.m.). 3 JUDGE CASSESE: Good morning. May I ask the<br>4 Registrar to call out the case number, please. 5 THE REGISTRAR: Case number IT-95-13a-T,<br>6 Prosecutor versus Slavko Dokmanovic. 7 MR. NIEMANN: My name is Niemann. I appear<br>8 with my colleagues, Mr. Williamson, Mr. Waespi and<br>9 Mr. Vos. 10 MR. FILA: My name is Mr. Toma Fila and<br>11 I appear with Ms. Lopicic and Mr. Petrovic in Defence of<br>12 my client, Mr. Slavko Dokmanovic. 13 JUDGE CASSESE: Mr. Dokmanovic, can you<br>14 follow me? Before we call the witness, may I ask you<br>15 whether you agree to this note from the Registrar about<br>16 the two documents which we discussed yesterday -- you<br>17 have probably received the English translation of the<br>18 bibliography of our witness, plus the missing pages of<br>19 the other document, so I think it is agreed that they<br>20 can be admitted into evidence. 21 MR. NIEMANN: Yes. 22 JUDGE CASSESE: Shall we proceed with the<br>24 MR. FILA: Your Honour, before we continue<br>25 wi...</code> |
| <code>How quickly can you get loan approval and funds transferred with Crawfort</code> | <code>Then click on the submit button, and it’s done. Make your dream come true with Crawfort<br>When you all submit the loan form, then the agency takes a few hours to process and for approval of the loan. Not only that, you can get your loan amount in your account within a day after getting approval. Many money lenders all take more time in processing things and to credit the amount as well. So, for all that, a customer suffers more as they can’t get the money immediately. But here all these things are not done, and the staff here always make sure to provide you best and fast services. For all these things, you can get the best loan services from here without any doubt.</code> |
* Loss: <code>selfloss.Infonce</code> with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
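
A minimal sketch of what an InfoNCE-style loss with these parameters computes, assuming the standard in-batch-negatives formulation from the cited Henderson et al. (2017) paper (the actual `selfloss.Infonce` implementation is not shown in this card):

```python
# Sketch of an InfoNCE / in-batch-negatives objective with scale=20.0 and cosine similarity.
import torch
import torch.nn.functional as F

def infonce_loss(anchor_emb: torch.Tensor, positive_emb: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    # Cosine similarity between every anchor and every positive in the batch.
    anchor_emb = F.normalize(anchor_emb, dim=-1)
    positive_emb = F.normalize(positive_emb, dim=-1)
    scores = scale * anchor_emb @ positive_emb.T  # shape: (batch, batch)
    # The matching positive sits on the diagonal; the other positives act as in-batch negatives.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

# Example with random 1024-dimensional embeddings (the model's output dimensionality):
loss = infonce_loss(torch.randn(8, 1024), torch.randn(8, 1024))
print(loss)
```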
### Evaluation Dataset
#### Unnamed Dataset
* Size: 1,476 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.61 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 171.81 tokens</li><li>max: 1024 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What is Hector Guimard best known for</code> | <code>Hector Guimard (, 10 March 1867 – 20 May 1942) was a French architect and designer, and a prominent figure of the Art Nouveau style. He achieved early fame with his design for the Castel Beranger, the first Art Nouveau apartment building in Paris, which was selected in an 1899 competition as one of the best new building facades in the city. He is best known for the glass and iron edicules or canopies, with ornamental Art Nouveau curves, which he designed to cover the entrances of the first stations of the Paris Metro. Between 1890 and 1930, Guimard designed and built some fifty buildings, in addition to one hundred and forty-one subway entrances for Paris Metro, as well as numerous pieces of furniture and other decorative works. However, in the 1910s Art Nouveau went out of fashion and by the 1960s most of his works had been demolished, and only two of his original Metro edicules were still in place. Guimard's critical reputation revived in the 1960s, in part due to subsequent acquisit...</code> |
| <code>What does Mark Kantrowitz say about the inclusion of loans in financial aid packages</code> | <code>"They don't always understand that part of the financial aid package includes loans," he says. But loans "don't really reduce your costs," explains Mark Kantrowitz, founder of the financial aid website FinAid.org and publisher of Edvisors Network. "They simply spread them out over time. ... A loan is a loan.</code> |
| <code>How can Ayurveda support women's health during menopause</code> | <code>Especially as we journey towards menopause, Ayurveda is there to support us with its millenary wisdom. These are some easy routines to incorporate for the daily care of the vulva and vagina, our most delicate flower. Sesame oil: our best allied against dryness, it cannot be missing in our diet.</code> |
* Loss: <code>selfloss.Infonce</code> with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.2033 | 100 | 0.2694 | 0.0690 |
| 0.4065 | 200 | 0.0822 | 0.0528 |
| 0.6098 | 300 | 0.0689 | 0.0497 |
| 0.8130 | 400 | 0.0644 | 0.0469 |
| 1.0163 | 500 | 0.0643 | 0.0443 |
| 1.2195 | 600 | 0.0378 | 0.0473 |
| 1.4228 | 700 | 0.04 | 0.0479 |
| 1.6260 | 800 | 0.0358 | 0.0461 |
| 1.8293 | 900 | 0.0332 | 0.0507 |
| 2.0325 | 1000 | 0.0283 | 0.0538 |
### Framework Versions
- Python: 3.12.3
- Sentence Transformers: 3.4.0
- Transformers: 4.42.4
- PyTorch: 2.2.0+cu121
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### Infonce
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
shibajustfor/dd599f20-7887-4c85-baa0-4520aa7f4d75 | shibajustfor | 2025-01-29T05:44:30Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-it",
"base_model:adapter:unsloth/gemma-2b-it",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T05:43:16Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dd599f20-7887-4c85-baa0-4520aa7f4d75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e7036c1fd7b51bf0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e7036c1fd7b51bf0_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/dd599f20-7887-4c85-baa0-4520aa7f4d75
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/e7036c1fd7b51bf0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
wandb_project: Birthday-SN56-39-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dd599f20-7887-4c85-baa0-4520aa7f4d75
This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6047
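The card does not include a usage snippet; a minimal inference sketch — assuming the LoRA adapter in this repository is applied on top of `unsloth/gemma-2b-it` with PEFT — is:
```python
# Sketch: load the base model, attach the LoRA adapter from this repo, and generate.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2b-it", device_map="auto")
model = PeftModel.from_pretrained(base, "shibajustfor/dd599f20-7887-4c85-baa0-4520aa7f4d75")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2b-it")

inputs = tokenizer("What is the capital of France?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```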
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 3.8577 |
| 3.4877 | 0.0112 | 13 | 2.8871 |
| 2.8716 | 0.0223 | 26 | 2.6819 |
| 2.7414 | 0.0335 | 39 | 2.6047 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
daniel40/6685113d-f6e8-4d1f-a3f5-eb83b5eba3f3 | daniel40 | 2025-01-29T05:43:58Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-it",
"base_model:adapter:unsloth/gemma-2b-it",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T05:42:39Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6685113d-f6e8-4d1f-a3f5-eb83b5eba3f3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e7036c1fd7b51bf0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e7036c1fd7b51bf0_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/6685113d-f6e8-4d1f-a3f5-eb83b5eba3f3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/e7036c1fd7b51bf0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
wandb_project: Birthday-SN56-28-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6685113d-f6e8-4d1f-a3f5-eb83b5eba3f3
This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6134
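For standalone deployment, the adapter can also be folded into the base weights; a minimal sketch using PEFT's `merge_and_unload` (an illustration only, with a hypothetical output directory) is:
```python
# Sketch: merge the LoRA adapter into the base model and save a standalone checkpoint.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("daniel40/6685113d-f6e8-4d1f-a3f5-eb83b5eba3f3")
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights

tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2b-it")
merged.save_pretrained("gemma-2b-it-merged")      # hypothetical output directory
tokenizer.save_pretrained("gemma-2b-it-merged")
```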
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 3.8577 |
| 3.5768 | 0.0112 | 13 | 2.9808 |
| 2.9599 | 0.0223 | 26 | 2.7027 |
| 2.7612 | 0.0335 | 39 | 2.6134 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhung03/dbcc51f5-eb2c-430c-a255-e716a3c3d9ab | nhung03 | 2025-01-29T05:43:07Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:30:56Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dbcc51f5-eb2c-430c-a255-e716a3c3d9ab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b08f3dca86f2cb9d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b08f3dca86f2cb9d_train_data.json
type:
field_input: input
field_instruction: task
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/dbcc51f5-eb2c-430c-a255-e716a3c3d9ab
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b08f3dca86f2cb9d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2e7e6af3-0874-40bc-9012-038990c5f193
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2e7e6af3-0874-40bc-9012-038990c5f193
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# dbcc51f5-eb2c-430c-a255-e716a3c3d9ab
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2654
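The config above loads the base model in 8-bit during training; a minimal inference sketch that mirrors that setup with bitsandbytes — quantization is an assumption here, not a requirement for inference — is:
```python
# Sketch: load the Qwen2.5-1.5B base in 8-bit (as in the training config) and attach the adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_8bit=True)
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-1.5B",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "nhung03/dbcc51f5-eb2c-430c-a255-e716a3c3d9ab")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-1.5B")

prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```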
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3583 | 0.0908 | 200 | 2.2654 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Yukikai-Gemma-v0.3-GGUF | mradermacher | 2025-01-29T05:39:18Z | 261 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"sft",
"en",
"base_model:N8Programs/Yukikai-Gemma-v0.3",
"base_model:quantized:N8Programs/Yukikai-Gemma-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-01-29T04:01:36Z | ---
base_model: N8Programs/Yukikai-Gemma-v0.3
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/N8Programs/Yukikai-Gemma-v0.3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
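For a quick start, any of the quants in the table below can be streamed straight from this repo with a recent llama.cpp build; a minimal sketch (the Q4_K_M file name is taken from the table) is:
```bash
# Sketch: run the Q4_K_M quant directly from this repo with llama.cpp.
llama-cli --hf-repo mradermacher/Yukikai-Gemma-v0.3-GGUF \
  --hf-file Yukikai-Gemma-v0.3.Q4_K_M.gguf \
  -p "Write a short story about a snowy mountain village."
```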
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yukikai-Gemma-v0.3-GGUF/resolve/main/Yukikai-Gemma-v0.3.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yukikai-Gemma-v0.3-GGUF/resolve/main/Yukikai-Gemma-v0.3.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yukikai-Gemma-v0.3-GGUF/resolve/main/Yukikai-Gemma-v0.3.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yukikai-Gemma-v0.3-GGUF/resolve/main/Yukikai-Gemma-v0.3.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Yukikai-Gemma-v0.3-GGUF/resolve/main/Yukikai-Gemma-v0.3.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Yukikai-Gemma-v0.3-GGUF/resolve/main/Yukikai-Gemma-v0.3.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yukikai-Gemma-v0.3-GGUF/resolve/main/Yukikai-Gemma-v0.3.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yukikai-Gemma-v0.3-GGUF/resolve/main/Yukikai-Gemma-v0.3.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Yukikai-Gemma-v0.3-GGUF/resolve/main/Yukikai-Gemma-v0.3.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Yukikai-Gemma-v0.3-GGUF/resolve/main/Yukikai-Gemma-v0.3.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Yukikai-Gemma-v0.3-GGUF/resolve/main/Yukikai-Gemma-v0.3.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Yukikai-Gemma-v0.3-GGUF/resolve/main/Yukikai-Gemma-v0.3.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV35 | RobertoSonic | 2025-01-29T05:38:51Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swinv2-tiny-patch4-window8-256",
"base_model:finetune:microsoft/swinv2-tiny-patch4-window8-256",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-01-29T05:10:18Z | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window8-256
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swinv2-tiny-patch4-window8-256-dmae-humeda-DAV35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-dmae-humeda-DAV35
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2578
- Accuracy: 0.7
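A minimal inference sketch — not part of the original card — using the image-classification pipeline:
```python
# Sketch: classify an image with the fine-tuned SwinV2 checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV35",
)
# "example.jpg" is a placeholder path; any local image file or image URL works.
print(classifier("example.jpg", top_k=3))
```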
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 2.85 | 1.0 | 36 | 1.4133 | 0.5333 |
| 1.9294 | 2.0 | 72 | 0.9294 | 0.6333 |
| 1.1818 | 3.0 | 108 | 0.7700 | 0.65 |
| 0.7534 | 4.0 | 144 | 0.7531 | 0.7167 |
| 0.4285 | 5.0 | 180 | 0.9580 | 0.7 |
| 0.08 | 6.0 | 216 | 1.1785 | 0.75 |
| 0.0891 | 7.0 | 252 | 1.4686 | 0.7333 |
| 0.0602 | 8.0 | 288 | 1.7816 | 0.7 |
| 0.0284 | 9.0 | 324 | 1.5790 | 0.7667 |
| 0.0513 | 10.0 | 360 | 1.8933 | 0.7 |
| 0.0335 | 11.0 | 396 | 2.1433 | 0.65 |
| 0.025 | 12.0 | 432 | 2.3483 | 0.6667 |
| 0.0246 | 13.0 | 468 | 2.6426 | 0.6667 |
| 0.0306 | 14.0 | 504 | 3.0153 | 0.65 |
| 0.016 | 15.0 | 540 | 3.1259 | 0.6833 |
| 0.006 | 16.0 | 576 | 2.7612 | 0.7167 |
| 0.0234 | 17.0 | 612 | 2.5334 | 0.7167 |
| 0.0025 | 18.0 | 648 | 2.1768 | 0.7667 |
| 0.0001 | 19.0 | 684 | 2.6585 | 0.7167 |
| 0.0007 | 20.0 | 720 | 2.3282 | 0.7167 |
| 0.0003 | 21.0 | 756 | 2.6975 | 0.7333 |
| 0.0003 | 22.0 | 792 | 2.6186 | 0.7 |
| 0.0006 | 23.0 | 828 | 2.9600 | 0.7167 |
| 0.0008 | 24.0 | 864 | 2.9623 | 0.7333 |
| 0.0002 | 25.0 | 900 | 2.8632 | 0.7167 |
| 0.0143 | 26.0 | 936 | 2.8460 | 0.7167 |
| 0.0 | 27.0 | 972 | 2.9372 | 0.7167 |
| 0.0002 | 28.0 | 1008 | 2.8056 | 0.75 |
| 0.0001 | 29.0 | 1044 | 3.0591 | 0.7167 |
| 0.0001 | 30.0 | 1080 | 3.3295 | 0.6833 |
| 0.0 | 31.0 | 1116 | 3.2851 | 0.6833 |
| 0.0001 | 32.0 | 1152 | 3.4065 | 0.7 |
| 0.0 | 33.0 | 1188 | 3.3669 | 0.7 |
| 0.0 | 34.0 | 1224 | 3.3185 | 0.7167 |
| 0.0006 | 35.0 | 1260 | 3.2563 | 0.7 |
| 0.0004 | 36.0 | 1296 | 3.2831 | 0.7 |
| 0.0001 | 37.0 | 1332 | 3.2594 | 0.7 |
| 0.0 | 38.0 | 1368 | 3.2576 | 0.7 |
| 0.0 | 38.9014 | 1400 | 3.2578 | 0.7 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
asr-africa/wav2vec2-xls-r-300m-CV_Fleurs_AMMI_ALFFA-sw-1hr-v1 | asr-africa | 2025-01-29T05:35:51Z | 36 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-01-18T03:53:13Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-CV_Fleurs_AMMI_ALFFA-sw-1hr-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-CV_Fleurs_AMMI_ALFFA-sw-1hr-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8810
- Wer: 0.4988
- Cer: 0.1672
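A minimal transcription sketch — not part of the original card — using the automatic-speech-recognition pipeline:
```python
# Sketch: transcribe a Swahili audio clip with the fine-tuned wav2vec2 checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="asr-africa/wav2vec2-xls-r-300m-CV_Fleurs_AMMI_ALFFA-sw-1hr-v1",
)
# "sample.wav" is a placeholder; XLS-R checkpoints expect 16 kHz mono audio.
print(asr("sample.wav")["text"])
```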
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 17.0 | 1.0 | 36 | 13.0736 | 1.0 | 1.0 |
| 9.9036 | 2.0 | 72 | 4.8880 | 1.0 | 1.0 |
| 4.684 | 3.0 | 108 | 3.5599 | 1.0 | 1.0 |
| 3.5015 | 4.0 | 144 | 3.1648 | 1.0 | 1.0 |
| 3.201 | 5.0 | 180 | 3.0654 | 1.0 | 1.0 |
| 3.1147 | 6.0 | 216 | 3.1915 | 1.0 | 1.0 |
| 3.0914 | 7.0 | 252 | 2.9619 | 1.0 | 1.0 |
| 3.01 | 8.0 | 288 | 3.0046 | 1.0 | 1.0 |
| 2.9785 | 9.0 | 324 | 2.9234 | 1.0 | 1.0 |
| 2.932 | 10.0 | 360 | 2.9227 | 1.0 | 1.0 |
| 2.8853 | 11.0 | 396 | 2.8842 | 1.0 | 1.0 |
| 2.7422 | 12.0 | 432 | 2.4736 | 0.9999 | 0.9446 |
| 2.0966 | 13.0 | 468 | 1.5906 | 0.9995 | 0.4546 |
| 1.449 | 14.0 | 504 | 1.3529 | 0.8594 | 0.3155 |
| 1.2739 | 15.0 | 540 | 1.2643 | 0.7826 | 0.2549 |
| 0.968 | 16.0 | 576 | 1.1934 | 0.7199 | 0.2297 |
| 0.8544 | 17.0 | 612 | 1.1714 | 0.6661 | 0.2161 |
| 0.7248 | 18.0 | 648 | 1.1922 | 0.6587 | 0.2126 |
| 0.6452 | 19.0 | 684 | 1.3711 | 0.6823 | 0.2196 |
| 0.6399 | 20.0 | 720 | 1.2777 | 0.6351 | 0.2120 |
| 0.5218 | 21.0 | 756 | 1.3353 | 0.6113 | 0.2011 |
| 0.5141 | 22.0 | 792 | 1.3149 | 0.6116 | 0.1995 |
| 0.4709 | 23.0 | 828 | 1.2793 | 0.6262 | 0.2050 |
| 0.4386 | 24.0 | 864 | 1.3153 | 0.6057 | 0.1971 |
| 0.3992 | 25.0 | 900 | 1.3247 | 0.6032 | 0.1970 |
| 0.3569 | 26.0 | 936 | 1.4275 | 0.5992 | 0.1980 |
| 0.3628 | 27.0 | 972 | 1.3171 | 0.5915 | 0.1924 |
| 0.3241 | 28.0 | 1008 | 1.3894 | 0.5791 | 0.1904 |
| 0.3993 | 29.0 | 1044 | 1.4247 | 0.5856 | 0.1942 |
| 0.2921 | 30.0 | 1080 | 1.4364 | 0.5721 | 0.1889 |
| 0.2929 | 31.0 | 1116 | 1.4470 | 0.5646 | 0.1875 |
| 0.2705 | 32.0 | 1152 | 1.3813 | 0.5596 | 0.1865 |
| 0.2675 | 33.0 | 1188 | 1.5556 | 0.5587 | 0.1857 |
| 0.2917 | 34.0 | 1224 | 1.4195 | 0.5680 | 0.1886 |
| 0.2571 | 35.0 | 1260 | 1.5744 | 0.5683 | 0.1871 |
| 0.2378 | 36.0 | 1296 | 1.5611 | 0.5588 | 0.1850 |
| 0.2181 | 37.0 | 1332 | 1.6092 | 0.5618 | 0.1869 |
| 0.2197 | 38.0 | 1368 | 1.5259 | 0.5727 | 0.1890 |
| 0.2022 | 39.0 | 1404 | 1.5426 | 0.5594 | 0.1862 |
| 0.1899 | 40.0 | 1440 | 1.5704 | 0.5645 | 0.1841 |
| 0.1995 | 41.0 | 1476 | 1.5666 | 0.5660 | 0.1834 |
| 0.1972 | 42.0 | 1512 | 1.6442 | 0.5521 | 0.1843 |
| 0.1749 | 43.0 | 1548 | 1.6143 | 0.5566 | 0.1836 |
| 0.1569 | 44.0 | 1584 | 1.6420 | 0.5598 | 0.1844 |
| 0.1659 | 45.0 | 1620 | 1.7003 | 0.5542 | 0.1845 |
| 0.1969 | 46.0 | 1656 | 1.4453 | 0.5482 | 0.1813 |
| 0.1609 | 47.0 | 1692 | 1.6009 | 0.5539 | 0.1838 |
| 0.1613 | 48.0 | 1728 | 1.6792 | 0.5512 | 0.1843 |
| 0.1498 | 49.0 | 1764 | 1.5508 | 0.5443 | 0.1827 |
| 0.1437 | 50.0 | 1800 | 1.7122 | 0.5340 | 0.1794 |
| 0.1674 | 51.0 | 1836 | 1.6303 | 0.5330 | 0.1787 |
| 0.1368 | 52.0 | 1872 | 1.7204 | 0.5476 | 0.1819 |
| 0.1247 | 53.0 | 1908 | 1.7727 | 0.5435 | 0.1825 |
| 0.1321 | 54.0 | 1944 | 1.7033 | 0.5361 | 0.1788 |
| 0.116 | 55.0 | 1980 | 1.6836 | 0.5356 | 0.1789 |
| 0.1095 | 56.0 | 2016 | 1.7173 | 0.5367 | 0.1784 |
| 0.1236 | 57.0 | 2052 | 1.8125 | 0.5406 | 0.1791 |
| 0.1123 | 58.0 | 2088 | 1.7084 | 0.5340 | 0.1783 |
| 0.1103 | 59.0 | 2124 | 1.6993 | 0.5348 | 0.1786 |
| 0.105 | 60.0 | 2160 | 1.7396 | 0.5214 | 0.1743 |
| 0.105 | 61.0 | 2196 | 1.7277 | 0.5288 | 0.1762 |
| 0.1045 | 62.0 | 2232 | 1.7564 | 0.5295 | 0.1772 |
| 0.099 | 63.0 | 2268 | 1.7446 | 0.5183 | 0.1731 |
| 0.091 | 64.0 | 2304 | 1.8399 | 0.5235 | 0.1763 |
| 0.1165 | 65.0 | 2340 | 1.7453 | 0.5284 | 0.1770 |
| 0.0933 | 66.0 | 2376 | 1.7183 | 0.5201 | 0.1730 |
| 0.0945 | 67.0 | 2412 | 1.7575 | 0.5244 | 0.1751 |
| 0.0943 | 68.0 | 2448 | 1.8292 | 0.5179 | 0.1731 |
| 0.0804 | 69.0 | 2484 | 1.7515 | 0.5130 | 0.1715 |
| 0.0936 | 70.0 | 2520 | 1.7478 | 0.5197 | 0.1736 |
| 0.0847 | 71.0 | 2556 | 1.7778 | 0.5212 | 0.1750 |
| 0.0758 | 72.0 | 2592 | 1.8291 | 0.5167 | 0.1728 |
| 0.0787 | 73.0 | 2628 | 1.8027 | 0.5117 | 0.1712 |
| 0.0839 | 74.0 | 2664 | 1.7828 | 0.5160 | 0.1726 |
| 0.0691 | 75.0 | 2700 | 1.7989 | 0.5102 | 0.1714 |
| 0.0752 | 76.0 | 2736 | 1.8084 | 0.5112 | 0.1708 |
| 0.0706 | 77.0 | 2772 | 1.8100 | 0.5121 | 0.1709 |
| 0.0778 | 78.0 | 2808 | 1.7763 | 0.5085 | 0.1700 |
| 0.0631 | 79.0 | 2844 | 1.8313 | 0.5091 | 0.1696 |
| 0.0729 | 80.0 | 2880 | 1.8528 | 0.5055 | 0.1699 |
| 0.0656 | 81.0 | 2916 | 1.8918 | 0.5105 | 0.1711 |
| 0.078 | 82.0 | 2952 | 1.8473 | 0.5076 | 0.1718 |
| 0.0792 | 83.0 | 2988 | 1.7290 | 0.5054 | 0.1693 |
| 0.0649 | 84.0 | 3024 | 1.8294 | 0.5093 | 0.1695 |
| 0.0647 | 85.0 | 3060 | 1.8810 | 0.5023 | 0.1685 |
| 0.0656 | 86.0 | 3096 | 1.7913 | 0.5043 | 0.1683 |
| 0.0566 | 87.0 | 3132 | 1.8506 | 0.5049 | 0.1684 |
| 0.0619 | 88.0 | 3168 | 1.8519 | 0.5043 | 0.1677 |
| 0.0718 | 89.0 | 3204 | 1.8385 | 0.4996 | 0.1667 |
| 0.0562 | 90.0 | 3240 | 1.8502 | 0.5030 | 0.1675 |
| 0.0593 | 91.0 | 3276 | 1.8384 | 0.5038 | 0.1675 |
| 0.0632 | 92.0 | 3312 | 1.8463 | 0.5026 | 0.1679 |
| 0.0545 | 93.0 | 3348 | 1.8528 | 0.5009 | 0.1680 |
| 0.0566 | 94.0 | 3384 | 1.8471 | 0.4995 | 0.1675 |
| 0.0593 | 95.0 | 3420 | 1.8420 | 0.4989 | 0.1673 |
| 0.0578 | 96.0 | 3456 | 1.8687 | 0.4982 | 0.1670 |
| 0.0542 | 97.0 | 3492 | 1.8701 | 0.4988 | 0.1672 |
| 0.0602 | 98.0 | 3528 | 1.8767 | 0.4991 | 0.1672 |
| 0.0561 | 99.0 | 3564 | 1.8789 | 0.4982 | 0.1670 |
| 0.06 | 100.0 | 3600 | 1.8810 | 0.4988 | 0.1672 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
error577/11d428e6-e74a-4846-a189-a0a3e2acee71 | error577 | 2025-01-29T05:34:27Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060",
"base_model:adapter:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:46:48Z | ---
library_name: peft
base_model: The-matt/llama2_ko-7b_distinctive-snowflake-182_1060
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 11d428e6-e74a-4846-a189-a0a3e2acee71
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: The-matt/llama2_ko-7b_distinctive-snowflake-182_1060
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a307f33571a64585_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a307f33571a64585_train_data.json
type:
field_input: original_caption
field_instruction: premise_en
field_output: hypothesis_en
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: error577/11d428e6-e74a-4846-a189-a0a3e2acee71
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 1
mlflow_experiment_name: /tmp/a307f33571a64585_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 4
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 256
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.02
wandb_entity: null
wandb_mode: online
wandb_name: 520b482a-8596-4661-a960-ed5a8af7690b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 520b482a-8596-4661-a960-ed5a8af7690b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 11d428e6-e74a-4846-a189-a0a3e2acee71
This model is a fine-tuned version of [The-matt/llama2_ko-7b_distinctive-snowflake-182_1060](https://huggingface.co/The-matt/llama2_ko-7b_distinctive-snowflake-182_1060) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9853
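A minimal sketch of loading this adapter through the Transformers PEFT integration (assumes a recent `transformers` release with `peft` installed):
```python
# Sketch: Transformers can load a PEFT adapter repo directly when `peft` is installed;
# the base model is resolved from the adapter's adapter_config.json.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "error577/11d428e6-e74a-4846-a189-a0a3e2acee71",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("The-matt/llama2_ko-7b_distinctive-snowflake-182_1060")

inputs = tokenizer("A man is riding a horse on the beach.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```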
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3575 | 0.0009 | 1 | 2.6130 |
| 1.0971 | 0.1094 | 125 | 1.1068 |
| 1.7661 | 0.2188 | 250 | 1.0559 |
| 0.8352 | 0.3282 | 375 | 0.9996 |
| 0.8143 | 0.4376 | 500 | 0.9853 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
prithivMLmods/Blaze-14B-xElite | prithivMLmods | 2025-01-29T05:34:16Z | 62 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"phi-4",
"LlamaForCausalLM",
"xElite",
"14B",
"conversational",
"en",
"license:llama3.1",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-28T15:35:28Z | ---
license: llama3.1
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- phi-4
- LlamaForCausalLM
- xElite
- 14B
model-index:
- name: Blaze-14B-xElite
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 3.63
name: averaged accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBlaze-14B-xElite
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 51.57
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBlaze-14B-xElite
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 35.88
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBlaze-14B-xElite
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 19.24
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBlaze-14B-xElite
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 17.68
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBlaze-14B-xElite
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.68
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBlaze-14B-xElite
name: Open LLM Leaderboard
---

# **Blaze-14B-xElite**
Blaze-14B-xElite is a fine-tuned, state-of-the-art open model built on a LLaMA-based architecture. It has been fine-tuned using a blend of synthetic datasets, data from filtered public-domain websites, and acquired academic books and Q&A datasets. The goal of this approach is to ensure that small yet capable models are trained with high-quality data focused on advanced reasoning.
Blaze-14B-xElite has adopted a robust safety post-training approach. This approach leverages a variety of both open-source and in-house generated synthetic datasets. The overall technique employed to achieve safety alignment combines SFT (Supervised Fine-Tuning) and iterative DPO (Direct Preference Optimization), including publicly available datasets focusing on helpfulness and harmlessness as well as various questions and answers targeted at multiple safety categories.
# **Dataset Info**
Blaze-14B-xElite is fine-tuned on a synthetic dataset curated through a pipeline explicitly built for this purpose. The data is primarily based on the Chain of Thought (CoT) or Chain of Continuous Flow methodologies. This approach ensures that the dataset is rich in reasoning, problem-solving, and step-by-step breakdowns of complex tasks. The model is specifically designed to excel in reasoning, mathematics, and breaking down problems into logical, manageable steps.
# **Run with Transformers**
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Blaze-14B-xElite")
model = AutoModelForCausalLM.from_pretrained(
"prithivMLmods/Blaze-14B-xElite",
device_map="auto",
torch_dtype=torch.bfloat16,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:
```python
messages = [
{"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
# **Intended Use**
The Blaze-14B-xElite model is designed for a wide range of applications, particularly those requiring advanced reasoning, high-quality text generation, and multilingual capabilities. Below are some of the intended use cases:
1. **Complex Reasoning Tasks**:
- Solving intricate problems in mathematics, logic, and science.
- Assisting in academic research by providing detailed explanations and summaries.
2. **Multilingual Applications**:
- Translating text across multiple languages while preserving context and nuance.
- Generating content in various languages for global audiences.
3. **Content Creation**:
- Assisting writers, marketers, and creators with high-quality text generation.
- Generating creative ideas, stories, and technical documentation.
4. **Educational Tools**:
- Providing explanations, tutoring, and Q&A support for students and educators.
- Generating practice questions and answers for learning purposes.
5. **Customer Support**:
- Automating responses to customer queries with accurate and helpful information.
- Handling complex customer service scenarios with advanced reasoning.
6. **Safety-Critical Applications**:
- Ensuring responses are aligned with safety guidelines, making it suitable for sensitive domains.
- Providing harmlessness-focused interactions in public-facing applications.
# **Limitations**
While Blaze-14B-xElite is a powerful and versatile model, it has certain limitations that users should be aware of:
1. **Bias and Fairness**:
- Despite rigorous training and safety alignment, the model may still exhibit biases present in the training data. Users should critically evaluate outputs, especially in sensitive contexts.
2. **Contextual Understanding**:
- The model may occasionally misinterpret complex or ambiguous prompts, leading to inaccurate or irrelevant responses.
3. **Real-Time Knowledge**:
- The model's knowledge is limited to the data it was trained on and does not include real-time or post-training updates. It may not be aware of recent events or developments.
4. **Safety and Harmlessness**:
- While extensive efforts have been made to align the model with safety guidelines, it may still generate outputs that are inappropriate or harmful in certain contexts. Continuous monitoring and human oversight are recommended.
5. **Resource Requirements**:
- Running the model efficiently may require significant computational resources, especially for large-scale or real-time applications.
6. **Ethical Considerations**:
- The model should not be used for malicious purposes, such as generating harmful content, misinformation, or spam. Users are responsible for ensuring ethical use.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Blaze-14B-xElite-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FBlaze-14B-xElite&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 28.95|
|IFEval (0-Shot) | 3.63|
|BBH (3-Shot) | 51.57|
|MATH Lvl 5 (4-Shot)| 35.88|
|GPQA (0-shot) | 19.24|
|MuSR (0-shot) | 17.68|
|MMLU-PRO (5-shot) | 45.68|
|
prithivMLmods/Qwen-7B-Distill-Reasoner | prithivMLmods | 2025-01-29T05:33:08Z | 63 | 8 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"deepseek",
"qwen",
"distill",
"cot",
"conversational",
"en",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-28T15:39:57Z | ---
license: apache-2.0
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
pipeline_tag: text-generation
library_name: transformers
tags:
- deepseek
- qwen
- distill
- cot
model-index:
- name: Qwen-7B-Distill-Reasoner
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 33.96
name: averaged accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwen-7B-Distill-Reasoner
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 22.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwen-7B-Distill-Reasoner
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 21.15
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwen-7B-Distill-Reasoner
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.29
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwen-7B-Distill-Reasoner
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 2.78
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwen-7B-Distill-Reasoner
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 20.2
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwen-7B-Distill-Reasoner
name: Open LLM Leaderboard
---
# **Qwen-7B-Distill-Reasoner**
Qwen-7B-Distill-Reasoner is based on the *Qwen [ KT ] model* distilled by DeepSeek-AI (**deepseek-ai/DeepSeek-R1-Distill-Qwen-7B**). It has been fine-tuned on long chain-of-thought reasoning data and specialized datasets, focusing on chain-of-thought (CoT) reasoning for problem-solving. This model is optimized for tasks requiring logical reasoning, detailed explanations, and multi-step problem-solving, making it ideal for applications such as instruction-following, text generation, and complex reasoning tasks.
# **Quickstart with Transformers**
The following code snippet shows how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Qwen-7B-Distill-Reasoner"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### **Intended Use:**
1. **Instruction-Following:** The model excels in understanding and executing detailed instructions, making it ideal for automation systems, virtual assistants, and educational tools.
2. **Text Generation:** It can produce coherent, logically structured, and contextually relevant text for use in content creation, summarization, and report writing.
3. **Complex Reasoning Tasks:** With its fine-tuning for chain-of-thought reasoning, the model is well-suited for multi-step problem-solving, logical deduction, and question-answering tasks.
4. **Research and Development:** It can support researchers and developers in exploring advancements in logical reasoning and fine-tuning methodologies.
5. **Educational Applications:** The model can assist in teaching logical reasoning and problem-solving by generating step-by-step solutions.
### **Limitations:**
1. **Domain-Specific Knowledge:** While fine-tuned on reasoning datasets, the model may lack deep expertise in highly specialized or technical domains.
2. **Hallucination:** Like many large language models, it can generate incorrect or fabricated information, especially when reasoning beyond its training data.
3. **Bias in Training Data:** The model's outputs may reflect biases present in the datasets it was fine-tuned on, which could limit its objectivity in certain contexts.
4. **Performance on Non-Reasoning Tasks:** The model is optimized for chain-of-thought reasoning and may underperform on tasks that require simpler, less structured responses.
5. **Resource-Intensive:** Running the model efficiently requires significant computational resources, which may limit accessibility for smaller-scale deployments.
6. **Dependence on Input Quality:** The model’s performance heavily depends on the clarity and quality of the input provided. Ambiguous or poorly structured prompts may yield suboptimal results.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Qwen-7B-Distill-Reasoner-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FQwen-7B-Distill-Reasoner&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 18.43|
|IFEval (0-Shot) | 33.96|
|BBH (3-Shot) | 22.18|
|MATH Lvl 5 (4-Shot)| 21.15|
|GPQA (0-shot) | 10.29|
|MuSR (0-shot) | 2.78|
|MMLU-PRO (5-shot) | 20.20|
|
Best000/cf9e5376-6f64-4ba4-b088-ea0da28b4a7e | Best000 | 2025-01-29T05:33:05Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:adapter:NousResearch/Hermes-3-Llama-3.1-8B",
"license:llama3",
"region:us"
] | null | 2025-01-29T05:10:54Z | ---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cf9e5376-6f64-4ba4-b088-ea0da28b4a7e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 30529ea285fff6e5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/30529ea285fff6e5_train_data.json
type:
field_input: article
field_instruction: input
field_output: clean_input
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/cf9e5376-6f64-4ba4-b088-ea0da28b4a7e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/30529ea285fff6e5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 558bab3b-4762-449f-9904-9dc48b2dd138
wandb_project: Birthday-SN56-15-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 558bab3b-4762-449f-9904-9dc48b2dd138
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cf9e5376-6f64-4ba4-b088-ea0da28b4a7e
This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9900
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.7204 |
| 1.6411 | 0.0010 | 13 | 1.2960 |
| 1.4575 | 0.0020 | 26 | 1.0562 |
| 1.2937 | 0.0031 | 39 | 0.9900 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
devgo-aida/Qwen2.5-Coder-7B-Instruct-IQ3_XXS-GGUF | devgo-aida | 2025-01-29T05:29:21Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2025-01-29T05:29:03Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- llama-cpp
- gguf-my-repo
---
# devgo-aida/Qwen2.5-Coder-7B-Instruct-IQ3_XXS-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo devgo-aida/Qwen2.5-Coder-7B-Instruct-IQ3_XXS-GGUF --hf-file qwen2.5-coder-7b-instruct-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo devgo-aida/Qwen2.5-Coder-7B-Instruct-IQ3_XXS-GGUF --hf-file qwen2.5-coder-7b-instruct-iq3_xxs-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo devgo-aida/Qwen2.5-Coder-7B-Instruct-IQ3_XXS-GGUF --hf-file qwen2.5-coder-7b-instruct-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo devgo-aida/Qwen2.5-Coder-7B-Instruct-IQ3_XXS-GGUF --hf-file qwen2.5-coder-7b-instruct-iq3_xxs-imat.gguf -c 2048
```
|
kostiantynk/8b22f353-9f95-4061-a24b-ce4aa9be3fc4 | kostiantynk | 2025-01-29T05:26:31Z | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:migtissera/Tess-v2.5-Phi-3-medium-128k-14B",
"base_model:adapter:migtissera/Tess-v2.5-Phi-3-medium-128k-14B",
"license:mit",
"region:us"
] | null | 2025-01-29T04:32:36Z | ---
library_name: peft
license: mit
base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8b22f353-9f95-4061-a24b-ce4aa9be3fc4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 28869e035ebaf0bf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/28869e035ebaf0bf_train_data.json
type:
field_input: labels
field_instruction: name
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/8b22f353-9f95-4061-a24b-ce4aa9be3fc4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/28869e035ebaf0bf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c01e03ea-ac63-445b-b53d-881712c18952
wandb_project: Birthday-SN56-7-Gradients-On-Demand
wandb_run: your_name
wandb_runid: c01e03ea-ac63-445b-b53d-881712c18952
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8b22f353-9f95-4061-a24b-ce4aa9be3fc4
This model is a fine-tuned version of [migtissera/Tess-v2.5-Phi-3-medium-128k-14B](https://huggingface.co/migtissera/Tess-v2.5-Phi-3-medium-128k-14B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3374
## Model description
More information needed
## Intended uses & limitations
More information needed
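Although the card leaves this section blank, the config above describes a standard LoRA adapter for the Tess-v2.5-Phi-3 base model, so loading it should follow the usual peft + transformers pattern. The snippet below is a minimal, hedged sketch under that assumption; the prompt and generation settings are illustrative and not taken from the card.
```python
# Hedged sketch (not part of the original card): load the LoRA adapter on top
# of its base model with transformers + peft.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "migtissera/Tess-v2.5-Phi-3-medium-128k-14B"
adapter_id = "kostiantynk/8b22f353-9f95-4061-a24b-ce4aa9be3fc4"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter

inputs = tokenizer("Describe this title:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```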
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.5837 |
| 10.2629 | 0.0007 | 13 | 2.4325 |
| 9.7252 | 0.0015 | 26 | 2.3554 |
| 9.2361 | 0.0022 | 39 | 2.3374 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kostiantynk/9dcf26b4-a6e9-46bf-8a7d-8d7af31b5167 | kostiantynk | 2025-01-29T05:26:31Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct",
"base_model:adapter:VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-01-29T05:19:17Z | ---
library_name: peft
license: llama3.1
base_model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9dcf26b4-a6e9-46bf-8a7d-8d7af31b5167
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 61fdcf379e4ddee9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/61fdcf379e4ddee9_train_data.json
type:
field_input: genres
field_instruction: title
field_output: description
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/9dcf26b4-a6e9-46bf-8a7d-8d7af31b5167
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/61fdcf379e4ddee9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dd59f9f5-ca81-47a9-bf7c-9060c0120a4f
wandb_project: Mine-SN56-22-Gradients-On-Demand
wandb_run: your_name
wandb_runid: dd59f9f5-ca81-47a9-bf7c-9060c0120a4f
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9dcf26b4-a6e9-46bf-8a7d-8d7af31b5167
This model is a fine-tuned version of [VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.3381 |
| 2.2627 | 0.0010 | 13 | 2.0559 |
| 1.9553 | 0.0020 | 26 | 2.0006 |
| 1.9338 | 0.0030 | 39 | 1.9916 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ALIN-LLM/ours-llama-3.2-1b-math | ALIN-LLM | 2025-01-29T05:25:28Z | 32 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T05:24:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
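The card leaves this section as a placeholder; based on the repository tags (transformers, llama, text-generation), a standard causal-LM loading path should apply. The following is a minimal sketch under that assumption, with an illustrative prompt only.
```python
# Hedged sketch inferred from the repo tags (transformers / llama / text-generation);
# not an official example from the model authors.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ALIN-LLM/ours-llama-3.2-1b-math"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "What is 12 * 7?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```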
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nhung03/8b665d3d-aa37-4a81-b0d9-dd693f0cf897 | nhung03 | 2025-01-29T05:19:56Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Hermes-3-Llama-3.1-8B",
"base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:47:06Z | ---
library_name: peft
base_model: unsloth/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8b665d3d-aa37-4a81-b0d9-dd693f0cf897
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5be263efe7224a93_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5be263efe7224a93_train_data.json
type:
field_input: text
field_instruction: prompt
field_output: completion
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/8b665d3d-aa37-4a81-b0d9-dd693f0cf897
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/5be263efe7224a93_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 92e25f25-19b1-465a-a1eb-13a542866ff6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 92e25f25-19b1-465a-a1eb-13a542866ff6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8b665d3d-aa37-4a81-b0d9-dd693f0cf897
This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1888 | 0.1636 | 200 | 0.1847 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
trenden/0d1001cf-0752-46da-b5e4-264e908aa3d9 | trenden | 2025-01-29T05:18:53Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:adapter:NousResearch/Hermes-3-Llama-3.1-8B",
"license:llama3",
"region:us"
] | null | 2025-01-29T04:56:57Z | ---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0d1001cf-0752-46da-b5e4-264e908aa3d9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 30529ea285fff6e5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/30529ea285fff6e5_train_data.json
type:
field_input: article
field_instruction: input
field_output: clean_input
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/0d1001cf-0752-46da-b5e4-264e908aa3d9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/30529ea285fff6e5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 558bab3b-4762-449f-9904-9dc48b2dd138
wandb_project: Birthday-SN56-26-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 558bab3b-4762-449f-9904-9dc48b2dd138
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0d1001cf-0752-46da-b5e4-264e908aa3d9
This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9904
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.7204 |
| 1.6412 | 0.0010 | 13 | 1.3003 |
| 1.4611 | 0.0020 | 26 | 1.0574 |
| 1.2943 | 0.0031 | 39 | 0.9904 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
stevillis/bertimbau-finetuned-glassdoor-reviews | stevillis | 2025-01-29T05:18:52Z | 189 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"sentiment analysis",
"nlp",
"glassdoor",
"pt",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-06T06:36:08Z | ---
license: mit
language:
- pt
metrics:
accuracy:
Neutral: 0.99
Positive: 0.97
Negative: 0.98
base_model: neuralmind/bert-base-portuguese-cased
library_name: transformers
tags:
- sentiment analysis
- nlp
- glassdoor
pipeline_tag: text-classification
---
# BERTimbau for Sentiment Analysis of Glassdoor Reviews
## Introduction
This model fine-tunes [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased)
for sentiment analysis of Glassdoor reviews about IT companies in Cuiabá.
The dataset used to train the model consists of 2,532 reviews sourced from Glassdoor.
For more details about the project, see my [GitHub repository](https://github.com/stevillis/glassdoor-reviews-analysis-nlp).
### Example Usage
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="stevillis/bertimbau-finetuned-glassdoor-reviews")
result = pipe("Empresa boa para trabalhar")
print(result) # Expected output: [{'label': 'positive', 'score': 0.9993522763252258}]
``` |
trangtrannnnn/7b47b6df-8a43-470d-aeb2-6acc9d9dd573 | trangtrannnnn | 2025-01-29T05:18:07Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Hermes-3-Llama-3.1-8B",
"base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:46:50Z | ---
library_name: peft
base_model: unsloth/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7b47b6df-8a43-470d-aeb2-6acc9d9dd573
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5be263efe7224a93_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5be263efe7224a93_train_data.json
type:
field_input: text
field_instruction: prompt
field_output: completion
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: trangtrannnnn/7b47b6df-8a43-470d-aeb2-6acc9d9dd573
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/5be263efe7224a93_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 92e25f25-19b1-465a-a1eb-13a542866ff6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 92e25f25-19b1-465a-a1eb-13a542866ff6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7b47b6df-8a43-470d-aeb2-6acc9d9dd573
This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1871 | 0.1636 | 200 | 0.1841 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
great0001/c300dd03-3c7f-452a-8fd3-5707c5f2f461 | great0001 | 2025-01-29T05:18:00Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:adapter:NousResearch/Hermes-3-Llama-3.1-8B",
"license:llama3",
"region:us"
] | null | 2025-01-29T04:55:39Z | ---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c300dd03-3c7f-452a-8fd3-5707c5f2f461
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 30529ea285fff6e5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/30529ea285fff6e5_train_data.json
type:
field_input: article
field_instruction: input
field_output: clean_input
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/c300dd03-3c7f-452a-8fd3-5707c5f2f461
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/30529ea285fff6e5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 558bab3b-4762-449f-9904-9dc48b2dd138
wandb_project: Birthday-SN56-33-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 558bab3b-4762-449f-9904-9dc48b2dd138
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c300dd03-3c7f-452a-8fd3-5707c5f2f461
This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0029
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0826 | 0.0001 | 1 | 1.7204 |
| 1.8578 | 0.0010 | 13 | 1.3599 |
| 1.5865 | 0.0020 | 26 | 1.0956 |
| 1.449 | 0.0031 | 39 | 1.0029 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhungphammmmm/9a8e9065-e958-4d32-9b5a-39c46937ddce | nhungphammmmm | 2025-01-29T05:17:57Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Hermes-3-Llama-3.1-8B",
"base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:46:51Z | ---
library_name: peft
base_model: unsloth/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9a8e9065-e958-4d32-9b5a-39c46937ddce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5be263efe7224a93_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5be263efe7224a93_train_data.json
type:
field_input: text
field_instruction: prompt
field_output: completion
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhungphammmmm/9a8e9065-e958-4d32-9b5a-39c46937ddce
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/5be263efe7224a93_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 92e25f25-19b1-465a-a1eb-13a542866ff6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 92e25f25-19b1-465a-a1eb-13a542866ff6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9a8e9065-e958-4d32-9b5a-39c46937ddce
This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1852 | 0.1636 | 200 | 0.1842 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thalllsssss/0714ab4f-57db-4e04-9b86-a2d9bf7e8bfc | thalllsssss | 2025-01-29T05:17:51Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Hermes-3-Llama-3.1-8B",
"base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:47:10Z | ---
library_name: peft
base_model: unsloth/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0714ab4f-57db-4e04-9b86-a2d9bf7e8bfc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5be263efe7224a93_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5be263efe7224a93_train_data.json
type:
field_input: text
field_instruction: prompt
field_output: completion
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thalllsssss/0714ab4f-57db-4e04-9b86-a2d9bf7e8bfc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/5be263efe7224a93_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 92e25f25-19b1-465a-a1eb-13a542866ff6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 92e25f25-19b1-465a-a1eb-13a542866ff6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0714ab4f-57db-4e04-9b86-a2d9bf7e8bfc
This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1846
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1881 | 0.1636 | 200 | 0.1846 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
devgo-aida/ko-r1-1.5b-preview-Q8_0-GGUF | devgo-aida | 2025-01-29T05:17:29Z | 30 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"ko",
"base_model:OLAIR/ko-r1-1.5b-preview",
"base_model:quantized:OLAIR/ko-r1-1.5b-preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T05:06:31Z | ---
library_name: transformers
license: apache-2.0
language:
- ko
base_model: OLAIR/ko-r1-1.5b-preview
tags:
- llama-cpp
- gguf-my-repo
---
# devgo-aida/ko-r1-1.5b-preview-Q8_0-GGUF
This model was converted to GGUF format from [`OLAIR/ko-r1-1.5b-preview`](https://huggingface.co/OLAIR/ko-r1-1.5b-preview) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/OLAIR/ko-r1-1.5b-preview) for more details on the model.
### ollama
```bash
ollama run hf.co/devgo-aida/ko-r1-1.5b-preview-Q8_0-GGUF
```
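As with the other GGUF conversion above, this checkpoint should also load directly with llama.cpp. The command below is a hedged sketch mirroring that card's pattern; the `--hf-file` name is an assumption about the default filename produced by GGUF-my-repo, so verify it against the repository's file list before running.
```bash
# Hedged example; the GGUF filename is assumed, check the repo file list.
llama-cli --hf-repo devgo-aida/ko-r1-1.5b-preview-Q8_0-GGUF --hf-file ko-r1-1.5b-preview-q8_0.gguf -p "Hello"
```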
|
babylon3005/lora-250128 | babylon3005 | 2025-01-29T05:14:46Z | 8 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T05:14:44Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: astout
---
# Lora 250128
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `astout` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('babylon3005/lora-250128', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
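Since this LoRA uses the trigger word `astout` (see the Trigger words section above), prompts will generally need to include it for the adapter to take effect, e.g. `image = pipeline('astout, a portrait photo').images[0]`.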
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|