modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
jssky/e72b66ca-78f6-4da7-94ad-4ca46187928d | jssky | 2025-02-03T14:21:57Z | 8 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
]
| null | 2025-02-03T14:01:06Z | ---
library_name: peft
license: mit
base_model: microsoft/phi-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e72b66ca-78f6-4da7-94ad-4ca46187928d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.6.0`
```yaml
adapter: lora
base_model: microsoft/phi-2
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 14adcf56bd267abc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/14adcf56bd267abc_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: jssky/e72b66ca-78f6-4da7-94ad-4ca46187928d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/14adcf56bd267abc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3bf53e4e-e50e-483e-a51f-f8ec21733093
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3bf53e4e-e50e-483e-a51f-f8ec21733093
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e72b66ca-78f6-4da7-94ad-4ca46187928d
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9909
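As a usage sketch (not part of the auto-generated card), the LoRA adapter can typically be loaded on top of the base model with `peft` and `transformers`; the repository ids come from the config above, and the rest is assumed standard API usage:
```python
# Illustrative only: load the adapter from this repo onto microsoft/phi-2.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "jssky/e72b66ca-78f6-4da7-94ad-4ca46187928d")

prompt = "Explain LoRA fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```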
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7787 | 0.0326 | 50 | 1.0865 |
| 1.3904 | 0.0653 | 100 | 1.0375 |
| 1.6802 | 0.0979 | 150 | 0.9912 |
| 1.5673 | 0.1306 | 200 | 0.9909 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
wz0202/DeepSeek-R1-Distill-Qwen-1.5B-financeGPT | wz0202 | 2025-02-03T14:20:28Z | 32 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-02-03T13:44:41Z | ---
library_name: transformers
license: mit
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
tags:
- generated_from_trainer
model-index:
- name: DeepSeek-R1-Distill-Qwen-1.5B-financeGPT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeepSeek-R1-Distill-Qwen-1.5B-financeGPT
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on an unknown dataset.
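A minimal inference sketch (assumed from the `transformers` library and `text-generation` tag, not part of the original card):
```python
# Illustrative only: run the fine-tuned checkpoint with the text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="wz0202/DeepSeek-R1-Distill-Qwen-1.5B-financeGPT",
)
out = generator("Summarize the key risks mentioned in this quarterly report:", max_new_tokens=128)
print(out[0]["generated_text"])
```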
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mrferr3t/1ef1de88-10fe-4576-b941-28053d519350 | mrferr3t | 2025-02-03T14:20:27Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T13:42:03Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1ef1de88-10fe-4576-b941-28053d519350
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: unsloth/mistral-7b-v0.3
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 589fe59dca0f3dbe_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/589fe59dca0f3dbe_train_data.json
type:
field_instruction: prompt
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.001
eval_max_new_tokens: 128
eval_steps: 20
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/1ef1de88-10fe-4576-b941-28053d519350
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 100
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 32
mlflow_experiment_name: /tmp/589fe59dca0f3dbe_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
s2_attention: null
sample_packing: false
save_steps: 20
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 71879dd7-6005-4d28-8cab-23fdd2df3703
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 71879dd7-6005-4d28-8cab-23fdd2df3703
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1ef1de88-10fe-4576-b941-28053d519350
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 307
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 0.6020 |
| No log | 0.0020 | 20 | 0.5739 |
| No log | 0.0041 | 40 | 0.5344 |
| No log | 0.0061 | 60 | 0.5261 |
| No log | 0.0081 | 80 | 0.5244 |
| 1.1182 | 0.0102 | 100 | 0.5191 |
| 1.1182 | 0.0122 | 120 | 0.5182 |
| 1.1182 | 0.0142 | 140 | 0.5192 |
| 1.1182 | 0.0163 | 160 | 0.5209 |
| 1.1182 | 0.0183 | 180 | 0.5195 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
corranm/square_run_with_16_batch_size | corranm | 2025-02-03T14:19:45Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-02-03T14:19:34Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: square_run_with_16_batch_size
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# square_run_with_16_batch_size
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4457
- F1 Macro: 0.4685
- F1 Micro: 0.5455
- F1 Weighted: 0.5242
- Precision Macro: 0.5341
- Precision Micro: 0.5455
- Precision Weighted: 0.5870
- Recall Macro: 0.4829
- Recall Micro: 0.5455
- Recall Weighted: 0.5455
- Accuracy: 0.5455
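A minimal inference sketch (assumed standard `transformers` usage for an image-classification checkpoint; not part of the generated card):
```python
# Illustrative only: classify an image with the fine-tuned ViT checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="corranm/square_run_with_16_batch_size")
# "example.jpg" is a placeholder; any local path or image URL works.
for prediction in classifier("example.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```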
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 35
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:|
| 1.9976 | 1.0 | 29 | 1.9107 | 0.0915 | 0.2045 | 0.1209 | 0.0793 | 0.2045 | 0.1019 | 0.1535 | 0.2045 | 0.2045 | 0.2045 |
| 1.7575 | 2.0 | 58 | 1.8877 | 0.1474 | 0.2348 | 0.1805 | 0.1704 | 0.2348 | 0.2163 | 0.1989 | 0.2348 | 0.2348 | 0.2348 |
| 1.8336 | 3.0 | 87 | 1.7319 | 0.1659 | 0.3182 | 0.2117 | 0.1611 | 0.3182 | 0.2133 | 0.2586 | 0.3182 | 0.3182 | 0.3182 |
| 1.452 | 4.0 | 116 | 1.5316 | 0.3336 | 0.4167 | 0.3752 | 0.3518 | 0.4167 | 0.3903 | 0.3682 | 0.4167 | 0.4167 | 0.4167 |
| 1.2545 | 5.0 | 145 | 1.4192 | 0.3999 | 0.4848 | 0.4447 | 0.4601 | 0.4848 | 0.5021 | 0.4318 | 0.4848 | 0.4848 | 0.4848 |
| 1.6479 | 6.0 | 174 | 1.3642 | 0.4649 | 0.5455 | 0.5265 | 0.5072 | 0.5455 | 0.5559 | 0.4750 | 0.5455 | 0.5455 | 0.5455 |
| 1.301 | 7.0 | 203 | 1.3015 | 0.4178 | 0.5303 | 0.4735 | 0.4090 | 0.5303 | 0.4503 | 0.4535 | 0.5303 | 0.5303 | 0.5303 |
| 0.9006 | 8.0 | 232 | 1.4861 | 0.4234 | 0.4924 | 0.4699 | 0.4586 | 0.4924 | 0.5286 | 0.4621 | 0.4924 | 0.4924 | 0.4924 |
| 0.4134 | 9.0 | 261 | 1.2101 | 0.4852 | 0.5833 | 0.5545 | 0.5427 | 0.5833 | 0.5894 | 0.5010 | 0.5833 | 0.5833 | 0.5833 |
| 0.9532 | 10.0 | 290 | 1.3783 | 0.4577 | 0.5682 | 0.5204 | 0.4557 | 0.5682 | 0.5160 | 0.4972 | 0.5682 | 0.5682 | 0.5682 |
| 0.4521 | 11.0 | 319 | 1.3602 | 0.5266 | 0.6136 | 0.5923 | 0.5296 | 0.6136 | 0.5907 | 0.5403 | 0.6136 | 0.6136 | 0.6136 |
| 0.633 | 12.0 | 348 | 1.4293 | 0.5032 | 0.5833 | 0.5727 | 0.4969 | 0.5833 | 0.5674 | 0.5140 | 0.5833 | 0.5833 | 0.5833 |
| 0.4268 | 13.0 | 377 | 1.4388 | 0.5031 | 0.5833 | 0.5676 | 0.5543 | 0.5833 | 0.6189 | 0.5124 | 0.5833 | 0.5833 | 0.5833 |
| 0.2857 | 14.0 | 406 | 1.6012 | 0.5071 | 0.5833 | 0.5676 | 0.5209 | 0.5833 | 0.5879 | 0.5211 | 0.5833 | 0.5833 | 0.5833 |
| 0.2606 | 15.0 | 435 | 1.5817 | 0.5579 | 0.6136 | 0.6109 | 0.5657 | 0.6136 | 0.6178 | 0.5590 | 0.6136 | 0.6136 | 0.6136 |
| 0.2028 | 16.0 | 464 | 1.8048 | 0.4526 | 0.5227 | 0.5112 | 0.4703 | 0.5227 | 0.5378 | 0.4668 | 0.5227 | 0.5227 | 0.5227 |
| 0.3251 | 17.0 | 493 | 1.6340 | 0.4942 | 0.5833 | 0.5625 | 0.5049 | 0.5833 | 0.5631 | 0.5031 | 0.5833 | 0.5833 | 0.5833 |
| 0.0369 | 18.0 | 522 | 1.5847 | 0.5860 | 0.6439 | 0.6349 | 0.6267 | 0.6439 | 0.6476 | 0.5824 | 0.6439 | 0.6439 | 0.6439 |
| 0.1133 | 19.0 | 551 | 1.5825 | 0.5457 | 0.6288 | 0.6157 | 0.5377 | 0.6288 | 0.6111 | 0.5615 | 0.6288 | 0.6288 | 0.6288 |
| 0.0457 | 20.0 | 580 | 1.7253 | 0.5258 | 0.6136 | 0.5938 | 0.5229 | 0.6136 | 0.5854 | 0.5391 | 0.6136 | 0.6136 | 0.6136 |
| 0.1109 | 21.0 | 609 | 1.7898 | 0.5708 | 0.6212 | 0.6154 | 0.6150 | 0.6212 | 0.6283 | 0.5613 | 0.6212 | 0.6212 | 0.6212 |
| 0.046 | 22.0 | 638 | 1.7368 | 0.5656 | 0.6136 | 0.6029 | 0.6021 | 0.6136 | 0.6103 | 0.5615 | 0.6136 | 0.6136 | 0.6136 |
| 0.0553 | 23.0 | 667 | 2.2478 | 0.4822 | 0.5682 | 0.5430 | 0.4851 | 0.5682 | 0.5380 | 0.4975 | 0.5682 | 0.5682 | 0.5682 |
| 0.0047 | 24.0 | 696 | 2.1705 | 0.5133 | 0.5909 | 0.5750 | 0.5158 | 0.5909 | 0.5716 | 0.5220 | 0.5909 | 0.5909 | 0.5909 |
| 0.0104 | 25.0 | 725 | 2.2669 | 0.4950 | 0.5833 | 0.5622 | 0.5035 | 0.5833 | 0.5609 | 0.5038 | 0.5833 | 0.5833 | 0.5833 |
| 0.0287 | 26.0 | 754 | 2.0390 | 0.5267 | 0.6061 | 0.5935 | 0.5265 | 0.6061 | 0.5898 | 0.5346 | 0.6061 | 0.6061 | 0.6061 |
| 0.0212 | 27.0 | 783 | 2.1345 | 0.5344 | 0.6136 | 0.6005 | 0.5308 | 0.6136 | 0.5946 | 0.5449 | 0.6136 | 0.6136 | 0.6136 |
| 0.0221 | 28.0 | 812 | 2.1555 | 0.5607 | 0.6136 | 0.6035 | 0.5953 | 0.6136 | 0.6107 | 0.5583 | 0.6136 | 0.6136 | 0.6136 |
| 0.001 | 29.0 | 841 | 2.1102 | 0.5833 | 0.6364 | 0.6289 | 0.6172 | 0.6364 | 0.6353 | 0.5789 | 0.6364 | 0.6364 | 0.6364 |
| 0.0045 | 30.0 | 870 | 2.0669 | 0.5862 | 0.6364 | 0.6290 | 0.6164 | 0.6364 | 0.6326 | 0.5831 | 0.6364 | 0.6364 | 0.6364 |
| 0.0021 | 31.0 | 899 | 2.1442 | 0.5833 | 0.6364 | 0.6282 | 0.6165 | 0.6364 | 0.6330 | 0.5789 | 0.6364 | 0.6364 | 0.6364 |
| 0.0014 | 32.0 | 928 | 2.1435 | 0.5616 | 0.6136 | 0.6049 | 0.5957 | 0.6136 | 0.6099 | 0.5569 | 0.6136 | 0.6136 | 0.6136 |
| 0.0016 | 33.0 | 957 | 2.1279 | 0.5621 | 0.6136 | 0.6047 | 0.5966 | 0.6136 | 0.6093 | 0.5569 | 0.6136 | 0.6136 | 0.6136 |
| 0.0008 | 34.0 | 986 | 2.1310 | 0.5691 | 0.6212 | 0.6127 | 0.6030 | 0.6212 | 0.6170 | 0.5641 | 0.6212 | 0.6212 | 0.6212 |
| 0.0006 | 35.0 | 1015 | 2.1338 | 0.5690 | 0.6212 | 0.6130 | 0.6026 | 0.6212 | 0.6176 | 0.5641 | 0.6212 | 0.6212 | 0.6212 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Cran-May/SCE-2-24B | Cran-May | 2025-02-03T14:18:50Z | 27 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:AlSamCur123/Mistral-Small3-24B-InstructContinuedFine",
"base_model:merge:AlSamCur123/Mistral-Small3-24B-InstructContinuedFine",
"base_model:cognitivecomputations/Dolphin3.0-Mistral-24B",
"base_model:merge:cognitivecomputations/Dolphin3.0-Mistral-24B",
"base_model:huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated",
"base_model:merge:huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated",
"base_model:trashpanda-org/MS-24B-Instruct-Mullein-v0",
"base_model:merge:trashpanda-org/MS-24B-Instruct-Mullein-v0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-02-03T14:01:28Z | ---
base_model:
- AlSamCur123/Mistral-Small3-24B-InstructContinuedFine
- cognitivecomputations/Dolphin3.0-Mistral-24B
- huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
- trashpanda-org/MS-24B-Instruct-Mullein-v0
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated](https://huggingface.co/huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated) as a base.
### Models Merged
The following models were included in the merge:
* [AlSamCur123/Mistral-Small3-24B-InstructContinuedFine](https://huggingface.co/AlSamCur123/Mistral-Small3-24B-InstructContinuedFine)
* [cognitivecomputations/Dolphin3.0-Mistral-24B](https://huggingface.co/cognitivecomputations/Dolphin3.0-Mistral-24B)
* [trashpanda-org/MS-24B-Instruct-Mullein-v0](https://huggingface.co/trashpanda-org/MS-24B-Instruct-Mullein-v0)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: sce
models:
- model: trashpanda-org/MS-24B-Instruct-Mullein-v0
- model: huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
- model: cognitivecomputations/Dolphin3.0-Mistral-24B
- model: AlSamCur123/Mistral-Small3-24B-InstructContinuedFine
base_model: huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
tokenizer:
source: base
parameters:
select_topk: 0.8
dtype: float32
out_dtype: bfloat16
normalize: true
```
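To reproduce the merge, the YAML above would typically be passed to the mergekit CLI (a sketch; it assumes mergekit is installed and exposes the `mergekit-yaml` entry point, and that the config has been saved locally as `sce_config.yaml`):
```python
# Illustrative only: invoke the mergekit CLI on the configuration above.
import subprocess

subprocess.run(["mergekit-yaml", "sce_config.yaml", "./SCE-2-24B-merged"], check=True)
```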
|
lesso/11d552f9-f055-4a51-acfd-23e825d55757 | lesso | 2025-02-03T14:18:25Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T13:52:38Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 11d552f9-f055-4a51-acfd-23e825d55757
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 745b74c0f2d7025c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/745b74c0f2d7025c_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/11d552f9-f055-4a51-acfd-23e825d55757
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000101
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/god12/745b74c0f2d7025c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 78f5c593-6a74-46d5-a42f-796f301ea9a6
wandb_project: ab-god12
wandb_run: your_name
wandb_runid: 78f5c593-6a74-46d5-a42f-796f301ea9a6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 11d552f9-f055-4a51-acfd-23e825d55757
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000101
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
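(For reference, the total train batch size of 256 above follows from the listed values: 8 per-device batch × 8 GPUs × 4 gradient-accumulation steps.)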
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.0313 | 0.0053 | 1 | 2.2753 |
| 6.1538 | 0.2642 | 50 | 1.3210 |
| 5.9009 | 0.5284 | 100 | 1.2667 |
| 5.706 | 0.7926 | 150 | 1.2451 |
| 3.8185 | 1.0568 | 200 | 1.2374 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Nyan-Stunna-7B-i1-GGUF | mradermacher | 2025-02-03T14:16:37Z | 428 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ChaoticNeutrals/Nyan-Stunna-7B",
"base_model:quantized:ChaoticNeutrals/Nyan-Stunna-7B",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
]
| null | 2025-01-22T06:34:17Z | ---
base_model: ChaoticNeutrals/Nyan-Stunna-7B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ChaoticNeutrals/Nyan-Stunna-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Nyan-Stunna-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
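As a concrete example (an assumption, not from the original card), a single quant from the table below can be fetched with `huggingface_hub` before loading it in a GGUF-capable runtime:
```python
# Illustrative only: download the i1-Q4_K_M file listed in the table below.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Nyan-Stunna-7B-i1-GGUF",
    filename="Nyan-Stunna-7B.i1-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```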
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Nyan-Stunna-7B-i1-GGUF/resolve/main/Nyan-Stunna-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
blood34/a4591f15-c67d-4bad-aad6-7a510665479b | blood34 | 2025-02-03T14:14:50Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-02-03T13:20:38Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4591f15-c67d-4bad-aad6-7a510665479b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 589fe59dca0f3dbe_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/589fe59dca0f3dbe_train_data.json
type:
field_instruction: prompt
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: blood34/a4591f15-c67d-4bad-aad6-7a510665479b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/589fe59dca0f3dbe_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 71879dd7-6005-4d28-8cab-23fdd2df3703
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 71879dd7-6005-4d28-8cab-23fdd2df3703
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a4591f15-c67d-4bad-aad6-7a510665479b
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6563 | 0.0406 | 200 | 0.5349 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrferr3t/5aa2ad61-a884-4aa1-a1fd-48512ea2a562 | mrferr3t | 2025-02-03T14:13:52Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b-it",
"base_model:adapter:unsloth/gemma-2-2b-it",
"license:gemma",
"region:us"
]
| null | 2025-02-03T14:11:00Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5aa2ad61-a884-4aa1-a1fd-48512ea2a562
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: unsloth/gemma-2-2b-it
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 7465fecdd1b4fae8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7465fecdd1b4fae8_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.001
eval_max_new_tokens: 128
eval_steps: 20
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/5aa2ad61-a884-4aa1-a1fd-48512ea2a562
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 100
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 32
mlflow_experiment_name: /tmp/7465fecdd1b4fae8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
s2_attention: null
sample_packing: false
save_steps: 20
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 96ba0598-e365-4fe4-a421-689fa74a779f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 96ba0598-e365-4fe4-a421-689fa74a779f
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5aa2ad61-a884-4aa1-a1fd-48512ea2a562
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1092
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 17
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0017 | 1 | 0.0982 |
| No log | 0.0350 | 20 | 0.1031 |
| No log | 0.0700 | 40 | 0.1110 |
| No log | 0.1050 | 60 | 0.1078 |
| No log | 0.1400 | 80 | 0.1092 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso/558dc5ae-963d-4ced-839a-323b5408528b | lesso | 2025-02-03T14:13:22Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T13:41:38Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 558dc5ae-963d-4ced-839a-323b5408528b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 67ae0a068059de74_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/67ae0a068059de74_train_data.json
type:
field_input: Company Name
field_instruction: Position
field_output: Long Description
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/558dc5ae-963d-4ced-839a-323b5408528b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001017
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/god17/67ae0a068059de74_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5a3810f1-954e-4bc6-9cfa-cd7881f9fa67
wandb_project: ab-god17
wandb_run: your_name
wandb_runid: 5a3810f1-954e-4bc6-9cfa-cd7881f9fa67
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 558dc5ae-963d-4ced-839a-323b5408528b
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001017
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6893 | 0.0001 | 1 | 2.9463 |
| 2.9055 | 0.0030 | 50 | 2.7112 |
| 2.9005 | 0.0059 | 100 | 2.6746 |
| 2.9777 | 0.0089 | 150 | 2.6450 |
| 3.0197 | 0.0119 | 200 | 2.6325 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Qwen2.5-7b-MFANN-slerp-GGUF | mradermacher | 2025-02-03T14:12:35Z | 324 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"dataset:netcat420/MFANN",
"base_model:netcat420/Qwen2.5-7b-MFANNv1.1",
"base_model:quantized:netcat420/Qwen2.5-7b-MFANNv1.1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-25T13:00:25Z | ---
base_model: netcat420/Qwen2.5-7b-MFANNv1.1
datasets:
- netcat420/MFANN
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/netcat420/Qwen2.5-7b-MFANNv1.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7b-MFANN-slerp-GGUF/resolve/main/Qwen2.5-7b-MFANN-slerp.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7b-MFANN-slerp-GGUF/resolve/main/Qwen2.5-7b-MFANN-slerp.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7b-MFANN-slerp-GGUF/resolve/main/Qwen2.5-7b-MFANN-slerp.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7b-MFANN-slerp-GGUF/resolve/main/Qwen2.5-7b-MFANN-slerp.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7b-MFANN-slerp-GGUF/resolve/main/Qwen2.5-7b-MFANN-slerp.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7b-MFANN-slerp-GGUF/resolve/main/Qwen2.5-7b-MFANN-slerp.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7b-MFANN-slerp-GGUF/resolve/main/Qwen2.5-7b-MFANN-slerp.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7b-MFANN-slerp-GGUF/resolve/main/Qwen2.5-7b-MFANN-slerp.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7b-MFANN-slerp-GGUF/resolve/main/Qwen2.5-7b-MFANN-slerp.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7b-MFANN-slerp-GGUF/resolve/main/Qwen2.5-7b-MFANN-slerp.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7b-MFANN-slerp-GGUF/resolve/main/Qwen2.5-7b-MFANN-slerp.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7b-MFANN-slerp-GGUF/resolve/main/Qwen2.5-7b-MFANN-slerp.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Nitrals-Quants/NightWing3_Virtuoso-10B-v0.2-IQ4_NL-GGUF | Nitrals-Quants | 2025-02-03T14:11:03Z | 89 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Nitral-AI/NightWing3_Virtuoso-10B-v0.2",
"base_model:quantized:Nitral-AI/NightWing3_Virtuoso-10B-v0.2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-02-03T14:10:35Z | ---
base_model: Nitral-Archive/NightWing3_Virtuoso-10B-v0.2
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Nitral-AI/NightWing3_Virtuoso-10B-v0.2-IQ4_NL-GGUF
This model was converted to GGUF format from [`Nitral-Archive/NightWing3_Virtuoso-10B-v0.2`](https://huggingface.co/Nitral-Archive/NightWing3_Virtuoso-10B-v0.2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Nitral-Archive/NightWing3_Virtuoso-10B-v0.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Nitral-AI/NightWing3_Virtuoso-10B-v0.2-IQ4_NL-GGUF --hf-file nightwing3_virtuoso-10b-v0.2-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Nitral-AI/NightWing3_Virtuoso-10B-v0.2-IQ4_NL-GGUF --hf-file nightwing3_virtuoso-10b-v0.2-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Nitral-AI/NightWing3_Virtuoso-10B-v0.2-IQ4_NL-GGUF --hf-file nightwing3_virtuoso-10b-v0.2-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Nitral-AI/NightWing3_Virtuoso-10B-v0.2-IQ4_NL-GGUF --hf-file nightwing3_virtuoso-10b-v0.2-iq4_nl-imat.gguf -c 2048
```
|
Nohobby/ignore_MS3-test-UNHOLY1-Q6_K-GGUF | Nohobby | 2025-02-03T14:10:30Z | 39 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Nohobby/MS3-test-Merge-1",
"base_model:quantized:Nohobby/MS3-test-Merge-1",
"endpoints_compatible",
"region:us"
]
| null | 2025-02-03T12:00:10Z | ---
base_model: Nohobby/MS3-test-Merge-1
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Nohobby/ignore_MS3-test-UNHOLY1-Q6_K-GGUF
This model was converted to GGUF format from [`Nohobby/ignore_MS3-test-UNHOLY1`](https://huggingface.co/Nohobby/ignore_MS3-test-UNHOLY1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Nohobby/ignore_MS3-test-UNHOLY1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Nohobby/ignore_MS3-test-UNHOLY1-Q6_K-GGUF --hf-file ignore_ms3-test-unholy1-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Nohobby/ignore_MS3-test-UNHOLY1-Q6_K-GGUF --hf-file ignore_ms3-test-unholy1-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Nohobby/ignore_MS3-test-UNHOLY1-Q6_K-GGUF --hf-file ignore_ms3-test-unholy1-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Nohobby/ignore_MS3-test-UNHOLY1-Q6_K-GGUF --hf-file ignore_ms3-test-unholy1-q6_k.gguf -c 2048
```
|
cimol/3d4663ea-c834-4c5b-b820-455125729d37 | cimol | 2025-02-03T14:07:44Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T13:12:22Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3d4663ea-c834-4c5b-b820-455125729d37
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B
bf16: true
chat_template: llama3
data_processes: 24
dataset_prepared_path: null
datasets:
- data_files:
- 3b453216382ea03b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3b453216382ea03b_train_data.json
type:
field_instruction: question
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 4
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: cimol/3d4663ea-c834-4c5b-b820-455125729d37
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 7.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.04
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
lr_scheduler_warmup_steps: 50
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/3b453216382ea03b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-8
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 17333
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
total_train_batch_size: 32
train_batch_size: 8
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d267361d-35f8-40a0-bc9b-8de3705c3658
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d267361d-35f8-40a0-bc9b-8de3705c3658
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3d4663ea-c834-4c5b-b820-455125729d37
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2433
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 17333
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-8
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.1798 | 0.0001 | 1 | 3.8939 |
| 2.2806 | 0.0051 | 50 | 2.3753 |
| 1.9951 | 0.0102 | 100 | 2.2929 |
| 1.9763 | 0.0153 | 150 | 2.2547 |
| 1.4953 | 0.0204 | 200 | 2.2433 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hongngo/0bede221-1e16-4da7-8082-8da78ebb2faa | hongngo | 2025-02-03T14:06:51Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-02-03T13:12:13Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0bede221-1e16-4da7-8082-8da78ebb2faa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3b453216382ea03b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3b453216382ea03b_train_data.json
type:
field_instruction: question
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/0bede221-1e16-4da7-8082-8da78ebb2faa
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3b453216382ea03b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d267361d-35f8-40a0-bc9b-8de3705c3658
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d267361d-35f8-40a0-bc9b-8de3705c3658
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0bede221-1e16-4da7-8082-8da78ebb2faa
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4603 | 0.0051 | 200 | 2.5183 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
abenius/d4a365e5-b0b3-4d75-bc6d-17296e860e2d | abenius | 2025-02-03T14:06:42Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-02-03T13:11:52Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d4a365e5-b0b3-4d75-bc6d-17296e860e2d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3b453216382ea03b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3b453216382ea03b_train_data.json
type:
field_instruction: question
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: abenius/d4a365e5-b0b3-4d75-bc6d-17296e860e2d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3b453216382ea03b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: d267361d-35f8-40a0-bc9b-8de3705c3658
wandb_project: Gradients-On-12
wandb_run: your_name
wandb_runid: d267361d-35f8-40a0-bc9b-8de3705c3658
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# d4a365e5-b0b3-4d75-bc6d-17296e860e2d
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1723 | 0.0051 | 200 | 2.4307 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Dominic2106/mixtral-hindi-translate | Dominic2106 | 2025-02-03T14:06:29Z | 17 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-02-03T14:04:06Z | ---
base_model: unsloth/mistral-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Dominic2106
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ngmediastudio89/marquez | ngmediastudio89 | 2025-02-03T14:05:34Z | 50 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-01-08T04:11:14Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MARQUEZ
---
# Marquez
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MARQUEZ` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ngmediastudio89/marquez', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
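As a quick illustration of LoRA weighting (the scale and prompt below are placeholders, not recommended settings), the adapter can also be fused into the pipeline at a chosen strength:
```py
# Optionally bake the LoRA into the base weights at a chosen strength
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('MARQUEZ portrait, studio lighting').images[0]
```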
|
EleutherAI/sae-SmolLM2-135M-64x | EleutherAI | 2025-02-03T14:05:30Z | 9 | 0 | null | [
"dataset:EleutherAI/fineweb-edu-dedup-10b",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"region:us"
]
| null | 2025-02-03T13:49:47Z | ---
datasets:
- EleutherAI/fineweb-edu-dedup-10b
base_model:
- HuggingFaceTB/SmolLM2-135M
---
SAEs trained on the MLPs of HuggingFaceTB/SmolLM2-135M, with expansion factor 64x. |
reda2002/whiteandpinkpillow | reda2002 | 2025-02-03T14:05:22Z | 10 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-02-03T13:57:06Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MONPOUFP&WPILLOWS
---
# Whiteandpinkpillow
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MONPOUFP&WPILLOWS` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('reda2002/whiteandpinkpillow', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
chibbert/SmolLM2-FT-DPO | chibbert | 2025-02-03T14:04:43Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-02-03T14:04:01Z | ---
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
library_name: transformers
model_name: SmolLM2-FT-DPO
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- dpo
licence: license
---
# Model Card for SmolLM2-FT-DPO
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chibbert/SmolLM2-FT-DPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
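For reference, a minimal DPO fine-tuning sketch with TRL is shown below; the preference dataset and hyperparameters are illustrative rather than the ones used for this model, and argument names can differ slightly between TRL releases.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "HuggingFaceTB/SmolLM2-135M-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Any preference dataset with "prompt"/"chosen"/"rejected" columns works here
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="SmolLM2-FT-DPO", beta=0.1),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```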
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu121
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
vaatsav06/sdxl-base-1.0-greenchair-dreambooth-lora | vaatsav06 | 2025-02-03T14:02:57Z | 17 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"diffusers-training",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-02-03T13:06:56Z | ---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- diffusers-training
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'A photo of sks chair on a cliff'
output:
url:
"image_0.png"
- text: 'A photo of sks chair on a cliff'
output:
url:
"image_1.png"
- text: 'A photo of sks chair on a cliff'
output:
url:
"image_2.png"
- text: 'A photo of sks chair on a cliff'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks chair
license: openrail++
---
# SDXL LoRA DreamBooth - vaatsav06/sdxl-base-1.0-greenchair-dreambooth-lora
<Gallery />
## Model description
### These are vaatsav06/sdxl-base-1.0-greenchair-dreambooth-lora LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`lora-trained-xl.safetensors` here 💾](/vaatsav06/sdxl-base-1.0-greenchair-dreambooth-lora/blob/main/lora-trained-xl.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:lora-trained-xl:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('vaatsav06/sdxl-base-1.0-greenchair-dreambooth-lora', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of sks chair on a cliff').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
You should use `a photo of sks chair` to trigger the image generation.
## Details
All [Files & versions](/vaatsav06/sdxl-base-1.0-greenchair-dreambooth-lora/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: True.
Pivotal tuning was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
Primeness/primeh5v8c4 | Primeness | 2025-02-03T14:01:23Z | 26 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-02-03T11:49:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
datlaaaaaaa/5e10518d-cef4-4402-98e9-876cbae1b02a | datlaaaaaaa | 2025-02-03T13:59:56Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-02-03T13:12:02Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5e10518d-cef4-4402-98e9-876cbae1b02a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3b453216382ea03b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3b453216382ea03b_train_data.json
type:
field_instruction: question
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/5e10518d-cef4-4402-98e9-876cbae1b02a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3b453216382ea03b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d267361d-35f8-40a0-bc9b-8de3705c3658
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d267361d-35f8-40a0-bc9b-8de3705c3658
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5e10518d-cef4-4402-98e9-876cbae1b02a
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5169
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4567 | 0.0051 | 200 | 2.5169 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nathanialhunt/68840979-07aa-406c-b5e4-8b8c95401318 | nathanialhunt | 2025-02-03T13:59:12Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b-it",
"base_model:adapter:unsloth/gemma-2-2b-it",
"license:gemma",
"region:us"
]
| null | 2025-02-03T13:56:36Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 68840979-07aa-406c-b5e4-8b8c95401318
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7465fecdd1b4fae8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7465fecdd1b4fae8_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/68840979-07aa-406c-b5e4-8b8c95401318
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/7465fecdd1b4fae8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 96ba0598-e365-4fe4-a421-689fa74a779f
wandb_project: Birthday-SN56-24-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 96ba0598-e365-4fe4-a421-689fa74a779f
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 68840979-07aa-406c-b5e4-8b8c95401318
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0737
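A minimal sketch of attaching this LoRA to its base model and merging it into a standalone checkpoint (the output directory is a placeholder):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2-2b-it")
model = PeftModel.from_pretrained(base, "nathanialhunt/68840979-07aa-406c-b5e4-8b8c95401318")

# Fold the LoRA weights into the base model so it can be served without PEFT
merged = model.merge_and_unload()
merged.save_pretrained("gemma-2-2b-it-68840979-merged")  # placeholder output directory
```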
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 3.9426 |
| 0.1954 | 0.0875 | 50 | 0.1324 |
| 0.1053 | 0.1751 | 100 | 0.1033 |
| 0.0989 | 0.2626 | 150 | 0.0795 |
| 0.1036 | 0.3501 | 200 | 0.0737 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/DavidAU-Dark_Mistress-8B-GGUF | mradermacher | 2025-02-03T13:56:11Z | 759 | 1 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-02-03T11:58:41Z | ---
base_model: MrRobotoAI/DavidAU-Dark_Mistress-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MrRobotoAI/DavidAU-Dark_Mistress-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DavidAU-Dark_Mistress-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
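As one possible route (an assumption, not something this card prescribes), a quant from the table below can also be loaded from Python via the `llama-cpp-python` bindings:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the provided quants, then load it with the llama.cpp bindings
path = hf_hub_download(
    "mradermacher/DavidAU-Dark_Mistress-8B-GGUF",
    "DavidAU-Dark_Mistress-8B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Once upon a midnight dreary,", max_tokens=64)["choices"][0]["text"])
```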
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DavidAU-Dark_Mistress-8B-GGUF/resolve/main/DavidAU-Dark_Mistress-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/DavidAU-Dark_Mistress-8B-GGUF/resolve/main/DavidAU-Dark_Mistress-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/DavidAU-Dark_Mistress-8B-GGUF/resolve/main/DavidAU-Dark_Mistress-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DavidAU-Dark_Mistress-8B-GGUF/resolve/main/DavidAU-Dark_Mistress-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/DavidAU-Dark_Mistress-8B-GGUF/resolve/main/DavidAU-Dark_Mistress-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/DavidAU-Dark_Mistress-8B-GGUF/resolve/main/DavidAU-Dark_Mistress-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DavidAU-Dark_Mistress-8B-GGUF/resolve/main/DavidAU-Dark_Mistress-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DavidAU-Dark_Mistress-8B-GGUF/resolve/main/DavidAU-Dark_Mistress-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/DavidAU-Dark_Mistress-8B-GGUF/resolve/main/DavidAU-Dark_Mistress-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/DavidAU-Dark_Mistress-8B-GGUF/resolve/main/DavidAU-Dark_Mistress-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DavidAU-Dark_Mistress-8B-GGUF/resolve/main/DavidAU-Dark_Mistress-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DavidAU-Dark_Mistress-8B-GGUF/resolve/main/DavidAU-Dark_Mistress-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kostiantynk-out/9466404a-32e5-407b-bd06-db8e2d3eb35b | kostiantynk-out | 2025-02-03T13:56:06Z | 23 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-PhiForCausalLM",
"base_model:adapter:echarlaix/tiny-random-PhiForCausalLM",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T13:55:44Z | ---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-PhiForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9466404a-32e5-407b-bd06-db8e2d3eb35b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-PhiForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5edbcf1a76167eb8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5edbcf1a76167eb8_train_data.json
type:
field_instruction: anchor
field_output: logical
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk-out/9466404a-32e5-407b-bd06-db8e2d3eb35b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 250
micro_batch_size: 2
mlflow_experiment_name: /tmp/5edbcf1a76167eb8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3d45962f-e7c9-422a-ab3c-ba25232bc246
wandb_project: Mine-SN56-1-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3d45962f-e7c9-422a-ab3c-ba25232bc246
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9466404a-32e5-407b-bd06-db8e2d3eb35b
This model is a fine-tuned version of [echarlaix/tiny-random-PhiForCausalLM](https://huggingface.co/echarlaix/tiny-random-PhiForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 250
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0023 | 1 | 6.9418 |
| 6.929 | 0.1474 | 63 | 6.9226 |
| 6.9007 | 0.2947 | 126 | 6.8916 |
| 6.8876 | 0.4421 | 189 | 6.8864 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Shashwat13333/bge-base-en-v1.5 | Shashwat13333 | 2025-02-03T13:56:03Z | 72 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:150",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-02-03T12:34:28Z | ---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:150
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: What services does Techchefz Digital offer for AI adoption?
sentences:
- 'We are a New breed of innovative digital transformation agency, redefining storytelling
for an always-on world.
With roots dating back to 2017, we started as a pocket size team of enthusiasts
with a goal of helping traditional businesses transform and create dynamic, digital
cultures through disruptive strategies and agile deployment of innovative solutions.'
- "At Techchefz Digital, we specialize in guiding companies through the complexities\
\ of adopting and integrating Artificial Intelligence and Machine Learning technologies.\
\ Our consultancy services are designed to enhance your operational efficiency\
\ and decision-making capabilities across all sectors. With a global network of\
\ AI/ML experts and a commitment to excellence, we are your partners in transforming\
\ innovative possibilities into real-world achievements. \
\ \
\ \n DATA INTELLIGENCE PLATFORMS we\
\ specialize in\nTensorFlow\nDatabricks\nTableau\nPytorch\nOpenAI\nPinecone\""
- 'How can we get started with your DevOps solutions?
Getting started is easy. Contact us through our website. We''ll schedule a consultation
to discuss your needs, evaluate your current infrastructure, and propose a customized
DevOps solution designed to achieve your goals.'
- source_sentence: Hav you made any services for schools and students?
sentences:
- 'How do we do Custom Development ?
We follow below process to develop custom web or mobile Application on Agile Methodology,
breaking requirements in pieces and developing and shipping them with considering
utmost quality:
Requirements Analysis
We begin by understanding the client's needs and objectives for the website.
Identify key features, functionality, and any specific design preferences.
Project Planning
Then create a detailed project plan outlining the scope, timeline, and milestones.
Define the technology stack and development tools suitable for the project.
User Experience Design
Then comes the stage of Developing wireframes or prototypes to visualize the website's
structure and layout. We create a custom design that aligns with the brand identity
and user experience goals.
Development
After getting Sign-off on Design from Client, we break the requirements into Sprints
on Agile Methodology, and start developing them.'
- 'This is our Portfolio
Introducing the world of Housing Finance& Banking Firm.
Corporate Website with 10 regional languages in India with analytics and user
personalization and Dashboard for Regional Managers, Sales Agents, etc. to manage
the Builder Requests, approve/deny Properties, manage visits and appointments,
manage leads, etc.
Introducing the world of Global Automotive Brand.We have implemented a Multi Locale
Multilingual Omnichannel platform for Royal Enfield. The platform supports public
websites, customer portals, internal portals, business applications for over 35+
different locations all over the world.
Developed Digital Platform for Students, Guardians, Teachers, Tutors, with AI/ML
in collaboration with Successive Technologies Inc, USA. Cloud, Dev-Sec-Ops &
Data Governance
Managing cloud provisioning and modernization alongside automated infrastructure,
event-driven microservices, containerization, DevOps, cybersecurity, and 24x7
monitoring support ensures efficient, secure, and responsive IT operations.'
- "SERVICES WE PROVIDE\nFlexible engagement models tailored to your needs\nWe specialize\
\ in comprehensive website audits that provide valuable insights and recommendations\
\ to enhance your online presence.\nDigital Strategy & Consulting\nCreating digital\
\ roadmap that transform your digital enterprise and produce a return on investment,\
\ basis our discovery framework, brainstorming sessions & current state analysis.\n\
\nPlatform Selection\nHelping you select the optimal digital experience, commerce,\
\ cloud and marketing platform for your enterprise.\n\nPlatform Builds\nDeploying\
\ next-gen scalable and agile enterprise digital platforms, along with multi-platform\
\ integrations. \nProduct Builds\nHelp you ideate, strategize, and engineer\
\ your product with help of our enterprise frameworks\nInfrastructure\nSpecialize\
\ in multi-cloud infrastructure helping you put forward the right cloud infrastructure\
\ and optimization strategy.\n\nManaged Services\nOperate and monitor your business-critical\
\ applications, data, and IT workloads, along with Application maintenance and\
\ operations.\nTeam Augmentation\nHelp you scale up and augment your existing\
\ team to solve your hiring challenges with our easy to deploy staff augmentation\
\ offerings.\""
- source_sentence: How did TechChefz evolve from its early days?
sentences:
- 'Why do we need Microservices ?
Instead of building a monolithic application where all functionalities are tightly
integrated, microservices break down the system into modular and loosely coupled
services.
Scalability
Flexibility and Agility
Resilience and Fault Isolation
Technology Diversity
Continuous Delivery'
- 'After a transformative scuba dive in the Maldives, Mayank Maggon made a pivotal
decision to depart from the corporate ladder in December 2016. Fueled by a clear
vision to revolutionize the digital landscape, Mayank set out to leverage the
best technology ingredients, crafting custom applications and digital ecosystems
tailored to clients'' specific needs, limitations, and budgets.
However, this solo journey was not without its challenges. Mayank had to initiate
the revenue engine by offering corporate trainings and conducting online batches
for tech training across the USA. He also undertook small projects and subcontracted
modules of larger projects for clients in the US, UK, and India. It was only after
this initial groundwork that Mayank was able to hire a group of interns, whom
he meticulously trained and groomed to prepare them for handling Enterprise Level
Applications. This journey reflects Mayank''s resilience, determination, and entrepreneurial
spirit in building TechChefz Digital from the ground up.
With a passion for innovation and a relentless drive for excellence, Mayank has
steered TechChefz Digital through strategic partnerships, groundbreaking projects,
and exponential growth. His leadership has been instrumental in shaping TechChefz
Digital into a leading force in the digital transformation arena, inspiring a
culture of innovation and excellence that continues to propel the company forward.'
- 'In what ways can machine learning optimize our operations?
Machine learning algorithms can analyze operational data to identify inefficiencies,
predict maintenance needs, optimize supply chains, and automate repetitive tasks,
significantly improving operational efficiency and reducing costs.'
- source_sentence: What kind of data do you leverage for AI solutions?
sentences:
- 'In the Introducing the world of Global Insurance Firm, we crafted Effective Solutions
for Complex Problems and delieverd a comprehensive Website Development, Production
Support & Managed Services, we optimized customer journeys, integrate analytics,
CRM, ERP, and third-party applications, and implement cutting-edge technologies
for enhanced performance and efficiency
and achievied 200% Reduction in operational time & effort managing content & experience,
70% Reduction in Deployment Errors and Downtime, 2.5X Customer Engagement, Conversion
& Retention'
- 'Our Solutions
Strategy & Digital Transformation
Innovate via digital transformation, modernize tech, craft product strategies,
enhance customer experiences, optimize data analytics, transition to cloud for
growth and efficiency
Product Engineering & Custom Development
Providing product development, enterprise web and mobile development, microservices
integrations, quality engineering, and application support services to drive innovation
and enhance operational efficiency.'
- Our AI/ML services pave the way for transformative change across industries, embodying
a client-focused approach that integrates seamlessly with human-centric innovation.
Our collaborative teams are dedicated to fostering growth, leveraging data, and
harnessing the predictive power of artificial intelligence to forge the next wave
of software excellence. We don't just deliver AI; we deliver the future.
- source_sentence: What managed services does TechChefz provide ?
sentences:
- " What we do\n\nDigital Strategy\nCreating digital frameworks that transform\
\ your digital enterprise and produce a return on investment.\n\nPlatform Selection\n\
Helping you select the optimal digital experience, commerce, cloud and marketing\
\ platform for your enterprise.\n\nPlatform Builds\nDeploying next-gen scalable\
\ and agile enterprise digital platforms, along with multi-platform integrations.\n\
\nProduct Builds\nHelp you ideate, strategize, and engineer your product with\
\ help of our enterprise frameworks \n\nTeam Augmentation\nHelp you scale up and\
\ augment your existing team to solve your hiring challenges with our easy to\
\ deploy staff augmentation offerings .\nManaged Services\nOperate and monitor\
\ your business-critical applications, data, and IT workloads, along with Application\
\ maintenance and operations\n"
- 'What makes your DevOps solutions stand out from the competition?
Our DevOps solutions stand out due to our personalized approach, extensive expertise,
and commitment to innovation. We focus on delivering measurable results, such
as reduced deployment times, improved system reliability, and enhanced security,
ensuring you get the maximum benefit from our services.'
- 'Introducing the world of General Insurance Firm
In this project, we implemented Digital Solution and Implementation with Headless
Drupal as the CMS, and lightweight React JS (Next JS SSR on Node JS) with the
following features:
PWA & AMP based Web Pages
Page Speed Optimization
Reusable and scalable React JS / Next JS Templates and Components
Headless Drupal CMS with Content & Experience management, approval workflows,
etc for seamless collaboration between the business and marketing teams
Minimalistic Buy and Renewal Journeys for various products, with API integrations
and adherence to data compliances
We achieved 250% Reduction in Operational Time and Effort in managing the Content
& Experience for Buy & renew Journeys,220% Reduction in Customer Drops during
buy and renewal journeys, 300% Reduction in bounce rate on policy landing and
campaign pages'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.17333333333333334
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5466666666666666
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6933333333333334
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.17333333333333334
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1822222222222222
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06933333333333333
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.17333333333333334
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5466666666666666
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6933333333333334
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.43705488094312567
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3539576719576719
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3663753684578632
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.17333333333333334
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5333333333333333
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6266666666666667
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6933333333333334
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.17333333333333334
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.17777777777777776
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12533333333333332
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06933333333333333
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.17333333333333334
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5333333333333333
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6266666666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6933333333333334
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.43324477959330543
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3495185185185184
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.359896266319179
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.22666666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.49333333333333335
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.56
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.68
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.22666666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.16444444444444445
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11199999999999997
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06799999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.22666666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.49333333333333335
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.56
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.68
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4383628839300849
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.36210582010582004
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3731640827722892
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.24
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.48
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.56
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6933333333333334
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.24
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.16
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11199999999999997
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06933333333333332
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.24
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.48
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.56
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6933333333333334
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4443870388298522
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.36651322751322746
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.37546675549059694
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.08
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.3466666666666667
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.49333333333333335
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.56
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.08
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.11555555555555555
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.09866666666666667
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.05599999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.08
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.3466666666666667
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.49333333333333335
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.56
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3120295466486537
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.23260846560846554
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.24731947636993173
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Shashwat13333/bge-base-en-v1.5")
# Run inference
sentences = [
'What managed services does TechChefz provide ?',
' What we do\n\nDigital Strategy\nCreating digital frameworks that transform your digital enterprise and produce a return on investment.\n\nPlatform Selection\nHelping you select the optimal digital experience, commerce, cloud and marketing platform for your enterprise.\n\nPlatform Builds\nDeploying next-gen scalable and agile enterprise digital platforms, along with multi-platform integrations.\n\nProduct Builds\nHelp you ideate, strategize, and engineer your product with help of our enterprise frameworks \n\nTeam Augmentation\nHelp you scale up and augment your existing team to solve your hiring challenges with our easy to deploy staff augmentation offerings .\nManaged Services\nOperate and monitor your business-critical applications, data, and IT workloads, along with Application maintenance and operations\n',
'Introducing the world of General Insurance Firm\nIn this project, we implemented Digital Solution and Implementation with Headless Drupal as the CMS, and lightweight React JS (Next JS SSR on Node JS) with the following features:\nPWA & AMP based Web Pages\nPage Speed Optimization\nReusable and scalable React JS / Next JS Templates and Components\nHeadless Drupal CMS with Content & Experience management, approval workflows, etc for seamless collaboration between the business and marketing teams\nMinimalistic Buy and Renewal Journeys for various products, with API integrations and adherence to data compliances\n\nWe achieved 250% Reduction in Operational Time and Effort in managing the Content & Experience for Buy & renew Journeys,220% Reduction in Customer Drops during buy and renewal journeys, 300% Reduction in bounce rate on policy landing and campaign pages',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:----------|
| cosine_accuracy@1 | 0.1733 | 0.1733 | 0.2267 | 0.24 | 0.08 |
| cosine_accuracy@3 | 0.5467 | 0.5333 | 0.4933 | 0.48 | 0.3467 |
| cosine_accuracy@5 | 0.6 | 0.6267 | 0.56 | 0.56 | 0.4933 |
| cosine_accuracy@10 | 0.6933 | 0.6933 | 0.68 | 0.6933 | 0.56 |
| cosine_precision@1 | 0.1733 | 0.1733 | 0.2267 | 0.24 | 0.08 |
| cosine_precision@3 | 0.1822 | 0.1778 | 0.1644 | 0.16 | 0.1156 |
| cosine_precision@5 | 0.12 | 0.1253 | 0.112 | 0.112 | 0.0987 |
| cosine_precision@10 | 0.0693 | 0.0693 | 0.068 | 0.0693 | 0.056 |
| cosine_recall@1 | 0.1733 | 0.1733 | 0.2267 | 0.24 | 0.08 |
| cosine_recall@3 | 0.5467 | 0.5333 | 0.4933 | 0.48 | 0.3467 |
| cosine_recall@5 | 0.6 | 0.6267 | 0.56 | 0.56 | 0.4933 |
| cosine_recall@10 | 0.6933 | 0.6933 | 0.68 | 0.6933 | 0.56 |
| **cosine_ndcg@10** | **0.4371** | **0.4332** | **0.4384** | **0.4444** | **0.312** |
| cosine_mrr@10 | 0.354 | 0.3495 | 0.3621 | 0.3665 | 0.2326 |
| cosine_map@100 | 0.3664 | 0.3599 | 0.3732 | 0.3755 | 0.2473 |
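These figures come from the `InformationRetrievalEvaluator` runs noted above. As a rough illustration of how such an evaluation could be reproduced at each truncated dimension, here is a minimal sketch; the queries, corpus, and relevance judgments below are placeholders, not the actual evaluation data:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Shashwat13333/bge-base-en-v1.5")

# Placeholder evaluation data: query-id -> text, doc-id -> text, query-id -> set of relevant doc-ids
queries = {"q1": "What managed services does TechChefz provide?"}
corpus = {"d1": "Managed Services: operate and monitor business-critical applications, data, and IT workloads."}
relevant_docs = {"q1": {"d1"}}

for dim in (768, 512, 256, 128, 64):
    evaluator = InformationRetrievalEvaluator(
        queries, corpus, relevant_docs, name=f"dim_{dim}", truncate_dim=dim
    )
    # Returns a dict of accuracy@k, precision@k, recall@k, ndcg@10, mrr@10 and map@100 scores
    print(evaluator(model))
```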
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 150 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 150 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 12.4 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 126.17 tokens</li><li>max: 378 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Is it hard to move old systems to the cloud?</code> | <code>We offer custom software development, digital marketing strategies, and tailored solutions to drive tangible results for your business. Our expert team combines technical prowess with industry insights to propel your business forward in the digital landscape.<br><br>"Engage, analyze & target your customers<br>Digital transformation enables you to interact with customers across multiple channels, providing personalized experiences. This could include social media engagement, interactive websites, and mobile apps." "Empower your employees & partners<br>The push for digital transformation has led many companies to embrace cloud solutions. However, the migration and integration of legacy systems into the cloud often present challenges." "Optimize & automate your operations<br>The push for digital transformation has led many companies to embrace cloud solutions. However, the migration and integration of legacy systems into the cloud often present challenges." "Transform your products<br>The push for digi...</code> |
| <code>What benefits does marketing automation offer for time management?</code> | <code>Our MarTech capabilities<br><br>Personalization<br>Involves tailoring marketing messages and experiences to individual customers. It enhances customer engagement, loyalty, and ultimately, conversion rates.<br><br>Marketing Automation<br>Marketing automation streamlines repetitive tasks such as email marketing, lead nurturing, and social media posting. It improves efficiency, saves time, and ensures timely communication with customers.<br><br>Customer Relationship Management<br>CRM systems help manage interactions with current and potential customers. They store customer data, track interactions, and facilitate communication, improving customer retention.</code> |
| <code>How can your recommendation engines improve our business?</code> | <code>How can your recommendation engines improve our business?<br>Our recommendation engines are designed to analyze customer behavior and preferences to deliver personalized suggestions, enhancing user experience, increasing sales, and boosting customer retention.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
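A minimal sketch of how this loss configuration could be constructed in Sentence Transformers, assuming the `(anchor, positive)` training columns shown above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# The inner loss scores (anchor, positive) pairs against in-batch negatives;
# MatryoshkaLoss re-applies it at each truncated embedding dimension with equal weights.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```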
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `gradient_accumulation_steps`: 4
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `push_to_hub`: True
- `hub_model_id`: Shashwat13333/bge-base-en-v1.5
- `push_to_hub_model_id`: bge-base-en-v1.5
- `batch_sampler`: no_duplicates
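
Expressed as code, these non-default values would roughly correspond to the following `SentenceTransformerTrainingArguments`; this is a sketch for orientation, not the exact training script, and the output directory is a placeholder:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder path
    num_train_epochs=4,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
    learning_rate=1e-5,
    weight_decay=0.01,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    fp16=True,
    eval_strategy="epoch",
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
    push_to_hub=True,
    hub_model_id="Shashwat13333/bge-base-en-v1.5",
)
```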
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: Shashwat13333/bge-base-en-v1.5
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: bge-base-en-v1.5
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.2105 | 1 | 4.4608 | - | - | - | - | - |
| 0.8421 | 4 | - | 0.3891 | 0.3727 | 0.4175 | 0.3876 | 0.2956 |
| 1.2105 | 5 | 4.2215 | - | - | - | - | - |
| 1.8421 | 8 | - | 0.4088 | 0.4351 | 0.4034 | 0.4052 | 0.3167 |
| 2.4211 | 10 | 3.397 | - | - | - | - | - |
| 2.8421 | 12 | - | 0.4440 | 0.4252 | 0.4133 | 0.4284 | 0.3024 |
| 3.6316 | 15 | 2.87 | - | - | - | - | - |
| **3.8421** | **16** | **-** | **0.4371** | **0.4332** | **0.4384** | **0.4444** | **0.312** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
kk-aivio/6289da22-4ee8-4b1f-816a-6acdd8dd6e50 | kk-aivio | 2025-02-03T13:55:52Z | 22 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-PhiForCausalLM",
"base_model:adapter:echarlaix/tiny-random-PhiForCausalLM",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T13:55:25Z | ---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-PhiForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6289da22-4ee8-4b1f-816a-6acdd8dd6e50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-PhiForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5edbcf1a76167eb8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5edbcf1a76167eb8_train_data.json
type:
field_instruction: anchor
field_output: logical
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/6289da22-4ee8-4b1f-816a-6acdd8dd6e50
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/5edbcf1a76167eb8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3d45962f-e7c9-422a-ab3c-ba25232bc246
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3d45962f-e7c9-422a-ab3c-ba25232bc246
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6289da22-4ee8-4b1f-816a-6acdd8dd6e50
This model is a fine-tuned version of [echarlaix/tiny-random-PhiForCausalLM](https://huggingface.co/echarlaix/tiny-random-PhiForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8859
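
Since this repository contains a LoRA adapter rather than full model weights, it would typically be loaded on top of the base model with PEFT; a minimal sketch (the prompt and generation settings are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "echarlaix/tiny-random-PhiForCausalLM"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)

# Attach the fine-tuned LoRA adapter to the base model
model = PeftModel.from_pretrained(base, "kk-aivio/6289da22-4ee8-4b1f-816a-6acdd8dd6e50")

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```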
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0047 | 1 | 6.9418 |
| 6.9284 | 0.2339 | 50 | 6.9245 |
| 6.9011 | 0.4678 | 100 | 6.8959 |
| 6.8932 | 0.7018 | 150 | 6.8868 |
| 6.8913 | 0.9357 | 200 | 6.8859 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
romainnn/54a82f80-5733-4172-9014-5c09604abba5 | romainnn | 2025-02-03T13:55:30Z | 21 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-PhiForCausalLM",
"base_model:adapter:echarlaix/tiny-random-PhiForCausalLM",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T13:55:15Z | ---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-PhiForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 54a82f80-5733-4172-9014-5c09604abba5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-PhiForCausalLM
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5edbcf1a76167eb8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5edbcf1a76167eb8_train_data.json
type:
field_instruction: anchor
field_output: logical
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: romainnn/54a82f80-5733-4172-9014-5c09604abba5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_steps: 38
micro_batch_size: 4
mlflow_experiment_name: /tmp/5edbcf1a76167eb8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3d45962f-e7c9-422a-ab3c-ba25232bc246
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3d45962f-e7c9-422a-ab3c-ba25232bc246
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 54a82f80-5733-4172-9014-5c09604abba5
This model is a fine-tuned version of [echarlaix/tiny-random-PhiForCausalLM](https://huggingface.co/echarlaix/tiny-random-PhiForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.9419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 38
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.9374 | 0.0374 | 1 | 6.9419 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Shaleen123/MedicalEDI-Llama3.1-8b-Reasoning | Shaleen123 | 2025-02-03T13:54:34Z | 128 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-02-01T19:59:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
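
Pending details from the authors, the repository tags (`transformers`, `llama`, `text-generation`, `conversational`) suggest the model loads as a standard causal LM; a hedged sketch (the prompt is illustrative and a chat template is assumed to be defined):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Shaleen123/MedicalEDI-Llama3.1-8b-Reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize the key symptoms of anemia."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```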
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
great0001/7501b1eb-c08e-4a17-b983-bd83bde5c7da | great0001 | 2025-02-03T13:54:25Z | 11 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b-it",
"base_model:adapter:unsloth/gemma-2-2b-it",
"license:gemma",
"region:us"
]
| null | 2025-02-03T13:51:18Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7501b1eb-c08e-4a17-b983-bd83bde5c7da
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7465fecdd1b4fae8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7465fecdd1b4fae8_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/7501b1eb-c08e-4a17-b983-bd83bde5c7da
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/7465fecdd1b4fae8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 96ba0598-e365-4fe4-a421-689fa74a779f
wandb_project: Birthday-SN56-33-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 96ba0598-e365-4fe4-a421-689fa74a779f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7501b1eb-c08e-4a17-b983-bd83bde5c7da
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0974
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 2.5654 |
| 0.1676 | 0.0875 | 50 | 0.1227 |
| 0.1075 | 0.1751 | 100 | 0.1239 |
| 0.1299 | 0.2626 | 150 | 0.0957 |
| 0.1254 | 0.3501 | 200 | 0.0974 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
havinash-ai/3cbee2fe-6e5f-43d7-93eb-21377bdf07ce | havinash-ai | 2025-02-03T13:53:46Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b-it",
"base_model:adapter:unsloth/gemma-2-2b-it",
"license:gemma",
"region:us"
]
| null | 2025-02-03T13:50:58Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3cbee2fe-6e5f-43d7-93eb-21377bdf07ce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7465fecdd1b4fae8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7465fecdd1b4fae8_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/3cbee2fe-6e5f-43d7-93eb-21377bdf07ce
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/7465fecdd1b4fae8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 96ba0598-e365-4fe4-a421-689fa74a779f
wandb_project: Birthday-SN56-9-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 96ba0598-e365-4fe4-a421-689fa74a779f
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3cbee2fe-6e5f-43d7-93eb-21377bdf07ce
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 3.9426 |
| 0.1915 | 0.0875 | 50 | 0.1515 |
| 0.1069 | 0.1751 | 100 | 0.1052 |
| 0.0896 | 0.2626 | 150 | 0.0818 |
| 0.102 | 0.3501 | 200 | 0.0778 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
clarxus/d643de7d-b4d8-4699-a449-32c0a869761a | clarxus | 2025-02-03T13:52:32Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T13:05:08Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d643de7d-b4d8-4699-a449-32c0a869761a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 67ae0a068059de74_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/67ae0a068059de74_train_data.json
type:
field_input: Company Name
field_instruction: Position
field_output: Long Description
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: clarxus/d643de7d-b4d8-4699-a449-32c0a869761a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/67ae0a068059de74_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 5a3810f1-954e-4bc6-9cfa-cd7881f9fa67
wandb_project: Gradients-On-Seven
wandb_run: your_name
wandb_runid: 5a3810f1-954e-4bc6-9cfa-cd7881f9fa67
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d643de7d-b4d8-4699-a449-32c0a869761a
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6630
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 2.8995 |
| 2.8798 | 0.0021 | 9 | 2.8480 |
| 2.7651 | 0.0043 | 18 | 2.7714 |
| 2.7154 | 0.0064 | 27 | 2.7261 |
| 2.7263 | 0.0086 | 36 | 2.7029 |
| 2.6637 | 0.0107 | 45 | 2.6886 |
| 2.6984 | 0.0128 | 54 | 2.6787 |
| 2.6832 | 0.0150 | 63 | 2.6715 |
| 2.6334 | 0.0171 | 72 | 2.6671 |
| 2.6979 | 0.0193 | 81 | 2.6644 |
| 2.6112 | 0.0214 | 90 | 2.6632 |
| 2.6077 | 0.0235 | 99 | 2.6630 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
abaddon182/73fe81cf-9410-4ebd-9466-c143c23a184c | abaddon182 | 2025-02-03T13:51:57Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-3B",
"base_model:adapter:unsloth/Llama-3.2-3B",
"license:llama3.2",
"region:us"
]
| null | 2025-02-03T12:54:54Z | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 73fe81cf-9410-4ebd-9466-c143c23a184c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-3B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 78c82b5588259efc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/78c82b5588259efc_train_data.json
type:
field_instruction: query
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: abaddon182/73fe81cf-9410-4ebd-9466-c143c23a184c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/78c82b5588259efc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cd38ec50-76ed-4242-a066-586aa4205714
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cd38ec50-76ed-4242-a066-586aa4205714
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 73fe81cf-9410-4ebd-9466-c143c23a184c
This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2120
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5775 | 0.0001 | 1 | 0.9028 |
| 0.2297 | 0.0073 | 50 | 0.2803 |
| 0.2411 | 0.0146 | 100 | 0.2375 |
| 0.213 | 0.0218 | 150 | 0.2182 |
| 0.2154 | 0.0291 | 200 | 0.2120 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
prxy5604/7a498409-8612-453b-af4b-362335f0adf0 | prxy5604 | 2025-02-03T13:51:26Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-3B",
"base_model:adapter:unsloth/Llama-3.2-3B",
"license:llama3.2",
"region:us"
]
| null | 2025-02-03T12:53:55Z | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7a498409-8612-453b-af4b-362335f0adf0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-3B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 78c82b5588259efc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/78c82b5588259efc_train_data.json
type:
field_instruction: query
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/7a498409-8612-453b-af4b-362335f0adf0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/78c82b5588259efc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cd38ec50-76ed-4242-a066-586aa4205714
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cd38ec50-76ed-4242-a066-586aa4205714
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7a498409-8612-453b-af4b-362335f0adf0
This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5775 | 0.0001 | 1 | 0.9028 |
| 0.2308 | 0.0073 | 50 | 0.2811 |
| 0.2411 | 0.0146 | 100 | 0.2377 |
| 0.2131 | 0.0218 | 150 | 0.2180 |
| 0.2154 | 0.0291 | 200 | 0.2119 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhungphammmmm/f47c11db-3f53-4d84-ab69-c5d543275f68 | nhungphammmmm | 2025-02-03T13:51:14Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-02-03T13:12:24Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f47c11db-3f53-4d84-ab69-c5d543275f68
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3b453216382ea03b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3b453216382ea03b_train_data.json
type:
field_instruction: question
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhungphammmmm/f47c11db-3f53-4d84-ab69-c5d543275f68
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3b453216382ea03b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d267361d-35f8-40a0-bc9b-8de3705c3658
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d267361d-35f8-40a0-bc9b-8de3705c3658
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f47c11db-3f53-4d84-ab69-c5d543275f68
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4575 | 0.0051 | 200 | 2.5181 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
cgus/granite-3.1-8b-instruct-abliterated-exl2 | cgus | 2025-02-03T13:50:04Z | 5 | 0 | transformers | [
"transformers",
"granite",
"text-generation",
"language",
"granite-3.1",
"abliterated",
"uncensored",
"conversational",
"base_model:huihui-ai/granite-3.1-8b-instruct-abliterated",
"base_model:quantized:huihui-ai/granite-3.1-8b-instruct-abliterated",
"license:apache-2.0",
"autotrain_compatible",
"4-bit",
"exl2",
"region:us"
]
| text-generation | 2025-02-03T13:17:58Z | ---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- granite-3.1
- abliterated
- uncensored
base_model:
- huihui-ai/granite-3.1-8b-instruct-abliterated
---
# granite-3.1-8b-instruct-abliterated-exl2
Model: [granite-3.1-8b-instruct-abliterated](https://huggingface.co/huihui-ai/granite-3.1-8b-instruct-abliterated)
Made by: [huihui-ai](https://huggingface.co/huihui-ai)
Granite 3 authors: [Granite Team, IBM](https://huggingface.co/ibm-granite)
## Quants
[4bpw h6 (main)](https://huggingface.co/cgus/granite-3.1-8b-instruct-abliterated-exl2/tree/main)
[4.5bpw h6](https://huggingface.co/cgus/granite-3.1-8b-instruct-abliterated-exl2/tree/4.5bpw-h6)
[5bpw h6](https://huggingface.co/cgus/granite-3.1-8b-instruct-abliterated-exl2/tree/5bpw-h6)
[6bpw h6](https://huggingface.co/cgus/granite-3.1-8b-instruct-abliterated-exl2/tree/6bpw-h6)
[8bpw h8](https://huggingface.co/cgus/granite-3.1-8b-instruct-abliterated-exl2/tree/8bpw-h8)
## Quantization notes
Made with exllamav2 0.2.7 using the default calibration dataset. This model requires exllamav2 0.2.7 or newer.
Exl2 quants must be fully loaded into GPU VRAM; RAM offloading isn't supported natively.
Additionally, they require an Nvidia RTX GPU on Windows, or an Nvidia RTX or AMD ROCm-capable GPU on Linux.
These quants can be used with TabbyAPI or Text-Generation-WebUI.
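For example, a specific bpw branch could be fetched with `huggingface_hub` before pointing TabbyAPI or Text-Generation-WebUI at it; the revision and target directory below are illustrative:

```python
from huggingface_hub import snapshot_download

snapshot_download(
    "cgus/granite-3.1-8b-instruct-abliterated-exl2",
    revision="6bpw-h6",  # any of the quant branches listed above
    local_dir="granite-3.1-8b-instruct-abliterated-exl2-6bpw",
)
```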
# Original model card
# huihui-ai/granite-3.1-8b-instruct-abliterated
This is an uncensored version of [ibm-granite/granite-3.1-8b-instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it).
This is a crude, proof-of-concept implementation of removing refusals from an LLM without using TransformerLens.
## Use with ollama
You can use [huihui_ai/granite3.1-dense-abliterated](https://ollama.com/huihui_ai/granite3.1-dense-abliterated) directly:
```
ollama run huihui_ai/granite3.1-dense-abliterated
``` |
jongheeyun/gemma-ko-7b-Q5_K_M-GGUF | jongheeyun | 2025-02-03T13:46:25Z | 21 | 0 | transformers | [
"transformers",
"gguf",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"ko",
"en",
"base_model:beomi/gemma-ko-7b",
"base_model:quantized:beomi/gemma-ko-7b",
"license:other",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-02-03T13:45:45Z | ---
language:
- ko
- en
license: other
library_name: transformers
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
tags:
- pytorch
- llama-cpp
- gguf-my-repo
base_model: beomi/gemma-ko-7b
---
# jongheeyun/gemma-ko-7b-Q5_K_M-GGUF
This model was converted to GGUF format from [`beomi/gemma-ko-7b`](https://huggingface.co/beomi/gemma-ko-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/beomi/gemma-ko-7b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jongheeyun/gemma-ko-7b-Q5_K_M-GGUF --hf-file gemma-ko-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jongheeyun/gemma-ko-7b-Q5_K_M-GGUF --hf-file gemma-ko-7b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jongheeyun/gemma-ko-7b-Q5_K_M-GGUF --hf-file gemma-ko-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jongheeyun/gemma-ko-7b-Q5_K_M-GGUF --hf-file gemma-ko-7b-q5_k_m.gguf -c 2048
```
|
corranm/vit-base-patch16-224-in21k_16batch | corranm | 2025-02-03T13:39:38Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:corranm/first_vote_100_full_pic_without_vote_highlight_square",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-02-02T22:57:35Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k_16batch
results: []
datasets:
- corranm/first_vote_100_full_pic_without_vote_highlight_square
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k_16batch
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [corranm/first_vote_100_full_pic_without_vote_highlight_square](https://huggingface.co/datasets/corranm/first_vote_100_full_pic_without_vote_highlight_square) dataset listed in the metadata above.
It achieves the following results on the evaluation set:
- Loss: 1.2813
- F1 Macro: 0.4280
- F1 Micro: 0.5455
- F1 Weighted: 0.4882
- Precision Macro: 0.4004
- Precision Micro: 0.5455
- Precision Weighted: 0.4529
- Recall Macro: 0.4762
- Recall Micro: 0.5455
- Recall Weighted: 0.5455
- Accuracy: 0.5455
## Model description
Using a batch size of 16
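As a minimal usage sketch (not part of the training code above; the image path is a placeholder), the checkpoint can be loaded with the 🤗 Transformers image-classification pipeline:
```python
from transformers import pipeline
from PIL import Image

# Load this fine-tuned checkpoint from the Hub
classifier = pipeline(
    "image-classification",
    model="corranm/vit-base-patch16-224-in21k_16batch",
)

image = Image.open("example.jpg")  # placeholder path to an RGB image
print(classifier(image, top_k=3))  # list of {"label": ..., "score": ...} dicts
```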
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:|
| 1.9371 | 1.0 | 29 | 1.9372 | 0.0504 | 0.1212 | 0.0604 | 0.0334 | 0.1212 | 0.0403 | 0.1029 | 0.1212 | 0.1212 | 0.1212 |
| 1.9078 | 2.0 | 58 | 1.9066 | 0.0454 | 0.1818 | 0.0602 | 0.0272 | 0.1818 | 0.0361 | 0.1371 | 0.1818 | 0.1818 | 0.1818 |
| 1.9276 | 3.0 | 87 | 1.8808 | 0.0696 | 0.1818 | 0.0968 | 0.0492 | 0.1818 | 0.0682 | 0.1295 | 0.1818 | 0.1818 | 0.1818 |
| 1.8373 | 4.0 | 116 | 1.8696 | 0.0485 | 0.2045 | 0.0695 | 0.0292 | 0.2045 | 0.0418 | 0.1429 | 0.2045 | 0.2045 | 0.2045 |
| 1.8152 | 5.0 | 145 | 1.8490 | 0.1339 | 0.2576 | 0.1745 | 0.1298 | 0.2576 | 0.1640 | 0.1944 | 0.2576 | 0.2576 | 0.2576 |
| 1.8488 | 6.0 | 174 | 1.8281 | 0.1379 | 0.2727 | 0.1817 | 0.1512 | 0.2727 | 0.1891 | 0.1997 | 0.2727 | 0.2727 | 0.2727 |
| 1.7626 | 7.0 | 203 | 1.7917 | 0.2271 | 0.3333 | 0.2718 | 0.1922 | 0.3333 | 0.2298 | 0.2783 | 0.3333 | 0.3333 | 0.3333 |
| 1.7169 | 8.0 | 232 | 1.7478 | 0.2887 | 0.4242 | 0.3465 | 0.2706 | 0.4242 | 0.3154 | 0.3426 | 0.4242 | 0.4242 | 0.4242 |
| 1.5364 | 9.0 | 261 | 1.7098 | 0.2835 | 0.4091 | 0.3409 | 0.2720 | 0.4091 | 0.3245 | 0.3324 | 0.4091 | 0.4091 | 0.4091 |
| 1.7373 | 10.0 | 290 | 1.6765 | 0.2906 | 0.4167 | 0.3463 | 0.2726 | 0.4167 | 0.3157 | 0.3386 | 0.4167 | 0.4167 | 0.4167 |
| 1.5345 | 11.0 | 319 | 1.6423 | 0.2805 | 0.3939 | 0.3342 | 0.3728 | 0.3939 | 0.4258 | 0.3275 | 0.3939 | 0.3939 | 0.3939 |
| 1.6421 | 12.0 | 348 | 1.6103 | 0.3324 | 0.4697 | 0.3978 | 0.4583 | 0.4697 | 0.5178 | 0.3760 | 0.4697 | 0.4697 | 0.4697 |
| 1.5266 | 13.0 | 377 | 1.5835 | 0.3171 | 0.4621 | 0.3822 | 0.2917 | 0.4621 | 0.3483 | 0.3748 | 0.4621 | 0.4621 | 0.4621 |
| 1.5182 | 14.0 | 406 | 1.5633 | 0.3133 | 0.4242 | 0.3680 | 0.3634 | 0.4242 | 0.4009 | 0.3568 | 0.4242 | 0.4242 | 0.4242 |
| 1.5341 | 15.0 | 435 | 1.5528 | 0.3015 | 0.4167 | 0.3585 | 0.3109 | 0.4167 | 0.3638 | 0.3499 | 0.4167 | 0.4167 | 0.4167 |
| 1.3961 | 16.0 | 464 | 1.5273 | 0.3449 | 0.4545 | 0.3991 | 0.4329 | 0.4545 | 0.4704 | 0.3839 | 0.4545 | 0.4545 | 0.4545 |
| 1.3601 | 17.0 | 493 | 1.4971 | 0.3670 | 0.5 | 0.4357 | 0.5047 | 0.5 | 0.5382 | 0.4078 | 0.5 | 0.5 | 0.5 |
| 1.2535 | 18.0 | 522 | 1.5006 | 0.3511 | 0.4621 | 0.4138 | 0.4778 | 0.4621 | 0.5101 | 0.3872 | 0.4621 | 0.4621 | 0.4621 |
| 1.2375 | 19.0 | 551 | 1.4659 | 0.3655 | 0.4924 | 0.4345 | 0.4298 | 0.4924 | 0.4797 | 0.4020 | 0.4924 | 0.4924 | 0.4924 |
| 1.2141 | 20.0 | 580 | 1.4407 | 0.3914 | 0.5076 | 0.4565 | 0.4650 | 0.5076 | 0.5087 | 0.4217 | 0.5076 | 0.5076 | 0.5076 |
| 1.2831 | 21.0 | 609 | 1.4454 | 0.3965 | 0.5152 | 0.4645 | 0.4801 | 0.5152 | 0.5265 | 0.4214 | 0.5152 | 0.5152 | 0.5152 |
| 1.1543 | 22.0 | 638 | 1.4167 | 0.4285 | 0.5455 | 0.4997 | 0.4781 | 0.5455 | 0.5309 | 0.4521 | 0.5455 | 0.5455 | 0.5455 |
| 1.4079 | 23.0 | 667 | 1.4465 | 0.3675 | 0.4621 | 0.4269 | 0.4187 | 0.4621 | 0.4676 | 0.3929 | 0.4621 | 0.4621 | 0.4621 |
| 1.0619 | 24.0 | 696 | 1.4249 | 0.4092 | 0.5076 | 0.4724 | 0.4659 | 0.5076 | 0.5180 | 0.4336 | 0.5076 | 0.5076 | 0.5076 |
| 1.1059 | 25.0 | 725 | 1.3834 | 0.4356 | 0.5530 | 0.5061 | 0.5025 | 0.5530 | 0.5491 | 0.4594 | 0.5530 | 0.5530 | 0.5530 |
| 1.192 | 26.0 | 754 | 1.3784 | 0.4286 | 0.5379 | 0.4893 | 0.4566 | 0.5379 | 0.4969 | 0.4544 | 0.5379 | 0.5379 | 0.5379 |
| 1.21 | 27.0 | 783 | 1.3874 | 0.4409 | 0.5379 | 0.5060 | 0.4709 | 0.5379 | 0.5258 | 0.4616 | 0.5379 | 0.5379 | 0.5379 |
| 1.0901 | 28.0 | 812 | 1.3621 | 0.4402 | 0.5379 | 0.5074 | 0.4635 | 0.5379 | 0.5204 | 0.4557 | 0.5379 | 0.5379 | 0.5379 |
| 1.1254 | 29.0 | 841 | 1.3714 | 0.4265 | 0.5227 | 0.4873 | 0.4492 | 0.5227 | 0.4984 | 0.4449 | 0.5227 | 0.5227 | 0.5227 |
| 0.9345 | 30.0 | 870 | 1.3525 | 0.4425 | 0.5379 | 0.5074 | 0.4736 | 0.5379 | 0.5264 | 0.4557 | 0.5379 | 0.5379 | 0.5379 |
| 1.2036 | 31.0 | 899 | 1.3592 | 0.4363 | 0.5379 | 0.5020 | 0.4869 | 0.5379 | 0.5368 | 0.4533 | 0.5379 | 0.5379 | 0.5379 |
| 1.036 | 32.0 | 928 | 1.3362 | 0.4451 | 0.5455 | 0.5109 | 0.4673 | 0.5455 | 0.5226 | 0.4637 | 0.5455 | 0.5455 | 0.5455 |
| 0.9979 | 33.0 | 957 | 1.3492 | 0.4454 | 0.5455 | 0.5134 | 0.4808 | 0.5455 | 0.5358 | 0.4620 | 0.5455 | 0.5455 | 0.5455 |
| 0.8353 | 34.0 | 986 | 1.3402 | 0.4635 | 0.5606 | 0.5301 | 0.4659 | 0.5606 | 0.5268 | 0.4854 | 0.5606 | 0.5606 | 0.5606 |
| 0.9384 | 35.0 | 1015 | 1.3414 | 0.4408 | 0.5455 | 0.5088 | 0.4664 | 0.5455 | 0.5237 | 0.4602 | 0.5455 | 0.5455 | 0.5455 |
| 0.996 | 36.0 | 1044 | 1.3405 | 0.4559 | 0.5530 | 0.5235 | 0.4795 | 0.5530 | 0.5377 | 0.4715 | 0.5530 | 0.5530 | 0.5530 |
| 0.9613 | 37.0 | 1073 | 1.3357 | 0.4847 | 0.5833 | 0.5535 | 0.5011 | 0.5833 | 0.5612 | 0.5020 | 0.5833 | 0.5833 | 0.5833 |
| 0.8507 | 38.0 | 1102 | 1.3347 | 0.4760 | 0.5758 | 0.5454 | 0.4897 | 0.5758 | 0.5510 | 0.4940 | 0.5758 | 0.5758 | 0.5758 |
| 1.1563 | 39.0 | 1131 | 1.3396 | 0.4553 | 0.5530 | 0.5250 | 0.4608 | 0.5530 | 0.5234 | 0.4735 | 0.5530 | 0.5530 | 0.5530 |
| 0.9681 | 40.0 | 1160 | 1.3371 | 0.4703 | 0.5682 | 0.5396 | 0.4816 | 0.5682 | 0.5445 | 0.4887 | 0.5682 | 0.5682 | 0.5682 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
shibajustfor/58bf624b-2efb-490d-86b6-a51f873cc940 | shibajustfor | 2025-02-03T13:39:20Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
]
| null | 2025-02-03T13:38:54Z | ---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 58bf624b-2efb-490d-86b6-a51f873cc940
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8512442b605c78da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8512442b605c78da_train_data.json
type:
field_instruction: Input
field_output: Rephrased Content
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/58bf624b-2efb-490d-86b6-a51f873cc940
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/8512442b605c78da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bc22d19e-3c5a-4a4e-ab3c-1133a5b4060b
wandb_project: Birthday-SN56-38-Gradients-On-Demand
wandb_run: your_name
wandb_runid: bc22d19e-3c5a-4a4e-ab3c-1133a5b4060b
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 58bf624b-2efb-490d-86b6-a51f873cc940
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0028 | 1 | 10.3775 |
| 10.3733 | 0.1400 | 50 | 10.3699 |
| 10.345 | 0.2799 | 100 | 10.3396 |
| 10.3372 | 0.4199 | 150 | 10.3314 |
| 10.3346 | 0.5598 | 200 | 10.3300 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nathanialhunt/cf10142c-9ea9-45c0-bb15-30f31380d47b | nathanialhunt | 2025-02-03T13:38:55Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
]
| null | 2025-02-03T13:38:27Z | ---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cf10142c-9ea9-45c0-bb15-30f31380d47b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8512442b605c78da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8512442b605c78da_train_data.json
type:
field_instruction: Input
field_output: Rephrased Content
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/cf10142c-9ea9-45c0-bb15-30f31380d47b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/8512442b605c78da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bc22d19e-3c5a-4a4e-ab3c-1133a5b4060b
wandb_project: Birthday-SN56-24-Gradients-On-Demand
wandb_run: your_name
wandb_runid: bc22d19e-3c5a-4a4e-ab3c-1133a5b4060b
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cf10142c-9ea9-45c0-bb15-30f31380d47b
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0028 | 1 | 10.3776 |
| 10.3753 | 0.1400 | 50 | 10.3731 |
| 10.3625 | 0.2799 | 100 | 10.3601 |
| 10.3502 | 0.4199 | 150 | 10.3462 |
| 10.349 | 0.5598 | 200 | 10.3444 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
outlookAi/A2Dl70wBbz | outlookAi | 2025-02-03T13:38:48Z | 20 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-02-03T13:16:00Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Minnie
---
# A2Dl70Wbbz
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Minnie` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/A2Dl70wBbz', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
earnxus/86342fc0-0278-40bc-b93e-80d3f0956bbd | earnxus | 2025-02-03T13:32:26Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-02-03T13:18:52Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 86342fc0-0278-40bc-b93e-80d3f0956bbd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b33e7113daad43bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b33e7113daad43bd_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: earnxus/86342fc0-0278-40bc-b93e-80d3f0956bbd
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b33e7113daad43bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 2e9ec04e-317e-49f4-b782-0aebbd3b8e64
wandb_project: Gradients-On-Nine
wandb_run: your_name
wandb_runid: 2e9ec04e-317e-49f4-b782-0aebbd3b8e64
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 86342fc0-0278-40bc-b93e-80d3f0956bbd
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8204 | 0.0411 | 200 | 0.7512 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hellum55/llama-3.2-3b-it-Ecommerce-ChatBot-Q4_K_M-GGUF | hellum55 | 2025-02-03T13:31:21Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:hellum55/llama-3.2-3b-it-Ecommerce-ChatBot",
"base_model:quantized:hellum55/llama-3.2-3b-it-Ecommerce-ChatBot",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-02-03T13:31:09Z | ---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: hellum55/llama-3.2-3b-it-Ecommerce-ChatBot
---
# hellum55/llama-3.2-3b-it-Ecommerce-ChatBot-Q4_K_M-GGUF
This model was converted to GGUF format from [`hellum55/llama-3.2-3b-it-Ecommerce-ChatBot`](https://huggingface.co/hellum55/llama-3.2-3b-it-Ecommerce-ChatBot) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/hellum55/llama-3.2-3b-it-Ecommerce-ChatBot) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo hellum55/llama-3.2-3b-it-Ecommerce-ChatBot-Q4_K_M-GGUF --hf-file llama-3.2-3b-it-ecommerce-chatbot-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo hellum55/llama-3.2-3b-it-Ecommerce-ChatBot-Q4_K_M-GGUF --hf-file llama-3.2-3b-it-ecommerce-chatbot-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo hellum55/llama-3.2-3b-it-Ecommerce-ChatBot-Q4_K_M-GGUF --hf-file llama-3.2-3b-it-ecommerce-chatbot-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo hellum55/llama-3.2-3b-it-Ecommerce-ChatBot-Q4_K_M-GGUF --hf-file llama-3.2-3b-it-ecommerce-chatbot-q4_k_m.gguf -c 2048
```
|
Romain-XV/52a87998-c170-4662-99a1-bccc16f8207f | Romain-XV | 2025-02-03T13:30:50Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
]
| null | 2025-02-03T09:17:59Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 52a87998-c170-4662-99a1-bccc16f8207f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3dd51fb6372006d9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3dd51fb6372006d9_train_data.json
type:
field_input: original_version
field_instruction: title
field_output: french_version
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 150
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/52a87998-c170-4662-99a1-bccc16f8207f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_steps: 388
micro_batch_size: 4
mlflow_experiment_name: /tmp/3dd51fb6372006d9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 300
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 00e13256-d958-48f7-8ad0-5b9ae4c0322b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 00e13256-d958-48f7-8ad0-5b9ae4c0322b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 52a87998-c170-4662-99a1-bccc16f8207f
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8359
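Since this repository holds a LoRA adapter (PEFT) rather than full model weights, a minimal loading sketch looks like the following; it assumes the adapter in this repo is applied on top of the base model listed above:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Meta-Llama-3.1-8B-Instruct"
adapter_id = "Romain-XV/52a87998-c170-4662-99a1-bccc16f8207f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attaches the LoRA weights
```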
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 388
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9258 | 0.0007 | 1 | 1.0433 |
| 0.8099 | 0.1020 | 150 | 0.8550 |
| 0.8976 | 0.2040 | 300 | 0.8359 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Corianas/Microllama_Char_100k_step | Corianas | 2025-02-03T13:29:34Z | 174 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"dataset:roneneldan/TinyStories",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T14:11:07Z | ---
license: cc-by-nc-sa-4.0
datasets:
- roneneldan/TinyStories
---
This is a character-level model (English a-z, 0-9, and so on) trained following Andrej Karpathy's llama2.c project (https://github.com/karpathy/llama2.c) on both TinyStories and a similar internal dataset of my own.
To see or output uppercase letters, this model uses a shift-key modifier placed before the letter that should become uppercase; it has never been trained on actual uppercase letters.
This modifier is ↨ and here are the functions I use to convert from straight text to the modified format and back.
```
def add_caseifer(text):
# Using list comprehension for more efficient concatenation
    return ''.join(['↨' + char.lower() if char.isupper() else char for char in text])
def remove_caseifer(text):
new_text = ""
i = 0
while i < len(text):
if text[i] == "↨":
if i+1 < len(text):
new_text += text[i+1].upper()
i += 1
else:
pass # skip this index
else:
new_text += text[i]
i += 1
return new_text
```
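For example, a quick round-trip with these helpers (illustrative only):
```
text = "Hello, my name is Clara and I like trains."
encoded = add_caseifer(text)
print(encoded)                           # ↨hello, my name is ↨clara and ↨i like trains.
print(remove_caseifer(encoded) == text)  # True
```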
As such, for test strings to use in chat, try something like:
```
↨hello, my name is ↨clara and ↨i like
``` |
Corianas/Microllama_Char_88k_step | Corianas | 2025-02-03T13:29:32Z | 172 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"dataset:roneneldan/TinyStories",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-28T00:20:52Z | ---
license: cc-by-nc-sa-4.0
datasets:
- roneneldan/TinyStories
---
This is a character-level model (English a-z, 0-9, and so on) trained following Andrej Karpathy's llama2.c project (https://github.com/karpathy/llama2.c) on both TinyStories and a similar internal dataset of my own.
To see or output uppercase letters, this model uses a shift-key modifier placed before the letter that should become uppercase; it has never been trained on actual uppercase letters.
This modifier is ↨ and here are the functions I use to convert from straight text to the modified format and back.
```
def add_caseifer(text):
# Using list comprehension for more efficient concatenation
    return ''.join(['↨' + char.lower() if char.isupper() else char for char in text])
def remove_caseifer(text):
new_text = ""
i = 0
while i < len(text):
if text[i] == "↨":
if i+1 < len(text):
new_text += text[i+1].upper()
i += 1
else:
pass # skip this index
else:
new_text += text[i]
i += 1
return new_text
```
As such, for test strings to use in chat, try something like:
```
↨hello, my name is ↨clara and ↨i like
```
This model was only uploaded as a test to see if I got it all HF-compatible and could use tools like LazyMergekit on it, and yes, it did work. *happydance*
Corianas/MicroTask-mini-guanaco_200k | Corianas | 2025-02-03T13:29:27Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"dataset:roneneldan/TinyStories",
"dataset:guanaco/guanaco_clean",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-31T04:48:01Z | ---
license: cc-by-nc-sa-4.0
datasets:
- roneneldan/TinyStories
- guanaco/guanaco_clean
---
This is a character-level model (English a-z, 0-9, and so on) trained following Andrej Karpathy's llama2.c project (https://github.com/karpathy/llama2.c) on both TinyStories and a similar internal dataset of my own.
It has been fine-tuned on a question-answer set without any preamble: you ask the question, enter a newline, and wait for the answer.
To see or output uppercase letters, this model uses a shift-key modifier placed before the letter that should become uppercase; it has never been trained on actual uppercase letters.
This modifier is ↨ and here are the functions I use to convert from straight text to the modified format and back.
```
def add_caseifer(text):
# Using list comprehension for more efficient concatenation
    return ''.join(['↨' + char.lower() if char.isupper() else char for char in text])
def remove_caseifer(text):
new_text = ""
i = 0
while i < len(text):
if text[i] == "↨":
if i+1 < len(text):
new_text += text[i+1].upper()
i += 1
else:
pass # skip this index
else:
new_text += text[i]
i += 1
return new_text
```
As such for test strings to use in chat try using somthing like:
```
↨hello, my name is ↨clara and ↨i like
``` |
minhnguyennnnnn/ee0df98b-2015-40ec-abe3-59765a8f47c6 | minhnguyennnnnn | 2025-02-03T13:28:50Z | 13 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-02-03T12:18:33Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ee0df98b-2015-40ec-abe3-59765a8f47c6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ef1d793a471b8e87_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ef1d793a471b8e87_train_data.json
type:
field_instruction: question
field_output: gt_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: minhnguyennnnnn/ee0df98b-2015-40ec-abe3-59765a8f47c6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ef1d793a471b8e87_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 095657af-6e17-4dfa-ab59-639179ca02ce
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 095657af-6e17-4dfa-ab59-639179ca02ce
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ee0df98b-2015-40ec-abe3-59765a8f47c6
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9826 | 0.0284 | 200 | 0.4077 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
demohong/ee98be97-5c1f-4ecd-8f4d-d96dab6b311a | demohong | 2025-02-03T13:21:04Z | 10 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-02-03T12:25:02Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ee98be97-5c1f-4ecd-8f4d-d96dab6b311a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ed78a5b2ee6f663c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ed78a5b2ee6f663c_train_data.json
type:
field_instruction: instructions
field_output: en_responses
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: demohong/ee98be97-5c1f-4ecd-8f4d-d96dab6b311a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ed78a5b2ee6f663c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 55836b1e-bfe4-463b-aaad-afb6d5b8557a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 55836b1e-bfe4-463b-aaad-afb6d5b8557a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ee98be97-5c1f-4ecd-8f4d-d96dab6b311a
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6443
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2508 | 0.7547 | 200 | 0.6443 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kk-aivio/a4b93509-0f55-460a-bd89-c415a94cb2a1 | kk-aivio | 2025-02-03T13:21:00Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-3B",
"base_model:adapter:unsloth/Llama-3.2-3B",
"license:llama3.2",
"region:us"
]
| null | 2025-02-03T12:56:00Z | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4b93509-0f55-460a-bd89-c415a94cb2a1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# a4b93509-0f55-460a-bd89-c415a94cb2a1
This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
romainnn/876f4d95-1958-4c37-b857-66c10e4ca9de | romainnn | 2025-02-03T13:20:35Z | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b",
"base_model:adapter:unsloth/mistral-7b",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T11:44:04Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 876f4d95-1958-4c37-b857-66c10e4ca9de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8219297a1f15c78f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8219297a1f15c78f_train_data.json
type:
field_instruction: prompt
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: romainnn/876f4d95-1958-4c37-b857-66c10e4ca9de
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_steps: 330
micro_batch_size: 4
mlflow_experiment_name: /tmp/8219297a1f15c78f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c51f1bae-cd9b-428d-8471-28dc9d09f87c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c51f1bae-cd9b-428d-8471-28dc9d09f87c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 876f4d95-1958-4c37-b857-66c10e4ca9de
This model is a fine-tuned version of [unsloth/mistral-7b](https://huggingface.co/unsloth/mistral-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 183
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 17.6042 | 0.0110 | 1 | 1.0677 |
| 11.3095 | 0.5491 | 50 | 0.7183 |
| 9.0108 | 1.0981 | 100 | 0.7052 |
| 9.0997 | 1.6472 | 150 | 0.7021 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
robiulawaldev/5c1e11c4-0898-474a-8f87-b4db08eaf4c7 | robiulawaldev | 2025-02-03T13:18:53Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:adapter:NousResearch/Hermes-3-Llama-3.1-8B",
"license:llama3",
"region:us"
]
| null | 2025-02-03T13:14:47Z | ---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5c1e11c4-0898-474a-8f87-b4db08eaf4c7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0beebe02a7ff1655_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0beebe02a7ff1655_train_data.json
type:
field_input: product_title
field_instruction: text
field_output: preds
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiulawaldev/5c1e11c4-0898-474a-8f87-b4db08eaf4c7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/0beebe02a7ff1655_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d5d98e9d-ebfa-48d6-a38e-cd840c5c4bcb
wandb_project: Birthday-SN56-36-Gradients-On-Demand
wandb_run: your_name
wandb_runid: d5d98e9d-ebfa-48d6-a38e-cd840c5c4bcb
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5c1e11c4-0898-474a-8f87-b4db08eaf4c7
This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0012 | 1 | 2.4988 |
| 1.8615 | 0.0608 | 50 | 1.7411 |
| 1.7446 | 0.1217 | 100 | 1.6883 |
| 1.6129 | 0.1825 | 150 | 1.6862 |
| 1.5947 | 0.2433 | 200 | 1.6703 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/PathfinderAI5.0-i1-GGUF | mradermacher | 2025-02-03T13:18:39Z | 709 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"reason",
"Chain-of-Thought",
"deep thinking",
"en",
"dataset:bespokelabs/Bespoke-Stratos-17k",
"dataset:Daemontatox/Deepthinking-COT",
"dataset:Daemontatox/Qwqloncotam",
"dataset:Daemontatox/Reasoning_am",
"base_model:Daemontatox/PathfinderAI5.0",
"base_model:quantized:Daemontatox/PathfinderAI5.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-02-03T08:39:35Z | ---
base_model: Daemontatox/PathfinderAI5.0
datasets:
- bespokelabs/Bespoke-Stratos-17k
- Daemontatox/Deepthinking-COT
- Daemontatox/Qwqloncotam
- Daemontatox/Reasoning_am
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- reason
- Chain-of-Thought
- deep thinking
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Daemontatox/PathfinderAI5.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/PathfinderAI5.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
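For example, a recent llama.cpp build can pull and run one of these files straight from this repo (a sketch; the file name is the Q4_K_M entry from the table below):
```bash
llama-cli --hf-repo mradermacher/PathfinderAI5.0-i1-GGUF --hf-file PathfinderAI5.0.i1-Q4_K_M.gguf -p "Explain chain-of-thought prompting in one short paragraph."
```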
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/PathfinderAI5.0-i1-GGUF/resolve/main/PathfinderAI5.0.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
earnxus/7e8cb50b-256f-4f7e-9a03-9faddaa94eeb | earnxus | 2025-02-03T13:17:28Z | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b",
"base_model:adapter:unsloth/mistral-7b",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-02-03T12:52:21Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7e8cb50b-256f-4f7e-9a03-9faddaa94eeb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8219297a1f15c78f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8219297a1f15c78f_train_data.json
type:
field_instruction: prompt
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: earnxus/7e8cb50b-256f-4f7e-9a03-9faddaa94eeb
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/8219297a1f15c78f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: c51f1bae-cd9b-428d-8471-28dc9d09f87c
wandb_project: Gradients-On-Nine
wandb_run: your_name
wandb_runid: c51f1bae-cd9b-428d-8471-28dc9d09f87c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 7e8cb50b-256f-4f7e-9a03-9faddaa94eeb
This model is a fine-tuned version of [unsloth/mistral-7b](https://huggingface.co/unsloth/mistral-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7402
## Model description
More information needed
## Intended uses & limitations
More information needed
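No usage snippet is provided, so here is a minimal sketch (an illustration, not part of the original card) of loading the LoRA adapter on top of its base model with PEFT; the prompt and generation settings are assumptions.
```python
# Minimal sketch: load this LoRA adapter with PEFT and run a short generation.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "earnxus/7e8cb50b-256f-4f7e-9a03-9faddaa94eeb"

# The tokenizer may need to come from the base model if the adapter repo does not ship one.
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b")
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Summarize what LoRA fine-tuning does in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```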
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1272 | 0.2745 | 200 | 0.7402 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
vicky4s4s/gemma-2-2b-instruct | vicky4s4s | 2025-02-03T13:15:08Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:1903.00161",
"arxiv:2206.04615",
"arxiv:2203.09509",
"arxiv:2403.13793",
"base_model:google/gemma-2-2b",
"base_model:finetune:google/gemma-2-2b",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-02-03T12:47:03Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- conversational
base_model: google/gemma-2-2b
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/base)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma2]
**Terms of Use**: [Terms][terms]
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First, install the Transformers library with:
```sh
pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your use case.
#### Running with the `pipeline` API
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="google/gemma-2-2b-it",
model_kwargs={"torch_dtype": torch.bfloat16},
device="cuda", # replace with "mps" to run on a Mac device
)
messages = [
{"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]
outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b-it",
device_map="auto",
torch_dtype=torch.bfloat16,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:
```python
messages = [
{"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b-it",
device_map="auto",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
#### Running the model through a CLI
The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers
for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage)
for getting started, then launch the CLI through the following command:
```shell
local-gemma --model 2b --preset speed
```
#### Quantized Versions through `bitsandbytes`
<details>
<summary>
Using 8-bit precision (int8)
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
<details>
<summary>
Using 4-bit precision
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
#### Advanced Usage
<details>
<summary>
Torch compile
</summary>
[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the
inference of PyTorch modules. The Gemma-2 2b model can be run up to 6x faster by leveraging torch compile.
Note that two warm-up steps are required before the full inference speed is realised:
```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch
torch.set_float32_matmul_precision("high")
# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-2b-it", torch_dtype=torch.bfloat16)
model.to("cuda")
# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]
# set-up k/v cache
past_key_values = HybridCache(
config=model.config,
max_batch_size=1,
max_cache_len=model.config.max_position_embeddings,
device=model.device,
dtype=model.dtype
)
# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None
# two warm-up steps
for idx in range(2):
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
past_key_values.reset()
# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).
</details>
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "vicky4s4s/gemma-2-2b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
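For instance, a minimal sketch that reproduces the rendered prompt above by hand (note that assistant turns use the role `model` in this template):
```python
# Build the Gemma 2 chat prompt by hand, mirroring the template rendered above.
messages = [
    {"role": "user", "content": "Write a hello world program"},
]

prompt = "<bos>"
for turn in messages:
    # Map "assistant" to "model" here if your messages contain assistant turns.
    prompt += f"<start_of_turn>{turn['role']}\n{turn['content']}<end_of_turn>\n"
prompt += "<start_of_turn>model\n"  # trailing generation prompt for the model's reply

print(prompt)
```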
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 13 trillion tokens, the 9B model was
trained with 8 trillion tokens, and the 2B model was trained with 2 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma 2 PT 2B | Gemma 2 PT 9B | Gemma 2 PT 27B |
| ------------------------------ | ------------- | ------------- | ------------- | -------------- |
| [MMLU][mmlu] | 5-shot, top-1 | 51.3 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 73.0 | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 77.8 | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 51.9 | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 72.5 | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 70.9 | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 80.1 | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 55.4 | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 59.4 | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 16.7 | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 17.7 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 29.6 | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 23.9 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 15.0 | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 30.6 | 52.8 | 55.1 |
| [DROP][drop] | 3-shot, F1 | 52.0 | 69.4 | 72.2 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 41.9 | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 2B | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | ------------- | ------------- | -------------- |
| [RealToxicity][realtox] | average | 8.16 | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.67 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 83.20 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 69.31 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 52.91 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 43.72 | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 59.28 | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 88.57 | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 48.32 | 39.30 | 38.42 |
## Dangerous Capability Evaluations
### Evaluation Approach
We evaluated a range of dangerous capabilities:
- **Offensive cybersecurity:** To assess the model's potential for misuse in
cybersecurity contexts, we utilized both publicly available
Capture-the-Flag (CTF) platforms like InterCode-CTF and Hack the Box, as
well as internally developed CTF challenges. These evaluations measure the
model's ability to exploit vulnerabilities and gain unauthorized access in
simulated environments.
- **Self-proliferation:** We evaluated the model's capacity for
self-proliferation by designing tasks that involve resource acquisition, code
execution, and interaction with remote systems. These evaluations assess
the model's ability to independently replicate and spread.
- **Persuasion:** To evaluate the model's capacity for persuasion and
deception, we conducted human persuasion studies. These studies involved
scenarios that measure the model's ability to build rapport, influence
beliefs, and elicit specific actions from human participants.
### Evaluation Results
All evaluations are described in detail in
[Evaluating Frontier Models for Dangerous Capabilities][eval-danger]
and in brief in the
[Gemma 2 technical report][tech-report].
<table>
<thead>
<tr>
<th>Evaluation</th>
<th>Capability</th>
<th>Gemma 2 IT 27B</th>
</tr>
</thead>
<tbody>
<tr>
<td>InterCode-CTF</td>
<td>Offensive cybersecurity</td>
<td>34/76 challenges</td>
</tr>
<tr>
<td>Internal CTF</td>
<td>Offensive cybersecurity</td>
<td>1/13 challenges</td>
</tr>
<tr>
<td>Hack the Box</td>
<td>Offensive cybersecurity</td>
<td>0/13 challenges</td>
</tr>
<tr>
<td>Self-proliferation early warning</td>
<td>Self-proliferation</td>
<td>1/10 challenges</td>
</tr>
<tr>
<td>Charm offensive</td>
<td>Persuasion</td>
<td>Percent of participants agreeing:
81% interesting,
75% would speak again,
80% made personal connection</td>
</tr>
<tr>
<td>Click Links</td>
<td>Persuasion</td>
<td>34% of participants</td>
</tr>
<tr>
<td>Find Info</td>
<td>Persuasion</td>
<td>9% of participants</td>
</tr>
<tr>
<td>Run Code</td>
<td>Persuasion</td>
<td>11% of participants</td>
</tr>
<tr>
<td>Money talks</td>
<td>Persuasion</td>
<td>£3.72 mean donation</td>
</tr>
<tr>
<td>Web of Lies</td>
<td>Persuasion</td>
<td>18% mean shift towards correct belief, 1% mean shift towards
incorrect belief</td>
</tr>
</tbody>
</table>
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
[tech-report]: https://storage.googleapis.com/deepmind-media/gemma/gemma-2-report.pdf
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma2]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma2
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[sustainability]: https://sustainability.google/operating-sustainably/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[drop]: https://arxiv.org/abs/1903.00161
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
[eval-danger]: https://arxiv.org/abs/2403.13793
|
VGaspar/w2v-bert-2.0-mongolian-colab-CV16.0 | VGaspar | 2025-02-03T13:13:04Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_16_0",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-01-27T16:05:56Z | ---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_16_0
metrics:
- wer
model-index:
- name: w2v-bert-2.0-mongolian-colab-CV16.0
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_16_0
type: common_voice_16_0
config: hu
split: test
args: hu
metrics:
- name: Wer
type: wer
value: 0.09440154670549343
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-mongolian-colab-CV16.0
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_16_0 dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.0944
## Model description
More information needed
## Intended uses & limitations
More information needed
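No usage snippet is provided; here is a minimal sketch (an assumption based on the model type, not part of the original card) of transcribing audio with the standard ASR pipeline. `"sample.wav"` is a placeholder path.
```python
# Minimal sketch: run speech recognition with this fine-tuned Wav2Vec2-BERT checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="VGaspar/w2v-bert-2.0-mongolian-colab-CV16.0",
)
print(asr("sample.wav")["text"])  # transcription of the placeholder audio file
```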
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 50
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 100
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.4147 | 0.6270 | 300 | inf | 0.1737 |
| 0.122 | 1.2529 | 600 | inf | 0.1498 |
| 0.0944 | 1.8798 | 900 | inf | 0.1323 |
| 0.0677 | 2.5057 | 1200 | inf | 0.1214 |
| 0.0548 | 3.1317 | 1500 | inf | 0.1089 |
| 0.0378 | 3.7586 | 1800 | inf | 0.0999 |
| 0.0287 | 4.3845 | 2100 | inf | 0.0944 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.5.1+cpu
- Datasets 3.2.0
- Tokenizers 0.21.0
|
lesso/2704074e-9a2e-4729-90ad-ad2fc866d15e | lesso | 2025-02-03T13:12:09Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T13:01:30Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2704074e-9a2e-4729-90ad-ad2fc866d15e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 67ae0a068059de74_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/67ae0a068059de74_train_data.json
type:
field_input: Company Name
field_instruction: Position
field_output: Long Description
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/2704074e-9a2e-4729-90ad-ad2fc866d15e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000101
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/god16/67ae0a068059de74_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5a3810f1-954e-4bc6-9cfa-cd7881f9fa67
wandb_project: ab-god16
wandb_run: your_name
wandb_runid: 5a3810f1-954e-4bc6-9cfa-cd7881f9fa67
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2704074e-9a2e-4729-90ad-ad2fc866d15e
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000101
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8563 | 0.0019 | 1 | 2.9462 |
| 2.6107 | 0.0951 | 50 | 2.6001 |
| 2.5349 | 0.1901 | 100 | 2.5370 |
| 2.5078 | 0.2852 | 150 | 2.5056 |
| 2.5076 | 0.3802 | 200 | 2.4931 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
critical-hf/Immy_H_7_GGUF | critical-hf | 2025-02-03T13:09:26Z | 39 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:critical-hf/IMMY_V7_H",
"base_model:quantized:critical-hf/IMMY_V7_H",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-02-03T13:09:13Z | ---
base_model: Daemontatox/Immy_Hermes_V2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# Daemontatox/Immy_Hermes_V2-Q4_K_M-GGUF
This model was converted to GGUF format from [`Daemontatox/Immy_Hermes_V2`](https://huggingface.co/Daemontatox/Immy_Hermes_V2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Daemontatox/Immy_Hermes_V2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Daemontatox/Immy_Hermes_V2-Q4_K_M-GGUF --hf-file immy_hermes_v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Daemontatox/Immy_Hermes_V2-Q4_K_M-GGUF --hf-file immy_hermes_v2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Daemontatox/Immy_Hermes_V2-Q4_K_M-GGUF --hf-file immy_hermes_v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Daemontatox/Immy_Hermes_V2-Q4_K_M-GGUF --hf-file immy_hermes_v2-q4_k_m.gguf -c 2048
```
|
Nexspear/570698e5-3b80-4f1e-8291-da7188fc9926 | Nexspear | 2025-02-03T13:09:10Z | 34 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"region:us"
]
| null | 2025-02-03T12:51:04Z | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 570698e5-3b80-4f1e-8291-da7188fc9926
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c1b617ce82c7310e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c1b617ce82c7310e_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: Nexspear/570698e5-3b80-4f1e-8291-da7188fc9926
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/c1b617ce82c7310e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 7f85a073-7b5c-430c-9a22-9fdc7c748e1c
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: 7f85a073-7b5c-430c-9a22-9fdc7c748e1c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 570698e5-3b80-4f1e-8291-da7188fc9926
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8725 | 0.0018 | 1 | 1.2947 |
| 1.3324 | 0.0900 | 50 | 1.1664 |
| 1.455 | 0.1799 | 100 | 1.1598 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
robiulawaldev/757532ab-7d8a-47e0-bb98-e9ee33bc69a3 | robiulawaldev | 2025-02-03T13:07:31Z | 9 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:adapter:NousResearch/Hermes-2-Pro-Mistral-7B",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T12:11:59Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 757532ab-7d8a-47e0-bb98-e9ee33bc69a3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f1b8653716a804b0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f1b8653716a804b0_train_data.json
type:
field_input: chunk
field_instruction: title
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiulawaldev/757532ab-7d8a-47e0-bb98-e9ee33bc69a3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f1b8653716a804b0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2eba181b-7a7f-4d93-b07b-38656f37293e
wandb_project: Birthday-SN56-35-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2eba181b-7a7f-4d93-b07b-38656f37293e
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 757532ab-7d8a-47e0-bb98-e9ee33bc69a3
This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0009 | 50 | nan |
| 0.0 | 0.0017 | 100 | nan |
| 0.316 | 0.0026 | 150 | nan |
| 0.7063 | 0.0035 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
numind/NuNER_Zero-span | numind | 2025-02-03T13:06:57Z | 117 | 15 | gliner | [
"gliner",
"pytorch",
"entity recognition",
"NER",
"named entity recognition",
"zero shot",
"zero-shot",
"token-classification",
"en",
"dataset:numind/NuNER",
"arxiv:2402.15343",
"arxiv:2311.08526",
"license:mit",
"region:us"
]
| token-classification | 2024-04-26T07:35:58Z | ---
license: mit
datasets:
- numind/NuNER
library_name: gliner
language:
- en
pipeline_tag: token-classification
tags:
- entity recognition
- NER
- named entity recognition
- zero shot
- zero-shot
---
NuNER Zero-span is the span-prediction version of [NuNER Zero](https://huggingface.co/numind/NuNER_Zero).
NuNER Zero-span shows slightly better performance than NuNER Zero but cannot detect entities that are larger than 12 tokens.
<p align="center">
<img src="zero_shot_performance_span.png" width="600">
</p>
## Installation & Usage
```
!pip install gliner==0.1.12
```
**NuZero requires labels to be lower-cased**
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("numind/NuNerZero_span")
# NuZero requires labels to be lower-cased!
labels = ["organization", "initiative", "project"]
labels = [l.lower() for l in labels]
text = "At the annual technology summit, the keynote address was delivered by a senior member of the Association for Computing Machinery Special Interest Group on Algorithms and Computation Theory, which recently launched an expansive initiative titled 'Quantum Computing and Algorithmic Innovations: Shaping the Future of Technology'. This initiative explores the implications of quantum mechanics on next-generation computing and algorithm design and is part of a broader effort that includes the 'Global Computational Science Advancement Project'. The latter focuses on enhancing computational methodologies across scientific disciplines, aiming to set new benchmarks in computational efficiency and accuracy."
entities = model.predict_entities(text, labels)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
```
Association for Computing Machinery Special Interest Group on Algorithms and Computation Theory => organization
Quantum Computing and Algorithmic Innovations: Shaping the Future of Technology => initiative
Global Computational Science Advancement Project => project
```
## Fine-tuning
A fine-tuning script can be found [here](https://colab.research.google.com/drive/1fu15tWCi0SiQBBelwB-dUZDZu0RVfx_a?usp=sharing).
## Citation
### This work
```bibtex
@misc{bogdanov2024nuner,
title={NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data},
author={Sergei Bogdanov and Alexandre Constantin and Timothée Bernard and Benoit Crabbé and Etienne Bernard},
year={2024},
eprint={2402.15343},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Previous work
```bibtex
@misc{zaratiana2023gliner,
title={GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer},
author={Urchade Zaratiana and Nadi Tomeh and Pierre Holat and Thierry Charnois},
year={2023},
eprint={2311.08526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
chchen/Llama-3.1-8B-Instruct-PsyCourse-fold2 | chchen | 2025-02-03T13:05:37Z | 12 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
]
| null | 2025-01-28T13:27:42Z | ---
library_name: peft
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Llama-3.1-8B-Instruct-PsyCourse-fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct-PsyCourse-fold2
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the course-train-fold2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0340
## Model description
More information needed
## Intended uses & limitations
More information needed
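No usage snippet is provided, so here is a minimal sketch (an illustration, not part of the original card) of attaching the adapter to the base model and merging it for standalone inference; the example prompt is an assumption.
```python
# Minimal sketch: load the base model, attach this LoRA adapter with PEFT, and merge the weights.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"                 # base model named in the card
adapter_id = "chchen/Llama-3.1-8B-Instruct-PsyCourse-fold2"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
model = model.merge_and_unload()  # fold the LoRA weights into the base model

messages = [{"role": "user", "content": "Give one evidence-based tip for managing exam anxiety."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```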
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5122 | 0.0775 | 50 | 0.4118 |
| 0.0913 | 0.1550 | 100 | 0.0831 |
| 0.0638 | 0.2326 | 150 | 0.0662 |
| 0.0589 | 0.3101 | 200 | 0.0558 |
| 0.0628 | 0.3876 | 250 | 0.0528 |
| 0.0482 | 0.4651 | 300 | 0.0484 |
| 0.0429 | 0.5426 | 350 | 0.0443 |
| 0.0526 | 0.6202 | 400 | 0.0432 |
| 0.0446 | 0.6977 | 450 | 0.0391 |
| 0.0503 | 0.7752 | 500 | 0.0377 |
| 0.048 | 0.8527 | 550 | 0.0386 |
| 0.0636 | 0.9302 | 600 | 0.0439 |
| 0.0333 | 1.0078 | 650 | 0.0361 |
| 0.0319 | 1.0853 | 700 | 0.0385 |
| 0.033 | 1.1628 | 750 | 0.0357 |
| 0.0242 | 1.2403 | 800 | 0.0371 |
| 0.0363 | 1.3178 | 850 | 0.0343 |
| 0.0419 | 1.3953 | 900 | 0.0360 |
| 0.0444 | 1.4729 | 950 | 0.0349 |
| 0.0297 | 1.5504 | 1000 | 0.0362 |
| 0.0348 | 1.6279 | 1050 | 0.0348 |
| 0.0271 | 1.7054 | 1100 | 0.0354 |
| 0.0362 | 1.7829 | 1150 | 0.0366 |
| 0.034 | 1.8605 | 1200 | 0.0344 |
| 0.039 | 1.9380 | 1250 | 0.0344 |
| 0.0248 | 2.0155 | 1300 | 0.0340 |
| 0.0209 | 2.0930 | 1350 | 0.0369 |
| 0.0211 | 2.1705 | 1400 | 0.0352 |
| 0.0178 | 2.2481 | 1450 | 0.0379 |
| 0.026 | 2.3256 | 1500 | 0.0355 |
| 0.0166 | 2.4031 | 1550 | 0.0375 |
| 0.0237 | 2.4806 | 1600 | 0.0355 |
| 0.037 | 2.5581 | 1650 | 0.0347 |
| 0.0161 | 2.6357 | 1700 | 0.0387 |
| 0.0174 | 2.7132 | 1750 | 0.0383 |
| 0.0222 | 2.7907 | 1800 | 0.0363 |
| 0.0217 | 2.8682 | 1850 | 0.0376 |
| 0.0198 | 2.9457 | 1900 | 0.0362 |
| 0.0105 | 3.0233 | 1950 | 0.0388 |
| 0.0097 | 3.1008 | 2000 | 0.0440 |
| 0.008 | 3.1783 | 2050 | 0.0482 |
| 0.0114 | 3.2558 | 2100 | 0.0435 |
| 0.0112 | 3.3333 | 2150 | 0.0401 |
| 0.0079 | 3.4109 | 2200 | 0.0448 |
| 0.0133 | 3.4884 | 2250 | 0.0483 |
| 0.0128 | 3.5659 | 2300 | 0.0471 |
| 0.012 | 3.6434 | 2350 | 0.0467 |
| 0.0143 | 3.7209 | 2400 | 0.0464 |
| 0.0063 | 3.7984 | 2450 | 0.0487 |
| 0.0098 | 3.8760 | 2500 | 0.0470 |
| 0.0088 | 3.9535 | 2550 | 0.0461 |
| 0.0057 | 4.0310 | 2600 | 0.0467 |
| 0.006 | 4.1085 | 2650 | 0.0506 |
| 0.0058 | 4.1860 | 2700 | 0.0565 |
| 0.0057 | 4.2636 | 2750 | 0.0592 |
| 0.0043 | 4.3411 | 2800 | 0.0595 |
| 0.0043 | 4.4186 | 2850 | 0.0612 |
| 0.0055 | 4.4961 | 2900 | 0.0617 |
| 0.0029 | 4.5736 | 2950 | 0.0608 |
| 0.0029 | 4.6512 | 3000 | 0.0615 |
| 0.0039 | 4.7287 | 3050 | 0.0620 |
| 0.0034 | 4.8062 | 3100 | 0.0620 |
| 0.0061 | 4.8837 | 3150 | 0.0618 |
| 0.0045 | 4.9612 | 3200 | 0.0619 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO | Vikhrmodels | 2025-02-03T13:04:37Z | 1,926 | 13 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"ru",
"en",
"arxiv:2405.13929",
"base_model:Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct",
"base_model:finetune:Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-31T09:29:17Z | ---
library_name: transformers
model_name: Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO
base_model:
- Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct
language:
- ru
- en
license: apache-2.0
---
# 💨🦅 QVikhr-2.5-1.5B-Instruct-SMPO
An instruction-tuned model based on **Qwen-2.5-1.5B-Instruct**, trained on the Russian-language **GrandMaster-PRO-MAX** dataset using **SMPO** (Simple Margin Preference Optimization).
## Quantized variants:
- [GGUF](https://hf.co/Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO_GGUF)
- MLX
- [4 bit](https://hf.co/Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO_MLX-4bit)
- [8 bit](https://hf.co/Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO_MLX-8bit)
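The MLX variants can be run with the `mlx-lm` package on Apple silicon. The following is a minimal, unofficial sketch, assuming the MLX repos follow the standard `mlx-lm` layout:
```python
# Minimal mlx-lm sketch for the 4-bit MLX variant (assumed standard mlx-lm repo layout).
from mlx_lm import load, generate

model, tokenizer = load("Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO_MLX-4bit")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Привет! Кто ты?"}],
    tokenize=False,
    add_generation_prompt=True,
)
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(text)
```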
## Highlights:
- 📚 Base: [Vikhr-Qwen-2.5-1.5B-Instruct](https://huggingface.co/Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct)
- 🇷🇺 Specialization: **RU**
- 🌍 Support: **Bilingual RU/EN**
## Description:
**QVikhr-2.5-1.5B-Instruct-SMPO** is a language model that has undergone specialized training with the **SMPO** method. It demonstrates progress in alignment techniques, particularly in improving response quality through preference optimization.
## Try now:
[](https://colab.research.google.com/drive/1xpTj8gLZAl2kbgciEAP9XxF5G18f7znr?usp=sharing)
## Training:
### Alignment stage with SMPO (Simple Margin Preference Optimization)
[Training config](https://github.com/VikhrModels/effective_llm_alignment/blob/e3672f6ec4023109699a951bf08f1bce23338921/training_configs/preference/smpo-qvikhr2.5-1.5b-lora-best-rs.yaml)
To further improve response quality, we used the following pipeline:
1. Used [Skywork/Skywork-Reward-Llama-3.1-8B-v0.2](https://huggingface.co/Skywork/Skywork-Reward-Llama-3.1-8B-v0.2) as the reward model.
2. Deduplicated and filtered the original Vikhrmodels/GrandMaster-PRO-MAX dataset with the reward model, keeping roughly 10k of the highest-quality and most diverse dialogues.
3. Performed rejection sampling with the SFT checkpoint [Vikhr-Qwen-2.5-1.5B-Instruct](https://huggingface.co/Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct), using that dataset and the reward model (7 hypotheses were generated per prompt).
4. Fine-tuned the SFT checkpoint with our SMPO method on the dataset obtained in step 3.
SMPO was designed and chosen as the method for improving the stability of preference training under rejection sampling and for reaching the desired margin.
The implementations of SMPO, rejection sampling, and so on can be found in our [effective_llm_alignment](https://github.com/VikhrModels/effective_llm_alignment) library on GitHub.
The idea of using SMPO rather than another preference-optimization method came out of a large number of experiments with the classical methods, where we needed better control over the convergence process. With careful tuning, other methods (e.g. SimPO) can achieve a similar result, but we aimed to stabilize this process and combine the best practices from the other methods.
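For illustration only, here is a compact sketch of the reward-model filtering and rejection-sampling step described above. It is not the authors' implementation (that lives in effective_llm_alignment); the model names are the ones mentioned above, while the generation settings and the scoring helper are simplified assumptions.
```python
# Illustrative sketch of rejection sampling with a reward model (not the actual training code).
import torch
from transformers import (AutoModelForCausalLM, AutoModelForSequenceClassification,
                          AutoTokenizer)

device = "cuda" if torch.cuda.is_available() else "cpu"

# SFT policy checkpoint used for sampling hypotheses (see step 3 above)
policy_id = "Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct"
policy = AutoModelForCausalLM.from_pretrained(policy_id, torch_dtype=torch.bfloat16).to(device)
policy_tok = AutoTokenizer.from_pretrained(policy_id)

# Reward model used for filtering and ranking (see step 1 above)
rm_id = "Skywork/Skywork-Reward-Llama-3.1-8B-v0.2"
rm = AutoModelForSequenceClassification.from_pretrained(
    rm_id, torch_dtype=torch.bfloat16, num_labels=1).to(device)
rm_tok = AutoTokenizer.from_pretrained(rm_id)

def reward(prompt: str, answer: str) -> float:
    """Score a (prompt, answer) pair with the reward model via its chat template."""
    conv = [{"role": "user", "content": prompt}, {"role": "assistant", "content": answer}]
    ids = rm_tok.apply_chat_template(conv, tokenize=True, return_tensors="pt").to(device)
    with torch.no_grad():
        return rm(ids).logits[0][0].item()

def rejection_sample(prompt: str, n_hypotheses: int = 7):
    """Generate n hypotheses with the SFT checkpoint and pick the best/worst by reward."""
    msgs = [{"role": "user", "content": prompt}]
    ids = policy_tok.apply_chat_template(msgs, add_generation_prompt=True,
                                         return_tensors="pt").to(device)
    with torch.no_grad():
        out = policy.generate(ids, do_sample=True, temperature=0.7, max_new_tokens=512,
                              num_return_sequences=n_hypotheses)
    answers = [policy_tok.decode(s[ids.shape[1]:], skip_special_tokens=True) for s in out]
    ranked = sorted(answers, key=lambda a: reward(prompt, a))
    return ranked[-1], ranked[0]  # (chosen, rejected) pair for preference training such as SMPO
```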
## Example inference code:
**Recommended generation temperature: 0.4**.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model_name = "Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Prepare the input text
input_text = "Напиши краткое описание книги Гарри Поттер."
messages = [
{"role": "system", "content": "Вы — Vikhr, ИИ помощник, созданный компанией Vikhr models для предоставления полезной, честной и безопасной информации."},
{"role": "user", "content": input_text},
]
# Tokenize and generate text
input_ids = tokenizer.apply_chat_template(messages, truncation=True, add_generation_prompt=True, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=1512,
    do_sample=True,  # enable sampling so the recommended temperature actually takes effect
    temperature=0.4,
)
# Decode and print result
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
#### Model response:
>**A brief description of the book "Harry Potter"**
>"Harry Potter" is a series of books about a boy wizard who discovers the power of magic within himself after his family dies at the hands of the evil sorcerer Draco Malfoy. The main character, Harry Potter, lives with his parents on the outskirts of Hogwarts, a school of magic and wizardry.
>As a child, Harry meets his best friend Ron Weasley and his aunt Hermione Granger. Together they go to Hogwarts, where they begin to study magic. During his studies, Harry gets to know other students: the Slytherins (the main antagonists) and Hogwarts as a place for learning magic.
>The most important event in Harry's life is learning about his origins – he is the last of the Potter family, who once possessed all knowledge of magic. This knowledge opens the path to his mission – the fight against the dark forces that seek to destroy magic.
>As the plot develops, Harry faces various obstacles, including persecution by Draco Malfoy and his friends, as well as internal conflicts within Hogwarts itself. However, thanks to the support of his friends and new acquaintances, such as the Philosopher's Stone, Harry continues on his path to victory over the dark forces.
>In the end, Harry and his friends successfully fight the dark forces, restore peace at Hogwarts, and gain recognition for their deeds. The books conclude with Harry preparing to become a wizard, but his future is not yet determined.
### Authors
- Sergei Bratchikov, [NLP Wanderer](https://t.me/nlpwanderer), [Vikhr Team](https://t.me/vikhrlabs)
- Nikolay Kompanets, [LakoMoor](https://t.me/lakomoordev), [Vikhr Team](https://t.me/vikhrlabs)
- Konstantin Korolev, [Vikhr Team](https://t.me/vikhrlabs)
- Aleksandr Nikolich, [Vikhr Team](https://t.me/vikhrlabs)
```
@inproceedings{nikolich2024vikhr,
title={Vikhr: Advancing Open-Source Bilingual Instruction-Following Large Language Models for Russian and English},
author={Aleksandr Nikolich and Konstantin Korolev and Sergei Bratchikov and Nikolay Kompanets and Igor Kiselev and Artem Shelmanov},
booktitle={Proceedings of the 4th Workshop on Multilingual Representation Learning (MRL) @ EMNLP-2024},
year={2024},
publisher={Association for Computational Linguistics},
url={https://arxiv.org/pdf/2405.13929}
}
``` |
clarxus/4dcffa75-9293-4ca0-b757-145a13b7cf2e | clarxus | 2025-02-03T13:02:55Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-7b-it",
"base_model:adapter:unsloth/gemma-7b-it",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T12:04:40Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4dcffa75-9293-4ca0-b757-145a13b7cf2e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e1fef425da4a3c20_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e1fef425da4a3c20_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: clarxus/4dcffa75-9293-4ca0-b757-145a13b7cf2e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/e1fef425da4a3c20_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: a6e32771-53f8-43c6-a89b-5b54a5429bef
wandb_project: Gradients-On-Seven
wandb_run: your_name
wandb_runid: a6e32771-53f8-43c6-a89b-5b54a5429bef
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4dcffa75-9293-4ca0-b757-145a13b7cf2e
This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4198
## Model description
More information needed
## Intended uses & limitations
More information needed
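No usage example is provided; as a rough sketch (not an official snippet), a LoRA adapter produced by a config like the one above can usually be attached to its base model with 🤗 PEFT as follows, assuming the adapter files in this repo load with `PeftModel.from_pretrained`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/gemma-7b-it"
adapter_id = "clarxus/4dcffa75-9293-4ca0-b757-145a13b7cf2e"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter weights

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}], add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```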
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 7.5103 |
| 5.0736 | 0.0059 | 9 | 4.7599 |
| 3.8534 | 0.0117 | 18 | 3.6660 |
| 3.3288 | 0.0176 | 27 | 3.1506 |
| 2.8455 | 0.0234 | 36 | 2.8619 |
| 2.5984 | 0.0293 | 45 | 2.7036 |
| 2.64 | 0.0351 | 54 | 2.5950 |
| 2.4682 | 0.0410 | 63 | 2.5207 |
| 2.5308 | 0.0468 | 72 | 2.4648 |
| 2.4341 | 0.0527 | 81 | 2.4367 |
| 2.2781 | 0.0585 | 90 | 2.4229 |
| 2.4717 | 0.0644 | 99 | 2.4198 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hongngo/723f5379-3c9a-4fd2-bf8b-ba81c5b58970 | hongngo | 2025-02-03T13:01:52Z | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-02-03T12:25:02Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 723f5379-3c9a-4fd2-bf8b-ba81c5b58970
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ed78a5b2ee6f663c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ed78a5b2ee6f663c_train_data.json
type:
field_instruction: instructions
field_output: en_responses
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/723f5379-3c9a-4fd2-bf8b-ba81c5b58970
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ed78a5b2ee6f663c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 55836b1e-bfe4-463b-aaad-afb6d5b8557a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 55836b1e-bfe4-463b-aaad-afb6d5b8557a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 723f5379-3c9a-4fd2-bf8b-ba81c5b58970
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2488 | 0.7547 | 200 | 0.6447 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
vaatsav06/sdxl_ft | vaatsav06 | 2025-02-03T13:00:39Z | 5 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2025-02-03T11:57:28Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - vaatsav06/sdxl_ft
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the AdamLucek/oldbookillustrations-small dataset. You can find some example images below.




LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
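Until the snippet above is filled in, here is a minimal, unofficial sketch of how LoRA weights like these are typically loaded with 🤗 Diffusers; the base model and VAE are the ones named in this card, while the prompt is just an illustrative example.
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# The fp16-fix VAE mentioned above avoids numerical issues in half precision
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA adaptation weights from this repository
pipe.load_lora_weights("vaatsav06/sdxl_ft")

image = pipe(prompt="an old book illustration of a lighthouse at night").images[0]
image.save("lighthouse.png")
```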
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lesso/1932aeda-c5a8-477b-8201-ff026662e153 | lesso | 2025-02-03T12:58:43Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T12:57:15Z | ---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1932aeda-c5a8-477b-8201-ff026662e153
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 99c0fdfe29a4359d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/99c0fdfe29a4359d_train_data.json
type:
field_input: document
field_instruction: title
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/1932aeda-c5a8-477b-8201-ff026662e153
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000101
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/god16/99c0fdfe29a4359d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9601e4a0-7003-4140-a733-70f5dfcdf433
wandb_project: ab-god16
wandb_run: your_name
wandb_runid: 9601e4a0-7003-4140-a733-70f5dfcdf433
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1932aeda-c5a8-477b-8201-ff026662e153
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co/llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000101
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.7644 | 0.0099 | 1 | 11.7648 |
| 11.7494 | 0.4950 | 50 | 11.7511 |
| 11.746 | 0.9901 | 100 | 11.7479 |
| 11.7471 | 1.4851 | 150 | 11.7471 |
| 11.7461 | 1.9802 | 200 | 11.7468 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
botenius/574a3643-8e90-4399-9af1-3ced5c0970fa | botenius | 2025-02-03T12:58:15Z | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-02-03T12:25:06Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 574a3643-8e90-4399-9af1-3ced5c0970fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ed78a5b2ee6f663c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ed78a5b2ee6f663c_train_data.json
type:
field_instruction: instructions
field_output: en_responses
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: botenius/574a3643-8e90-4399-9af1-3ced5c0970fa
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ed78a5b2ee6f663c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 55836b1e-bfe4-463b-aaad-afb6d5b8557a
wandb_project: Gradients-On-13
wandb_run: your_name
wandb_runid: 55836b1e-bfe4-463b-aaad-afb6d5b8557a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 574a3643-8e90-4399-9af1-3ced5c0970fa
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6089
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3489 | 0.7547 | 200 | 0.6089 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tryingpro/319a4243-4856-41f7-ad85-f5e81b002c3a | tryingpro | 2025-02-03T12:58:13Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Maykeye/TinyLLama-v0",
"base_model:adapter:Maykeye/TinyLLama-v0",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T11:50:23Z | ---
library_name: peft
license: apache-2.0
base_model: Maykeye/TinyLLama-v0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 319a4243-4856-41f7-ad85-f5e81b002c3a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Maykeye/TinyLLama-v0
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 42e8de18a2a42807_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/42e8de18a2a42807_train_data.json
type:
field_input: observation_2
field_instruction: observation_1
field_output: hypothesis_1
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 256
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: false
hub_model_id: tryingpro/319a4243-4856-41f7-ad85-f5e81b002c3a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- down_proj
- up_proj
lr_scheduler: cosine
max_grad_norm: 2
max_steps: 90
micro_batch_size: 2
mlflow_experiment_name: /tmp/42e8de18a2a42807_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: tryingpro-unicourt
wandb_mode: online
wandb_name: a7f71d85-fbe3-4375-be2b-b0792c861705
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a7f71d85-fbe3-4375-be2b-b0792c861705
warmup_steps: 20
weight_decay: 0.02
xformers_attention: false
```
</details><br>
# 319a4243-4856-41f7-ad85-f5e81b002c3a
This model is a fine-tuned version of [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 90
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 5.3775 |
| 5.3278 | 0.0032 | 8 | 5.2439 |
| 4.8601 | 0.0063 | 16 | 4.9055 |
| 4.8074 | 0.0095 | 24 | 4.7473 |
| 4.6594 | 0.0126 | 32 | 4.6408 |
| 4.483 | 0.0158 | 40 | 4.5787 |
| 4.4875 | 0.0189 | 48 | 4.5330 |
| 4.4785 | 0.0221 | 56 | 4.5044 |
| 4.5383 | 0.0252 | 64 | 4.4871 |
| 4.4709 | 0.0284 | 72 | 4.4778 |
| 4.467 | 0.0315 | 80 | 4.4742 |
| 4.4683 | 0.0347 | 88 | 4.4733 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
chibbert/SmolLM2-FT-MyDataset | chibbert | 2025-02-03T12:57:10Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-02-03T12:56:01Z | ---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chibbert/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu121
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
lesso/ac8e7e29-808e-4ca6-ba46-e5b6d74c64ce | lesso | 2025-02-03T12:55:31Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-7b-it",
"base_model:adapter:unsloth/gemma-7b-it",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T12:15:40Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ac8e7e29-808e-4ca6-ba46-e5b6d74c64ce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-7b-it
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- e1fef425da4a3c20_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e1fef425da4a3c20_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/ac8e7e29-808e-4ca6-ba46-e5b6d74c64ce
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001017
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/god17/e1fef425da4a3c20_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a6e32771-53f8-43c6-a89b-5b54a5429bef
wandb_project: ab-god17
wandb_run: your_name
wandb_runid: a6e32771-53f8-43c6-a89b-5b54a5429bef
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ac8e7e29-808e-4ca6-ba46-e5b6d74c64ce
This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001017
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.9344 | 0.0002 | 1 | 8.0954 |
| 3.9414 | 0.0081 | 50 | 3.3296 |
| 3.4259 | 0.0163 | 100 | 2.8453 |
| 2.7436 | 0.0244 | 150 | 2.5240 |
| 1.8835 | 0.0325 | 200 | 2.4165 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Dans-DiscountModels/Dans-DangerousWinds-V1.1.1-24b-Q5_K_M-GGUF | Dans-DiscountModels | 2025-02-03T12:54:34Z | 120 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure-2",
"dataset:PocketDoc/Dans-Prosemaxx-Cowriter-3-S",
"base_model:PocketDoc/Dans-DangerousWinds-V1.1.1-24b",
"base_model:quantized:PocketDoc/Dans-DangerousWinds-V1.1.1-24b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-02-03T12:53:13Z | ---
license: apache-2.0
datasets:
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-2
- PocketDoc/Dans-Prosemaxx-Cowriter-3-S
language:
- en
base_model: PocketDoc/Dans-DangerousWinds-V1.1.1-24b
tags:
- llama-cpp
- gguf-my-repo
---
# PocketDoc/Dans-DangerousWinds-V1.1.1-24b-Q5_K_M-GGUF
This model was converted to GGUF format from [`PocketDoc/Dans-DangerousWinds-V1.1.1-24b`](https://huggingface.co/PocketDoc/Dans-DangerousWinds-V1.1.1-24b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/PocketDoc/Dans-DangerousWinds-V1.1.1-24b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo PocketDoc/Dans-DangerousWinds-V1.1.1-24b-Q5_K_M-GGUF --hf-file dans-dangerouswinds-v1.1.1-24b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo PocketDoc/Dans-DangerousWinds-V1.1.1-24b-Q5_K_M-GGUF --hf-file dans-dangerouswinds-v1.1.1-24b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo PocketDoc/Dans-DangerousWinds-V1.1.1-24b-Q5_K_M-GGUF --hf-file dans-dangerouswinds-v1.1.1-24b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo PocketDoc/Dans-DangerousWinds-V1.1.1-24b-Q5_K_M-GGUF --hf-file dans-dangerouswinds-v1.1.1-24b-q5_k_m.gguf -c 2048
```
|
tensoralchemistdev01/sv17 | tensoralchemistdev01 | 2025-02-03T12:54:30Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-02-03T12:52:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
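As a placeholder, a generic sketch is shown below; it assumes this repo contains standard 🤗 Transformers causal-LM weights and a tokenizer, which is not confirmed by the card.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "tensoralchemistdev01/sv17"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```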
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cimol/c7f55059-fda8-4771-9d69-9f866c40b111 | cimol | 2025-02-03T12:52:02Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T12:43:18Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c7f55059-fda8-4771-9d69-9f866c40b111
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: true
chat_template: llama3
data_processes: 24
dataset_prepared_path: null
datasets:
- data_files:
- b33e7113daad43bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b33e7113daad43bd_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 4
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: cimol/c7f55059-fda8-4771-9d69-9f866c40b111
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 7.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.04
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
lr_scheduler_warmup_steps: 50
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/b33e7113daad43bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-8
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 17333
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
total_train_batch_size: 32
train_batch_size: 8
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2e9ec04e-317e-49f4-b782-0aebbd3b8e64
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2e9ec04e-317e-49f4-b782-0aebbd3b8e64
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c7f55059-fda8-4771-9d69-9f866c40b111
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6654
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 17333
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-8
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7724 | 0.0008 | 1 | 1.4390 |
| 0.8043 | 0.0411 | 50 | 0.8108 |
| 0.7317 | 0.0821 | 100 | 0.7181 |
| 0.7485 | 0.1232 | 150 | 0.6798 |
| 0.7383 | 0.1642 | 200 | 0.6654 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ciloku/97bdff27-8884-4881-8ac1-47edde132b61 | ciloku | 2025-02-03T12:51:32Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T12:43:12Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 97bdff27-8884-4881-8ac1-47edde132b61
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: true
chat_template: llama3
data_processes: 24
dataset_prepared_path: null
datasets:
- data_files:
- b33e7113daad43bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b33e7113daad43bd_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 4
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: ciloku/97bdff27-8884-4881-8ac1-47edde132b61
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 6.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.04
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
lr_scheduler_warmup_steps: 50
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/b33e7113daad43bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-8
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 17333
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
total_train_batch_size: 32
train_batch_size: 8
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2e9ec04e-317e-49f4-b782-0aebbd3b8e64
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2e9ec04e-317e-49f4-b782-0aebbd3b8e64
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 97bdff27-8884-4881-8ac1-47edde132b61
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6740
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 17333
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-8
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7724 | 0.0008 | 1 | 1.4390 |
| 0.8168 | 0.0411 | 50 | 0.8460 |
| 0.7537 | 0.0821 | 100 | 0.7190 |
| 0.7606 | 0.1232 | 150 | 0.6846 |
| 0.7394 | 0.1642 | 200 | 0.6740 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dataplayer12/phi-4-Q6_K | dataplayer12 | 2025-02-03T12:51:18Z | 19 | 0 | null | [
"gguf",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-02-03T12:35:53Z | ---
license: mit
---
Q6_K quantized phi-4 model. Tested and working on llama.cpp and LM Studio.
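A minimal Python sketch using llama-cpp-python (one way to run GGUF files from Python) is shown below; the GGUF filename pattern is an assumption, so adjust it to the actual file in this repo.
```python
from llama_cpp import Llama

# Download the GGUF from this repo via huggingface_hub; the filename glob is assumed.
llm = Llama.from_pretrained(
    repo_id="dataplayer12/phi-4-Q6_K",
    filename="*Q6_K.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain Q6_K quantization in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```
|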
shibajustfor/a27fe518-dfae-431a-a85a-4c25e43b608a | shibajustfor | 2025-02-03T12:50:06Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:adapter:NousResearch/Hermes-3-Llama-3.1-8B",
"license:llama3",
"region:us"
]
| null | 2025-02-03T12:44:56Z | ---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a27fe518-dfae-431a-a85a-4c25e43b608a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0beebe02a7ff1655_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0beebe02a7ff1655_train_data.json
type:
field_input: product_title
field_instruction: text
field_output: preds
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/a27fe518-dfae-431a-a85a-4c25e43b608a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/0beebe02a7ff1655_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d5d98e9d-ebfa-48d6-a38e-cd840c5c4bcb
wandb_project: Birthday-SN56-39-Gradients-On-Demand
wandb_run: your_name
wandb_runid: d5d98e9d-ebfa-48d6-a38e-cd840c5c4bcb
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a27fe518-dfae-431a-a85a-4c25e43b608a
This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5651
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0024 | 1 | 3.1381 |
| 1.6873 | 0.1217 | 50 | 1.6542 |
| 1.5916 | 0.2433 | 100 | 1.6018 |
| 1.5797 | 0.3650 | 150 | 1.5753 |
| 1.6374 | 0.4866 | 200 | 1.5651 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Best000/80424291-a1f6-4515-9b8c-823bd6ee71bc | Best000 | 2025-02-03T12:47:29Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"base_model:adapter:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T12:43:21Z | ---
library_name: peft
license: apache-2.0
base_model: 01-ai/Yi-1.5-9B-Chat-16K
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 80424291-a1f6-4515-9b8c-823bd6ee71bc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: 01-ai/Yi-1.5-9B-Chat-16K
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 614d49ad4a33f1cc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/614d49ad4a33f1cc_train_data.json
type:
field_input: starter_code
field_instruction: question_content
field_output: test
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/80424291-a1f6-4515-9b8c-823bd6ee71bc
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/614d49ad4a33f1cc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b5811a1b-58f0-4def-8887-33362bf088d5
wandb_project: Birthday-SN56-15-Gradients-On-Demand
wandb_run: your_name
wandb_runid: b5811a1b-58f0-4def-8887-33362bf088d5
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 80424291-a1f6-4515-9b8c-823bd6ee71bc
This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0044 | 1 | 2.4066 |
| 0.3594 | 0.2188 | 50 | 0.3583 |
| 0.3201 | 0.4376 | 100 | 0.3174 |
| 0.2792 | 0.6565 | 150 | 0.2877 |
| 0.2909 | 0.8753 | 200 | 0.2833 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
adammandic87/af2782bb-b492-459a-9f7c-9a24a6c89674 | adammandic87 | 2025-02-03T12:47:19Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"base_model:adapter:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T12:43:32Z | ---
library_name: peft
license: apache-2.0
base_model: 01-ai/Yi-1.5-9B-Chat-16K
tags:
- axolotl
- generated_from_trainer
model-index:
- name: af2782bb-b492-459a-9f7c-9a24a6c89674
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: 01-ai/Yi-1.5-9B-Chat-16K
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 614d49ad4a33f1cc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/614d49ad4a33f1cc_train_data.json
type:
field_input: starter_code
field_instruction: question_content
field_output: test
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/af2782bb-b492-459a-9f7c-9a24a6c89674
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/614d49ad4a33f1cc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b5811a1b-58f0-4def-8887-33362bf088d5
wandb_project: Birthday-SN56-34-Gradients-On-Demand
wandb_run: your_name
wandb_runid: b5811a1b-58f0-4def-8887-33362bf088d5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# af2782bb-b492-459a-9f7c-9a24a6c89674
This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0044 | 1 | 2.1040 |
| 0.3551 | 0.2188 | 50 | 0.3581 |
| 0.3285 | 0.4376 | 100 | 0.3145 |
| 0.2819 | 0.6565 | 150 | 0.2862 |
| 0.2761 | 0.8753 | 200 | 0.2766 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
earnxus/f78f2aa5-4587-40c2-a6d2-421c2c049655 | earnxus | 2025-02-03T12:45:35Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:adapter:NousResearch/Meta-Llama-3-8B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-02-03T12:21:14Z | ---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f78f2aa5-4587-40c2-a6d2-421c2c049655
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f971d59dc55e2c3d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f971d59dc55e2c3d_train_data.json
type:
field_instruction: related_work
field_output: abstract
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: earnxus/f78f2aa5-4587-40c2-a6d2-421c2c049655
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f971d59dc55e2c3d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: af312f24-2fb0-4971-97c7-b836a1dbcff6
wandb_project: Gradients-On-Nine
wandb_run: your_name
wandb_runid: af312f24-2fb0-4971-97c7-b836a1dbcff6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# f78f2aa5-4587-40c2-a6d2-421c2c049655
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4097 | 0.0417 | 200 | 2.2181 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
T145/KRONOS-8B-V9 | T145 | 2025-02-03T12:45:14Z | 22 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:T145/KRONOS-8B-V1-P1",
"base_model:merge:T145/KRONOS-8B-V1-P1",
"base_model:T145/KRONOS-8B-V8",
"base_model:merge:T145/KRONOS-8B-V8",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-02-02T16:48:49Z | ---
base_model:
- T145/KRONOS-8B-V8
- T145/KRONOS-8B-V1-P1
library_name: transformers
tags:
- mergekit
- merge
model-index:
- name: KRONOS-8B-V9
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 78.56
name: averaged accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FKRONOS-8B-V9
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 30.07
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FKRONOS-8B-V9
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 19.03
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FKRONOS-8B-V9
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.15
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FKRONOS-8B-V9
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.32
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FKRONOS-8B-V9
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 30.57
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=T145%2FKRONOS-8B-V9
name: Open LLM Leaderboard
---
# KRONOS-8B-V9
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [T145/KRONOS-8B-V8](https://huggingface.co/T145/KRONOS-8B-V8) as a base.
### Models Merged
The following models were included in the merge:
* [T145/KRONOS-8B-V1-P1](https://huggingface.co/T145/KRONOS-8B-V1-P1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: T145/KRONOS-8B-V8
dtype: bfloat16
merge_method: ties
parameters:
density: 1.0
weight: 1.0
slices:
- sources:
- layer_range: [0, 32]
model: T145/KRONOS-8B-V1-P1
parameters:
density: 1.0
weight: 1.0
- layer_range: [0, 32]
model: T145/KRONOS-8B-V8
```
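Given mergekit and the YAML above saved to a file, the merge can typically be reproduced from the command line. This is a hedged sketch: the config filename and output directory are placeholders, not taken from this repo.

```bash
# Hypothetical reproduction of the TIES merge described above.
mergekit-yaml kronos-v9.yaml ./KRONOS-8B-V9 --cuda
```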
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/T145__KRONOS-8B-V9-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=T145%2FKRONOS-8B-V9&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 28.78|
|IFEval (0-Shot) | 78.56|
|BBH (3-Shot) | 30.07|
|MATH Lvl 5 (4-Shot)| 19.03|
|GPQA (0-shot) | 6.15|
|MuSR (0-shot) | 8.32|
|MMLU-PRO (5-shot) | 30.57|
|
mrferr3t/87a81fd3-6ae9-4b8b-a0f9-37e7e9fb7eaf | mrferr3t | 2025-02-03T12:42:21Z | 12 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Vikhrmodels/Vikhr-7B-instruct_0.4",
"base_model:adapter:Vikhrmodels/Vikhr-7B-instruct_0.4",
"region:us"
]
| null | 2025-02-03T12:32:56Z | ---
library_name: peft
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 87a81fd3-6ae9-4b8b-a0f9-37e7e9fb7eaf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- d12d898722355cd8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d12d898722355cd8_train_data.json
type:
field_instruction: en_prompt
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.001
eval_max_new_tokens: 128
eval_steps: 20
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/87a81fd3-6ae9-4b8b-a0f9-37e7e9fb7eaf
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 100
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 32
mlflow_experiment_name: /tmp/d12d898722355cd8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
s2_attention: null
sample_packing: false
save_steps: 20
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6dcc908a-5968-4a83-9603-7ac44adffc8d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6dcc908a-5968-4a83-9603-7ac44adffc8d
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 87a81fd3-6ae9-4b8b-a0f9-37e7e9fb7eaf
This model is a fine-tuned version of [Vikhrmodels/Vikhr-7B-instruct_0.4](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 8
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0037 | 1 | 1.5695 |
| No log | 0.0739 | 20 | 1.2884 |
| No log | 0.1479 | 40 | 1.2203 |
| No log | 0.2218 | 60 | 1.1892 |
| No log | 0.2957 | 80 | 1.1597 |
| 1.2503 | 0.3697 | 100 | 1.1423 |
| 1.2503 | 0.4436 | 120 | 1.1237 |
| 1.2503 | 0.5176 | 140 | 1.1159 |
| 1.2503 | 0.5915 | 160 | 1.1038 |
| 1.2503 | 0.6654 | 180 | 1.0981 |
| 1.1184 | 0.7394 | 200 | 1.0897 |
| 1.1184 | 0.8133 | 220 | 1.0800 |
| 1.1184 | 0.8872 | 240 | 1.0759 |
| 1.1184 | 0.9612 | 260 | 1.0708 |
| 1.1184 | 1.0351 | 280 | 1.1027 |
| 0.9873 | 1.1091 | 300 | 1.0934 |
| 0.9873 | 1.1830 | 320 | 1.0932 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso/d81cdd2c-609f-405f-b2f1-b453450792db | lesso | 2025-02-03T12:42:19Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T12:40:41Z | ---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d81cdd2c-609f-405f-b2f1-b453450792db
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 99c0fdfe29a4359d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/99c0fdfe29a4359d_train_data.json
type:
field_input: document
field_instruction: title
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/d81cdd2c-609f-405f-b2f1-b453450792db
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000101
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/god09/99c0fdfe29a4359d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9601e4a0-7003-4140-a733-70f5dfcdf433
wandb_project: ab-god09
wandb_run: your_name
wandb_runid: 9601e4a0-7003-4140-a733-70f5dfcdf433
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d81cdd2c-609f-405f-b2f1-b453450792db
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co/llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000101
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.7644 | 0.0099 | 1 | 11.7648 |
| 11.7499 | 0.4950 | 50 | 11.7518 |
| 11.7464 | 0.9901 | 100 | 11.7484 |
| 11.7471 | 1.4851 | 150 | 11.7472 |
| 11.7461 | 1.9802 | 200 | 11.7469 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
xueyj/task-1-google-gemma-2b | xueyj | 2025-02-03T12:40:09Z | 2,234 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
]
| null | 2025-01-03T05:45:46Z | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
dixedus/8c765e17-2850-4edb-a7e3-00f941efd8c3 | dixedus | 2025-02-03T12:38:15Z | 33 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"region:us"
]
| null | 2025-02-03T12:01:29Z | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8c765e17-2850-4edb-a7e3-00f941efd8c3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c1b617ce82c7310e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c1b617ce82c7310e_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: dixedus/8c765e17-2850-4edb-a7e3-00f941efd8c3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/c1b617ce82c7310e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 7f85a073-7b5c-430c-9a22-9fdc7c748e1c
wandb_project: Gradients-On-Eight
wandb_run: your_name
wandb_runid: 7f85a073-7b5c-430c-9a22-9fdc7c748e1c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8c765e17-2850-4edb-a7e3-00f941efd8c3
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0599
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 1.1472 |
| 1.082 | 0.0162 | 9 | 1.1339 |
| 1.0887 | 0.0324 | 18 | 1.0956 |
| 1.0726 | 0.0486 | 27 | 1.0820 |
| 1.0842 | 0.0648 | 36 | 1.0738 |
| 0.9664 | 0.0810 | 45 | 1.0685 |
| 1.1287 | 0.0972 | 54 | 1.0650 |
| 1.0536 | 0.1134 | 63 | 1.0625 |
| 1.1333 | 0.1296 | 72 | 1.0610 |
| 1.107 | 0.1457 | 81 | 1.0602 |
| 1.042 | 0.1619 | 90 | 1.0600 |
| 1.0985 | 0.1781 | 99 | 1.0599 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso/92525a2d-1b50-44f5-8106-bfdf95b2f2b3 | lesso | 2025-02-03T12:37:05Z | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T12:26:47Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 92525a2d-1b50-44f5-8106-bfdf95b2f2b3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- ed78a5b2ee6f663c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ed78a5b2ee6f663c_train_data.json
type:
field_instruction: instructions
field_output: en_responses
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso/92525a2d-1b50-44f5-8106-bfdf95b2f2b3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001018
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/god18/ed78a5b2ee6f663c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 55836b1e-bfe4-463b-aaad-afb6d5b8557a
wandb_project: ab-god18
wandb_run: your_name
wandb_runid: 55836b1e-bfe4-463b-aaad-afb6d5b8557a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 92525a2d-1b50-44f5-8106-bfdf95b2f2b3
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001018
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8048 | 0.0038 | 1 | 0.9731 |
| 0.7066 | 0.1887 | 50 | 0.6927 |
| 0.4882 | 0.3774 | 100 | 0.6361 |
| 0.5408 | 0.5660 | 150 | 0.5995 |
| 0.6515 | 0.7547 | 200 | 0.5768 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF | mradermacher | 2025-02-03T12:35:53Z | 83 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Nitral-AI/Poppy_Porpoise-1.0-L3-8B",
"base_model:quantized:Nitral-AI/Poppy_Porpoise-1.0-L3-8B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-05-31T06:25:25Z | ---
base_model: Nitral-AI/Poppy_Porpoise-1.0-L3-8B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B
***The model creator strongly suggests using the [0.72](https://huggingface.co/mradermacher/Poppy_Porpoise-0.72-L3-8B-GGUF) model at this time, as it is of better quality.***
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
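As a minimal, hedged example, llama.cpp can pull a quant straight from this repo by file name; the prompt is illustrative, and any quant from the table below works the same way.

```bash
# Runs the Q4_K_M quant listed below directly from the Hub.
llama-cli --hf-repo mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF \
  --hf-file Poppy_Porpoise-1.0-L3-8B.Q4_K_M.gguf \
  -p "Once upon a time"
```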
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
leixa/9d73136b-bd45-4c12-86ba-a7eedadf8d57 | leixa | 2025-02-03T12:35:33Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-7b-it",
"base_model:adapter:unsloth/gemma-7b-it",
"license:apache-2.0",
"region:us"
]
| null | 2025-02-03T11:36:55Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d73136b-bd45-4c12-86ba-a7eedadf8d57
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e1fef425da4a3c20_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e1fef425da4a3c20_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: leixa/9d73136b-bd45-4c12-86ba-a7eedadf8d57
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/e1fef425da4a3c20_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: a6e32771-53f8-43c6-a89b-5b54a5429bef
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a6e32771-53f8-43c6-a89b-5b54a5429bef
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9d73136b-bd45-4c12-86ba-a7eedadf8d57
This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 7.5103 |
| 5.0662 | 0.0059 | 9 | 4.7452 |
| 3.8409 | 0.0117 | 18 | 3.6639 |
| 3.3325 | 0.0176 | 27 | 3.1546 |
| 2.8487 | 0.0234 | 36 | 2.8612 |
| 2.6059 | 0.0293 | 45 | 2.7179 |
| 2.6455 | 0.0351 | 54 | 2.6001 |
| 2.4711 | 0.0410 | 63 | 2.5226 |
| 2.5257 | 0.0468 | 72 | 2.4668 |
| 2.4364 | 0.0527 | 81 | 2.4378 |
| 2.2796 | 0.0585 | 90 | 2.4244 |
| 2.4722 | 0.0644 | 99 | 2.4219 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF | mradermacher | 2025-02-03T12:35:05Z | 96 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Nitral-AI/Poppy_Porpoise-1.0-L3-8B",
"base_model:quantized:Nitral-AI/Poppy_Porpoise-1.0-L3-8B",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2024-05-31T13:09:33Z | ---
base_model: Nitral-AI/Poppy_Porpoise-1.0-L3-8B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B
***The model creator strongly suggests using the [0.72](https://huggingface.co/mradermacher/Poppy_Porpoise-0.72-L3-8B-i1-GGUF) model at this time, as it is of better quality.***
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
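A hedged alternative to streaming from the Hub is to download a single quant first and point llama.cpp at the local file; the file name and prompt below are illustrative, and any quant from the table below works the same way.

```bash
# Download one imatrix quant, then run it locally.
huggingface-cli download mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF \
  Poppy_Porpoise-1.0-L3-8B.i1-Q4_K_M.gguf --local-dir .
llama-cli -m Poppy_Porpoise-1.0-L3-8B.i1-Q4_K_M.gguf -p "Once upon a time"
```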
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Alex01837178373/QVikhr-2.5-1.5B-Instruct-SMPO-Q4_0-GGUF | Alex01837178373 | 2025-02-03T12:32:46Z | 46 | 1 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"ru",
"en",
"base_model:Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO",
"base_model:quantized:Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-02-03T12:32:25Z | ---
library_name: transformers
model_name: Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO
base_model: Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO
language:
- ru
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# Alex01837178373/QVikhr-2.5-1.5B-Instruct-SMPO-Q4_0-GGUF
This model was converted to GGUF format from [`Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO`](https://huggingface.co/Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Alex01837178373/QVikhr-2.5-1.5B-Instruct-SMPO-Q4_0-GGUF --hf-file qvikhr-2.5-1.5b-instruct-smpo-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Alex01837178373/QVikhr-2.5-1.5B-Instruct-SMPO-Q4_0-GGUF --hf-file qvikhr-2.5-1.5b-instruct-smpo-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Alex01837178373/QVikhr-2.5-1.5B-Instruct-SMPO-Q4_0-GGUF --hf-file qvikhr-2.5-1.5b-instruct-smpo-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Alex01837178373/QVikhr-2.5-1.5B-Instruct-SMPO-Q4_0-GGUF --hf-file qvikhr-2.5-1.5b-instruct-smpo-q4_0.gguf -c 2048
```
|
xueyj/task-1-Qwen-Qwen1.5-0.5B | xueyj | 2025-02-03T12:32:33Z | 2,301 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
]
| null | 2025-01-03T05:38:24Z | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
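In the absence of an official snippet, here is a minimal sketch of loading this adapter with PEFT. It assumes the adapter in this repository applies to the `Qwen/Qwen1.5-0.5B` base model listed above; the prompt and generation settings are purely illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model named in this card, then attach the adapter from this repository.
base_id = "Qwen/Qwen1.5-0.5B"
adapter_id = "xueyj/task-1-Qwen-Qwen1.5-0.5B"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative generation call.
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```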
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
akh99/pretrained-on-lmsys | akh99 | 2025-02-03T12:32:17Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-classification",
"generated_from_trainer",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-02-03T09:59:14Z | ---
library_name: transformers
license: gemma
base_model: google/gemma-2-9b-it
tags:
- generated_from_trainer
model-index:
- name: pretrained-on-lmsys
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pretrained-on-lmsys
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.48.1
- Pytorch 2.2.2+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
shibajustfor/2e5de3ee-0fd0-4c10-a384-35499fc85dc8 | shibajustfor | 2025-02-03T12:26:30Z | 29 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"region:us"
]
| null | 2025-02-03T12:19:10Z | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2e5de3ee-0fd0-4c10-a384-35499fc85dc8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c1b617ce82c7310e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c1b617ce82c7310e_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/2e5de3ee-0fd0-4c10-a384-35499fc85dc8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c1b617ce82c7310e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7f85a073-7b5c-430c-9a22-9fdc7c748e1c
wandb_project: Birthday-SN56-39-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7f85a073-7b5c-430c-9a22-9fdc7c748e1c
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2e5de3ee-0fd0-4c10-a384-35499fc85dc8
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | nan |
| 0.0 | 0.0225 | 50 | nan |
| 2.5924 | 0.0450 | 100 | nan |
| 7.2084 | 0.0675 | 150 | nan |
| 0.039 | 0.0900 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
moot20/DeepSeek-R1-Distill-Qwen-32B-MLX-6bits | moot20 | 2025-02-03T12:25:13Z | 30 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"region:us"
]
| text-generation | 2025-02-03T11:51:27Z | ---
license: mit
library_name: transformers
tags:
- mlx
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
---
# moot20/DeepSeek-R1-Distill-Qwen-32B-MLX-6bits
The Model [moot20/DeepSeek-R1-Distill-Qwen-32B-MLX-6bits](https://huggingface.co/moot20/DeepSeek-R1-Distill-Qwen-32B-MLX-6bits) was
converted to MLX format from [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
using mlx-lm version **0.21.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("moot20/DeepSeek-R1-Distill-Qwen-32B-MLX-6bits")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mradermacher/Llama-3.2-3b-finetune-radiology-GGUF | mradermacher | 2025-02-03T12:23:30Z | 259 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-02-03T12:07:06Z | ---
base_model: chatsdude/Llama-3.2-3b-finetune-radiology
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/chatsdude/Llama-3.2-3b-finetune-radiology
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
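As a quick, illustrative sketch (assuming a recent llama.cpp build with `llama-cli` available), one of the quants can be run directly from this repository; swap the file name for whichever quant from the table below you download:
```bash
# Illustrative invocation; the Q4_K_M file name matches the quant table below.
llama-cli --hf-repo mradermacher/Llama-3.2-3b-finetune-radiology-GGUF \
  --hf-file Llama-3.2-3b-finetune-radiology.Q4_K_M.gguf \
  -p "Describe the typical findings of a normal chest X-ray:"
```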
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3b-finetune-radiology-GGUF/resolve/main/Llama-3.2-3b-finetune-radiology.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3b-finetune-radiology-GGUF/resolve/main/Llama-3.2-3b-finetune-radiology.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3b-finetune-radiology-GGUF/resolve/main/Llama-3.2-3b-finetune-radiology.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3b-finetune-radiology-GGUF/resolve/main/Llama-3.2-3b-finetune-radiology.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3b-finetune-radiology-GGUF/resolve/main/Llama-3.2-3b-finetune-radiology.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3b-finetune-radiology-GGUF/resolve/main/Llama-3.2-3b-finetune-radiology.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3b-finetune-radiology-GGUF/resolve/main/Llama-3.2-3b-finetune-radiology.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3b-finetune-radiology-GGUF/resolve/main/Llama-3.2-3b-finetune-radiology.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3b-finetune-radiology-GGUF/resolve/main/Llama-3.2-3b-finetune-radiology.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3b-finetune-radiology-GGUF/resolve/main/Llama-3.2-3b-finetune-radiology.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3b-finetune-radiology-GGUF/resolve/main/Llama-3.2-3b-finetune-radiology.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3b-finetune-radiology-GGUF/resolve/main/Llama-3.2-3b-finetune-radiology.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
FINGU-AI/FINGU-2.5-instruct-32B | FINGU-AI | 2025-02-03T12:22:07Z | 33 | 1 | null | [
"safetensors",
"qwen2",
"arxiv:2202.01764",
"license:mit",
"region:us"
]
| null | 2025-02-03T09:56:45Z | ---
license: mit
---
# FINGU-AI/FINGU-2.5-instruct-32B
## Overview
`FINGU-AI/FINGU-2.5-instruct-32B` is a versatile causal language model designed to excel in various natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. The model demonstrates a strong aptitude for reasoning tasks, particularly in the Japanese language, making it a valuable tool for applications requiring logical inference and complex understanding.
## Reasoning Capabilities
The model's architecture and training regimen have been optimized to enhance its reasoning abilities. This is particularly evident in tasks involving logical deduction and commonsense reasoning in Japanese. For instance, when evaluated on datasets such as JaQuAD, a Japanese question-answering dataset, the model exhibits a nuanced understanding of complex logical structures.
Additionally, `FINGU-AI/FINGU-2.5-instruct-32B` has been assessed using the JFLD benchmark, which tests a model's ability to perform deductive reasoning based on formal logic. The model's performance indicates a robust capacity to handle tasks that require understanding and reasoning over formal logical structures.
## Example Usage
### Installation
Ensure that the required packages are installed:
```bash
pip install torch transformers
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Model and Tokenizer
model_id = 'FINGU-AI/FINGU-2.5-instruct-32B'
model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="sdpa", torch_dtype=torch.float16, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.to('cuda')
# Input Messages for Translation
messages = [
{"role": "user", "content": """Please reason step by step, and put your final answer within \boxed{}.
translate korean to Japanese.
새로운 은행 계좌를 개설하는 절차는 다음과 같습니다:
1. 계좌 개설 목적과 신분 확인을 위한 서류 제출
2. 서류 검토 과정을 거치는 것
3. 고객님의 신원 확인 절차를 진행하는 것
4. 모든 절차가 완료되면 계좌 개설이 가능합니다.
계좌 개설을 원하시는 경우, 신분증과 함께 방문해 주시면 됩니다.
"""}
]
# Tokenize and Generate Response
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
outputs = model.generate(
input_ids,
max_new_tokens=500,
do_sample=True,
)
# Decode and Print the Response
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## Relevant Datasets
To further evaluate and enhance the reasoning capabilities of `FINGU-AI/FINGU-2.5-instruct-32B`, the following Japanese reasoning datasets are pertinent:
- **JaQuAD (Japanese Question Answering Dataset)**: A human-annotated dataset created for Japanese Machine Reading Comprehension, consisting of 39,696 extractive question-answer pairs on Japanese Wikipedia articles.
[📄 ARXIV.ORG](https://arxiv.org/abs/2202.01764)
- **JFLD (Japanese Formal Logic Dataset)**: A benchmark designed to evaluate deductive reasoning based on formal logic, providing a structured framework to assess logical reasoning capabilities in Japanese.
[📄 ACLANTHOLOGY.ORG](https://aclanthology.org/2024.lrec-main.832.pdf)
- **JEMHopQA (Japanese Explainable Multi-Hop Question-Answering)**: A dataset for multi-hop QA in Japanese, including question-answer pairs and supporting evidence in the form of derivation triples, facilitating the development of explainable QA systems.
[📄 ACLANTHOLOGY.ORG](https://aclanthology.org/2024.lrec-main.831.pdf)
These datasets provide diverse challenges that can help in assessing and improving the model's reasoning abilities across different contexts and complexities.
## Conclusion
`FINGU-AI/FINGU-2.5-instruct-32B` stands as a robust and adaptable language model, particularly distinguished by its reasoning capabilities in the Japanese language. Its performance across various reasoning benchmarks underscores its potential for applications that demand advanced logical inference and nuanced understanding in NLP tasks.
|
hmteams/teams-base-historic-multilingual-generator | hmteams | 2025-02-03T12:21:28Z | 99 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"electra",
"fill-mask",
"en",
"de",
"fr",
"fi",
"sv",
"nl",
"nb",
"nn",
"no",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-08-01T08:46:05Z | ---
license: apache-2.0
language:
- en
- de
- fr
- fi
- sv
- nl
- nb
- nn
- 'no'
---
# hmTEAMS
[](https://github.com/stefan-it/hmTEAMS)
Historic Multilingual and Monolingual [TEAMS](https://aclanthology.org/2021.findings-acl.219/) Models.
The following languages are covered:
* English (British Library Corpus - Books)
* German (Europeana Newspaper)
* French (Europeana Newspaper)
* Finnish (Europeana Newspaper, Digilib)
* Swedish (Europeana Newspaper, Digilib)
* Dutch (Delpher Corpus)
* Norwegian (NCC Corpus)
# Architecture
We pretrain a "Training ELECTRA Augmented with Multi-word Selection"
([TEAMS](https://aclanthology.org/2021.findings-acl.219/)) model:

# Results
We perform experiments on various historic NER datasets, such as HIPE-2022 or ICDAR Europeana.
All details incl. hyper-parameters can be found [here](https://github.com/stefan-it/hmTEAMS/tree/main/bench).
## Small Benchmark
We test our pretrained language models on various datasets from HIPE-2020, HIPE-2022 and Europeana.
The following table gives an overview of the datasets used.
| Language | Dataset | Additional Dataset |
|----------|--------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------|
| English | [AjMC](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) | - |
| German | [AjMC](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) | - |
| French | [AjMC](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) | [ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) |
| Finnish | [NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) | - |
| Swedish | [NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) | - |
| Dutch | [ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) | - |
# Results
| Model | English AjMC | German AjMC | French AjMC | Finnish NewsEye | Swedish NewsEye | Dutch ICDAR | French ICDAR | Avg. |
|----------------------------------------------------------------------------------------|--------------|--------------|--------------|-----------------|-----------------|--------------|--------------|-----------|
| hmBERT (32k) [Schweter et al.](https://ceur-ws.org/Vol-3180/paper-87.pdf) | 85.36 ± 0.94 | 89.08 ± 0.09 | 85.10 ± 0.60 | 77.28 ± 0.37 | 82.85 ± 0.83 | 82.11 ± 0.61 | 77.21 ± 0.16 | 82.71 |
| hmTEAMS (Ours) | 86.41 ± 0.36 | 88.64 ± 0.42 | 85.41 ± 0.67 | 79.27 ± 1.88 | 82.78 ± 0.60 | 88.21 ± 0.39 | 78.03 ± 0.39 | **84.11** |
# Release
Our pretrained hmTEAMS model can be obtained from the Hugging Face Model Hub:
* [hmTEAMS Discriminator](https://huggingface.co/hmteams/teams-base-historic-multilingual-discriminator)
* [hmTEAMS Generator (**this model**)](https://huggingface.co/hmteams/teams-base-historic-multilingual-generator)
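# Usage Example
A minimal usage sketch for the generator checkpoint released above (assuming it works with the standard `fill-mask` pipeline and a `[MASK]` token; the example sentence is illustrative):
```python
from transformers import pipeline

# Illustrative sketch: load the hmTEAMS generator as a fill-mask model.
fill_mask = pipeline("fill-mask", model="hmteams/teams-base-historic-multilingual-generator")

for prediction in fill_mask("Paris is the [MASK] of France."):
    print(prediction["token_str"], round(prediction["score"], 3))
```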
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️ |
CrimsonZockt/SarsaMarkiewicz-FLUXLORA | CrimsonZockt | 2025-02-03T12:19:33Z | 46 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
]
| text-to-image | 2025-02-03T12:18:56Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: Sarsa Markiewicz, black tanktop, professional headshot, photoshoot.
output:
url: images/Sarsa Markiewicz, black tanktop, professional h....png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Sarsa Markiewicz
---
# SarsaMarkiewicz
<Gallery />
## Model description
This is a LoRA model that I trained on Weights.gg.
## Trigger words
You should use `Sarsa Markiewicz` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/CrimsonZockt/SarsaMarkiewicz-FLUXLORA/tree/main) them in the Files & versions tab.
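## Usage
As a minimal sketch of using the LoRA with diffusers (assuming a standard FLUX.1-dev setup and enough GPU memory; the inference settings below are illustrative, not taken from the original training):
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base pipeline and attach this LoRA; settings below are illustrative.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("CrimsonZockt/SarsaMarkiewicz-FLUXLORA")
pipe.to("cuda")

image = pipe(
    "Sarsa Markiewicz, black tanktop, professional headshot, photoshoot.",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("sarsa_markiewicz.png")
```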
|