| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
jebish7/LLAMA-8B-B80 | jebish7 | 2025-01-24T18:21:26Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-24T18:14:51Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jebish7
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.1-8B-Instruct
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Nexspear/5c4cedba-7e2b-4e9f-b4d8-c396b060b6cb | Nexspear | 2025-01-24T18:20:21Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M",
"base_model:adapter:unsloth/SmolLM-360M",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T18:16:37Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5c4cedba-7e2b-4e9f-b4d8-c396b060b6cb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fd8c593d36d222e4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fd8c593d36d222e4_train_data.json
type:
field_input: tool_nodes
field_instruction: instruction
field_output: tool_steps
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: Nexspear/5c4cedba-7e2b-4e9f-b4d8-c396b060b6cb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/fd8c593d36d222e4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: ba026e41-97ab-4c65-ab9a-114e413b7924
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: ba026e41-97ab-4c65-ab9a-114e413b7924
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 5c4cedba-7e2b-4e9f-b4d8-c396b060b6cb
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4068
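Since this repository holds only a PEFT LoRA adapter, it has to be attached to its base model at load time. A hedged sketch (not from the original card), using the repo ids above; the same pattern applies to the other `peft` adapters in this dump:
```python
# Hedged sketch: attach the trained LoRA adapter to its base model with peft.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/SmolLM-360M"                                # base_model from the config above
adapter_id = "Nexspear/5c4cedba-7e2b-4e9f-b4d8-c396b060b6cb"   # this repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)   # loads the trained adapter weights
model = model.merge_and_unload()                      # optional: fold the LoRA deltas into the base weights
```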
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
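The total_train_batch_size reported above follows directly from the other values; a quick check (assuming a single device, which the card does not state explicitly):
```python
# Hedged arithmetic check, numbers copied from the hyperparameter list above.
micro_batch_size = 8             # train_batch_size per device
gradient_accumulation_steps = 4
num_devices = 1                  # assumption: single-GPU run

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 32
```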
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0046 | 1 | 2.0566 |
| 2.0191 | 0.0418 | 9 | 2.0483 |
| 2.0478 | 0.0835 | 18 | 1.9791 |
| 1.9304 | 0.1253 | 27 | 1.8460 |
| 1.7329 | 0.1671 | 36 | 1.7157 |
| 1.7101 | 0.2088 | 45 | 1.6132 |
| 1.6279 | 0.2506 | 54 | 1.5299 |
| 1.5152 | 0.2923 | 63 | 1.4712 |
| 1.4687 | 0.3341 | 72 | 1.4346 |
| 1.4527 | 0.3759 | 81 | 1.4155 |
| 1.5095 | 0.4176 | 90 | 1.4085 |
| 1.3827 | 0.4594 | 99 | 1.4068 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nttx/8f678b0d-a0fd-42ed-83d7-1c74caa244be | nttx | 2025-01-24T18:19:36Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Hermes-3-Llama-3.1-8B",
"base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B",
"region:us"
] | null | 2025-01-24T17:46:07Z | ---
library_name: peft
base_model: unsloth/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8f678b0d-a0fd-42ed-83d7-1c74caa244be
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Hermes-3-Llama-3.1-8B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 7498d1d10e472fec_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7498d1d10e472fec_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/8f678b0d-a0fd-42ed-83d7-1c74caa244be
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/7498d1d10e472fec_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 145b78ab-4197-45e4-b32d-0f5f5e9a56ab
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 145b78ab-4197-45e4-b32d-0f5f5e9a56ab
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8f678b0d-a0fd-42ed-83d7-1c74caa244be
This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
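Here the explicit optim_args presumably take precedence over the printed defaults; roughly, the resulting 8-bit optimizer corresponds to the sketch below (my assumption, using bitsandbytes' AdamW8bit and a stand-in module in place of the PEFT-wrapped model):
```python
# Hedged sketch of the adamw_bnb_8bit optimizer with the custom optim_args above.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(8, 8)    # stand-in for the actual PEFT-wrapped model
optimizer = bnb.optim.AdamW8bit(
    model.parameters(),
    lr=1e-4,                     # learning_rate
    betas=(0.9, 0.95),           # adam_beta1, adam_beta2 from optim_args
    eps=1e-5,                    # adam_epsilon from optim_args
    weight_decay=0.0,            # weight_decay
)
```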
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.124 | 0.0034 | 1 | 2.3156 |
| 1.4484 | 0.1718 | 50 | 1.6408 |
| 1.1492 | 0.3436 | 100 | 1.5502 |
| 1.2565 | 0.5155 | 150 | 1.4724 |
| 1.3627 | 0.6873 | 200 | 1.4643 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/MS-ManciousWriter-22B-v0.2-GGUF | mradermacher | 2025-01-24T18:18:19Z | 207 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ThomasComics/MS-ManciousWriter-22B-v0.2",
"base_model:quantized:ThomasComics/MS-ManciousWriter-22B-v0.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-13T20:03:45Z | ---
base_model: ThomasComics/MS-ManciousWriter-22B-v0.2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
Static quants of https://huggingface.co/ThomasComics/MS-ManciousWriter-22B-v0.2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
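As a concrete example (not from the original card), one way to fetch a single quant from the table below and run it locally, assuming `huggingface_hub` and `llama-cpp-python` are installed; the file name is copied from the Q4_K_M row:
```python
# Hedged sketch: download one GGUF quant and run a short completion with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/MS-ManciousWriter-22B-v0.2-GGUF",
    filename="MS-ManciousWriter-22B-v0.2.Q4_K_M.gguf",   # "fast, recommended" row in the table below
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write one sentence about quantized models.", max_tokens=64)
print(out["choices"][0]["text"])
```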
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.2-GGUF/resolve/main/MS-ManciousWriter-22B-v0.2.Q2_K.gguf) | Q2_K | 8.4 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.2-GGUF/resolve/main/MS-ManciousWriter-22B-v0.2.Q3_K_S.gguf) | Q3_K_S | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.2-GGUF/resolve/main/MS-ManciousWriter-22B-v0.2.Q3_K_M.gguf) | Q3_K_M | 10.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.2-GGUF/resolve/main/MS-ManciousWriter-22B-v0.2.Q3_K_L.gguf) | Q3_K_L | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.2-GGUF/resolve/main/MS-ManciousWriter-22B-v0.2.IQ4_XS.gguf) | IQ4_XS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.2-GGUF/resolve/main/MS-ManciousWriter-22B-v0.2.Q4_K_S.gguf) | Q4_K_S | 12.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.2-GGUF/resolve/main/MS-ManciousWriter-22B-v0.2.Q4_K_M.gguf) | Q4_K_M | 13.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.2-GGUF/resolve/main/MS-ManciousWriter-22B-v0.2.Q5_K_S.gguf) | Q5_K_S | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.2-GGUF/resolve/main/MS-ManciousWriter-22B-v0.2.Q5_K_M.gguf) | Q5_K_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.2-GGUF/resolve/main/MS-ManciousWriter-22B-v0.2.Q6_K.gguf) | Q6_K | 18.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.2-GGUF/resolve/main/MS-ManciousWriter-22B-v0.2.Q8_0.gguf) | Q8_0 | 23.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF | mradermacher | 2025-01-24T18:18:18Z | 399 | 0 | transformers | [
"transformers",
"gguf",
"en",
"ko",
"base_model:Changgil/K2S3-Mistral-7b-v1.3",
"base_model:quantized:Changgil/K2S3-Mistral-7b-v1.3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-01-24T17:20:05Z | ---
base_model: Changgil/K2S3-Mistral-7b-v1.3
language:
- en
- ko
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
Weighted/imatrix quants of https://huggingface.co/Changgil/K2S3-Mistral-7b-v1.3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-IQ1_S.gguf) | i1-IQ1_S | 1.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-IQ2_S.gguf) | i1-IQ2_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-IQ2_M.gguf) | i1-IQ2_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-Q2_K.gguf) | i1-Q2_K | 2.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-IQ3_S.gguf) | i1-IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-IQ3_M.gguf) | i1-IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-Q4_0.gguf) | i1-Q4_0 | 4.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.i1-Q6_K.gguf) | i1-Q6_K | 6.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
dimasik2987/ba57bca4-83be-4e04-b6a4-efbffe1e4839 | dimasik2987 | 2025-01-24T18:18:09Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/tinyllama-chat",
"base_model:adapter:unsloth/tinyllama-chat",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T18:16:45Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ba57bca4-83be-4e04-b6a4-efbffe1e4839
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/tinyllama-chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4f23bbe0afe56f5a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4f23bbe0afe56f5a_train_data.json
type:
field_input: ''
field_instruction: content
field_output: python
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: dimasik2987/ba57bca4-83be-4e04-b6a4-efbffe1e4839
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/4f23bbe0afe56f5a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4e637a71-9f6a-4732-9a25-972d5757d955
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4e637a71-9f6a-4732-9a25-972d5757d955
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# ba57bca4-83be-4e04-b6a4-efbffe1e4839
This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0078 | 1 | nan |
| 0.0 | 0.0390 | 5 | nan |
| 0.0 | 0.0780 | 10 | nan |
| 0.0 | 0.1170 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
cvoffer/d3627e2f-6b7d-4088-9120-bf6ea9a4fb43 | cvoffer | 2025-01-24T18:17:45Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M",
"base_model:adapter:unsloth/SmolLM-360M",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T18:16:49Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d3627e2f-6b7d-4088-9120-bf6ea9a4fb43
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fd8c593d36d222e4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fd8c593d36d222e4_train_data.json
type:
field_input: tool_nodes
field_instruction: instruction
field_output: tool_steps
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: cvoffer/d3627e2f-6b7d-4088-9120-bf6ea9a4fb43
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/fd8c593d36d222e4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ba026e41-97ab-4c65-ab9a-114e413b7924
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ba026e41-97ab-4c65-ab9a-114e413b7924
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# d3627e2f-6b7d-4088-9120-bf6ea9a4fb43
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0012 | 1 | nan |
| 0.0 | 0.0058 | 5 | nan |
| 0.0 | 0.0116 | 10 | nan |
| 0.0 | 0.0174 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
prxy5606/b85a367b-a28f-4cc1-b61a-11b68c1095d1 | prxy5606 | 2025-01-24T18:17:27Z | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:dltjdgh0928/test_instruction",
"base_model:adapter:dltjdgh0928/test_instruction",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T17:43:51Z | ---
library_name: peft
license: apache-2.0
base_model: dltjdgh0928/test_instruction
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b85a367b-a28f-4cc1-b61a-11b68c1095d1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dltjdgh0928/test_instruction
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 046705ccd51bb8b0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/046705ccd51bb8b0_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5606/b85a367b-a28f-4cc1-b61a-11b68c1095d1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/046705ccd51bb8b0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a2f542fe-1aa7-4f7b-a82c-b1cc81bf83c0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a2f542fe-1aa7-4f7b-a82c-b1cc81bf83c0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b85a367b-a28f-4cc1-b61a-11b68c1095d1
This model is a fine-tuned version of [dltjdgh0928/test_instruction](https://huggingface.co/dltjdgh0928/test_instruction) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5196
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.9298 | 0.0114 | 1 | 1.8512 |
| 6.0764 | 0.5682 | 50 | 1.5800 |
| 4.1832 | 1.1364 | 100 | 1.5370 |
| 3.8924 | 1.7045 | 150 | 1.5123 |
| 4.998 | 2.2727 | 200 | 1.5196 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
denbeo/430d5843-facc-4b45-ad65-62f989a76c2c | denbeo | 2025-01-24T18:17:12Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T18:03:23Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 430d5843-facc-4b45-ad65-62f989a76c2c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d16d347b651ede3e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d16d347b651ede3e_train_data.json
type:
field_instruction: aspect_list
field_output: caption_summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: denbeo/430d5843-facc-4b45-ad65-62f989a76c2c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d16d347b651ede3e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2fe0c844-e98c-476d-b9a0-1a41beb91022
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2fe0c844-e98c-476d-b9a0-1a41beb91022
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 430d5843-facc-4b45-ad65-62f989a76c2c
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4341 | 0.3166 | 200 | 2.9850 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/zephyr-7b-beta-ExPO-GGUF | mradermacher | 2025-01-24T18:16:19Z | 188 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:chujiezheng/zephyr-7b-beta-ExPO",
"base_model:quantized:chujiezheng/zephyr-7b-beta-ExPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-24T17:16:44Z | ---
base_model: chujiezheng/zephyr-7b-beta-ExPO
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
Static quants of https://huggingface.co/chujiezheng/zephyr-7b-beta-ExPO
<!-- provided-files -->
Weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-ExPO-GGUF/resolve/main/zephyr-7b-beta-ExPO.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-ExPO-GGUF/resolve/main/zephyr-7b-beta-ExPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-ExPO-GGUF/resolve/main/zephyr-7b-beta-ExPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-ExPO-GGUF/resolve/main/zephyr-7b-beta-ExPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-ExPO-GGUF/resolve/main/zephyr-7b-beta-ExPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-ExPO-GGUF/resolve/main/zephyr-7b-beta-ExPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-ExPO-GGUF/resolve/main/zephyr-7b-beta-ExPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-ExPO-GGUF/resolve/main/zephyr-7b-beta-ExPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-ExPO-GGUF/resolve/main/zephyr-7b-beta-ExPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-ExPO-GGUF/resolve/main/zephyr-7b-beta-ExPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-ExPO-GGUF/resolve/main/zephyr-7b-beta-ExPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-beta-ExPO-GGUF/resolve/main/zephyr-7b-beta-ExPO.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
prxy5604/a14c6bf0-6904-416a-ba0f-fadae84368d2 | prxy5604 | 2025-01-24T18:15:42Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | null | 2025-01-24T18:13:17Z | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a14c6bf0-6904-416a-ba0f-fadae84368d2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 0752bb947d5ed0d4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0752bb947d5ed0d4_train_data.json
type:
field_instruction: question
field_output: best_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/a14c6bf0-6904-416a-ba0f-fadae84368d2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/0752bb947d5ed0d4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4a4c67e3-e39f-42b0-b2b7-14d11d56c788
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4a4c67e3-e39f-42b0-b2b7-14d11d56c788
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a14c6bf0-6904-416a-ba0f-fadae84368d2
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4880
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 57
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7011 | 0.0526 | 1 | 0.8363 |
| 0.1892 | 2.6316 | 50 | 0.4880 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhung01/7c6d997e-dfea-4a94-a3f3-12d4a258dedf | nhung01 | 2025-01-24T18:15:35Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T18:03:21Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7c6d997e-dfea-4a94-a3f3-12d4a258dedf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d16d347b651ede3e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d16d347b651ede3e_train_data.json
type:
field_instruction: aspect_list
field_output: caption_summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/7c6d997e-dfea-4a94-a3f3-12d4a258dedf
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d16d347b651ede3e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2fe0c844-e98c-476d-b9a0-1a41beb91022
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2fe0c844-e98c-476d-b9a0-1a41beb91022
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7c6d997e-dfea-4a94-a3f3-12d4a258dedf
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4118 | 0.3166 | 200 | 2.9716 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/merging_LLM-i1-GGUF | mradermacher | 2025-01-24T18:14:08Z | 104 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:MatteoKhan/merging_LLM",
"base_model:quantized:MatteoKhan/merging_LLM",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-03T04:20:54Z | ---
base_model: MatteoKhan/merging_LLM
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
Weighted/imatrix quants of https://huggingface.co/MatteoKhan/merging_LLM
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/merging_LLM-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/merging_LLM-i1-GGUF/resolve/main/merging_LLM.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kostiantynk1205/13c1957c-0d33-4b63-9bcb-363546ad53ce | kostiantynk1205 | 2025-01-24T18:14:07Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:llama3.1",
"region:us"
] | null | 2025-01-24T18:13:15Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 13c1957c-0d33-4b63-9bcb-363546ad53ce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b3d963a154e55444_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b3d963a154e55444_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/13c1957c-0d33-4b63-9bcb-363546ad53ce
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/b3d963a154e55444_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9f4dab1e-eaba-44fe-b5c4-9380877bfca8
wandb_project: Birthday-SN56-23-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9f4dab1e-eaba-44fe-b5c4-9380877bfca8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 13c1957c-0d33-4b63-9bcb-363546ad53ce
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0039 | 1 | nan |
| 0.0 | 0.0117 | 3 | nan |
| 0.0 | 0.0234 | 6 | nan |
| 0.0 | 0.0351 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhung03/c3e84a01-031e-4cca-9309-f97f7313b925 | nhung03 | 2025-01-24T18:13:48Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T18:03:15Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c3e84a01-031e-4cca-9309-f97f7313b925
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d16d347b651ede3e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d16d347b651ede3e_train_data.json
type:
field_instruction: aspect_list
field_output: caption_summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/c3e84a01-031e-4cca-9309-f97f7313b925
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d16d347b651ede3e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2fe0c844-e98c-476d-b9a0-1a41beb91022
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2fe0c844-e98c-476d-b9a0-1a41beb91022
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c3e84a01-031e-4cca-9309-f97f7313b925
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9711
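As a starting point for inference, here is a minimal sketch (not part of the auto-generated card): it assumes the LoRA adapter in this repository loads on top of the base model listed in the config above and that `transformers` and `peft` are installed; the prompt is purely illustrative.
```python
# Minimal sketch: load the base model, attach this LoRA adapter, and generate.
# The prompt below is illustrative; training built prompts as '{instruction}'
# from the aspect_list field (see the axolotl config above).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen2.5-Math-1.5B-Instruct"
adapter_id = "nhung03/c3e84a01-031e-4cca-9309-f97f7313b925"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the trained adapter

prompt = "red bicycle, brick wall, morning light"  # hypothetical aspect list
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```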
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4131 | 0.3166 | 200 | 2.9711 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jebish7/LLAMA-8B-A90 | jebish7 | 2025-01-24T18:13:28Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-24T18:06:45Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jebish7
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.1-8B-Instruct
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
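A minimal usage sketch (not part of the original card), assuming the full fine-tuned weights are published in this repository and load with plain `transformers`; the prompt is illustrative only.
```python
# Minimal sketch: chat-style generation with the uploaded model via the
# text-generation pipeline.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="jebish7/LLAMA-8B-A90",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what LoRA fine-tuning does in two sentences."}]
result = pipe(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # last turn is the model's reply
```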
|
lesso11/eff392b1-4dce-45b0-a857-3bf303fa0135 | lesso11 | 2025-01-24T18:13:15Z | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Artples/L-MChat-7b",
"base_model:adapter:Artples/L-MChat-7b",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T17:56:29Z | ---
library_name: peft
license: apache-2.0
base_model: Artples/L-MChat-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: eff392b1-4dce-45b0-a857-3bf303fa0135
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Artples/L-MChat-7b
bf16: true
chat_template: llama3
datasets:
- data_files:
- 8b1f8661a4895405_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8b1f8661a4895405_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer0
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso11/eff392b1-4dce-45b0-a857-3bf303fa0135
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/8b1f8661a4895405_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: <|end_of_turn|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ed150454-ab51-4790-a095-bcdf93825195
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ed150454-ab51-4790-a095-bcdf93825195
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# eff392b1-4dce-45b0-a857-3bf303fa0135
This model is a fine-tuned version of [Artples/L-MChat-7b](https://huggingface.co/Artples/L-MChat-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0012 | 5 | nan |
| 0.0 | 0.0024 | 10 | nan |
| 0.0 | 0.0036 | 15 | nan |
| 0.0 | 0.0048 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
philip-hightech/aaf0fdd3-91a4-4681-b6b6-4618e724d754 | philip-hightech | 2025-01-24T18:12:53Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:llama3.1",
"region:us"
] | null | 2025-01-24T18:12:05Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aaf0fdd3-91a4-4681-b6b6-4618e724d754
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b3d963a154e55444_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b3d963a154e55444_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/aaf0fdd3-91a4-4681-b6b6-4618e724d754
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/b3d963a154e55444_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9f4dab1e-eaba-44fe-b5c4-9380877bfca8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9f4dab1e-eaba-44fe-b5c4-9380877bfca8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# aaf0fdd3-91a4-4681-b6b6-4618e724d754
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0039 | 1 | nan |
| 0.0 | 0.0117 | 3 | nan |
| 0.0 | 0.0234 | 6 | nan |
| 0.0 | 0.0351 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
auxyus/00a30df6-d8ef-4616-a204-487ed973d74d | auxyus | 2025-01-24T18:11:58Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"region:us"
] | null | 2025-01-24T17:38:39Z | ---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 00a30df6-d8ef-4616-a204-487ed973d74d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-Instruct-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4020ca7b1bee37ed_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4020ca7b1bee37ed_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: auxyus/00a30df6-d8ef-4616-a204-487ed973d74d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/4020ca7b1bee37ed_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: e0a64651-d34f-4f2d-afb3-8417c6fa5a6f
wandb_project: Gradients-On-Two
wandb_run: your_name
wandb_runid: e0a64651-d34f-4f2d-afb3-8417c6fa5a6f
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 00a30df6-d8ef-4616-a204-487ed973d74d
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3860
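As a hedged sketch (not from the original card), one way to use this adapter is to merge it into the CodeLlama base for standalone deployment; the repo ids come from the config above and the output path is an assumption.
```python
# Minimal sketch: attach the LoRA adapter to CodeLlama and merge it into the
# base weights so the result can be served without peft at inference time.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "codellama/CodeLlama-7b-Instruct-hf"
adapter_id = "auxyus/00a30df6-d8ef-4616-a204-487ed973d74d"

model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights

out_dir = "codellama-7b-instruct-merged"  # hypothetical output path
merged.save_pretrained(out_dir)
AutoTokenizer.from_pretrained(base_id).save_pretrained(out_dir)
```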
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0032 | 1 | 0.5282 |
| 0.5322 | 0.0285 | 9 | 0.4899 |
| 0.4742 | 0.0569 | 18 | 0.4461 |
| 0.4634 | 0.0854 | 27 | 0.4228 |
| 0.407 | 0.1138 | 36 | 0.4101 |
| 0.4063 | 0.1423 | 45 | 0.4016 |
| 0.3974 | 0.1708 | 54 | 0.3953 |
| 0.4175 | 0.1992 | 63 | 0.3912 |
| 0.4076 | 0.2277 | 72 | 0.3885 |
| 0.4166 | 0.2561 | 81 | 0.3868 |
| 0.3793 | 0.2846 | 90 | 0.3861 |
| 0.4073 | 0.3130 | 99 | 0.3860 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
daniel40/435aad18-b929-46e2-b79d-f4fe374e52c2 | daniel40 | 2025-01-24T18:11:56Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Llama-3.2-1B",
"base_model:adapter:NousResearch/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2025-01-24T18:11:12Z | ---
library_name: peft
license: llama3.2
base_model: NousResearch/Llama-3.2-1B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 435aad18-b929-46e2-b79d-f4fe374e52c2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Llama-3.2-1B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9864d3b317537d3a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9864d3b317537d3a_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/435aad18-b929-46e2-b79d-f4fe374e52c2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/9864d3b317537d3a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8a06a672-d9ee-4beb-9629-89383f6b4f03
wandb_project: Birthday-SN56-28-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8a06a672-d9ee-4beb-9629-89383f6b4f03
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 435aad18-b929-46e2-b79d-f4fe374e52c2
This model is a fine-tuned version of [NousResearch/Llama-3.2-1B](https://huggingface.co/NousResearch/Llama-3.2-1B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.7146 | 0.0026 | 1 | 3.4024 |
| 3.2969 | 0.0077 | 3 | 3.4018 |
| 3.4655 | 0.0155 | 6 | 3.3946 |
| 3.4146 | 0.0232 | 9 | 3.3801 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/K2S3-Mistral-7b-v1.3-GGUF | mradermacher | 2025-01-24T18:11:55Z | 202 | 0 | transformers | [
"transformers",
"gguf",
"en",
"ko",
"base_model:Changgil/K2S3-Mistral-7b-v1.3",
"base_model:quantized:Changgil/K2S3-Mistral-7b-v1.3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-01-24T17:08:56Z | ---
base_model: Changgil/K2S3-Mistral-7b-v1.3
language:
- en
- ko
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/Changgil/K2S3-Mistral-7b-v1.3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
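For a concrete starting point, here is a hedged sketch using the `llama-cpp-python` bindings (an assumption on my part; any GGUF-capable runtime works), with the Q4_K_M file from the table below downloaded locally.
```python
# Minimal sketch: run one of the GGUF quants from this repo with llama-cpp-python.
# Assumes the Q4_K_M file has already been downloaded next to this script.
from llama_cpp import Llama

llm = Llama(model_path="K2S3-Mistral-7b-v1.3.Q4_K_M.gguf", n_ctx=4096)
out = llm("Briefly introduce yourself in Korean.", max_tokens=128)
print(out["choices"][0]["text"])
```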
## Provided Quants
(sorted by size, not necessarily by quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.Q2_K.gguf) | Q2_K | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.Q3_K_L.gguf) | Q3_K_L | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.IQ4_XS.gguf) | IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.Q5_K_S.gguf) | Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.Q5_K_M.gguf) | Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.3-GGUF/resolve/main/K2S3-Mistral-7b-v1.3.f16.gguf) | f16 | 14.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
questions you might have and/or to request quantization of another model.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
prxy5604/07f4e296-6c7a-4f44-a8fe-802ea4743427 | prxy5604 | 2025-01-24T18:11:34Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"license:llama3",
"region:us"
] | null | 2025-01-24T17:24:21Z | ---
library_name: peft
license: llama3
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 07f4e296-6c7a-4f44-a8fe-802ea4743427
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 1a9b97378fcbebe1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1a9b97378fcbebe1_train_data.json
type:
field_input: captions
field_instruction: raw_sentences
field_output: raw_anns
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/07f4e296-6c7a-4f44-a8fe-802ea4743427
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/1a9b97378fcbebe1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 24718b6c-9560-45d6-8f4f-62368e0b0d98
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 24718b6c-9560-45d6-8f4f-62368e0b0d98
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 07f4e296-6c7a-4f44-a8fe-802ea4743427
This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6272 | 0.0007 | 1 | 2.0187 |
| 1.3778 | 0.0339 | 50 | 1.4875 |
| 1.3254 | 0.0679 | 100 | 1.4757 |
| 1.3392 | 0.1018 | 150 | 1.4507 |
| 1.3384 | 0.1358 | 200 | 1.3565 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bbytxt/8bda8736-cbac-4fc7-b910-7471dca0e5ee | bbytxt | 2025-01-24T18:11:33Z | 7 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b-it",
"base_model:adapter:unsloth/gemma-2-9b-it",
"license:gemma",
"region:us"
] | null | 2025-01-24T17:30:53Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8bda8736-cbac-4fc7-b910-7471dca0e5ee
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b-it
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- b9e57da90d2a6f27_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b9e57da90d2a6f27_train_data.json
type:
field_input: target_delexicalized
field_instruction: dialog_act
field_output: target
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: bbytxt/8bda8736-cbac-4fc7-b910-7471dca0e5ee
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/b9e57da90d2a6f27_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9064ddb9-e7cd-4c18-a087-a79d45c13b40
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9064ddb9-e7cd-4c18-a087-a79d45c13b40
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8bda8736-cbac-4fc7-b910-7471dca0e5ee
This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0223
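A minimal inference sketch, not part of the generated card: it assumes the adapter loads on top of the gemma base from the config above, and the prompt only loosely mirrors the training format ('{instruction} {input}' over the dialog_act and target_delexicalized fields).
```python
# Minimal sketch: load the gemma-2 base, attach this LoRA adapter, and generate.
# The prompt is a hypothetical dialog act in the spirit of the training data.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/gemma-2-9b-it"
adapter_id = "bbytxt/8bda8736-cbac-4fc7-b910-7471dca0e5ee"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto"),
    adapter_id,
)

prompt = "inform(name[Blue Spice], food[Italian]) [name] serves [food] food."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```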
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8234 | 0.0052 | 1 | 2.0735 |
| 0.1032 | 0.2594 | 50 | 0.1576 |
| 0.0255 | 0.5188 | 100 | 0.0464 |
| 0.0567 | 0.7782 | 150 | 0.0258 |
| 0.0331 | 1.0376 | 200 | 0.0223 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ClarenceDan/16506faf-59e5-4b6f-8cf0-40e39faed635 | ClarenceDan | 2025-01-24T18:11:25Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Solar-10b-32k",
"base_model:adapter:NousResearch/Yarn-Solar-10b-32k",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T18:05:02Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Solar-10b-32k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 16506faf-59e5-4b6f-8cf0-40e39faed635
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Solar-10b-32k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ffdc9c6c7112acb8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ffdc9c6c7112acb8_train_data.json
type:
field_instruction: instruction
field_output: original_instruction
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/16506faf-59e5-4b6f-8cf0-40e39faed635
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/ffdc9c6c7112acb8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 113bcfa4-1f77-4d9a-972a-d332c234c9bd
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 113bcfa4-1f77-4d9a-972a-d332c234c9bd
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 16506faf-59e5-4b6f-8cf0-40e39faed635
This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-32k](https://huggingface.co/NousResearch/Yarn-Solar-10b-32k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0004 | 1 | nan |
| 0.1712 | 0.0012 | 3 | nan |
| 0.1287 | 0.0024 | 6 | nan |
| 0.0 | 0.0035 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhunglaaaaaaa/3b7c50ad-88cf-4cfd-885d-d72e217d3ab9 | nhunglaaaaaaa | 2025-01-24T18:10:59Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T18:03:04Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3b7c50ad-88cf-4cfd-885d-d72e217d3ab9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d16d347b651ede3e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d16d347b651ede3e_train_data.json
type:
field_instruction: aspect_list
field_output: caption_summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhunglaaaaaaa/3b7c50ad-88cf-4cfd-885d-d72e217d3ab9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d16d347b651ede3e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2fe0c844-e98c-476d-b9a0-1a41beb91022
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2fe0c844-e98c-476d-b9a0-1a41beb91022
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 3b7c50ad-88cf-4cfd-885d-d72e217d3ab9
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4263 | 0.3166 | 200 | 2.9729 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
daniel40/0365d723-afa0-457e-8233-e6f75e9e4081 | daniel40 | 2025-01-24T18:09:08Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T18:00:55Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0365d723-afa0-457e-8233-e6f75e9e4081
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-Math-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f8232566451d90d2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f8232566451d90d2_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/0365d723-afa0-457e-8233-e6f75e9e4081
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f8232566451d90d2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c2e82a22-8bc3-4a6a-8590-0acbfe70412e
wandb_project: Birthday-SN56-27-Gradients-On-Demand
wandb_run: your_name
wandb_runid: c2e82a22-8bc3-4a6a-8590-0acbfe70412e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0365d723-afa0-457e-8233-e6f75e9e4081
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.6732 | 0.0001 | 1 | 4.9009 |
| 3.1542 | 0.0004 | 3 | 4.8998 |
| 3.261 | 0.0007 | 6 | 4.8769 |
| 5.3195 | 0.0011 | 9 | 4.6863 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thakkkkkk/cecba95c-7b1a-4c91-a3a9-23cbce6a44bb | thakkkkkk | 2025-01-24T18:08:25Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:adapter:Qwen/Qwen1.5-0.5B-Chat",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T17:32:33Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cecba95c-7b1a-4c91-a3a9-23cbce6a44bb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 13b3bd6e856866ca_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/13b3bd6e856866ca_train_data.json
type:
field_instruction: code
field_output: docstring
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thakkkkkk/cecba95c-7b1a-4c91-a3a9-23cbce6a44bb
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/13b3bd6e856866ca_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7a6bce7e-cd81-4427-be7c-1a87408232d9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7a6bce7e-cd81-4427-be7c-1a87408232d9
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# cecba95c-7b1a-4c91-a3a9-23cbce6a44bb
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0304 | 0.0074 | 200 | 0.0568 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
robiual-awal/0218c922-f13e-45d7-b4c6-2a2ba57b4fbd | robiual-awal | 2025-01-24T18:07:04Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"base_model:adapter:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"region:us"
] | null | 2025-01-24T17:29:30Z | ---
library_name: peft
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0218c922-f13e-45d7-b4c6-2a2ba57b4fbd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a84d6d236b5e1544_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a84d6d236b5e1544_train_data.json
type:
field_input: ''
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/0218c922-f13e-45d7-b4c6-2a2ba57b4fbd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/a84d6d236b5e1544_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f64a5563-6776-4c7a-86c0-b07b3fcc4f06
wandb_project: Birthday-SN56-29-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f64a5563-6776-4c7a-86c0-b07b3fcc4f06
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0218c922-f13e-45d7-b4c6-2a2ba57b4fbd
This model is a fine-tuned version of [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7169 | 0.0000 | 1 | 1.6580 |
| 1.7426 | 0.0001 | 3 | 1.6544 |
| 1.605 | 0.0003 | 6 | 1.5993 |
| 1.6672 | 0.0004 | 9 | 1.5032 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
great0001/33fa53b9-24fc-40d4-8c55-1935e518bb3e | great0001 | 2025-01-24T18:06:49Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T17:23:47Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 33fa53b9-24fc-40d4-8c55-1935e518bb3e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 27234bdaba19980e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/27234bdaba19980e_train_data.json
type:
field_input: negative
field_instruction: query
field_output: positive
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/33fa53b9-24fc-40d4-8c55-1935e518bb3e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/27234bdaba19980e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 71a2b5e9-3069-455b-8ee8-6e17d9046ca5
wandb_project: Birthday-SN56-14-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 71a2b5e9-3069-455b-8ee8-6e17d9046ca5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 33fa53b9-24fc-40d4-8c55-1935e518bb3e
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0000 | 3 | nan |
| 0.0 | 0.0001 | 6 | nan |
| 0.0 | 0.0001 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aleegis12/2d6d5d7e-299b-4078-89f5-911c83a79a8d | aleegis12 | 2025-01-24T18:04:45Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"region:us"
] | null | 2025-01-24T17:20:21Z | ---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2d6d5d7e-299b-4078-89f5-911c83a79a8d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-Instruct-hf
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 0af14c27ef012868_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0af14c27ef012868_train_data.json
type:
field_input: text
field_instruction: subject
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/2d6d5d7e-299b-4078-89f5-911c83a79a8d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/0af14c27ef012868_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 19c8688e-ea72-45f4-ad76-056d1e3fe378
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 19c8688e-ea72-45f4-ad76-056d1e3fe378
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2d6d5d7e-299b-4078-89f5-911c83a79a8d
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4102
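A minimal inference sketch, assuming the adapter is available under the repo id above; it attaches the LoRA to its base model with standard transformers + PEFT calls, and the prompt string is only a placeholder:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "codellama/CodeLlama-7b-Instruct-hf"
adapter_id = "aleegis12/2d6d5d7e-299b-4078-89f5-911c83a79a8d"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

# Training used the "{instruction} {input}" layout (subject followed by text), so mirror it here.
prompt = "Re: project kickoff Here is the full message body..."  # placeholder subject + text
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```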
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5252 | 0.0008 | 1 | 2.3886 |
| 2.3783 | 0.0377 | 50 | 1.5656 |
| 2.1107 | 0.0754 | 100 | 1.4745 |
| 2.1932 | 0.1130 | 150 | 1.4211 |
| 2.2388 | 0.1507 | 200 | 1.4102 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Muadil/Llama-3.2-1B-Instruct_sum_PPO_Skywork_40k_2_2ep | Muadil | 2025-01-24T18:04:30Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-24T18:03:09Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
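In the meantime, a minimal sketch, assuming the checkpoint at `Muadil/Llama-3.2-1B-Instruct_sum_PPO_Skywork_40k_2_2ep` is a standard Llama-style causal LM with a chat template (as the repo tags suggest); the example message is a placeholder:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Muadil/Llama-3.2-1B-Instruct_sum_PPO_Skywork_40k_2_2ep"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize: The quick brown fox jumps over the lazy dog."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt portion.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```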
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ardaspear/3b5f9d72-bf72-4910-836a-1a1e05f322ac | ardaspear | 2025-01-24T18:02:11Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:llama3.1",
"region:us"
] | null | 2025-01-24T17:47:27Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3b5f9d72-bf72-4910-836a-1a1e05f322ac
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b3d963a154e55444_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b3d963a154e55444_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: ardaspear/3b5f9d72-bf72-4910-836a-1a1e05f322ac
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/b3d963a154e55444_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 9f4dab1e-eaba-44fe-b5c4-9380877bfca8
wandb_project: Gradients-On-Five
wandb_run: your_name
wandb_runid: 9f4dab1e-eaba-44fe-b5c4-9380877bfca8
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 3b5f9d72-bf72-4910-836a-1a1e05f322ac
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2025
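If a standalone checkpoint is preferred over loading the adapter at runtime, a minimal sketch for merging the LoRA weights into the base model (standard PEFT usage; the output directory name is arbitrary):
```py
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Meta-Llama-3.1-8B", torch_dtype=torch.bfloat16)
# merge_and_unload() folds the LoRA deltas into the base weights and returns a plain transformers model.
merged = PeftModel.from_pretrained(base, "ardaspear/3b5f9d72-bf72-4910-836a-1a1e05f322ac").merge_and_unload()
merged.save_pretrained("meta-llama-3.1-8b-merged")  # arbitrary output directory
```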
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0156 | 1 | 0.4046 |
| 0.3598 | 0.1401 | 9 | 0.3213 |
| 0.2559 | 0.2802 | 18 | 0.2223 |
| 0.2385 | 0.4202 | 27 | 0.2126 |
| 0.226 | 0.5603 | 36 | 0.2080 |
| 0.2161 | 0.7004 | 45 | 0.2060 |
| 0.211 | 0.8405 | 54 | 0.2043 |
| 0.2024 | 0.9805 | 63 | 0.2036 |
| 0.196 | 1.1206 | 72 | 0.2036 |
| 0.2201 | 1.2607 | 81 | 0.2032 |
| 0.1996 | 1.4008 | 90 | 0.2029 |
| 0.1834 | 1.5409 | 99 | 0.2025 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thaffggg/e5c1a89b-434e-432c-a7a4-53e356531f2e | thaffggg | 2025-01-24T18:00:43Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T17:39:57Z | ---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e5c1a89b-434e-432c-a7a4-53e356531f2e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-Instruct-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4020ca7b1bee37ed_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4020ca7b1bee37ed_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thaffggg/e5c1a89b-434e-432c-a7a4-53e356531f2e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/4020ca7b1bee37ed_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e0a64651-d34f-4f2d-afb3-8417c6fa5a6f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e0a64651-d34f-4f2d-afb3-8417c6fa5a6f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e5c1a89b-434e-432c-a7a4-53e356531f2e
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4038
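Because training loaded the base model in 8-bit (see `load_in_8bit: true` above), the adapter can be attached to an 8-bit quantized base for inference as well; a minimal sketch, assuming a CUDA device and the standard transformers + bitsandbytes + PEFT stack:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "codellama/CodeLlama-7b-Instruct-hf"
adapter_id = "thaffggg/e5c1a89b-434e-432c-a7a4-53e356531f2e"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # mirror the 8-bit setting used in training
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter on top of the quantized base
```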
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4325 | 0.1582 | 200 | 0.4038 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
xotopo/atharv1500 | xotopo | 2025-01-24T18:00:16Z | 88 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-24T17:59:28Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ATHARV
---
# ATHARV Abtest
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ATHARV` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('xotopo/atharv1500', weight_name='aqV2XR2nv3X_pwJzvMKLo_pytorch_lora_weights.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
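A hedged follow-up sketch for controlling the LoRA strength or fusing it into the base weights; the adapter name `atharv` and the 0.8 scale are arbitrary illustrative choices, not values from this repo:
```py
# If the LoRA was already loaded anonymously above, drop it first so it can be reloaded with a name.
pipeline.unload_lora_weights()
pipeline.load_lora_weights('xotopo/atharv1500',
                           weight_name='aqV2XR2nv3X_pwJzvMKLo_pytorch_lora_weights.safetensors',
                           adapter_name='atharv')
pipeline.set_adapters(['atharv'], adapter_weights=[0.8])  # scale the LoRA influence down slightly
# pipeline.fuse_lora()  # optional: bake the LoRA into the base weights for faster inference
image = pipeline('ATHARV portrait photo, studio lighting').images[0]
```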
|
aleegis09/86a3bfc2-a528-402d-99c2-10327af35449 | aleegis09 | 2025-01-24T17:59:03Z | 6 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2025-01-24T17:40:16Z | ---
library_name: peft
license: mit
base_model: microsoft/phi-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 86a3bfc2-a528-402d-99c2-10327af35449
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/phi-2
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- ed3dcd46464f4dad_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ed3dcd46464f4dad_train_data.json
type:
field_instruction: sentence1
field_output: sentence2
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis09/86a3bfc2-a528-402d-99c2-10327af35449
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/ed3dcd46464f4dad_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: be20a502-f469-4b17-bfde-5de1864163d7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: be20a502-f469-4b17-bfde-5de1864163d7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 86a3bfc2-a528-402d-99c2-10327af35449
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4197 | 0.0065 | 1 | 3.2832 |
| 2.0381 | 0.3273 | 50 | 2.0826 |
| 2.0398 | 0.6547 | 100 | 1.8337 |
| 1.5162 | 0.9820 | 150 | 1.7148 |
| 1.6728 | 1.3093 | 200 | 1.7021 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kenzykhaled/model | kenzykhaled | 2025-01-24T17:56:24Z | 198 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-24T17:51:58Z | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kenzykhaled
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fpadovani/english_wikipedia_mlm_sent_30 | fpadovani | 2025-01-24T17:55:50Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-01-24T15:43:15Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: wikipedia_mlm_unmasking_sent_30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wikipedia_mlm_unmasking_sent_30
This model is a fine-tuned version of an unspecified base checkpoint on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3443
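A minimal usage sketch, assuming the checkpoint is published as `fpadovani/english_wikipedia_mlm_sent_30` and that the tokenizer uses the RoBERTa-style `<mask>` token:
```py
from transformers import pipeline

# The fill-mask pipeline returns the top candidate tokens for the masked position.
unmasker = pipeline("fill-mask", model="fpadovani/english_wikipedia_mlm_sent_30")
print(unmasker("The capital of France is <mask>."))
```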
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 30
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100000
- training_steps: 400000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| No log | 0.3197 | 2000 | 7.0282 |
| 7.4367 | 0.6393 | 4000 | 6.8433 |
| 7.4367 | 0.9590 | 6000 | 6.7744 |
| 6.7925 | 1.2787 | 8000 | 6.7245 |
| 6.7925 | 1.5983 | 10000 | 6.6008 |
| 6.5825 | 1.9180 | 12000 | 6.2640 |
| 6.5825 | 2.2377 | 14000 | 5.9391 |
| 5.9941 | 2.5573 | 16000 | 5.6755 |
| 5.9941 | 2.8772 | 18000 | 5.5099 |
| 5.5947 | 3.1968 | 20000 | 5.3885 |
| 5.5947 | 3.5165 | 22000 | 5.2905 |
| 5.3485 | 3.8362 | 24000 | 5.1960 |
| 5.3485 | 4.1558 | 26000 | 5.0899 |
| 5.1427 | 4.4755 | 28000 | 5.0341 |
| 5.1427 | 4.7952 | 30000 | 4.9389 |
| 5.0044 | 5.1148 | 32000 | 4.8858 |
| 5.0044 | 5.4345 | 34000 | 4.8288 |
| 4.8822 | 5.7542 | 36000 | 4.7583 |
| 4.8822 | 6.0738 | 38000 | 4.7035 |
| 4.7849 | 6.3935 | 40000 | 4.6815 |
| 4.7849 | 6.7132 | 42000 | 4.6535 |
| 4.719 | 7.0328 | 44000 | 4.6088 |
| 4.719 | 7.3525 | 46000 | 4.5825 |
| 4.6493 | 7.6722 | 48000 | 4.5544 |
| 4.6493 | 7.9918 | 50000 | 4.5219 |
| 4.6051 | 8.3115 | 52000 | 4.5178 |
| 4.6051 | 8.6312 | 54000 | 4.4834 |
| 4.5757 | 8.9509 | 56000 | 4.4680 |
| 4.5757 | 9.2705 | 58000 | 4.4645 |
| 4.5431 | 9.5902 | 60000 | 4.4370 |
| 4.5431 | 9.9099 | 62000 | 4.4255 |
| 4.5314 | 10.2295 | 64000 | 4.4249 |
| 4.5314 | 10.5492 | 66000 | 4.4295 |
| 4.5195 | 10.8689 | 68000 | 4.4201 |
| 4.5195 | 11.1885 | 70000 | 4.4102 |
| 4.504 | 11.5082 | 72000 | 4.3990 |
| 4.504 | 11.8279 | 74000 | 4.3719 |
| 4.4955 | 12.1475 | 76000 | 4.3765 |
| 4.4955 | 12.4672 | 78000 | 4.3538 |
| 4.4809 | 12.7869 | 80000 | 4.3353 |
| 4.4809 | 13.1065 | 82000 | 4.3930 |
| 4.4757 | 13.4262 | 84000 | 4.3629 |
| 4.4757 | 13.7459 | 86000 | 4.3443 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Melo1512/vit-msn-small-beta-fia-manually-enhanced_test_2 | Melo1512 | 2025-01-24T17:55:14Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit_msn",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/vit-msn-small",
"base_model:finetune:facebook/vit-msn-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-01-24T17:33:06Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/vit-msn-small
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-msn-small-beta-fia-manually-enhanced_test_2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7746478873239436
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-small-beta-fia-manually-enhanced_test_2
This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5203
- Accuracy: 0.7746
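A minimal inference sketch, assuming the checkpoint is published under the repo id in the title and that inputs follow the processor defaults; the image path is a placeholder:
```py
from PIL import Image
from transformers import pipeline

# The image-classification pipeline applies the saved image processor and returns label/score pairs.
classifier = pipeline("image-classification",
                      model="Melo1512/vit-msn-small-beta-fia-manually-enhanced_test_2")
print(classifier(Image.open("example.jpg")))
```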
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:--------:|:----:|:---------------:|:--------:|
| No log | 0.5714 | 1 | 0.6037 | 0.7465 |
| No log | 1.7143 | 3 | 0.6071 | 0.7324 |
| No log | 2.8571 | 5 | 0.6120 | 0.7183 |
| No log | 4.0 | 7 | 0.6188 | 0.7183 |
| No log | 4.5714 | 8 | 0.6206 | 0.7183 |
| 0.4866 | 5.7143 | 10 | 0.6272 | 0.6972 |
| 0.4866 | 6.8571 | 12 | 0.6355 | 0.6901 |
| 0.4866 | 8.0 | 14 | 0.6399 | 0.6901 |
| 0.4866 | 8.5714 | 15 | 0.6364 | 0.6831 |
| 0.4866 | 9.7143 | 17 | 0.6295 | 0.6831 |
| 0.4866 | 10.8571 | 19 | 0.6288 | 0.6901 |
| 0.4519 | 12.0 | 21 | 0.6185 | 0.6901 |
| 0.4519 | 12.5714 | 22 | 0.6159 | 0.6901 |
| 0.4519 | 13.7143 | 24 | 0.6113 | 0.6972 |
| 0.4519 | 14.8571 | 26 | 0.5987 | 0.6901 |
| 0.4519 | 16.0 | 28 | 0.6017 | 0.6972 |
| 0.4519 | 16.5714 | 29 | 0.6067 | 0.6972 |
| 0.437 | 17.7143 | 31 | 0.6062 | 0.6620 |
| 0.437 | 18.8571 | 33 | 0.5966 | 0.6901 |
| 0.437 | 20.0 | 35 | 0.5858 | 0.7113 |
| 0.437 | 20.5714 | 36 | 0.5889 | 0.7042 |
| 0.437 | 21.7143 | 38 | 0.5768 | 0.7183 |
| 0.4353 | 22.8571 | 40 | 0.5752 | 0.7183 |
| 0.4353 | 24.0 | 42 | 0.5729 | 0.7183 |
| 0.4353 | 24.5714 | 43 | 0.5909 | 0.6972 |
| 0.4353 | 25.7143 | 45 | 0.6038 | 0.6761 |
| 0.4353 | 26.8571 | 47 | 0.5904 | 0.6901 |
| 0.4353 | 28.0 | 49 | 0.5847 | 0.6831 |
| 0.4141 | 28.5714 | 50 | 0.5615 | 0.7113 |
| 0.4141 | 29.7143 | 52 | 0.5544 | 0.7254 |
| 0.4141 | 30.8571 | 54 | 0.5904 | 0.6690 |
| 0.4141 | 32.0 | 56 | 0.5948 | 0.6831 |
| 0.4141 | 32.5714 | 57 | 0.5800 | 0.6972 |
| 0.4141 | 33.7143 | 59 | 0.5902 | 0.6972 |
| 0.4066 | 34.8571 | 61 | 0.5950 | 0.6690 |
| 0.4066 | 36.0 | 63 | 0.5500 | 0.7324 |
| 0.4066 | 36.5714 | 64 | 0.5470 | 0.7324 |
| 0.4066 | 37.7143 | 66 | 0.5859 | 0.6901 |
| 0.4066 | 38.8571 | 68 | 0.5955 | 0.6831 |
| 0.3827 | 40.0 | 70 | 0.5967 | 0.6761 |
| 0.3827 | 40.5714 | 71 | 0.5809 | 0.6901 |
| 0.3827 | 41.7143 | 73 | 0.5721 | 0.6972 |
| 0.3827 | 42.8571 | 75 | 0.6019 | 0.6831 |
| 0.3827 | 44.0 | 77 | 0.6071 | 0.6901 |
| 0.3827 | 44.5714 | 78 | 0.5962 | 0.6972 |
| 0.37 | 45.7143 | 80 | 0.6114 | 0.6831 |
| 0.37 | 46.8571 | 82 | 0.5594 | 0.7183 |
| 0.37 | 48.0 | 84 | 0.5493 | 0.7324 |
| 0.37 | 48.5714 | 85 | 0.5744 | 0.7113 |
| 0.37 | 49.7143 | 87 | 0.5443 | 0.7183 |
| 0.37 | 50.8571 | 89 | 0.5469 | 0.7324 |
| 0.3797 | 52.0 | 91 | 0.6003 | 0.6831 |
| 0.3797 | 52.5714 | 92 | 0.6048 | 0.6901 |
| 0.3797 | 53.7143 | 94 | 0.5203 | 0.7746 |
| 0.3797 | 54.8571 | 96 | 0.5327 | 0.7535 |
| 0.3797 | 56.0 | 98 | 0.6414 | 0.6338 |
| 0.3797 | 56.5714 | 99 | 0.6562 | 0.6197 |
| 0.3715 | 57.7143 | 101 | 0.5754 | 0.7183 |
| 0.3715 | 58.8571 | 103 | 0.5672 | 0.7254 |
| 0.3715 | 60.0 | 105 | 0.6060 | 0.6901 |
| 0.3715 | 60.5714 | 106 | 0.6536 | 0.6197 |
| 0.3715 | 61.7143 | 108 | 0.6177 | 0.6479 |
| 0.3483 | 62.8571 | 110 | 0.5385 | 0.7535 |
| 0.3483 | 64.0 | 112 | 0.5630 | 0.7394 |
| 0.3483 | 64.5714 | 113 | 0.5818 | 0.7254 |
| 0.3483 | 65.7143 | 115 | 0.6055 | 0.6972 |
| 0.3483 | 66.8571 | 117 | 0.5737 | 0.7324 |
| 0.3483 | 68.0 | 119 | 0.5606 | 0.7394 |
| 0.3667 | 68.5714 | 120 | 0.5829 | 0.7183 |
| 0.3667 | 69.7143 | 122 | 0.5931 | 0.7113 |
| 0.3667 | 70.8571 | 124 | 0.5375 | 0.7606 |
| 0.3667 | 72.0 | 126 | 0.5797 | 0.7113 |
| 0.3667 | 72.5714 | 127 | 0.6182 | 0.6690 |
| 0.3667 | 73.7143 | 129 | 0.6497 | 0.6690 |
| 0.3357 | 74.8571 | 131 | 0.6432 | 0.6831 |
| 0.3357 | 76.0 | 133 | 0.6772 | 0.6620 |
| 0.3357 | 76.5714 | 134 | 0.6395 | 0.6479 |
| 0.3357 | 77.7143 | 136 | 0.5895 | 0.7042 |
| 0.3357 | 78.8571 | 138 | 0.5921 | 0.6972 |
| 0.3415 | 80.0 | 140 | 0.5618 | 0.7254 |
| 0.3415 | 80.5714 | 141 | 0.5697 | 0.7183 |
| 0.3415 | 81.7143 | 143 | 0.6535 | 0.6197 |
| 0.3415 | 82.8571 | 145 | 0.6627 | 0.6338 |
| 0.3415 | 84.0 | 147 | 0.6194 | 0.6761 |
| 0.3415 | 84.5714 | 148 | 0.6301 | 0.6901 |
| 0.3296 | 85.7143 | 150 | 0.6436 | 0.6690 |
| 0.3296 | 86.8571 | 152 | 0.6348 | 0.6831 |
| 0.3296 | 88.0 | 154 | 0.6704 | 0.6479 |
| 0.3296 | 88.5714 | 155 | 0.7190 | 0.6338 |
| 0.3296 | 89.7143 | 157 | 0.7064 | 0.6338 |
| 0.3296 | 90.8571 | 159 | 0.6291 | 0.6549 |
| 0.3296 | 92.0 | 161 | 0.6933 | 0.6197 |
| 0.3296 | 92.5714 | 162 | 0.7115 | 0.6197 |
| 0.3296 | 93.7143 | 164 | 0.6229 | 0.6690 |
| 0.3296 | 94.8571 | 166 | 0.5727 | 0.7183 |
| 0.3296 | 96.0 | 168 | 0.5965 | 0.6901 |
| 0.3296 | 96.5714 | 169 | 0.6433 | 0.6690 |
| 0.3174 | 97.7143 | 171 | 0.6634 | 0.6408 |
| 0.3174 | 98.8571 | 173 | 0.6166 | 0.6549 |
| 0.3174 | 100.0 | 175 | 0.5896 | 0.6972 |
| 0.3174 | 100.5714 | 176 | 0.6092 | 0.6549 |
| 0.3174 | 101.7143 | 178 | 0.6022 | 0.6549 |
| 0.3309 | 102.8571 | 180 | 0.5928 | 0.6761 |
| 0.3309 | 104.0 | 182 | 0.6327 | 0.6408 |
| 0.3309 | 104.5714 | 183 | 0.6490 | 0.6338 |
| 0.3309 | 105.7143 | 185 | 0.6155 | 0.6479 |
| 0.3309 | 106.8571 | 187 | 0.6225 | 0.6620 |
| 0.3309 | 108.0 | 189 | 0.6732 | 0.6408 |
| 0.3124 | 108.5714 | 190 | 0.6808 | 0.6408 |
| 0.3124 | 109.7143 | 192 | 0.6585 | 0.6479 |
| 0.3124 | 110.8571 | 194 | 0.6122 | 0.6761 |
| 0.3124 | 112.0 | 196 | 0.6510 | 0.6549 |
| 0.3124 | 112.5714 | 197 | 0.7099 | 0.6408 |
| 0.3124 | 113.7143 | 199 | 0.7192 | 0.6338 |
| 0.3158 | 114.8571 | 201 | 0.6186 | 0.6901 |
| 0.3158 | 116.0 | 203 | 0.6071 | 0.7042 |
| 0.3158 | 116.5714 | 204 | 0.6419 | 0.6831 |
| 0.3158 | 117.7143 | 206 | 0.6679 | 0.6549 |
| 0.3158 | 118.8571 | 208 | 0.6825 | 0.6268 |
| 0.3026 | 120.0 | 210 | 0.6091 | 0.6972 |
| 0.3026 | 120.5714 | 211 | 0.5861 | 0.7394 |
| 0.3026 | 121.7143 | 213 | 0.6037 | 0.7113 |
| 0.3026 | 122.8571 | 215 | 0.6315 | 0.6761 |
| 0.3026 | 124.0 | 217 | 0.6328 | 0.6690 |
| 0.3026 | 124.5714 | 218 | 0.6187 | 0.6831 |
| 0.2968 | 125.7143 | 220 | 0.5843 | 0.7394 |
| 0.2968 | 126.8571 | 222 | 0.6126 | 0.7042 |
| 0.2968 | 128.0 | 224 | 0.6785 | 0.6549 |
| 0.2968 | 128.5714 | 225 | 0.6706 | 0.6479 |
| 0.2968 | 129.7143 | 227 | 0.6070 | 0.7113 |
| 0.2968 | 130.8571 | 229 | 0.5984 | 0.7254 |
| 0.294 | 132.0 | 231 | 0.6533 | 0.6620 |
| 0.294 | 132.5714 | 232 | 0.6802 | 0.6408 |
| 0.294 | 133.7143 | 234 | 0.6804 | 0.6408 |
| 0.294 | 134.8571 | 236 | 0.6228 | 0.7042 |
| 0.294 | 136.0 | 238 | 0.5849 | 0.7676 |
| 0.294 | 136.5714 | 239 | 0.5874 | 0.7676 |
| 0.3009 | 137.7143 | 241 | 0.6230 | 0.7042 |
| 0.3009 | 138.8571 | 243 | 0.6641 | 0.6549 |
| 0.3009 | 140.0 | 245 | 0.6435 | 0.6972 |
| 0.3009 | 140.5714 | 246 | 0.6134 | 0.7254 |
| 0.3009 | 141.7143 | 248 | 0.6063 | 0.7394 |
| 0.2873 | 142.8571 | 250 | 0.6347 | 0.6972 |
| 0.2873 | 144.0 | 252 | 0.6992 | 0.6690 |
| 0.2873 | 144.5714 | 253 | 0.7137 | 0.6408 |
| 0.2873 | 145.7143 | 255 | 0.6738 | 0.6690 |
| 0.2873 | 146.8571 | 257 | 0.6321 | 0.7113 |
| 0.2873 | 148.0 | 259 | 0.6135 | 0.7183 |
| 0.2821 | 148.5714 | 260 | 0.6195 | 0.7113 |
| 0.2821 | 149.7143 | 262 | 0.6544 | 0.6761 |
| 0.2821 | 150.8571 | 264 | 0.6464 | 0.6831 |
| 0.2821 | 152.0 | 266 | 0.6087 | 0.7324 |
| 0.2821 | 152.5714 | 267 | 0.6000 | 0.7394 |
| 0.2821 | 153.7143 | 269 | 0.6170 | 0.7113 |
| 0.3017 | 154.8571 | 271 | 0.6674 | 0.6831 |
| 0.3017 | 156.0 | 273 | 0.7137 | 0.6338 |
| 0.3017 | 156.5714 | 274 | 0.7014 | 0.6479 |
| 0.3017 | 157.7143 | 276 | 0.6091 | 0.7254 |
| 0.3017 | 158.8571 | 278 | 0.5626 | 0.7676 |
| 0.2857 | 160.0 | 280 | 0.5685 | 0.7606 |
| 0.2857 | 160.5714 | 281 | 0.5941 | 0.7113 |
| 0.2857 | 161.7143 | 283 | 0.6219 | 0.7113 |
| 0.2857 | 162.8571 | 285 | 0.6283 | 0.7113 |
| 0.2857 | 164.0 | 287 | 0.6314 | 0.7042 |
| 0.2857 | 164.5714 | 288 | 0.6369 | 0.6972 |
| 0.2819 | 165.7143 | 290 | 0.6446 | 0.6972 |
| 0.2819 | 166.8571 | 292 | 0.6541 | 0.6901 |
| 0.2819 | 168.0 | 294 | 0.6286 | 0.7183 |
| 0.2819 | 168.5714 | 295 | 0.6064 | 0.7183 |
| 0.2819 | 169.7143 | 297 | 0.5995 | 0.7254 |
| 0.2819 | 170.8571 | 299 | 0.6431 | 0.7254 |
| 0.2744 | 172.0 | 301 | 0.6797 | 0.6901 |
| 0.2744 | 172.5714 | 302 | 0.6716 | 0.6972 |
| 0.2744 | 173.7143 | 304 | 0.6510 | 0.7254 |
| 0.2744 | 174.8571 | 306 | 0.6362 | 0.7465 |
| 0.2744 | 176.0 | 308 | 0.6158 | 0.7606 |
| 0.2744 | 176.5714 | 309 | 0.6099 | 0.7676 |
| 0.2867 | 177.7143 | 311 | 0.6112 | 0.7535 |
| 0.2867 | 178.8571 | 313 | 0.6035 | 0.7465 |
| 0.2867 | 180.0 | 315 | 0.5816 | 0.7676 |
| 0.2867 | 180.5714 | 316 | 0.5818 | 0.7676 |
| 0.2867 | 181.7143 | 318 | 0.6078 | 0.7676 |
| 0.2883 | 182.8571 | 320 | 0.6083 | 0.7535 |
| 0.2883 | 184.0 | 322 | 0.5928 | 0.7465 |
| 0.2883 | 184.5714 | 323 | 0.5862 | 0.7535 |
| 0.2883 | 185.7143 | 325 | 0.5625 | 0.7676 |
| 0.2883 | 186.8571 | 327 | 0.5580 | 0.7817 |
| 0.2883 | 188.0 | 329 | 0.5945 | 0.7535 |
| 0.2852 | 188.5714 | 330 | 0.6321 | 0.6972 |
| 0.2852 | 189.7143 | 332 | 0.6650 | 0.6620 |
| 0.2852 | 190.8571 | 334 | 0.6612 | 0.6690 |
| 0.2852 | 192.0 | 336 | 0.6455 | 0.6761 |
| 0.2852 | 192.5714 | 337 | 0.6290 | 0.7113 |
| 0.2852 | 193.7143 | 339 | 0.6036 | 0.7394 |
| 0.2941 | 194.8571 | 341 | 0.5879 | 0.7535 |
| 0.2941 | 196.0 | 343 | 0.6135 | 0.7254 |
| 0.2941 | 196.5714 | 344 | 0.6295 | 0.7113 |
| 0.2941 | 197.7143 | 346 | 0.6445 | 0.6831 |
| 0.2941 | 198.8571 | 348 | 0.6591 | 0.6690 |
| 0.2692 | 200.0 | 350 | 0.6557 | 0.6831 |
| 0.2692 | 200.5714 | 351 | 0.6485 | 0.7113 |
| 0.2692 | 201.7143 | 353 | 0.6520 | 0.7183 |
| 0.2692 | 202.8571 | 355 | 0.6673 | 0.7113 |
| 0.2692 | 204.0 | 357 | 0.6814 | 0.7183 |
| 0.2692 | 204.5714 | 358 | 0.6694 | 0.7113 |
| 0.2666 | 205.7143 | 360 | 0.6350 | 0.7254 |
| 0.2666 | 206.8571 | 362 | 0.6091 | 0.7465 |
| 0.2666 | 208.0 | 364 | 0.6222 | 0.7394 |
| 0.2666 | 208.5714 | 365 | 0.6363 | 0.7394 |
| 0.2666 | 209.7143 | 367 | 0.6398 | 0.7394 |
| 0.2666 | 210.8571 | 369 | 0.6555 | 0.7254 |
| 0.2745 | 212.0 | 371 | 0.6555 | 0.7254 |
| 0.2745 | 212.5714 | 372 | 0.6467 | 0.7394 |
| 0.2745 | 213.7143 | 374 | 0.6216 | 0.7606 |
| 0.2745 | 214.8571 | 376 | 0.6066 | 0.7676 |
| 0.2745 | 216.0 | 378 | 0.6083 | 0.7606 |
| 0.2745 | 216.5714 | 379 | 0.6152 | 0.7535 |
| 0.2578 | 217.7143 | 381 | 0.6162 | 0.7535 |
| 0.2578 | 218.8571 | 383 | 0.6097 | 0.7535 |
| 0.2578 | 220.0 | 385 | 0.6003 | 0.7465 |
| 0.2578 | 220.5714 | 386 | 0.6064 | 0.7535 |
| 0.2578 | 221.7143 | 388 | 0.6182 | 0.7535 |
| 0.2637 | 222.8571 | 390 | 0.6465 | 0.7465 |
| 0.2637 | 224.0 | 392 | 0.6461 | 0.7535 |
| 0.2637 | 224.5714 | 393 | 0.6352 | 0.7535 |
| 0.2637 | 225.7143 | 395 | 0.6018 | 0.7606 |
| 0.2637 | 226.8571 | 397 | 0.5855 | 0.7746 |
| 0.2637 | 228.0 | 399 | 0.5916 | 0.7606 |
| 0.2696 | 228.5714 | 400 | 0.6031 | 0.7606 |
| 0.2696 | 229.7143 | 402 | 0.6308 | 0.7606 |
| 0.2696 | 230.8571 | 404 | 0.6435 | 0.7465 |
| 0.2696 | 232.0 | 406 | 0.6325 | 0.7465 |
| 0.2696 | 232.5714 | 407 | 0.6212 | 0.7535 |
| 0.2696 | 233.7143 | 409 | 0.5986 | 0.7535 |
| 0.2697 | 234.8571 | 411 | 0.5964 | 0.7465 |
| 0.2697 | 236.0 | 413 | 0.5950 | 0.7465 |
| 0.2697 | 236.5714 | 414 | 0.5986 | 0.7465 |
| 0.2697 | 237.7143 | 416 | 0.6066 | 0.7535 |
| 0.2697 | 238.8571 | 418 | 0.6035 | 0.7535 |
| 0.2659 | 240.0 | 420 | 0.6039 | 0.7535 |
| 0.2659 | 240.5714 | 421 | 0.6004 | 0.7535 |
| 0.2659 | 241.7143 | 423 | 0.6001 | 0.7535 |
| 0.2659 | 242.8571 | 425 | 0.5941 | 0.7465 |
| 0.2659 | 244.0 | 427 | 0.5942 | 0.7394 |
| 0.2659 | 244.5714 | 428 | 0.5972 | 0.7465 |
| 0.2529 | 245.7143 | 430 | 0.6077 | 0.7535 |
| 0.2529 | 246.8571 | 432 | 0.6173 | 0.7465 |
| 0.2529 | 248.0 | 434 | 0.6129 | 0.7606 |
| 0.2529 | 248.5714 | 435 | 0.6099 | 0.7606 |
| 0.2529 | 249.7143 | 437 | 0.6005 | 0.7606 |
| 0.2529 | 250.8571 | 439 | 0.5920 | 0.7606 |
| 0.261 | 252.0 | 441 | 0.5946 | 0.7606 |
| 0.261 | 252.5714 | 442 | 0.5992 | 0.7606 |
| 0.261 | 253.7143 | 444 | 0.6142 | 0.7606 |
| 0.261 | 254.8571 | 446 | 0.6289 | 0.7465 |
| 0.261 | 256.0 | 448 | 0.6316 | 0.7465 |
| 0.261 | 256.5714 | 449 | 0.6302 | 0.7535 |
| 0.2675 | 257.7143 | 451 | 0.6241 | 0.7535 |
| 0.2675 | 258.8571 | 453 | 0.6129 | 0.7535 |
| 0.2675 | 260.0 | 455 | 0.6066 | 0.7465 |
| 0.2675 | 260.5714 | 456 | 0.6061 | 0.7465 |
| 0.2675 | 261.7143 | 458 | 0.6098 | 0.7535 |
| 0.2737 | 262.8571 | 460 | 0.6172 | 0.7394 |
| 0.2737 | 264.0 | 462 | 0.6274 | 0.7324 |
| 0.2737 | 264.5714 | 463 | 0.6298 | 0.7324 |
| 0.2737 | 265.7143 | 465 | 0.6296 | 0.7324 |
| 0.2737 | 266.8571 | 467 | 0.6285 | 0.7324 |
| 0.2737 | 268.0 | 469 | 0.6265 | 0.7324 |
| 0.2504 | 268.5714 | 470 | 0.6274 | 0.7465 |
| 0.2504 | 269.7143 | 472 | 0.6286 | 0.7394 |
| 0.2504 | 270.8571 | 474 | 0.6236 | 0.7465 |
| 0.2504 | 272.0 | 476 | 0.6178 | 0.7465 |
| 0.2504 | 272.5714 | 477 | 0.6164 | 0.7465 |
| 0.2504 | 273.7143 | 479 | 0.6161 | 0.7465 |
| 0.2539 | 274.8571 | 481 | 0.6193 | 0.7465 |
| 0.2539 | 276.0 | 483 | 0.6236 | 0.7394 |
| 0.2539 | 276.5714 | 484 | 0.6258 | 0.7394 |
| 0.2539 | 277.7143 | 486 | 0.6308 | 0.7394 |
| 0.2539 | 278.8571 | 488 | 0.6349 | 0.7394 |
| 0.2508 | 280.0 | 490 | 0.6352 | 0.7394 |
| 0.2508 | 280.5714 | 491 | 0.6346 | 0.7394 |
| 0.2508 | 281.7143 | 493 | 0.6336 | 0.7394 |
| 0.2508 | 282.8571 | 495 | 0.6331 | 0.7394 |
| 0.2508 | 284.0 | 497 | 0.6324 | 0.7394 |
| 0.2508 | 284.5714 | 498 | 0.6319 | 0.7394 |
| 0.2393 | 285.7143 | 500 | 0.6316 | 0.7394 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
lesso15/2e8bc4fa-d6cb-43a8-a829-70d21b0cac3d | lesso15 | 2025-01-24T17:54:09Z | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T16:39:16Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2e8bc4fa-d6cb-43a8-a829-70d21b0cac3d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Instruct-2407
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 661c83ddd68a38d2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/661c83ddd68a38d2_train_data.json
type:
field_instruction: doc
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso15/2e8bc4fa-d6cb-43a8-a829-70d21b0cac3d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/661c83ddd68a38d2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2249679b-6e87-4f27-bd19-24d76ede50f1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2249679b-6e87-4f27-bd19-24d76ede50f1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2e8bc4fa-d6cb-43a8-a829-70d21b0cac3d
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.2740 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
0x1202/2aa3e0ca-06a8-4584-8e8a-9e33fe39fafe | 0x1202 | 2025-01-24T17:51:34Z | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Capybara-7B-V1.9",
"base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9",
"license:mit",
"region:us"
] | null | 2025-01-24T17:23:24Z | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Capybara-7B-V1.9
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2aa3e0ca-06a8-4584-8e8a-9e33fe39fafe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Capybara-7B-V1.9
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- a22157a53c308254_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a22157a53c308254_train_data.json
type:
field_input: paraphrase_types
field_instruction: sentence1
field_output: sentence2
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: 0x1202/2aa3e0ca-06a8-4584-8e8a-9e33fe39fafe
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/a22157a53c308254_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 50bc6a97-3376-4f9e-9686-b3782db5faa9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 50bc6a97-3376-4f9e-9686-b3782db5faa9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2aa3e0ca-06a8-4584-8e8a-9e33fe39fafe
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9736
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.5572 | 0.0059 | 1 | 1.5554 |
| 3.3254 | 0.2954 | 50 | 1.0482 |
| 3.1593 | 0.5908 | 100 | 1.0086 |
| 3.2459 | 0.8863 | 150 | 0.9722 |
| 2.4835 | 1.1817 | 200 | 0.9736 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
havinash-ai/88e7a543-a8de-4394-8699-5e369ebd5369 | havinash-ai | 2025-01-24T17:48:26Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Hermes-3-Llama-3.1-8B",
"base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B",
"region:us"
] | null | 2025-01-24T17:46:21Z | ---
library_name: peft
base_model: unsloth/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 88e7a543-a8de-4394-8699-5e369ebd5369
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7498d1d10e472fec_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7498d1d10e472fec_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/88e7a543-a8de-4394-8699-5e369ebd5369
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/7498d1d10e472fec_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 145b78ab-4197-45e4-b32d-0f5f5e9a56ab
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 145b78ab-4197-45e4-b32d-0f5f5e9a56ab
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 88e7a543-a8de-4394-8699-5e369ebd5369
This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0009 | 1 | nan |
| 0.0 | 0.0026 | 3 | nan |
| 0.0 | 0.0052 | 6 | nan |
| 0.0 | 0.0077 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
IAmSkyDra/BARTBana_Translation_Swap_v0 | IAmSkyDra | 2025-01-24T17:48:12Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:IAmSkyDra/BARTBana_v4",
"base_model:finetune:IAmSkyDra/BARTBana_v4",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-01-24T03:50:58Z | ---
base_model: IAmSkyDra/BARTBana_v4
library_name: transformers
license: mit
metrics:
- sacrebleu
tags:
- generated_from_trainer
model-index:
- name: BARTBana_Translation_Swap_v0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BARTBana_Translation_Swap_v0
This model is a fine-tuned version of [IAmSkyDra/BARTBana_v4](https://huggingface.co/IAmSkyDra/BARTBana_v4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0094
- Sacrebleu: 21.0811
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 100
- eval_batch_size: 100
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|
| 0.0167 | 1.0 | 21086 | 0.0149 | 20.8485 |
| 0.0148 | 2.0 | 42172 | 0.0109 | 20.9983 |
| 0.0103 | 3.0 | 63258 | 0.0094 | 21.0811 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
tuanna08go/dd72e981-18a9-41f4-92f4-5064706d8d21 | tuanna08go | 2025-01-24T17:45:19Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:adapter:sethuiyer/Medichat-Llama3-8B",
"license:other",
"region:us"
] | null | 2025-01-24T17:29:22Z | ---
library_name: peft
license: other
base_model: sethuiyer/Medichat-Llama3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dd72e981-18a9-41f4-92f4-5064706d8d21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 26ca62faf35f3871_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/26ca62faf35f3871_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/dd72e981-18a9-41f4-92f4-5064706d8d21
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/26ca62faf35f3871_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5011dd9e-451a-4f4a-b556-44c0cfbcd585
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5011dd9e-451a-4f4a-b556-44c0cfbcd585
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dd72e981-18a9-41f4-92f4-5064706d8d21
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 3.3160 |
| 3.1726 | 0.0060 | 10 | 2.8142 |
| 2.3084 | 0.0120 | 20 | 2.3126 |
| 2.3688 | 0.0180 | 30 | 2.2768 |
| 1.9317 | 0.0240 | 40 | 2.2329 |
| 1.9194 | 0.0299 | 50 | 2.2274 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
prxy5608/38461bf2-a926-4639-bb3e-ad2f99eff7d6 | prxy5608 | 2025-01-24T17:43:26Z | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | 2025-01-24T15:37:32Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 38461bf2-a926-4639-bb3e-ad2f99eff7d6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- e64308cf3a0ae5b0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e64308cf3a0ae5b0_train_data.json
type:
field_input: src_doc_title
field_instruction: src_text
field_output: trg_text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5608/38461bf2-a926-4639-bb3e-ad2f99eff7d6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/e64308cf3a0ae5b0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7176bde4-6840-4177-a6e5-4ae7849f205d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7176bde4-6840-4177-a6e5-4ae7849f205d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 38461bf2-a926-4639-bb3e-ad2f99eff7d6
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.7055 | 0.0001 | 1 | 3.2970 |
| 8.7409 | 0.0031 | 50 | 1.7901 |
| 6.4935 | 0.0062 | 100 | 1.5863 |
| 7.9466 | 0.0093 | 150 | 1.4815 |
| 7.6805 | 0.0123 | 200 | 1.4606 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
arcars/reactor-mk1-I1 | arcars | 2025-01-24T17:42:40Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-24T16:59:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
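Until the author adds an official snippet here, a generic 🤗 Transformers call along the following lines should work, assuming the repository is a standard causal language model (its `mixtral` / `text-generation` tags suggest it is) and that no special prompt format is required:

```python
# Generic sketch, not an official example: assumes a standard causal LM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcars/reactor-mk1-I1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map="auto" needs `accelerate`
)

inputs = tokenizer("Hello! What can you do?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```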
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/waraml_-_ViLinh-3B-4bits | RichardErkhov | 2025-01-24T17:42:32Z | 5 | 0 | null | [
"safetensors",
"qwen2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T17:39:35Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
ViLinh-3B - bnb 4bits
- Model creator: https://huggingface.co/waraml/
- Original model: https://huggingface.co/waraml/ViLinh-3B/
Original model description:
---
library_name: transformers
license: other
base_model: qnguyen3/vylinh-qwen-3b-merged
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: vylinh-dpo-v4
results: []
---
|
nttx/9ed5806e-9a59-4228-a359-9f40a7f9445a | nttx | 2025-01-24T17:39:04Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | 2025-01-24T17:08:06Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9ed5806e-9a59-4228-a359-9f40a7f9445a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 14e7f20aa0ab15bf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/14e7f20aa0ab15bf_train_data.json
type:
field_input: input
field_instruction: system
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/9ed5806e-9a59-4228-a359-9f40a7f9445a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/14e7f20aa0ab15bf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5c66e9d0-7fa2-4ea5-83d0-56a779006d22
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5c66e9d0-7fa2-4ea5-83d0-56a779006d22
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9ed5806e-9a59-4228-a359-9f40a7f9445a
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.5411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.0985 | 0.0005 | 1 | 9.1647 |
| 10.3675 | 0.0257 | 50 | 11.0540 |
| 9.8685 | 0.0514 | 100 | 10.3661 |
| 10.3118 | 0.0771 | 150 | 10.5581 |
| 10.339 | 0.1028 | 200 | 10.5411 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
datlaaaaaaa/40db0833-088d-4333-b8a7-d9c429000a03 | datlaaaaaaa | 2025-01-24T17:38:27Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T16:38:36Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 40db0833-088d-4333-b8a7-d9c429000a03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Instruct-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 661c83ddd68a38d2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/661c83ddd68a38d2_train_data.json
type:
field_instruction: doc
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/40db0833-088d-4333-b8a7-d9c429000a03
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/661c83ddd68a38d2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2249679b-6e87-4f27-bd19-24d76ede50f1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2249679b-6e87-4f27-bd19-24d76ede50f1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 40db0833-088d-4333-b8a7-d9c429000a03
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.7979 | 0.2740 | 200 | 0.8337 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MaziyarPanahi/xLAM-7b-fc-r-GGUF | MaziyarPanahi | 2025-01-24T17:37:03Z | 199 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:Salesforce/xLAM-7b-fc-r",
"base_model:quantized:Salesforce/xLAM-7b-fc-r",
"region:us",
"conversational"
] | text-generation | 2025-01-24T17:18:18Z | ---
base_model: Salesforce/xLAM-7b-fc-r
inference: false
model_creator: Salesforce
model_name: xLAM-7b-fc-r-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---
# [MaziyarPanahi/xLAM-7b-fc-r-GGUF](https://huggingface.co/MaziyarPanahi/xLAM-7b-fc-r-GGUF)
- Model creator: [Salesforce](https://huggingface.co/Salesforce)
- Original model: [Salesforce/xLAM-7b-fc-r](https://huggingface.co/Salesforce/xLAM-7b-fc-r)
## Description
[MaziyarPanahi/xLAM-7b-fc-r-GGUF](https://huggingface.co/MaziyarPanahi/xLAM-7b-fc-r-GGUF) contains GGUF format model files for [Salesforce/xLAM-7b-fc-r](https://huggingface.co/Salesforce/xLAM-7b-fc-r).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
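For a quick local test of these GGUF files, `llama-cpp-python` is usually the shortest path. The snippet below is only a sketch: the quant filename is an assumption based on this repo's usual naming scheme, so check the file listing and substitute the quant you actually want.

```python
# Sketch only: the GGUF filename below is assumed, not verified against the repo listing.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/xLAM-7b-fc-r-GGUF",
    filename="xLAM-7b-fc-r.Q4_K_M.gguf",  # assumed quant name
)

llm = Llama(model_path=model_path, n_ctx=4096)
result = llm("List three use cases for a function-calling model.", max_tokens=128)
print(result["choices"][0]["text"])
```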
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
8friends/martini | 8friends | 2025-01-24T17:36:45Z | 31 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-24T16:46:47Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MartiniFLUX
---
# Martini
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MartiniFLUX` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('8friends/martini', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Romain-XV/8d3367e3-cfd2-44de-a49c-9adebd44dadd | Romain-XV | 2025-01-24T17:36:15Z | 7 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-PhiForCausalLM",
"base_model:adapter:echarlaix/tiny-random-PhiForCausalLM",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T17:33:52Z | ---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-PhiForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8d3367e3-cfd2-44de-a49c-9adebd44dadd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-PhiForCausalLM
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6be8b7f2d665f721_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6be8b7f2d665f721_train_data.json
type:
field_input: prompt_name
field_instruction: prompt
field_output: completion
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 5
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/8d3367e3-cfd2-44de-a49c-9adebd44dadd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: true
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_steps: 207
micro_batch_size: 4
mlflow_experiment_name: /tmp/6be8b7f2d665f721_train_data.json
model_type: AutoModelForCausalLM
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bc04a606-258f-4bff-8fbe-78a2b190e142
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bc04a606-258f-4bff-8fbe-78a2b190e142
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8d3367e3-cfd2-44de-a49c-9adebd44dadd
This model is a fine-tuned version of [echarlaix/tiny-random-PhiForCausalLM](https://huggingface.co/echarlaix/tiny-random-PhiForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 207
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.9289 | 0.0005 | 1 | 6.9372 |
| 6.86 | 0.0234 | 50 | 6.8584 |
| 6.8303 | 0.0469 | 100 | 6.8300 |
| 6.8214 | 0.0703 | 150 | 6.8157 |
| 6.8078 | 0.0937 | 200 | 6.8135 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ehristoforu/tmoe-v2 | ehristoforu | 2025-01-24T17:29:36Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_moe",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-24T17:22:21Z | ---
library_name: transformers
license: apache-2.0
---
```yaml
base_model: Qwen/Qwen2.5-1.5B-Instruct
gate_mode: cheap_embed
architecture: qwen
experts_per_token: 4
dtype: bfloat16
experts:
- source_model: Qwen/Qwen2.5-1.5B-Instruct
positive_prompts: ["chat assistant"]
- source_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
positive_prompts: ["code assistant"]
- source_model: Qwen/Qwen2.5-Math-1.5B-Instruct
positive_prompts: ["math assistant"]
- source_model: huihui-ai/Qwen2.5-1.5B-Instruct-abliterated
positive_prompts: ["uncensored assistant"]
- source_model: Rombo-Org/Rombo-LLM-V2.5-Qwen-1.5b
positive_prompts: ["review assistant"]
- source_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
positive_prompts: ["logical assistant"]
- source_model: Vikhrmodels/Vikhr-Qwen-2.5-1.5B-Instruct
positive_prompts: ["writing assistant"]
- source_model: RefalMachine/RuadaptQwen2.5-1.5B-instruct
positive_prompts: ["text editing assistant"]
shared_experts:
- source_model: Qwen/Qwen2.5-1.5B-Instruct
positive_prompts: ["chat assistant"]
residual_scale: 0.1
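# The recipe above reads as a mergekit MoE ("qwen" architecture) configuration.
# Under that assumption, the merge would typically be built with mergekit's MoE entry
# point, e.g. `mergekit-moe tmoe-v2.yml ./tmoe-v2`; the command name, config filename
# and output path are assumptions for illustration, not taken from this card.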
``` |
lhong4759/b0027da3-2f57-4fec-82a1-491799a29dcc | lhong4759 | 2025-01-24T17:28:40Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"base_model:adapter:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T17:00:46Z | ---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b0027da3-2f57-4fec-82a1-491799a29dcc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7349255a0722d332_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7349255a0722d332_train_data.json
type:
field_input: author
field_instruction: title
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lhong4759/b0027da3-2f57-4fec-82a1-491799a29dcc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/7349255a0722d332_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 13fce366-6808-46f8-8029-0af5c3620a55
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 13fce366-6808-46f8-8029-0af5c3620a55
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b0027da3-2f57-4fec-82a1-491799a29dcc
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3166
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.165 | 0.1703 | 200 | 2.3166 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
duyphu/6e04d534-2b9c-40a7-9892-fbd2a9d5ff8b | duyphu | 2025-01-24T17:28:31Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | 2025-01-24T17:08:27Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6e04d534-2b9c-40a7-9892-fbd2a9d5ff8b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 14e7f20aa0ab15bf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/14e7f20aa0ab15bf_train_data.json
type:
field_input: input
field_instruction: system
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/6e04d534-2b9c-40a7-9892-fbd2a9d5ff8b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/14e7f20aa0ab15bf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5c66e9d0-7fa2-4ea5-83d0-56a779006d22
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5c66e9d0-7fa2-4ea5-83d0-56a779006d22
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6e04d534-2b9c-40a7-9892-fbd2a9d5ff8b
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0013 | 10 | nan |
| 0.0 | 0.0026 | 20 | nan |
| 0.0 | 0.0039 | 30 | nan |
| 0.0 | 0.0051 | 40 | nan |
| 0.0 | 0.0064 | 50 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hackint0sh/phi-3-clinical | hackint0sh | 2025-01-24T17:28:29Z | 264 | 1 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-06T13:26:33Z | # 🤖 Phi-3-Clinical
Welcome to the repository for Phi-3-Clinical, a fine-tuned model designed to empower medical researchers and developers in the Bio-Pharma domain. It was trained on clinical trial datasets from the U.S. government to deliver high-quality insights and to support research and development in healthcare and pharmaceutical innovation. The model is being actively updated and improved as part of my ongoing research and work in Retrieval-Augmented Generation (RAG).
---
## 🚀 Key Features
- **Fine-Tuned on**: Clinical trial datasets from the U.S. government
- **Primary Use Case(s)**: [Summarization, Question Answering, etc.]
- **Updates in Progress**:
- Optimizing for better accuracy with RAG workflows.
- Incorporating new datasets and training strategies.
- Fine-tuning with community feedback.
---
## 📅 What's Next?
I am actively working on:
1. Integrating this model into a RAG pipeline for enhanced retrieval-augmented tasks.
2. Regular updates to improve performance and reduce inference time.
3. Expanding support for [languages/domains/etc.].
Stay tuned for updates and improvements in the coming weeks!
---
## 🛠️ How to Use
Here's a quick example of how you can use this model:
```python
from transformers import pipeline
# Load the model
model = pipeline("text-generation", model="hackint0sh/phi-3-clinical")  # this checkpoint is a causal LM, so text-generation is the matching task
# Example usage
input_text = "Your input here"
output = model(input_text)
print(output)
```
Replace "text-generation" with another supported pipeline task (e.g., "question-answering", "summarization") if it better fits your use case; for clinical-trial-style formatting, keep "text-generation" and steer the output through the prompt.
## 🙋♂️ Need Help?
I'm here to help! If you have any questions, suggestions, or encounter any issues while using the model, feel free to:
- Open an Issue on this repository.
- DM me directly on Hugging Face.
I'm always happy to collaborate and improve this model further based on your feedback. 😊
## 🌟 Contributing
Contributions are welcome! If you have ideas for improvements or want to contribute, feel free to fork this repository and open a pull request.
## 📝 License
This model is released under the MIT license. See the LICENSE file for more details.
Thank you for your interest in Phi-3-Clinical! Your support and feedback help make this model better for everyone. ❤️
|
visdata/cook16 | visdata | 2025-01-24T17:24:53Z | 29 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-24T17:06:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
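No snippet is provided yet, so treat the following as an unverified sketch: it assumes the checkpoint is a standard Transformers causal LM (consistent with the `llama` / `text-generation` tags) with no special prompt format.

```python
# Unverified sketch: assumes a standard Transformers causal LM.
from transformers import pipeline

generator = pipeline("text-generation", model="visdata/cook16")
print(generator("The quick brown fox", max_new_tokens=32)[0]["generated_text"])
```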
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thangla01/f4d5a30d-119b-4d05-a18b-d90e9d562e4d | thangla01 | 2025-01-24T17:24:07Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna-open-llama-3b-v2",
"base_model:adapter:heegyu/WizardVicuna-open-llama-3b-v2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T15:24:49Z | ---
library_name: peft
license: apache-2.0
base_model: heegyu/WizardVicuna-open-llama-3b-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f4d5a30d-119b-4d05-a18b-d90e9d562e4d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna-open-llama-3b-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 575c32387ecb28f7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/575c32387ecb28f7_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thangla01/f4d5a30d-119b-4d05-a18b-d90e9d562e4d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/575c32387ecb28f7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 077294c5-cf74-4889-92d2-814219f70be0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 077294c5-cf74-4889-92d2-814219f70be0
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f4d5a30d-119b-4d05-a18b-d90e9d562e4d
This model is a fine-tuned version of [heegyu/WizardVicuna-open-llama-3b-v2](https://huggingface.co/heegyu/WizardVicuna-open-llama-3b-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1301 | 0.0017 | 200 | 1.0137 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Muadil/Llama-3.2-1B-Instruct_sum_PPO_Skywork_1k_1_3ep | Muadil | 2025-01-24T17:24:01Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-24T17:22:49Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
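No usage snippet is supplied above, so the following is a minimal sketch, assuming the repository loads with the standard 🤗 Transformers causal-LM API and that the tokenizer ships a chat template (the tags list `llama`, `text-generation`, and `conversational`); the prompt and generation settings are purely illustrative.

```python
# Minimal sketch: load the checkpoint with the standard transformers API.
# Assumes `transformers` (and `accelerate` for device_map="auto") are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Muadil/Llama-3.2-1B-Instruct_sum_PPO_Skywork_1k_1_3ep"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Illustrative chat-style prompt; the actual intended prompting format is not
# documented in this card.
messages = [{"role": "user", "content": "Summarize: The quick brown fox jumps over the lazy dog."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```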
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/bruphin-lambda-i1-GGUF | mradermacher | 2025-01-24T17:23:36Z | 359 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:nbeerbower/bruphin-lambda",
"base_model:quantized:nbeerbower/bruphin-lambda",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-24T14:36:40Z | ---
base_model: nbeerbower/bruphin-lambda
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nbeerbower/bruphin-lambda
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/bruphin-lambda-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
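As a concrete illustration, the sketch below downloads one of the quantized files listed in the table further down and runs it with `llama-cpp-python`; the chosen file (`i1-Q4_K_M`, the "fast, recommended" entry) and the context/generation settings are only examples, and any other llama.cpp-based runtime can load the same file.

```python
# Minimal sketch: fetch one quant from this repo and run it locally.
# Assumes `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/bruphin-lambda-i1-GGUF",
    filename="bruphin-lambda.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is an example value
out = llm("Write a one-sentence summary of what a GGUF file is.", max_tokens=64)
print(out["choices"][0]["text"])
```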
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/bruphin-lambda-i1-GGUF/resolve/main/bruphin-lambda.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
sergioalves/6b63f873-aa24-49b9-b345-1c0709510c83 | sergioalves | 2025-01-24T17:23:15Z | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-medium-4k-instruct",
"base_model:adapter:unsloth/Phi-3-medium-4k-instruct",
"license:mit",
"region:us"
] | null | 2025-01-24T17:07:32Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-medium-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6b63f873-aa24-49b9-b345-1c0709510c83
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3-medium-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ebc79fbfad09ef54_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ebc79fbfad09ef54_train_data.json
type:
field_instruction: message_1
field_output: message_2
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: sergioalves/6b63f873-aa24-49b9-b345-1c0709510c83
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/ebc79fbfad09ef54_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 714dd9ad-7ff4-43d7-ab89-ab528ee5d7c9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 714dd9ad-7ff4-43d7-ab89-ab528ee5d7c9
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 6b63f873-aa24-49b9-b345-1c0709510c83
This model is a fine-tuned version of [unsloth/Phi-3-medium-4k-instruct](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | nan |
| 0.0 | 0.0043 | 5 | nan |
| 0.0 | 0.0085 | 10 | nan |
| 0.0 | 0.0128 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik1987/3436eb18-86f9-4e2c-b232-019f42f1c8d0 | dimasik1987 | 2025-01-24T17:23:11Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T17:20:29Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3436eb18-86f9-4e2c-b232-019f42f1c8d0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ae575f57377b68c1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ae575f57377b68c1_train_data.json
type:
field_instruction: url
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: dimasik1987/3436eb18-86f9-4e2c-b232-019f42f1c8d0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/ae575f57377b68c1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ee5b7b78-ea0d-47d8-abcc-509e81fc5c60
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ee5b7b78-ea0d-47d8-abcc-509e81fc5c60
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 3436eb18-86f9-4e2c-b232-019f42f1c8d0
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0093 | 1 | 3.4304 |
| 3.0488 | 0.0467 | 5 | 3.2876 |
| 3.1991 | 0.0935 | 10 | 3.0793 |
| 3.175 | 0.1402 | 15 | 2.9395 |
| 2.9589 | 0.1869 | 20 | 2.8401 |
| 2.911 | 0.2336 | 25 | 2.7981 |
| 2.5735 | 0.2804 | 30 | 2.7879 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
0x1202/ecb8379e-81b0-4bd5-a690-c05a5108b46a | 0x1202 | 2025-01-24T17:21:31Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Hermes-3-Llama-3.1-8B",
"base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B",
"region:us"
] | null | 2025-01-24T16:47:38Z | ---
library_name: peft
base_model: unsloth/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ecb8379e-81b0-4bd5-a690-c05a5108b46a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Hermes-3-Llama-3.1-8B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 7498d1d10e472fec_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7498d1d10e472fec_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: 0x1202/ecb8379e-81b0-4bd5-a690-c05a5108b46a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/7498d1d10e472fec_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 145b78ab-4197-45e4-b32d-0f5f5e9a56ab
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 145b78ab-4197-45e4-b32d-0f5f5e9a56ab
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ecb8379e-81b0-4bd5-a690-c05a5108b46a
This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.124 | 0.0034 | 1 | 2.3156 |
| 1.4537 | 0.1718 | 50 | 1.6402 |
| 1.1582 | 0.3436 | 100 | 1.5493 |
| 1.2614 | 0.5155 | 150 | 1.4730 |
| 1.3643 | 0.6873 | 200 | 1.4644 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
JacksonBrune/d3ecb40b-4ec4-48e8-94b2-c49ee500b18f | JacksonBrune | 2025-01-24T17:21:21Z | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Capybara-7B-V1.9",
"base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9",
"license:mit",
"region:us"
] | null | 2025-01-24T17:20:12Z | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Capybara-7B-V1.9
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d3ecb40b-4ec4-48e8-94b2-c49ee500b18f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Capybara-7B-V1.9
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a22157a53c308254_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a22157a53c308254_train_data.json
type:
field_input: paraphrase_types
field_instruction: sentence1
field_output: sentence2
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/d3ecb40b-4ec4-48e8-94b2-c49ee500b18f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/a22157a53c308254_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 50bc6a97-3376-4f9e-9686-b3782db5faa9
wandb_project: birthdya-sn56-18-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 50bc6a97-3376-4f9e-9686-b3782db5faa9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d3ecb40b-4ec4-48e8-94b2-c49ee500b18f
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0015 | 1 | nan |
| 0.0 | 0.0044 | 3 | nan |
| 0.0 | 0.0089 | 6 | nan |
| 0.0 | 0.0133 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
masint/tiny-random-llama | masint | 2025-01-24T17:20:41Z | 3,681 | 0 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2024-11-13T21:44:32Z | ---
license: apache-2.0
---
|
kk-aivio/63cd7910-a4ac-4ac2-b733-dd3f6a82b707 | kk-aivio | 2025-01-24T17:20:23Z | 9 | 0 | peft | [
"peft",
"safetensors",
"mixtral",
"axolotl",
"generated_from_trainer",
"base_model:Eurdem/Defne_llama3_2x8B",
"base_model:adapter:Eurdem/Defne_llama3_2x8B",
"license:llama3",
"region:us"
] | null | 2025-01-24T17:15:08Z | ---
library_name: peft
license: llama3
base_model: Eurdem/Defne_llama3_2x8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 63cd7910-a4ac-4ac2-b733-dd3f6a82b707
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Eurdem/Defne_llama3_2x8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8fba98d9ee650ad7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8fba98d9ee650ad7_train_data.json
type:
field_instruction: original
field_output: generation
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/63cd7910-a4ac-4ac2-b733-dd3f6a82b707
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/8fba98d9ee650ad7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eb9ecb18-093a-4fda-8179-596682f7f582
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: eb9ecb18-093a-4fda-8179-596682f7f582
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 63cd7910-a4ac-4ac2-b733-dd3f6a82b707
This model is a fine-tuned version of [Eurdem/Defne_llama3_2x8B](https://huggingface.co/Eurdem/Defne_llama3_2x8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0003 | 1 | nan |
| 0.0 | 0.0010 | 3 | nan |
| 0.0 | 0.0020 | 6 | nan |
| 0.0 | 0.0030 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhung03/8ce4202e-77f6-4672-9d8c-472b53c319d8 | nhung03 | 2025-01-24T17:19:36Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"base_model:adapter:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T17:00:34Z | ---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8ce4202e-77f6-4672-9d8c-472b53c319d8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7349255a0722d332_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7349255a0722d332_train_data.json
type:
field_input: author
field_instruction: title
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/8ce4202e-77f6-4672-9d8c-472b53c319d8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/7349255a0722d332_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 13fce366-6808-46f8-8029-0af5c3620a55
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 13fce366-6808-46f8-8029-0af5c3620a55
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8ce4202e-77f6-4672-9d8c-472b53c319d8
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.156 | 0.1703 | 200 | 2.3156 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aleegis12/9f3f5a89-654e-4064-a6c3-0dab1e098823 | aleegis12 | 2025-01-24T17:18:23Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T14:11:08Z | ---
library_name: peft
license: apache-2.0
base_model: openlm-research/open_llama_3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9f3f5a89-654e-4064-a6c3-0dab1e098823
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: openlm-research/open_llama_3b
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- aa8537f7fabb44d3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aa8537f7fabb44d3_train_data.json
type:
field_instruction: en
field_output: hi
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/9f3f5a89-654e-4064-a6c3-0dab1e098823
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/aa8537f7fabb44d3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 64219759-3ade-4844-b427-6bd1bf891a27
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 64219759-3ade-4844-b427-6bd1bf891a27
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9f3f5a89-654e-4064-a6c3-0dab1e098823
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4842 | 0.0000 | 1 | 0.6386 |
| 0.7284 | 0.0020 | 50 | 0.4892 |
| 0.5061 | 0.0039 | 100 | 0.3692 |
| 0.4737 | 0.0059 | 150 | 0.3279 |
| 0.5555 | 0.0079 | 200 | 0.3199 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kostiantynk-out/2fef6e97-a25b-4623-aa01-4851a1b43758 | kostiantynk-out | 2025-01-24T17:17:28Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T16:41:17Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2fef6e97-a25b-4623-aa01-4851a1b43758
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f806d2d7fa7067cd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f806d2d7fa7067cd_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk-out/2fef6e97-a25b-4623-aa01-4851a1b43758
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/f806d2d7fa7067cd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8b585115-23cd-4a56-9fba-b57318c01a85
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8b585115-23cd-4a56-9fba-b57318c01a85
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2fef6e97-a25b-4623-aa01-4851a1b43758
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 3 | nan |
| 0.0 | 0.0001 | 6 | nan |
| 0.0 | 0.0002 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhung01/4db533a6-45ba-4b24-8517-f3834b544f68 | nhung01 | 2025-01-24T17:16:09Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna-open-llama-3b-v2",
"base_model:adapter:heegyu/WizardVicuna-open-llama-3b-v2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T15:26:23Z | ---
library_name: peft
license: apache-2.0
base_model: heegyu/WizardVicuna-open-llama-3b-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4db533a6-45ba-4b24-8517-f3834b544f68
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna-open-llama-3b-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 575c32387ecb28f7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/575c32387ecb28f7_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/4db533a6-45ba-4b24-8517-f3834b544f68
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/575c32387ecb28f7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 077294c5-cf74-4889-92d2-814219f70be0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 077294c5-cf74-4889-92d2-814219f70be0
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4db533a6-45ba-4b24-8517-f3834b544f68
This model is a fine-tuned version of [heegyu/WizardVicuna-open-llama-3b-v2](https://huggingface.co/heegyu/WizardVicuna-open-llama-3b-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1263 | 0.0017 | 200 | 1.0132 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
treasure4l/Llama3.2_3B_DPO | treasure4l | 2025-01-24T17:14:42Z | 10 | 0 | peft | [
"peft",
"safetensors",
"dataset:gz25/JGTV_Pref_DS",
"arxiv:1910.09700",
"region:us"
] | null | 2025-01-24T03:27:09Z | ---
base_model: unsloth/llama-3.2-3b-bnb-4bit
library_name: peft
datasets:
- gz25/JGTV_Pref_DS
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
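No snippet is supplied here, so below is a minimal sketch under the assumption that this repository is a PEFT/LoRA adapter for the base model named in the metadata (`unsloth/llama-3.2-3b-bnb-4bit`); loading that pre-quantized base requires `bitsandbytes`, and the prompt is illustrative only.

```python
# Minimal sketch: attach this PEFT adapter to its 4-bit base model.
# Assumes `transformers`, `peft`, `accelerate`, and `bitsandbytes` are installed.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3.2-3b-bnb-4bit"      # base model from the card metadata
adapter_id = "treasure4l/Llama3.2_3B_DPO"      # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the adapter weights

prompt = "Hello!"  # illustrative prompt only
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```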
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
lesso10/f6295c50-f971-420e-b187-e9b89f697b60 | lesso10 | 2025-01-24T17:14:29Z | 5 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"base_model:adapter:EleutherAI/pythia-1b",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T15:46:50Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f6295c50-f971-420e-b187-e9b89f697b60
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-1b
bf16: true
chat_template: llama3
datasets:
- data_files:
- 959285475cc53225_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/959285475cc53225_train_data.json
type:
field_input: post
field_instruction: title
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso10/f6295c50-f971-420e-b187-e9b89f697b60
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/959285475cc53225_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ad0763da-bb5f-41ee-89a4-20beb5a03fb3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ad0763da-bb5f-41ee-89a4-20beb5a03fb3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f6295c50-f971-420e-b187-e9b89f697b60
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 13.1418 | 0.0001 | 1 | 3.0369 |
| 12.1455 | 0.0003 | 5 | 2.9812 |
| 11.799 | 0.0007 | 10 | 2.7262 |
| 9.7705 | 0.0010 | 15 | 2.6347 |
| 11.0196 | 0.0013 | 20 | 2.5897 |
| 10.6078 | 0.0016 | 25 | 2.5797 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
cvoffer/26784c63-3b53-402b-8736-e7956fd384d4 | cvoffer | 2025-01-24T17:13:55Z | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T17:05:55Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 26784c63-3b53-402b-8736-e7956fd384d4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 615cebde847ce8a5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/615cebde847ce8a5_train_data.json
type:
field_input: system_prompt
field_instruction: question
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: cvoffer/26784c63-3b53-402b-8736-e7956fd384d4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/615cebde847ce8a5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3260e223-b692-495b-a4ed-8da11180f21b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3260e223-b692-495b-a4ed-8da11180f21b
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 26784c63-3b53-402b-8736-e7956fd384d4
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 5 | nan |
| 0.0 | 0.0008 | 10 | nan |
| 0.0 | 0.0012 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dim-eleftheriou/Mistral-7B-Instruct-v0.3-mitre-v0.1-GGUF-Q4_k_m | dim-eleftheriou | 2025-01-24T17:13:16Z | 35 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-24T17:12:21Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dim-eleftheriou
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Colezwhy/dreambooth | Colezwhy | 2025-01-24T17:11:49Z | 31 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"lora",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-01-23T10:49:33Z | ---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of sks dog
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- lora
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - Colezwhy/dreambooth
This is a dreambooth model derived from stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
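Until the official snippet is added, here is a minimal sketch of loading this DreamBooth checkpoint with diffusers; it assumes the repository hosts a full Stable Diffusion pipeline (as the pipeline tag indicates), and the dtype, device, and prompt are illustrative choices.
```python
# Minimal sketch (not the official example): load the DreamBooth weights and
# render the learned "sks dog" subject. Assumes the repo contains a complete
# Stable Diffusion pipeline; adjust dtype/device for your hardware.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Colezwhy/dreambooth", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```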
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
taopanda-4/57c3a30c-e883-4216-a074-89f10c859369 | taopanda-4 | 2025-01-24T17:09:57Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-135M",
"base_model:adapter:unsloth/SmolLM-135M",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T16:39:59Z | ---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: unsloth/SmolLM-135M
model-index:
- name: 57c3a30c-e883-4216-a074-89f10c859369
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-135M
bf16: auto
datasets:
- data_files:
- 12918daafc7e29ad_train_data.json
ds_type: json
format: custom
path: 12918daafc7e29ad_train_data.json
type:
field: null
field_input: null
field_instruction: premise
field_output: entailment
field_system: null
format: null
no_input_format: null
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_sample_packing: false
eval_table_size: null
evals_per_epoch: 0
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: taopanda-4/57c3a30c-e883-4216-a074-89f10c859369
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 2
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: ./outputs/out/taopanda-4_f45cb749-0cc3-4d0c-ab1a-6f0cbfef80b5
pad_to_sequence_len: true
resume_from_checkpoint: null
sample_packing: true
saves_per_epoch: 1
seed: 99684
sequence_len: 4096
special_tokens: null
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.1
wandb_entity: fatcat87-taopanda
wandb_log_model: null
wandb_mode: online
wandb_name: taopanda-4_f45cb749-0cc3-4d0c-ab1a-6f0cbfef80b5
wandb_project: subnet56
wandb_runid: taopanda-4_f45cb749-0cc3-4d0c-ab1a-6f0cbfef80b5
wandb_watch: null
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/fatcat87-taopanda/subnet56/runs/wiw4ie03)
# 57c3a30c-e883-4216-a074-89f10c859369
This model is a fine-tuned version of [unsloth/SmolLM-135M](https://huggingface.co/unsloth/SmolLM-135M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 99684
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- num_epochs: 1
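For reference, the effective batch sizes above follow directly from the other settings: total_train_batch_size = 2 (per-device batch) × 4 (gradient accumulation) × 4 (devices) = 32, and total_eval_batch_size = 2 × 4 = 8.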
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.015 | 1.0 | 318 | 1.0362 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Romain-XV/c61e0709-7239-4e4e-b7c6-559f047b8349 | Romain-XV | 2025-01-24T17:08:49Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"license:llama3",
"region:us"
] | null | 2025-01-24T16:06:24Z | ---
library_name: peft
license: llama3
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c61e0709-7239-4e4e-b7c6-559f047b8349
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 26bc9bf8a02efd49_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/26bc9bf8a02efd49_train_data.json
type:
field_input: candidates
field_instruction: article
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 5
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/c61e0709-7239-4e4e-b7c6-559f047b8349
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: true
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_steps: 518
micro_batch_size: 4
mlflow_experiment_name: /tmp/26bc9bf8a02efd49_train_data.json
model_type: AutoModelForCausalLM
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8a6d5fac-bf65-456e-9711-c9932c0698ec
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8a6d5fac-bf65-456e-9711-c9932c0698ec
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c61e0709-7239-4e4e-b7c6-559f047b8349
This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 115
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8074 | 0.0087 | 1 | 2.7402 |
| 1.2987 | 0.4372 | 50 | 1.2585 |
| 1.2285 | 0.8743 | 100 | 1.2308 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thalllsssss/df433dd8-2194-48eb-bfc1-26d525d435da | thalllsssss | 2025-01-24T17:08:36Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T16:38:43Z | ---
library_name: peft
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: df433dd8-2194-48eb-bfc1-26d525d435da
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: teknium/OpenHermes-2.5-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- defb6f4b19d9c212_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/defb6f4b19d9c212_train_data.json
type:
field_input: schema
field_instruction: query
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thalllsssss/df433dd8-2194-48eb-bfc1-26d525d435da
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/defb6f4b19d9c212_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|im_end|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 80d8fb58-c71c-4396-8734-552dcb079eba
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 80d8fb58-c71c-4396-8734-552dcb079eba
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# df433dd8-2194-48eb-bfc1-26d525d435da
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.17 | 0.0824 | 200 | 0.2418 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
minhnguyennnnnn/b733d243-a79a-4af4-8c87-1f693595ce68 | minhnguyennnnnn | 2025-01-24T17:07:22Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T16:38:36Z | ---
library_name: peft
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b733d243-a79a-4af4-8c87-1f693595ce68
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: teknium/OpenHermes-2.5-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- defb6f4b19d9c212_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/defb6f4b19d9c212_train_data.json
type:
field_input: schema
field_instruction: query
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: minhnguyennnnnn/b733d243-a79a-4af4-8c87-1f693595ce68
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/defb6f4b19d9c212_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|im_end|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 80d8fb58-c71c-4396-8734-552dcb079eba
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 80d8fb58-c71c-4396-8734-552dcb079eba
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b733d243-a79a-4af4-8c87-1f693595ce68
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1719 | 0.0824 | 200 | 0.2418 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
duyphu/4aed99e3-5b57-4283-ab6d-b673b38a5eb9 | duyphu | 2025-01-24T17:04:46Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama_v1.1",
"base_model:adapter:TinyLlama/TinyLlama_v1.1",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T16:57:46Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama_v1.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4aed99e3-5b57-4283-ab6d-b673b38a5eb9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama_v1.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8412839835a5e4c8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8412839835a5e4c8_train_data.json
type:
field_instruction: prompt
field_output: completion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/4aed99e3-5b57-4283-ab6d-b673b38a5eb9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/8412839835a5e4c8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 11fc7f5c-dcf3-4f4f-8023-8f38c8ae1ae9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 11fc7f5c-dcf3-4f4f-8023-8f38c8ae1ae9
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4aed99e3-5b57-4283-ab6d-b673b38a5eb9
This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0039 | 1 | 1.2903 |
| 1.2628 | 0.0386 | 10 | 1.1766 |
| 1.0907 | 0.0773 | 20 | 1.0021 |
| 0.9467 | 0.1159 | 30 | 0.9219 |
| 0.8976 | 0.1546 | 40 | 0.9010 |
| 0.8781 | 0.1932 | 50 | 0.8983 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
adammandic87/dc6ce67f-edea-4eec-9121-398d6416781d | adammandic87 | 2025-01-24T17:02:57Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-135M",
"base_model:adapter:unsloth/SmolLM-135M",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T16:44:38Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dc6ce67f-edea-4eec-9121-398d6416781d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-135M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 12918daafc7e29ad_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/12918daafc7e29ad_train_data.json
type:
field_instruction: premise
field_output: entailment
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/dc6ce67f-edea-4eec-9121-398d6416781d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/12918daafc7e29ad_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f45cb749-0cc3-4d0c-ab1a-6f0cbfef80b5
wandb_project: Birthday-SN56-13-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f45cb749-0cc3-4d0c-ab1a-6f0cbfef80b5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dc6ce67f-edea-4eec-9121-398d6416781d
This model is a fine-tuned version of [unsloth/SmolLM-135M](https://huggingface.co/unsloth/SmolLM-135M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 3 | nan |
| 0.0 | 0.0002 | 6 | nan |
| 0.0 | 0.0003 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/latin_english_translation_model-GGUF | mradermacher | 2025-01-24T17:02:34Z | 197 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:Kankanaghosh/latin_english_translation_model",
"base_model:quantized:Kankanaghosh/latin_english_translation_model",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-01-24T17:01:59Z | ---
base_model: Kankanaghosh/latin_english_translation_model
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Kankanaghosh/latin_english_translation_model
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
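As a quick start, the sketch below fetches a single quant from this repo with `huggingface_hub`; the Q4_K_M file is chosen because it is marked "fast, recommended" in the table below, and running the downloaded file is left to the GGUF-aware tools covered in the linked READMEs.
```python
# Minimal sketch: download one quant (Q4_K_M) from this repo for use with any
# GGUF-aware runtime such as llama.cpp or llama-cpp-python.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/latin_english_translation_model-GGUF",
    filename="latin_english_translation_model.Q4_K_M.gguf",
)
print("GGUF file downloaded to:", path)
```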
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/latin_english_translation_model-GGUF/resolve/main/latin_english_translation_model.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/latin_english_translation_model-GGUF/resolve/main/latin_english_translation_model.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/latin_english_translation_model-GGUF/resolve/main/latin_english_translation_model.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/latin_english_translation_model-GGUF/resolve/main/latin_english_translation_model.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/latin_english_translation_model-GGUF/resolve/main/latin_english_translation_model.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/latin_english_translation_model-GGUF/resolve/main/latin_english_translation_model.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/latin_english_translation_model-GGUF/resolve/main/latin_english_translation_model.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/latin_english_translation_model-GGUF/resolve/main/latin_english_translation_model.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/latin_english_translation_model-GGUF/resolve/main/latin_english_translation_model.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/latin_english_translation_model-GGUF/resolve/main/latin_english_translation_model.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/latin_english_translation_model-GGUF/resolve/main/latin_english_translation_model.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/latin_english_translation_model-GGUF/resolve/main/latin_english_translation_model.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
robiual-awal/8a3835ec-e768-41ce-ae75-68847c981f37 | robiual-awal | 2025-01-24T17:02:19Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"base_model:adapter:NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer",
"license:other",
"region:us"
] | null | 2025-01-24T17:00:32Z | ---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8a3835ec-e768-41ce-ae75-68847c981f37
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7349255a0722d332_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7349255a0722d332_train_data.json
type:
field_input: author
field_instruction: title
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/8a3835ec-e768-41ce-ae75-68847c981f37
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/7349255a0722d332_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 13fce366-6808-46f8-8029-0af5c3620a55
wandb_project: Birthday-SN56-29-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 13fce366-6808-46f8-8029-0af5c3620a55
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8a3835ec-e768-41ce-ae75-68847c981f37
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Alternate-Tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.511 | 0.0009 | 1 | 2.4499 |
| 2.3078 | 0.0026 | 3 | 2.4492 |
| 2.3958 | 0.0051 | 6 | 2.4416 |
| 2.2283 | 0.0077 | 9 | 2.4155 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
choco58/MistraMystic | choco58 | 2025-01-24T17:00:14Z | 6 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"MistraMystic",
"Conversational AI",
"Personality",
"Persona-dialogue",
"Dialogue-systems",
"Human-like assistant",
"Mistral-7B",
"Mistral",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-17T04:07:16Z | ---
library_name: transformers
tags: [MistraMystic, Conversational AI, Personality, Persona-dialogue, Dialogue-systems, Human-like assistant, Mistral-7B, Mistral]
---
# MistraMystic: Conversational Personality Model 🌊✨
Welcome to **MistraMystic**—a conversational model fine-tuned from Mistral-7B v0.3, capturing nuanced personality traits that make AI interactions feel more authentic and relatable. Whether it’s about balancing conscientious responses or tapping into empathetic reflections, MistraMystic is here to explore the depths of the human-like personality spectrum.
---
## Model Name: MistraMystic
- **Architecture**: Mistral-7B v0.3
- **Training Objective**: Personality-Enhanced Conversational AI
- **Training Dataset**: Fine-tuned on conversational data to reflect Big 5 personality traits.
- JIC: [Journal Intensive Conversations](https://huggingface.co/datasets/chocokiddo/jic) dataset
- **Training Duration**: 4-5 days on A100 GPU (training parameters can be found in appendix of the paper)
---
## Why "MistraMystic"?
The name "MistraMystic" combines the mystique of deep conversation with Mistral's adaptability. Designed to capture the essence of personality through the Big 5 OCEAN traits, MistraMystic works to reflect the nuances of human interactions within its AI responses. The result? A model that speaks with more than just words—it reflects aspects of personality, adding richness and realism to every interaction.
---
## Scope of Applications
MistraMystic is crafted for a range of applications where understanding personality-driven conversation is essential. Here’s what it’s especially good for:
- **Conversational Agents**: Engage users with relatable and personality-driven conversations.
- **Text Generation**: Generate human-like text for articles, chats, and creative writing with a personal touch.
- **Question-Answering**: Answer questions with a flair of personality, making responses more relatable.
- **Educational and Therapy Bots**: Assist in applications where personality-sensitive responses can improve user engagement and retention.
---
## Intended Use
MistraMystic is built for those aiming to inject personality into conversational systems, whether it’s for customer service bots, therapy support, or just plain fun AI companions. It’s particularly suited to applications where capturing nuances like openness, agreeableness, and neuroticism (yes, even those angsty replies!) can enhance user experience.
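No loading example ships with the card yet, so here is a minimal sketch using plain `transformers`; it assumes the checkpoint loads as a standard Mistral causal LM, and the prompt and sampling settings are illustrative rather than recommended defaults.
```python
# Minimal sketch, not an official example: load MistraMystic and sample a
# personality-flavoured reply to a plain-text prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "choco58/MistraMystic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Friend: I finally finished my first marathon today!\nYou:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```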
### Data and Training
The model has been trained on an extensive conversational dataset. Our goal was to align model responses with intrinsic personality traits, enabling MistraMystic to tailor its tone and style depending on conversational context. More information on the dataset will be shared soon.
### Results
**Personality Evaluation on [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) (OCEAN Personality Benchmark)**
| Model | Description | Openness | Conscientiousness | Extraversion | Agreeableness | Neuroticism | Average |
|-------------------------|--------------------------------|----------|-------------------|--------------|---------------|-------------|---------|
| Mistral 7B v0.3 | Zero-shot | 0.8360 | 0.6390 | 0.5140 | 0.8160 | 0.5350 | 0.6680 |
| MistraMystic | Fine-tuned on Conversational Data | 0.9340 | 0.8260 | 0.6250 | 0.9530 | 0.5700 | 0.7816 |
MistraMystic demonstrates notable improvements across all Big 5 traits.
---
## Performance and Limitations
While MistraMystic brings vibrant and personality-driven conversations to the table, it does have limitations:
- **Personality Representation**: MistraMystic is trained for personality alignment, so it may sacrifice some general knowledge capabilities in favor of personality-specific responses. A detailed evaluation will be updated soon.
- **Sensitive Topics**: Despite strong filtering, caution is advised when deploying in high-stakes environments.
- **Computational Load**: The Mistral 7B backbone requires substantial resources, which may limit deployment in real-time settings without sufficient hardware.
---
## Ethical Considerations
We made sure to avoid toxic or inappropriate dialogues by tagging any dialogue with over 25% toxic utterances for separate review. Ethical considerations are a priority, and MistraMystic was designed with responsible AI practices in mind. For details on ethical data practices, see the Appendix (coming soon!).
---
## Future Updates
Stay tuned for more information on MistraMystic!
---
## Citation
```bibtex
@inproceedings{pal-etal-2025-beyond,
title = "Beyond Discrete Personas: Personality Modeling Through Journal Intensive Conversations",
author = "Pal, Sayantan and
Das, Souvik and
Srihari, Rohini K.",
editor = "Rambow, Owen and
Wanner, Leo and
Apidianaki, Marianna and
Al-Khalifa, Hend and
Eugenio, Barbara Di and
Schockaert, Steven",
booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
month = jan,
year = "2025",
address = "Abu Dhabi, UAE",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.coling-main.470/",
pages = "7055--7074",
abstract = "Large Language Models (LLMs) have significantly improved personalized conversational capabilities. However, existing datasets like Persona Chat, Synthetic Persona Chat, and Blended Skill Talk rely on static, predefined personas. This approach often results in dialogues that fail to capture human personalities' fluid and evolving nature. To overcome these limitations, we introduce a novel dataset with around 400,000 dialogues and a framework for generating personalized conversations using long-form journal entries from Reddit. Our approach clusters journal entries for each author and filters them by selecting the most representative cluster, ensuring that the retained entries best reflect the author`s personality. We further refine the data by capturing the Big Five personality traits{---}openness, conscientiousness, extraversion, agreeableness, and neuroticism{---}ensuring that dialogues authentically reflect an individual`s personality. Using Llama 3 70B, we generate high-quality, personality-rich dialogues grounded in these journal entries. Fine-tuning models on this dataset leads to an 11{\%} improvement in capturing personality traits on average, outperforming existing approaches in generating more coherent and personality-driven dialogues."
}
```
|
MaziyarPanahi/DeepSeek-R1-Distill-Llama-8B-GGUF | MaziyarPanahi | 2025-01-24T17:00:07Z | 413 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B",
"base_model:quantized:unsloth/DeepSeek-R1-Distill-Llama-8B",
"region:us",
"conversational"
] | text-generation | 2025-01-24T16:39:48Z | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B
inference: false
model_creator: unsloth
model_name: DeepSeek-R1-Distill-Llama-8B-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---
# [MaziyarPanahi/DeepSeek-R1-Distill-Llama-8B-GGUF](https://huggingface.co/MaziyarPanahi/DeepSeek-R1-Distill-Llama-8B-GGUF)
- Model creator: [unsloth](https://huggingface.co/unsloth)
- Original model: [unsloth/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B)
## Description
[MaziyarPanahi/DeepSeek-R1-Distill-Llama-8B-GGUF](https://huggingface.co/MaziyarPanahi/DeepSeek-R1-Distill-Llama-8B-GGUF) contains GGUF format model files for [unsloth/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
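Building on the llama-cpp-python entry above, the following is a minimal sketch of running one of these GGUF files locally; the quant filename pattern is an assumption (pick whichever file in this repo suits your hardware), and the chat-style call presumes a chat template is embedded in the GGUF metadata.
```python
# Minimal sketch: run a quant from this repo with llama-cpp-python (listed above).
# The filename glob is an assumption; substitute the exact quant you downloaded
# and fall back to plain completion if no chat template is embedded.
from llama_cpp import Llama

llm = Llama.from_pretrained(  # pulls the file via huggingface_hub on first use
    repo_id="MaziyarPanahi/DeepSeek-R1-Distill-Llama-8B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the GGUF format in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```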
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
vermoney/2ca47c1e-0163-4529-95f9-e4e8d71f7c1f | vermoney | 2025-01-24T16:59:55Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-14B",
"base_model:adapter:unsloth/Qwen2.5-14B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T08:25:45Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-14B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2ca47c1e-0163-4529-95f9-e4e8d71f7c1f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-14B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 05a8f2f79a604342_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/05a8f2f79a604342_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: vermoney/2ca47c1e-0163-4529-95f9-e4e8d71f7c1f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/05a8f2f79a604342_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3c0c5817-87f8-4599-ba7a-4d46ec410da6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3c0c5817-87f8-4599-ba7a-4d46ec410da6
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# 2ca47c1e-0163-4529-95f9-e4e8d71f7c1f
This model is a fine-tuned version of [unsloth/Qwen2.5-14B](https://huggingface.co/unsloth/Qwen2.5-14B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 5 | nan |
| 0.0 | 0.0002 | 10 | nan |
| 0.0 | 0.0003 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hongngo/a72f98c2-87d5-4db6-a416-a77fcf3932f8 | hongngo | 2025-01-24T16:54:26Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T16:33:33Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a72f98c2-87d5-4db6-a416-a77fcf3932f8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 14e7f20aa0ab15bf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/14e7f20aa0ab15bf_train_data.json
type:
field_input: input
field_instruction: system
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/a72f98c2-87d5-4db6-a416-a77fcf3932f8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/14e7f20aa0ab15bf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5c66e9d0-7fa2-4ea5-83d0-56a779006d22
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5c66e9d0-7fa2-4ea5-83d0-56a779006d22
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a72f98c2-87d5-4db6-a416-a77fcf3932f8
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.7557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.9056 | 0.0257 | 200 | 8.7557 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tarabukinivan/56a53b9f-ea9f-4381-ba90-8bcc18074866 | tarabukinivan | 2025-01-24T16:54:20Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | 2025-01-24T16:33:40Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 56a53b9f-ea9f-4381-ba90-8bcc18074866
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 14e7f20aa0ab15bf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/14e7f20aa0ab15bf_train_data.json
type:
field_input: input
field_instruction: system
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: tarabukinivan/56a53b9f-ea9f-4381-ba90-8bcc18074866
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/14e7f20aa0ab15bf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 15
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5c66e9d0-7fa2-4ea5-83d0-56a779006d22
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5c66e9d0-7fa2-4ea5-83d0-56a779006d22
warmup_steps: 15
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 56a53b9f-ea9f-4381-ba90-8bcc18074866
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0006 | 5 | nan |
| 0.0 | 0.0013 | 10 | nan |
| 0.0 | 0.0019 | 15 | nan |
| 0.0 | 0.0026 | 20 | nan |
| 0.0 | 0.0032 | 25 | nan |
| 0.0 | 0.0039 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
andreichis/moni-lora | andreichis | 2025-01-24T16:52:55Z | 31 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-24T16:39:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: moni
---
# Moni Lora
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `moni` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('andreichis/moni-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]  # include the trigger word `moni` in your prompt
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
minhtrannnn/c17939c8-0261-4e1e-9629-c501461a4d8f | minhtrannnn | 2025-01-24T16:52:40Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-24T16:33:24Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c17939c8-0261-4e1e-9629-c501461a4d8f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 14e7f20aa0ab15bf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/14e7f20aa0ab15bf_train_data.json
type:
field_input: input
field_instruction: system
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: minhtrannnn/c17939c8-0261-4e1e-9629-c501461a4d8f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/14e7f20aa0ab15bf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5c66e9d0-7fa2-4ea5-83d0-56a779006d22
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5c66e9d0-7fa2-4ea5-83d0-56a779006d22
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c17939c8-0261-4e1e-9629-c501461a4d8f
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 8.7751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.0168 | 0.0257 | 200 | 8.7751 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
vital-ai/watt-tool-70B-awq | vital-ai | 2025-01-24T16:52:30Z | 2,385 | 0 | null | [
"safetensors",
"llama",
"quantized",
"4bit",
"AWQ",
"base_model:watt-ai/watt-tool-70B",
"base_model:quantized:watt-ai/watt-tool-70B",
"license:apache-2.0",
"4-bit",
"awq",
"region:us"
] | null | 2025-01-24T02:21:48Z | ---
license: apache-2.0
base_model: watt-ai/watt-tool-70B
tags:
- quantized
- 4bit
- AWQ
---
# Model Card: `vital-ai/watt-tool-70B-awq`
## Model Description
This model, **`vital-ai/watt-tool-70B-awq`**, is a quantized version of the base model **[`watt-ai/watt-tool-70B`](https://huggingface.co/watt-ai/watt-tool-70B)**. The quantization process was performed to reduce the model size and improve inference speed while maintaining high performance.
**Base Model:** [`watt-ai/watt-tool-70B`](https://huggingface.co/watt-ai/watt-tool-70B)
**Quantization Method:** 4-bit AWQ
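A minimal loading sketch (not part of the original card; it assumes the checkpoint loads through the `transformers` AWQ integration, which requires the `autoawq` package):
```py
# Hedged sketch: load the 4-bit AWQ checkpoint and run a quick generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vital-ai/watt-tool-70B-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # AWQ config is read from the repo

inputs = tokenizer("List the steps to call an external tool.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```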
|
aleegis11/52884f00-e251-4d0c-897d-b76e09c9846a | aleegis11 | 2025-01-24T16:51:51Z | 8 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:numind/NuExtract-1.5",
"base_model:adapter:numind/NuExtract-1.5",
"license:mit",
"region:us"
] | null | 2025-01-24T16:24:43Z | ---
library_name: peft
license: mit
base_model: numind/NuExtract-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 52884f00-e251-4d0c-897d-b76e09c9846a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: numind/NuExtract-v1.5
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- dc2351e325261d28_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dc2351e325261d28_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis11/52884f00-e251-4d0c-897d-b76e09c9846a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/dc2351e325261d28_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0f586585-730a-40cb-960a-9745f62e3dd1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0f586585-730a-40cb-960a-9745f62e3dd1
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 52884f00-e251-4d0c-897d-b76e09c9846a
This model is a fine-tuned version of [numind/NuExtract-v1.5](https://huggingface.co/numind/NuExtract-v1.5) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9382
## Model description
More information needed
## Intended uses & limitations
More information needed
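In the absence of documented usage, here is a minimal sketch of merging the adapter into its base model for standalone use. It is not from the original card and assumes a standard PEFT LoRA adapter on top of the base model named above.
```py
# Hedged sketch: fold the LoRA weights into numind/NuExtract-v1.5 and save a standalone copy.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("numind/NuExtract-v1.5", torch_dtype="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "aleegis11/52884f00-e251-4d0c-897d-b76e09c9846a")

merged = model.merge_and_unload()                    # merge the LoRA deltas into the base weights
merged.save_pretrained("nuextract-52884f00-merged")  # local output directory (choose any path)
AutoTokenizer.from_pretrained("numind/NuExtract-v1.5", trust_remote_code=True).save_pretrained("nuextract-52884f00-merged")
```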
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.8266 | 0.0007 | 1 | 1.2865 |
| 4.4222 | 0.0338 | 50 | 1.0381 |
| 5.2868 | 0.0677 | 100 | 0.9709 |
| 4.2989 | 0.1015 | 150 | 0.9399 |
| 4.8094 | 0.1354 | 200 | 0.9382 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
prxy5606/98312ef8-ef70-4d8c-8efd-b3273962b4d9 | prxy5606 | 2025-01-24T16:51:22Z | 16 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-PhiForCausalLM",
"base_model:adapter:echarlaix/tiny-random-PhiForCausalLM",
"license:apache-2.0",
"region:us"
] | null | 2025-01-24T16:49:25Z | ---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-PhiForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 98312ef8-ef70-4d8c-8efd-b3273962b4d9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-PhiForCausalLM
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 764a1f78f3b0faa6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/764a1f78f3b0faa6_train_data.json
type:
field_input: rational_answer
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5606/98312ef8-ef70-4d8c-8efd-b3273962b4d9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/764a1f78f3b0faa6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 253a846d-0a75-4cca-b4ab-eb033fd80f91
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 253a846d-0a75-4cca-b4ab-eb033fd80f91
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 98312ef8-ef70-4d8c-8efd-b3273962b4d9
This model is a fine-tuned version of [echarlaix/tiny-random-PhiForCausalLM](https://huggingface.co/echarlaix/tiny-random-PhiForCausalLM) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.9385 | 0.0003 | 1 | 6.9434 |
| 6.8661 | 0.0166 | 50 | 6.8642 |
| 6.8282 | 0.0333 | 100 | 6.8437 |
| 6.8196 | 0.0499 | 150 | 6.8352 |
| 6.8165 | 0.0666 | 200 | 6.8340 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
robiual-awal/b12d83a5-28b9-4365-9fbd-5b9e68c2da04 | robiual-awal | 2025-01-24T16:50:33Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Hermes-3-Llama-3.1-8B",
"base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B",
"region:us"
] | null | 2025-01-24T16:48:30Z | ---
library_name: peft
base_model: unsloth/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b12d83a5-28b9-4365-9fbd-5b9e68c2da04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7498d1d10e472fec_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7498d1d10e472fec_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/b12d83a5-28b9-4365-9fbd-5b9e68c2da04
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/7498d1d10e472fec_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 145b78ab-4197-45e4-b32d-0f5f5e9a56ab
wandb_project: Birthday-SN56-29-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 145b78ab-4197-45e4-b32d-0f5f5e9a56ab
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b12d83a5-28b9-4365-9fbd-5b9e68c2da04
This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0009 | 1 | nan |
| 0.0 | 0.0026 | 3 | nan |
| 0.0 | 0.0052 | 6 | nan |
| 0.0 | 0.0077 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
daniel40/c36e98e9-29b0-4860-a7bb-079bd6e9b109 | daniel40 | 2025-01-24T16:50:24Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | 2025-01-24T16:45:01Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c36e98e9-29b0-4860-a7bb-079bd6e9b109
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 14e7f20aa0ab15bf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/14e7f20aa0ab15bf_train_data.json
type:
field_input: input
field_instruction: system
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/c36e98e9-29b0-4860-a7bb-079bd6e9b109
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/14e7f20aa0ab15bf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5c66e9d0-7fa2-4ea5-83d0-56a779006d22
wandb_project: Birthday-SN56-28-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5c66e9d0-7fa2-4ea5-83d0-56a779006d22
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c36e98e9-29b0-4860-a7bb-079bd6e9b109
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 3 | nan |
| 0.0 | 0.0008 | 6 | nan |
| 0.0 | 0.0012 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/DeepSeek-R1-MFANN-TIES-unretrained-7b-GGUF | mradermacher | 2025-01-24T16:43:56Z | 776 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:netcat420/DEFUNCT-EXPERIMENT2_2",
"base_model:quantized:netcat420/DEFUNCT-EXPERIMENT2_2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-22T14:49:23Z | ---
base_model: netcat420/DEFUNCT-EXPERIMENT2_2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/netcat420/DEFUNCT-EXPERIMENT2_2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-MFANN-TIES-unretrained-7b-GGUF/resolve/main/DeepSeek-R1-MFANN-TIES-unretrained-7b.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-MFANN-TIES-unretrained-7b-GGUF/resolve/main/DeepSeek-R1-MFANN-TIES-unretrained-7b.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-MFANN-TIES-unretrained-7b-GGUF/resolve/main/DeepSeek-R1-MFANN-TIES-unretrained-7b.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-MFANN-TIES-unretrained-7b-GGUF/resolve/main/DeepSeek-R1-MFANN-TIES-unretrained-7b.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-MFANN-TIES-unretrained-7b-GGUF/resolve/main/DeepSeek-R1-MFANN-TIES-unretrained-7b.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-MFANN-TIES-unretrained-7b-GGUF/resolve/main/DeepSeek-R1-MFANN-TIES-unretrained-7b.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-MFANN-TIES-unretrained-7b-GGUF/resolve/main/DeepSeek-R1-MFANN-TIES-unretrained-7b.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-MFANN-TIES-unretrained-7b-GGUF/resolve/main/DeepSeek-R1-MFANN-TIES-unretrained-7b.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-MFANN-TIES-unretrained-7b-GGUF/resolve/main/DeepSeek-R1-MFANN-TIES-unretrained-7b.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-MFANN-TIES-unretrained-7b-GGUF/resolve/main/DeepSeek-R1-MFANN-TIES-unretrained-7b.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-MFANN-TIES-unretrained-7b-GGUF/resolve/main/DeepSeek-R1-MFANN-TIES-unretrained-7b.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-MFANN-TIES-unretrained-7b-GGUF/resolve/main/DeepSeek-R1-MFANN-TIES-unretrained-7b.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen2.5-DeepSeek-R1-MFANN-7b-GGUF | mradermacher | 2025-01-24T16:43:23Z | 848 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:netcat420/DEFUNCT-EXPERIMENT2",
"base_model:quantized:netcat420/DEFUNCT-EXPERIMENT2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-23T05:35:04Z | ---
base_model: netcat420/DEFUNCT-EXPERIMENT2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/netcat420/DEFUNCT-EXPERIMENT2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-DeepSeek-R1-MFANN-7b-GGUF/resolve/main/Qwen2.5-DeepSeek-R1-MFANN-7b.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-DeepSeek-R1-MFANN-7b-GGUF/resolve/main/Qwen2.5-DeepSeek-R1-MFANN-7b.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-DeepSeek-R1-MFANN-7b-GGUF/resolve/main/Qwen2.5-DeepSeek-R1-MFANN-7b.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-DeepSeek-R1-MFANN-7b-GGUF/resolve/main/Qwen2.5-DeepSeek-R1-MFANN-7b.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-DeepSeek-R1-MFANN-7b-GGUF/resolve/main/Qwen2.5-DeepSeek-R1-MFANN-7b.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-DeepSeek-R1-MFANN-7b-GGUF/resolve/main/Qwen2.5-DeepSeek-R1-MFANN-7b.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-DeepSeek-R1-MFANN-7b-GGUF/resolve/main/Qwen2.5-DeepSeek-R1-MFANN-7b.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-DeepSeek-R1-MFANN-7b-GGUF/resolve/main/Qwen2.5-DeepSeek-R1-MFANN-7b.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-DeepSeek-R1-MFANN-7b-GGUF/resolve/main/Qwen2.5-DeepSeek-R1-MFANN-7b.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-DeepSeek-R1-MFANN-7b-GGUF/resolve/main/Qwen2.5-DeepSeek-R1-MFANN-7b.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-DeepSeek-R1-MFANN-7b-GGUF/resolve/main/Qwen2.5-DeepSeek-R1-MFANN-7b.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-DeepSeek-R1-MFANN-7b-GGUF/resolve/main/Qwen2.5-DeepSeek-R1-MFANN-7b.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
safe-models/RMVPE | safe-models | 2025-01-24T16:42:58Z | 47 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-01-24T16:42:30Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
mradermacher/Quasar-1.5-Pro-GGUF | mradermacher | 2025-01-24T16:42:41Z | 339 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:silx-ai/Quasar-1.5-Pro",
"base_model:quantized:silx-ai/Quasar-1.5-Pro",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-23T15:32:53Z | ---
base_model: silx-ai/Quasar-1.5-Pro
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/silx-ai/Quasar-1.5-Pro
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Quasar-1.5-Pro-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
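As a concrete sketch (not part of the original card; it assumes `huggingface_hub` and `llama-cpp-python` are installed), a single-file quant from the table below can be downloaded and run like this:
```py
# Hedged sketch: fetch one quant from this repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Quasar-1.5-Pro-GGUF",
    filename="Quasar-1.5-Pro.Q4_K_S.gguf",  # any single-file quant from the table below works
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Explain the difference between static and imatrix quants in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```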
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Quasar-1.5-Pro-GGUF/resolve/main/Quasar-1.5-Pro.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/Quasar-1.5-Pro-GGUF/resolve/main/Quasar-1.5-Pro.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Quasar-1.5-Pro-GGUF/resolve/main/Quasar-1.5-Pro.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Quasar-1.5-Pro-GGUF/resolve/main/Quasar-1.5-Pro.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Quasar-1.5-Pro-GGUF/resolve/main/Quasar-1.5-Pro.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/Quasar-1.5-Pro-GGUF/resolve/main/Quasar-1.5-Pro.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Quasar-1.5-Pro-GGUF/resolve/main/Quasar-1.5-Pro.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Quasar-1.5-Pro-GGUF/resolve/main/Quasar-1.5-Pro.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Quasar-1.5-Pro-GGUF/resolve/main/Quasar-1.5-Pro.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Quasar-1.5-Pro-GGUF/resolve/main/Quasar-1.5-Pro.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Quasar-1.5-Pro-GGUF/resolve/main/Quasar-1.5-Pro.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF | mradermacher | 2025-01-24T16:42:36Z | 3,821 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored",
"base_model:quantized:thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-23T16:14:06Z | ---
base_model: thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/thirdeyeai/DeepSeek-R1-Distill-Qwen-1.5B-uncensored
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-IQ1_S.gguf) | i1-IQ1_S | 0.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-IQ2_S.gguf) | i1-IQ2_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-Q2_K.gguf) | i1-Q2_K | 0.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-IQ3_S.gguf) | i1-IQ3_S | 1.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-IQ3_M.gguf) | i1-IQ3_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-Q4_0.gguf) | i1-Q4_0 | 1.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-Q4_1.gguf) | i1-Q4_1 | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-uncensored-i1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-uncensored.i1-Q6_K.gguf) | i1-Q6_K | 1.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Calcium-Opus-20B-v1-i1-GGUF | mradermacher | 2025-01-24T16:42:23Z | 739 | 0 | transformers | [
"transformers",
"gguf",
"Opus",
"text-generation-inference",
"en",
"base_model:prithivMLmods/Calcium-Opus-20B-v1",
"base_model:quantized:prithivMLmods/Calcium-Opus-20B-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-23T17:33:30Z | ---
base_model: prithivMLmods/Calcium-Opus-20B-v1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Opus
- text-generation-inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/prithivMLmods/Calcium-Opus-20B-v1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 4.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 5.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 7.0 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-Q2_K.gguf) | i1-Q2_K | 7.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 8.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-Q4_0.gguf) | i1-Q4_0 | 11.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 11.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-Q4_1.gguf) | i1-Q4_1 | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/Calcium-Opus-20B-v1-i1-GGUF/resolve/main/Calcium-Opus-20B-v1.i1-Q6_K.gguf) | i1-Q6_K | 15.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|