| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF | mradermacher | 2025-01-26T09:45:24Z | 162 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:juewang/Meta-Llama-3-4B-mlp-pruned",
"base_model:quantized:juewang/Meta-Llama-3-4B-mlp-pruned",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-27T09:47:00Z | ---
base_model: juewang/Meta-Llama-3-4B-mlp-pruned
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/juewang/Meta-Llama-3-4B-mlp-pruned
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
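As a minimal sketch (assuming the `huggingface_hub` and `llama-cpp-python` packages are installed; the context size and prompt are illustrative), one of the files listed under Provided Quants below can be downloaded and run like this:
```python
# Sketch: download one imatrix quant from this repo and run it locally.
# Assumes `pip install huggingface_hub llama-cpp-python`; adjust n_ctx to your hardware.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF",
    filename="Meta-Llama-3-4B-mlp-pruned.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
print(llm("The quick brown fox", max_tokens=16)["choices"][0]["text"])
```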
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-IQ1_S.gguf) | i1-IQ1_S | 1.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-IQ1_M.gguf) | i1-IQ1_M | 1.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-IQ2_S.gguf) | i1-IQ2_S | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-IQ2_M.gguf) | i1-IQ2_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-IQ3_S.gguf) | i1-IQ3_S | 2.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-4B-mlp-pruned-i1-GGUF/resolve/main/Meta-Llama-3-4B-mlp-pruned.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/WONMSeverusDevilv3-LORAMERGED-GGUF | mradermacher | 2025-01-26T09:45:16Z | 57 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"jeiku/Synthetic_Soul_1k_Mistral_128",
"jeiku/Theory_of_Mind_Roleplay_Mistral",
"jeiku/Alpaca_NSFW_Shuffled_Mistral",
"jeiku/Luna_LoRA_Mistral",
"jsfs11/WONMSeverusDevilv2-TIES",
"en",
"base_model:jsfs11/WONMSeverusDevilv3-LORAMERGED",
"base_model:quantized:jsfs11/WONMSeverusDevilv3-LORAMERGED",
"endpoints_compatible",
"region:us"
] | null | 2024-12-27T09:53:04Z | ---
base_model: jsfs11/WONMSeverusDevilv3-LORAMERGED
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- jeiku/Synthetic_Soul_1k_Mistral_128
- jeiku/Theory_of_Mind_Roleplay_Mistral
- jeiku/Alpaca_NSFW_Shuffled_Mistral
- jeiku/Luna_LoRA_Mistral
- jsfs11/WONMSeverusDevilv2-TIES
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jsfs11/WONMSeverusDevilv3-LORAMERGED
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/WONMSeverusDevilv3-LORAMERGED-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WONMSeverusDevilv3-LORAMERGED-GGUF/resolve/main/WONMSeverusDevilv3-LORAMERGED.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/WONMSeverusDevilv3-LORAMERGED-GGUF/resolve/main/WONMSeverusDevilv3-LORAMERGED.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/WONMSeverusDevilv3-LORAMERGED-GGUF/resolve/main/WONMSeverusDevilv3-LORAMERGED.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WONMSeverusDevilv3-LORAMERGED-GGUF/resolve/main/WONMSeverusDevilv3-LORAMERGED.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/WONMSeverusDevilv3-LORAMERGED-GGUF/resolve/main/WONMSeverusDevilv3-LORAMERGED.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/WONMSeverusDevilv3-LORAMERGED-GGUF/resolve/main/WONMSeverusDevilv3-LORAMERGED.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WONMSeverusDevilv3-LORAMERGED-GGUF/resolve/main/WONMSeverusDevilv3-LORAMERGED.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WONMSeverusDevilv3-LORAMERGED-GGUF/resolve/main/WONMSeverusDevilv3-LORAMERGED.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/WONMSeverusDevilv3-LORAMERGED-GGUF/resolve/main/WONMSeverusDevilv3-LORAMERGED.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/WONMSeverusDevilv3-LORAMERGED-GGUF/resolve/main/WONMSeverusDevilv3-LORAMERGED.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/WONMSeverusDevilv3-LORAMERGED-GGUF/resolve/main/WONMSeverusDevilv3-LORAMERGED.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/WONMSeverusDevilv3-LORAMERGED-GGUF/resolve/main/WONMSeverusDevilv3-LORAMERGED.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Kosmos-Elusive-VENN-8B-GGUF | mradermacher | 2025-01-26T09:44:02Z | 61 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jaspionjader/Kosmos-Elusive-VENN-8B",
"base_model:quantized:jaspionjader/Kosmos-Elusive-VENN-8B",
"endpoints_compatible",
"region:us"
] | null | 2024-12-27T12:17:08Z | ---
base_model: jaspionjader/Kosmos-Elusive-VENN-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jaspionjader/Kosmos-Elusive-VENN-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
duyphu/6d65b9b0-95e9-4290-b9d2-441d4803fa27 | duyphu | 2025-01-26T09:43:56Z | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"region:us"
] | null | 2025-01-26T09:30:56Z | ---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6d65b9b0-95e9-4290-b9d2-441d4803fa27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 323546a4310179cb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/323546a4310179cb_train_data.json
type:
field_instruction: text
field_output: caption
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/6d65b9b0-95e9-4290-b9d2-441d4803fa27
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/323546a4310179cb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b99d6895-46b0-40d9-83fd-c1c9c26d613d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b99d6895-46b0-40d9-83fd-c1c9c26d613d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6d65b9b0-95e9-4290-b9d2-441d4803fa27
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7592
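Since this repository contains a PEFT LoRA adapter rather than full weights, a minimal loading sketch (assuming `peft`, `transformers`, and `torch` are installed; the prompt and generation settings are illustrative) looks like:
```python
# Sketch: load the LoRA adapter on top of its base model and generate.
# AutoPeftModelForCausalLM pulls tokyotech-llm/Llama-3-Swallow-8B-v0.1 automatically.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

repo_id = "duyphu/6d65b9b0-95e9-4290-b9d2-441d4803fa27"
model = AutoPeftModelForCausalLM.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained("tokyotech-llm/Llama-3-Swallow-8B-v0.1")

inputs = tokenizer("Write a short caption for this text:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```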
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
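For reference, the total train batch size above follows from the per-device batch size and gradient accumulation (a quick check, assuming a single training device):
```python
# total_train_batch_size = per-device batch size * gradient accumulation steps * number of devices
train_batch_size = 2              # per-device (micro) batch size
gradient_accumulation_steps = 4
num_devices = 1                   # assumed: single-GPU run
assert train_batch_size * gradient_accumulation_steps * num_devices == 8  # total_train_batch_size
```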
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 2.4728 |
| 2.1751 | 0.0086 | 10 | 2.1014 |
| 1.6291 | 0.0172 | 20 | 1.8043 |
| 1.6754 | 0.0258 | 30 | 1.7723 |
| 1.762 | 0.0344 | 40 | 1.7616 |
| 1.8069 | 0.0430 | 50 | 1.7592 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Kosmos-Elusive-VENN-Asymmetric-8B-GGUF | mradermacher | 2025-01-26T09:43:42Z | 175 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jaspionjader/Kosmos-Elusive-VENN-Asymmetric-8B",
"base_model:quantized:jaspionjader/Kosmos-Elusive-VENN-Asymmetric-8B",
"endpoints_compatible",
"region:us"
] | null | 2024-12-27T13:10:32Z | ---
base_model: jaspionjader/Kosmos-Elusive-VENN-Asymmetric-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jaspionjader/Kosmos-Elusive-VENN-Asymmetric-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Asymmetric-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Asymmetric-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-Asymmetric-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Asymmetric-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-Asymmetric-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Asymmetric-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-Asymmetric-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Asymmetric-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-Asymmetric-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Asymmetric-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-Asymmetric-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Asymmetric-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-Asymmetric-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Asymmetric-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-Asymmetric-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Asymmetric-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-Asymmetric-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Asymmetric-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-Asymmetric-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Asymmetric-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-Asymmetric-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Asymmetric-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-Asymmetric-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Asymmetric-8B-GGUF/resolve/main/Kosmos-Elusive-VENN-Asymmetric-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kostiantynk1205/82cc7582-9548-4299-b19d-67e80414436c | kostiantynk1205 | 2025-01-26T09:43:35Z | 5 | 0 | peft | [
"peft",
"safetensors",
"gpt_neo",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:adapter:EleutherAI/gpt-neo-125m",
"license:mit",
"region:us"
] | null | 2025-01-26T09:39:56Z | ---
library_name: peft
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 82cc7582-9548-4299-b19d-67e80414436c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/gpt-neo-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b1643630c3c18b7c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1643630c3c18b7c_train_data.json
type:
field_input: selected_word
field_instruction: original
field_output: perturbed
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/82cc7582-9548-4299-b19d-67e80414436c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/b1643630c3c18b7c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a918c65d-c22f-44cf-830d-7a641192ea86
wandb_project: Birthday-SN56-23-Gradients-On-Demand
wandb_run: your_name
wandb_runid: a918c65d-c22f-44cf-830d-7a641192ea86
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 82cc7582-9548-4299-b19d-67e80414436c
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9552 | 0.0001 | 1 | 0.6996 |
| 1.8039 | 0.0002 | 3 | 0.6996 |
| 3.0025 | 0.0005 | 6 | 0.6981 |
| 5.0241 | 0.0007 | 9 | 0.6922 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Xinging/llama2-7b_sft_0.3_ratio_alpaca_gpt4_proj_by_bbh_ntrain_256 | Xinging | 2025-01-26T09:43:24Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-26T09:16:05Z | ---
library_name: transformers
license: other
base_model: meta-llama/Llama-2-7b-hf
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama2-7b_sft_0.3_ratio_alpaca_gpt4_proj_by_bbh_ntrain_256
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b_sft_0.3_ratio_alpaca_gpt4_proj_by_bbh_ntrain_256
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the 0.3_ratio_alpaca_gpt4_proj_by_bbh_ntrain_256 dataset.
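Since this is a full fine-tune rather than an adapter, it can be loaded directly with `transformers` (a minimal sketch; the prompt and generation settings are illustrative, and a 7B model needs correspondingly large memory):
```python
# Sketch: load the fine-tuned checkpoint and generate text.
# Assumes `pip install transformers torch` (plus accelerate for device_map="auto").
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Xinging/llama2-7b_sft_0.3_ratio_alpaca_gpt4_proj_by_bbh_ntrain_256",
    torch_dtype="auto",
    device_map="auto",
)
print(generator("Give three tips for staying healthy.", max_new_tokens=64)[0]["generated_text"])
```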
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.20.3
|
mradermacher/Kosmos-VENN-8B-GGUF | mradermacher | 2025-01-26T09:42:53Z | 139 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jaspionjader/Kosmos-VENN-8B",
"base_model:quantized:jaspionjader/Kosmos-VENN-8B",
"endpoints_compatible",
"region:us"
] | null | 2024-12-27T15:28:55Z | ---
base_model: jaspionjader/Kosmos-VENN-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jaspionjader/Kosmos-VENN-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-GGUF/resolve/main/Kosmos-VENN-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-GGUF/resolve/main/Kosmos-VENN-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-GGUF/resolve/main/Kosmos-VENN-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-GGUF/resolve/main/Kosmos-VENN-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-GGUF/resolve/main/Kosmos-VENN-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-GGUF/resolve/main/Kosmos-VENN-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-GGUF/resolve/main/Kosmos-VENN-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-GGUF/resolve/main/Kosmos-VENN-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-GGUF/resolve/main/Kosmos-VENN-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-GGUF/resolve/main/Kosmos-VENN-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-GGUF/resolve/main/Kosmos-VENN-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-GGUF/resolve/main/Kosmos-VENN-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
0x1202/2944c79d-1080-4f84-a6c5-dfad7ffb45b5 | 0x1202 | 2025-01-26T09:42:34Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T07:46:29Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2944c79d-1080-4f84-a6c5-dfad7ffb45b5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-Math-7B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- e0c41a65c97fb0ab_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e0c41a65c97fb0ab_train_data.json
type:
field_instruction: prompt
field_output: org_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: 0x1202/2944c79d-1080-4f84-a6c5-dfad7ffb45b5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/e0c41a65c97fb0ab_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bc469934-f65d-4554-a373-c57006d470f3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bc469934-f65d-4554-a373-c57006d470f3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2944c79d-1080-4f84-a6c5-dfad7ffb45b5
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6242
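This repository holds a LoRA adapter for Qwen/Qwen2.5-Math-7B-Instruct; if a standalone checkpoint is preferred, the adapter can be merged into the base weights (a sketch assuming `peft`, `transformers`, and `torch` are installed; the output directory is hypothetical):
```python
# Sketch: merge this LoRA adapter into its base model and save a standalone checkpoint.
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("0x1202/2944c79d-1080-4f84-a6c5-dfad7ffb45b5")
merged = model.merge_and_unload()                 # fold LoRA weights into the base weights
merged.save_pretrained("qwen2.5-math-7b-merged")  # hypothetical output directory
```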
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6431 | 0.0001 | 1 | 2.8517 |
| 3.3182 | 0.0056 | 50 | 2.0713 |
| 3.4362 | 0.0112 | 100 | 1.7239 |
| 1.8644 | 0.0169 | 150 | 1.6366 |
| 1.9021 | 0.0225 | 200 | 1.6242 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/ORANSight_Phi_Mini_Instruct-GGUF | mradermacher | 2025-01-26T09:41:25Z | 48 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:NextGLab/ORANSight_Phi_Mini_Instruct",
"base_model:quantized:NextGLab/ORANSight_Phi_Mini_Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-28T00:44:40Z | ---
base_model: NextGLab/ORANSight_Phi_Mini_Instruct
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NextGLab/ORANSight_Phi_Mini_Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Phi_Mini_Instruct-GGUF/resolve/main/ORANSight_Phi_Mini_Instruct.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Phi_Mini_Instruct-GGUF/resolve/main/ORANSight_Phi_Mini_Instruct.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Phi_Mini_Instruct-GGUF/resolve/main/ORANSight_Phi_Mini_Instruct.Q3_K_M.gguf) | Q3_K_M | 2.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Phi_Mini_Instruct-GGUF/resolve/main/ORANSight_Phi_Mini_Instruct.Q3_K_L.gguf) | Q3_K_L | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Phi_Mini_Instruct-GGUF/resolve/main/ORANSight_Phi_Mini_Instruct.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Phi_Mini_Instruct-GGUF/resolve/main/ORANSight_Phi_Mini_Instruct.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Phi_Mini_Instruct-GGUF/resolve/main/ORANSight_Phi_Mini_Instruct.Q4_K_M.gguf) | Q4_K_M | 2.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Phi_Mini_Instruct-GGUF/resolve/main/ORANSight_Phi_Mini_Instruct.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Phi_Mini_Instruct-GGUF/resolve/main/ORANSight_Phi_Mini_Instruct.Q5_K_M.gguf) | Q5_K_M | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Phi_Mini_Instruct-GGUF/resolve/main/ORANSight_Phi_Mini_Instruct.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Phi_Mini_Instruct-GGUF/resolve/main/ORANSight_Phi_Mini_Instruct.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ORANSight_Phi_Mini_Instruct-GGUF/resolve/main/ORANSight_Phi_Mini_Instruct.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aleegis12/67935861-1515-4621-8200-b7a56c2ae166 | aleegis12 | 2025-01-26T09:40:27Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T07:46:29Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 67935861-1515-4621-8200-b7a56c2ae166
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-Math-7B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- e0c41a65c97fb0ab_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e0c41a65c97fb0ab_train_data.json
type:
field_instruction: prompt
field_output: org_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/67935861-1515-4621-8200-b7a56c2ae166
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/e0c41a65c97fb0ab_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bc469934-f65d-4554-a373-c57006d470f3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bc469934-f65d-4554-a373-c57006d470f3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 67935861-1515-4621-8200-b7a56c2ae166
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6431 | 0.0001 | 1 | 2.8517 |
| 3.3472 | 0.0056 | 50 | 2.0680 |
| 3.3505 | 0.0112 | 100 | 1.7277 |
| 1.9661 | 0.0169 | 150 | 1.6410 |
| 1.919 | 0.0225 | 200 | 1.6262 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF | mradermacher | 2025-01-26T09:40:27Z | 153 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:jan-hq/Ichigo-llama3.2-base-1B-T2S-2048c",
"base_model:quantized:jan-hq/Ichigo-llama3.2-base-1B-T2S-2048c",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-28T03:48:30Z | ---
base_model: jan-hq/Ichigo-llama3.2-base-1B-T2S-2048c
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jan-hq/Ichigo-llama3.2-base-1B-T2S-2048c
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-IQ1_M.gguf) | i1-IQ1_M | 0.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-IQ2_S.gguf) | i1-IQ2_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-IQ2_M.gguf) | i1-IQ2_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-Q2_K.gguf) | i1-Q2_K | 0.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-IQ3_S.gguf) | i1-IQ3_S | 0.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-IQ3_M.gguf) | i1-IQ3_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-Q4_0.gguf) | i1-Q4_0 | 0.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-Q4_1.gguf) | i1-Q4_1 | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Ichigo-llama3.2-base-1B-T2S-2048c-i1-GGUF/resolve/main/Ichigo-llama3.2-base-1B-T2S-2048c.i1-Q6_K.gguf) | i1-Q6_K | 1.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF | mradermacher | 2025-01-26T09:39:50Z | 281 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B",
"base_model:quantized:jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-28T06:48:57Z | ---
base_model: jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-VENN-Aurora_faustus-8B-i1-GGUF/resolve/main/Kosmos-Elusive-VENN-Aurora_faustus-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
thakkkkkk/6eff970d-b4bc-4420-be91-9f9273dc7159 | thakkkkkk | 2025-01-26T09:39:46Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T09:21:56Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6eff970d-b4bc-4420-be91-9f9273dc7159
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1aa78909d4a8478f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1aa78909d4a8478f_train_data.json
type:
field_input: authors
field_instruction: bibtext
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thakkkkkk/6eff970d-b4bc-4420-be91-9f9273dc7159
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/1aa78909d4a8478f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b9ebf6d0-6fd4-49e9-a309-27f30a2c515b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b9ebf6d0-6fd4-49e9-a309-27f30a2c515b
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
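A minimal sketch of applying the resulting LoRA adapter to the base model (assuming `transformers`, `peft` and `accelerate` are installed; the prompt below is illustrative only):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model named in the config above, then attach the trained adapter.
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-Coder-7B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-Coder-7B")
model = PeftModel.from_pretrained(base, "thakkkkkk/6eff970d-b4bc-4420-be91-9f9273dc7159")

# The adapter was trained to produce a title from a BibTeX entry plus authors.
prompt = "@article{example2024, ...} Jane Doe, John Smith"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```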
# 6eff970d-b4bc-4420-be91-9f9273dc7159
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2962
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: ADAMW_BNB (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 130
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3067 | 1.0 | 130 | 3.2962 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/TheSpice-7b-FT-v0.3.1-GGUF | mradermacher | 2025-01-26T09:39:36Z | 57 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:cgato/TheSpice-7b-FT-v0.3.1",
"base_model:quantized:cgato/TheSpice-7b-FT-v0.3.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-28T07:14:58Z | ---
base_model: cgato/TheSpice-7b-FT-v0.3.1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cgato/TheSpice-7b-FT-v0.3.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
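As a minimal sketch, assuming `huggingface_hub` and `llama-cpp-python` are installed, one of the quants listed below can be fetched and loaded roughly like this:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quant from this repository (filename as listed in the table below).
model_path = hf_hub_download(
    repo_id="mradermacher/TheSpice-7b-FT-v0.3.1-GGUF",
    filename="TheSpice-7b-FT-v0.3.1.Q4_K_M.gguf",
)

# Load the GGUF file and run a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Q: What does GGUF stand for?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```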
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF | mradermacher | 2025-01-26T09:39:31Z | 97 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:cgato/TheSpice-7b-FT-v0.3.1",
"base_model:quantized:cgato/TheSpice-7b-FT-v0.3.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-28T07:24:00Z | ---
base_model: cgato/TheSpice-7b-FT-v0.3.1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/cgato/TheSpice-7b-FT-v0.3.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
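For multi-part files, a minimal Python sketch of the concatenation step looks like this (the part names below are hypothetical; substitute the actual file names you downloaded):
```python
from pathlib import Path

# Hypothetical split files such as TheSpice-7b-FT-v0.3.1.i1-Q6_K.gguf.part1of2, .part2of2, ...
parts = sorted(Path(".").glob("TheSpice-7b-FT-v0.3.1.i1-Q6_K.gguf.part*"))

# Concatenate the parts byte-for-byte, in order, into a single GGUF file.
with open("TheSpice-7b-FT-v0.3.1.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        out.write(part.read_bytes())
```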
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/TheSpice-7b-FT-v0.3.1-i1-GGUF/resolve/main/TheSpice-7b-FT-v0.3.1.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nbninh/f25f1ddd-387c-4b4e-b0e4-974bbd362c8f | nbninh | 2025-01-26T09:38:53Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T09:21:59Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f25f1ddd-387c-4b4e-b0e4-974bbd362c8f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1aa78909d4a8478f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1aa78909d4a8478f_train_data.json
type:
field_input: authors
field_instruction: bibtext
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/f25f1ddd-387c-4b4e-b0e4-974bbd362c8f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/1aa78909d4a8478f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b9ebf6d0-6fd4-49e9-a309-27f30a2c515b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b9ebf6d0-6fd4-49e9-a309-27f30a2c515b
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
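As a rough sketch (assuming `transformers` and `peft` are available), the adapter can also be merged into the base weights for standalone use; the output path is illustrative:
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-Coder-7B")
model = PeftModel.from_pretrained(base, "nbninh/f25f1ddd-387c-4b4e-b0e4-974bbd362c8f")

# Fold the LoRA deltas into the base weights and save a plain checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("qwen2.5-coder-7b-merged")
```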
# f25f1ddd-387c-4b4e-b0e4-974bbd362c8f
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9065 | 0.7700 | 200 | 3.3285 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bunnycore/Qwen-2.5-7B-R1-Stock | bunnycore | 2025-01-26T09:38:46Z | 56 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:merge:Qwen/Qwen2.5-7B-Instruct",
"base_model:bunnycore/Qwen-2.5-7b-rp-lora",
"base_model:merge:bunnycore/Qwen-2.5-7b-rp-lora",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-24T09:30:37Z | ---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
- Qwen/Qwen2.5-7B-Instruct
- bunnycore/Qwen-2.5-7b-rp-lora
- Qwen/Qwen2.5-7B-Instruct
model-index:
- name: Qwen-2.5-7B-R1-Stock
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 75.73
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Qwen-2.5-7B-R1-Stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 34.85
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Qwen-2.5-7B-R1-Stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 0.0
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Qwen-2.5-7B-R1-Stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.6
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Qwen-2.5-7B-R1-Stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.05
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Qwen-2.5-7B-R1-Stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 36.6
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Qwen-2.5-7B-R1-Stock
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)
* [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) + [bunnycore/Qwen-2.5-7b-rp-lora](https://huggingface.co/bunnycore/Qwen-2.5-7b-rp-lora)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
- model: Qwen/Qwen2.5-7B-Instruct
- model: Qwen/Qwen2.5-7B-Instruct+bunnycore/Qwen-2.5-7b-rp-lora
base_model: Qwen/Qwen2.5-7B-Instruct
merge_method: model_stock
parameters:
dtype: bfloat16
tokenizer_source: Qwen/Qwen2.5-7B-Instruct
```
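A minimal sketch of loading the resulting merge with `transformers` (assuming `accelerate` is installed; note the tokenizer follows `tokenizer_source` in the config above):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged checkpoint in bfloat16, matching the dtype in the config above.
model = AutoModelForCausalLM.from_pretrained(
    "bunnycore/Qwen-2.5-7B-R1-Stock", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("bunnycore/Qwen-2.5-7B-R1-Stock")

messages = [{"role": "user", "content": "Briefly, what is model stock merging?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```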
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/bunnycore__Qwen-2.5-7B-R1-Stock-details)
| Metric |Value|
|-------------------|----:|
|Avg. |26.97|
|IFEval (0-Shot) |75.73|
|BBH (3-Shot) |34.85|
|MATH Lvl 5 (4-Shot)| 0.00|
|GPQA (0-shot) | 6.60|
|MuSR (0-shot) | 8.05|
|MMLU-PRO (5-shot) |36.60|
|
mradermacher/Kosmos-Elusive-8b-i1-GGUF | mradermacher | 2025-01-26T09:38:41Z | 385 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jaspionjader/Kosmos-Elusive-8b",
"base_model:quantized:jaspionjader/Kosmos-Elusive-8b",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-28T09:24:44Z | ---
base_model: jaspionjader/Kosmos-Elusive-8b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jaspionjader/Kosmos-Elusive-8b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Kosmos-Elusive-8b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
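To pick a quant programmatically, a small sketch with `huggingface_hub` (assumed installed) can list what this repository provides:
```python
from huggingface_hub import list_repo_files

# Print every GGUF file in the repo so you can choose a type/size from the table below.
files = [f for f in list_repo_files("mradermacher/Kosmos-Elusive-8b-i1-GGUF") if f.endswith(".gguf")]
for name in sorted(files):
    print(name)
```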
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-Elusive-8b-i1-GGUF/resolve/main/Kosmos-Elusive-8b.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
trenden/a8fa3fbc-f651-46a7-98fd-cbbbadc7c348 | trenden | 2025-01-26T09:38:33Z | 7 | 0 | peft | [
"peft",
"safetensors",
"gpt_neo",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:adapter:EleutherAI/gpt-neo-125m",
"license:mit",
"region:us"
] | null | 2025-01-26T09:34:58Z | ---
library_name: peft
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a8fa3fbc-f651-46a7-98fd-cbbbadc7c348
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/gpt-neo-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b1643630c3c18b7c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1643630c3c18b7c_train_data.json
type:
field_input: selected_word
field_instruction: original
field_output: perturbed
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/a8fa3fbc-f651-46a7-98fd-cbbbadc7c348
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/b1643630c3c18b7c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a918c65d-c22f-44cf-830d-7a641192ea86
wandb_project: Birthday-SN56-26-Gradients-On-Demand
wandb_run: your_name
wandb_runid: a918c65d-c22f-44cf-830d-7a641192ea86
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
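A minimal sketch of loading this adapter on the 125M base with the pad token used during training (assumes `transformers` and `peft`; the prompt values are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
tokenizer.pad_token = tokenizer.eos_token  # the config above pads with <|endoftext|>

base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")
model = PeftModel.from_pretrained(base, "trenden/a8fa3fbc-f651-46a7-98fd-cbbbadc7c348")

# Prompt format per the config: '{instruction} {input}'.
inputs = tokenizer("Replace the selected word with a perturbation. quick", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0], skip_special_tokens=True))
```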
# a8fa3fbc-f651-46a7-98fd-cbbbadc7c348
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9552 | 0.0001 | 1 | 0.6996 |
| 1.8161 | 0.0002 | 3 | 0.6996 |
| 2.9898 | 0.0005 | 6 | 0.6980 |
| 5.0345 | 0.0007 | 9 | 0.6917 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/DDPOO-7B-slerp-GGUF | mradermacher | 2025-01-26T09:37:46Z | 55 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp",
"EmbeddedLLM/Mistral-7B-Merge-14-v0.1",
"en",
"base_model:jsfs11/DDPOO-7B-slerp",
"base_model:quantized:jsfs11/DDPOO-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-28T11:16:12Z | ---
base_model: jsfs11/DDPOO-7B-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
- EmbeddedLLM/Mistral-7B-Merge-14-v0.1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jsfs11/DDPOO-7B-slerp
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-GGUF/resolve/main/DDPOO-7B-slerp.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-GGUF/resolve/main/DDPOO-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-GGUF/resolve/main/DDPOO-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-GGUF/resolve/main/DDPOO-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-GGUF/resolve/main/DDPOO-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-GGUF/resolve/main/DDPOO-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-GGUF/resolve/main/DDPOO-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-GGUF/resolve/main/DDPOO-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-GGUF/resolve/main/DDPOO-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-GGUF/resolve/main/DDPOO-7B-slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-GGUF/resolve/main/DDPOO-7B-slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-GGUF/resolve/main/DDPOO-7B-slerp.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nhungphammmmm/6112f4ee-d945-4abd-888c-8795eb91d5bc | nhungphammmmm | 2025-01-26T09:37:43Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T09:21:35Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6112f4ee-d945-4abd-888c-8795eb91d5bc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1aa78909d4a8478f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1aa78909d4a8478f_train_data.json
type:
field_input: authors
field_instruction: bibtext
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhungphammmmm/6112f4ee-d945-4abd-888c-8795eb91d5bc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/1aa78909d4a8478f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b9ebf6d0-6fd4-49e9-a309-27f30a2c515b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b9ebf6d0-6fd4-49e9-a309-27f30a2c515b
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6112f4ee-d945-4abd-888c-8795eb91d5bc
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9123 | 0.7700 | 200 | 3.3296 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/DDPOO-7B-slerp-i1-GGUF | mradermacher | 2025-01-26T09:37:41Z | 160 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp",
"EmbeddedLLM/Mistral-7B-Merge-14-v0.1",
"en",
"base_model:jsfs11/DDPOO-7B-slerp",
"base_model:quantized:jsfs11/DDPOO-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-28T11:21:52Z | ---
base_model: jsfs11/DDPOO-7B-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
- EmbeddedLLM/Mistral-7B-Merge-14-v0.1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jsfs11/DDPOO-7B-slerp
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DDPOO-7B-slerp-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/DDPOO-7B-slerp-i1-GGUF/resolve/main/DDPOO-7B-slerp.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Kosmos-VENN-8B-i1-GGUF | mradermacher | 2025-01-26T09:37:08Z | 697 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jaspionjader/Kosmos-VENN-8B",
"base_model:quantized:jaspionjader/Kosmos-VENN-8B",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-28T12:36:29Z | ---
base_model: jaspionjader/Kosmos-VENN-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jaspionjader/Kosmos-VENN-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Kosmos-VENN-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-VENN-8B-i1-GGUF/resolve/main/Kosmos-VENN-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/An4-7Bv2.1-GGUF | mradermacher | 2025-01-26T09:36:47Z | 54 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"base_model:Smuggling1710/An4-7Bv2.1",
"base_model:quantized:Smuggling1710/An4-7Bv2.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-28T13:09:42Z | ---
base_model: Smuggling1710/An4-7Bv2.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/Smuggling1710/An4-7Bv2.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/An4-7Bv2.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/An4-7Bv2.1-GGUF/resolve/main/An4-7Bv2.1.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/An4-7Bv2.1-GGUF/resolve/main/An4-7Bv2.1.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/An4-7Bv2.1-GGUF/resolve/main/An4-7Bv2.1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/An4-7Bv2.1-GGUF/resolve/main/An4-7Bv2.1.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/An4-7Bv2.1-GGUF/resolve/main/An4-7Bv2.1.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/An4-7Bv2.1-GGUF/resolve/main/An4-7Bv2.1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/An4-7Bv2.1-GGUF/resolve/main/An4-7Bv2.1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/An4-7Bv2.1-GGUF/resolve/main/An4-7Bv2.1.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/An4-7Bv2.1-GGUF/resolve/main/An4-7Bv2.1.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/An4-7Bv2.1-GGUF/resolve/main/An4-7Bv2.1.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/An4-7Bv2.1-GGUF/resolve/main/An4-7Bv2.1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/An4-7Bv2.1-GGUF/resolve/main/An4-7Bv2.1.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mrhunghd/34c41e56-0ae6-4c06-a21d-ca19dc53ce62 | mrhunghd | 2025-01-26T09:36:38Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T09:16:53Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 34c41e56-0ae6-4c06-a21d-ca19dc53ce62
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6f19f313a38a1c32_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6f19f313a38a1c32_train_data.json
type:
field_instruction: prompt
field_output: reference_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrhunghd/34c41e56-0ae6-4c06-a21d-ca19dc53ce62
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/6f19f313a38a1c32_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 55ec2690-2b2b-4297-a55a-e986a52e6c77
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 55ec2690-2b2b-4297-a55a-e986a52e6c77
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 34c41e56-0ae6-4c06-a21d-ca19dc53ce62
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8003
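For reference, a minimal, untested sketch of loading this LoRA adapter on top of its base model with 🤗 Transformers and PEFT (dtype, device map, and the example prompt are placeholder choices; the config above uses the llama3 chat template, which this quick test skips):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/SmolLM2-1.7B-Instruct"
adapter_id = "mrhunghd/34c41e56-0ae6-4c06-a21d-ca19dc53ce62"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA weights from this repository.
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Explain LoRA in one sentence.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```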
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6672 | 0.0252 | 200 | 0.8003 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
demohong/3b7f8bfd-67d9-4bb3-85cc-65ac66d49d21 | demohong | 2025-01-26T09:36:30Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T09:16:49Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3b7f8bfd-67d9-4bb3-85cc-65ac66d49d21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6f19f313a38a1c32_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6f19f313a38a1c32_train_data.json
type:
field_instruction: prompt
field_output: reference_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: demohong/3b7f8bfd-67d9-4bb3-85cc-65ac66d49d21
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/6f19f313a38a1c32_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 55ec2690-2b2b-4297-a55a-e986a52e6c77
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 55ec2690-2b2b-4297-a55a-e986a52e6c77
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 3b7f8bfd-67d9-4bb3-85cc-65ac66d49d21
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8007
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6683 | 0.0252 | 200 | 0.8007 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/EVA-Gutenberg3-Qwen2.5-32B-Q4_K_S-GGUF | Triangle104 | 2025-01-26T09:36:24Z | 37 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"base_model:nbeerbower/EVA-Gutenberg3-Qwen2.5-32B",
"base_model:quantized:nbeerbower/EVA-Gutenberg3-Qwen2.5-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-26T09:14:45Z | ---
license: apache-2.0
library_name: transformers
base_model: nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/EVA-Gutenberg3-Qwen2.5-32B-Q4_K_S-GGUF
This model was converted to GGUF format from [`nbeerbower/EVA-Gutenberg3-Qwen2.5-32B`](https://huggingface.co/nbeerbower/EVA-Gutenberg3-Qwen2.5-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/EVA-Gutenberg3-Qwen2.5-32B) for more details on the model.
---
Model details:
EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2 finetuned on jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo, and nbeerbower/gutenberg-moderne-dpo.
Method:
ORPO tuned with 8x A100 for 2 epochs.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/EVA-Gutenberg3-Qwen2.5-32B-Q4_K_S-GGUF --hf-file eva-gutenberg3-qwen2.5-32b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/EVA-Gutenberg3-Qwen2.5-32B-Q4_K_S-GGUF --hf-file eva-gutenberg3-qwen2.5-32b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/EVA-Gutenberg3-Qwen2.5-32B-Q4_K_S-GGUF --hf-file eva-gutenberg3-qwen2.5-32b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/EVA-Gutenberg3-Qwen2.5-32B-Q4_K_S-GGUF --hf-file eva-gutenberg3-qwen2.5-32b-q4_k_s.gguf -c 2048
```
|
FaridaElhusseiny/TATR_V2_26 | FaridaElhusseiny | 2025-01-26T09:36:13Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"table-transformer",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-01-26T09:35:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
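Until the authors fill this section in, here is a hedged sketch of running the checkpoint for table detection with 🤗 Transformers (the input image is a placeholder; if the repo does not ship an image-processor config, the upstream `microsoft/table-transformer-detection` processor should work instead):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

repo_id = "FaridaElhusseiny/TATR_V2_26"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = TableTransformerForObjectDetection.from_pretrained(repo_id)

image = Image.open("document_page.png").convert("RGB")  # placeholder input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Map predictions back to the original image size and print detections.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.7, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```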
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nblinh/357a41a2-b834-43e9-90f1-7c0ee46ded5d | nblinh | 2025-01-26T09:35:45Z | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T09:27:16Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 357a41a2-b834-43e9-90f1-7c0ee46ded5d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a81446d4442a33f3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a81446d4442a33f3_train_data.json
type:
field_input: source
field_instruction: instruction
field_output: q&a
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh/357a41a2-b834-43e9-90f1-7c0ee46ded5d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a81446d4442a33f3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 06dd0c8c-4fbb-4087-a031-e690941dfc43
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 06dd0c8c-4fbb-4087-a031-e690941dfc43
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 357a41a2-b834-43e9-90f1-7c0ee46ded5d
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 106
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.9976 | 105 | nan |
| 0.0 | 1.0071 | 106 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
featherless-ai-quants/ockerman0-MN-12B-Starcannon-v4-unofficial-GGUF | featherless-ai-quants | 2025-01-26T09:35:39Z | 344 | 0 | null | [
"gguf",
"text-generation",
"base_model:ockerman0/MN-12B-Starcannon-v4-unofficial",
"base_model:quantized:ockerman0/MN-12B-Starcannon-v4-unofficial",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-01-26T09:23:52Z | ---
base_model: ockerman0/MN-12B-Starcannon-v4-unofficial
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ockerman0/MN-12B-Starcannon-v4-unofficial GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [ockerman0-MN-12B-Starcannon-v4-unofficial-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ockerman0-MN-12B-Starcannon-v4-unofficial-GGUF/blob/main/ockerman0-MN-12B-Starcannon-v4-unofficial-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [ockerman0-MN-12B-Starcannon-v4-unofficial-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ockerman0-MN-12B-Starcannon-v4-unofficial-GGUF/blob/main/ockerman0-MN-12B-Starcannon-v4-unofficial-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [ockerman0-MN-12B-Starcannon-v4-unofficial-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ockerman0-MN-12B-Starcannon-v4-unofficial-GGUF/blob/main/ockerman0-MN-12B-Starcannon-v4-unofficial-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [ockerman0-MN-12B-Starcannon-v4-unofficial-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ockerman0-MN-12B-Starcannon-v4-unofficial-GGUF/blob/main/ockerman0-MN-12B-Starcannon-v4-unofficial-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [ockerman0-MN-12B-Starcannon-v4-unofficial-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ockerman0-MN-12B-Starcannon-v4-unofficial-GGUF/blob/main/ockerman0-MN-12B-Starcannon-v4-unofficial-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [ockerman0-MN-12B-Starcannon-v4-unofficial-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ockerman0-MN-12B-Starcannon-v4-unofficial-GGUF/blob/main/ockerman0-MN-12B-Starcannon-v4-unofficial-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [ockerman0-MN-12B-Starcannon-v4-unofficial-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ockerman0-MN-12B-Starcannon-v4-unofficial-GGUF/blob/main/ockerman0-MN-12B-Starcannon-v4-unofficial-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [ockerman0-MN-12B-Starcannon-v4-unofficial-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ockerman0-MN-12B-Starcannon-v4-unofficial-GGUF/blob/main/ockerman0-MN-12B-Starcannon-v4-unofficial-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [ockerman0-MN-12B-Starcannon-v4-unofficial-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ockerman0-MN-12B-Starcannon-v4-unofficial-GGUF/blob/main/ockerman0-MN-12B-Starcannon-v4-unofficial-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [ockerman0-MN-12B-Starcannon-v4-unofficial-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ockerman0-MN-12B-Starcannon-v4-unofficial-GGUF/blob/main/ockerman0-MN-12B-Starcannon-v4-unofficial-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [ockerman0-MN-12B-Starcannon-v4-unofficial-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ockerman0-MN-12B-Starcannon-v4-unofficial-GGUF/blob/main/ockerman0-MN-12B-Starcannon-v4-unofficial-Q8_0.gguf) | 12419.10 MB |
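To use one of these files locally, one option is to download it with `huggingface_hub` and point any GGUF-compatible runtime (e.g. llama.cpp) at the result - a short sketch, with the file name taken from the table above:
```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_S quant from the table above.
path = hf_hub_download(
    repo_id="featherless-ai-quants/ockerman0-MN-12B-Starcannon-v4-unofficial-GGUF",
    filename="ockerman0-MN-12B-Starcannon-v4-unofficial-Q4_K_S.gguf",
)
print(path)  # pass this path to llama.cpp, llama-cpp-python, etc.
```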
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
tarabukinivan/9cea168d-febb-42ce-9cac-f053e1b0a304 | tarabukinivan | 2025-01-26T09:35:33Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T08:43:44Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9cea168d-febb-42ce-9cac-f053e1b0a304
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fb74d07584199815_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb74d07584199815_train_data.json
type:
field_input: my_solu
field_instruction: prompt
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: tarabukinivan/9cea168d-febb-42ce-9cac-f053e1b0a304
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/fb74d07584199815_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 15
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4c1c1215-65d4-42d2-985c-d9d272adff15
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4c1c1215-65d4-42d2-985c-d9d272adff15
warmup_steps: 15
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9cea168d-febb-42ce-9cac-f053e1b0a304
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5796
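For reference, a minimal, untested sketch of loading this adapter with 🤗 Transformers and PEFT; the 4-bit loading mirrors the `load_in_4bit: true` setting in the config above and assumes `bitsandbytes` is installed (plain bf16/fp16 loading works as well):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "Qwen/Qwen2.5-0.5B-Instruct"
adapter_id = "tarabukinivan/9cea168d-febb-42ce-9cac-f053e1b0a304"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)

# Example prompt only; adjust to your task.
messages = [{"role": "user", "content": "Solve step by step: 12 * 9 = ?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```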
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 1.5632 |
| 1.4343 | 0.0002 | 5 | 1.4375 |
| 1.3342 | 0.0003 | 10 | 1.2045 |
| 1.0446 | 0.0005 | 15 | 0.9289 |
| 0.8852 | 0.0007 | 20 | 0.6848 |
| 0.6156 | 0.0008 | 25 | 0.5921 |
| 0.6075 | 0.0010 | 30 | 0.5796 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Monah-8b-Uncensored-GGUF | mradermacher | 2025-01-26T09:35:31Z | 261 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"llama",
"trl",
"sft",
"en",
"base_model:ross-dev/Monah-8b-Uncensored",
"base_model:quantized:ross-dev/Monah-8b-Uncensored",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-28T15:25:50Z | ---
base_model: ross-dev/Monah-8b-Uncensored
extra_gated_fields:
Company: text
Country: country
I want to use this model for:
options:
- Research
- Education
- label: Other
value: other
type: select
Name: text
? You agree to not use the model to conduct experiments that cause harm to human
subjects or use it to obtain illegal knowledge, and I also agree to use this model
for non-commercial use ONLY
: checkbox
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ross-dev/Monah-8b-Uncensored
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Monah-8b-Uncensored-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
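As one concrete option (an untested sketch, not a requirement of this repo): recent versions of `llama-cpp-python` can pull a GGUF straight from the Hub; the file name below is the Q4_K_S entry from the table that follows.
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Monah-8b-Uncensored-GGUF",
    filename="Monah-8b-Uncensored.Q4_K_S.gguf",
    n_ctx=2048,  # placeholder context size
)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(reply["choices"][0]["message"]["content"])
```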
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Monah-8b-Uncensored-GGUF/resolve/main/Monah-8b-Uncensored.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Monah-8b-Uncensored-GGUF/resolve/main/Monah-8b-Uncensored.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Monah-8b-Uncensored-GGUF/resolve/main/Monah-8b-Uncensored.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Monah-8b-Uncensored-GGUF/resolve/main/Monah-8b-Uncensored.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Monah-8b-Uncensored-GGUF/resolve/main/Monah-8b-Uncensored.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Monah-8b-Uncensored-GGUF/resolve/main/Monah-8b-Uncensored.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Monah-8b-Uncensored-GGUF/resolve/main/Monah-8b-Uncensored.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Monah-8b-Uncensored-GGUF/resolve/main/Monah-8b-Uncensored.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Monah-8b-Uncensored-GGUF/resolve/main/Monah-8b-Uncensored.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Monah-8b-Uncensored-GGUF/resolve/main/Monah-8b-Uncensored.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Monah-8b-Uncensored-GGUF/resolve/main/Monah-8b-Uncensored.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Monah-8b-Uncensored-GGUF/resolve/main/Monah-8b-Uncensored.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/SQL-Llama-v0.5-i1-GGUF | mradermacher | 2025-01-26T09:35:15Z | 143 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:IceKingBing/SQL-Llama-v0.5",
"base_model:quantized:IceKingBing/SQL-Llama-v0.5",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-28T15:54:24Z | ---
base_model: IceKingBing/SQL-Llama-v0.5
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/IceKingBing/SQL-Llama-v0.5
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SQL-Llama-v0.5-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-Q4_1.gguf) | i1-Q4_1 | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-Llama-v0.5-i1-GGUF/resolve/main/SQL-Llama-v0.5.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/KukulStanta-7B-GGUF | mradermacher | 2025-01-26T09:33:37Z | 64 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Nitral-AI/KukulStanta-7B",
"base_model:quantized:Nitral-AI/KukulStanta-7B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-12-28T20:49:28Z | ---
base_model: Nitral-AI/KukulStanta-7B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nitral-AI/KukulStanta-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-GGUF/resolve/main/KukulStanta-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-GGUF/resolve/main/KukulStanta-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-GGUF/resolve/main/KukulStanta-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-GGUF/resolve/main/KukulStanta-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-GGUF/resolve/main/KukulStanta-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-GGUF/resolve/main/KukulStanta-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-GGUF/resolve/main/KukulStanta-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-GGUF/resolve/main/KukulStanta-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-GGUF/resolve/main/KukulStanta-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-GGUF/resolve/main/KukulStanta-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-GGUF/resolve/main/KukulStanta-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-GGUF/resolve/main/KukulStanta-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/KukulStanta-7B-i1-GGUF | mradermacher | 2025-01-26T09:33:33Z | 197 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Nitral-AI/KukulStanta-7B",
"base_model:quantized:Nitral-AI/KukulStanta-7B",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-28T21:08:36Z | ---
base_model: Nitral-AI/KukulStanta-7B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nitral-AI/KukulStanta-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/KukulStanta-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/KukulStanta-7B-i1-GGUF/resolve/main/KukulStanta-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
laquythang/a9400f56-5d58-4f68-bf15-34c0fae196bb | laquythang | 2025-01-26T09:33:19Z | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T09:27:24Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a9400f56-5d58-4f68-bf15-34c0fae196bb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a81446d4442a33f3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a81446d4442a33f3_train_data.json
type:
field_input: source
field_instruction: instruction
field_output: q&a
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: laquythang/a9400f56-5d58-4f68-bf15-34c0fae196bb
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a81446d4442a33f3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 06dd0c8c-4fbb-4087-a031-e690941dfc43
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 06dd0c8c-4fbb-4087-a031-e690941dfc43
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a9400f56-5d58-4f68-bf15-34c0fae196bb
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 106
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.9976 | 105 | nan |
| 0.0 | 1.0071 | 106 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Tibetan-Llama2-7B-GGUF | mradermacher | 2025-01-26T09:32:56Z | 63 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ymaoj/Tibetan-Llama2-7B",
"base_model:quantized:ymaoj/Tibetan-Llama2-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-28T22:25:36Z | ---
base_model: ymaoj/Tibetan-Llama2-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ymaoj/Tibetan-Llama2-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-GGUF/resolve/main/Tibetan-Llama2-7B.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-GGUF/resolve/main/Tibetan-Llama2-7B.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-GGUF/resolve/main/Tibetan-Llama2-7B.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-GGUF/resolve/main/Tibetan-Llama2-7B.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-GGUF/resolve/main/Tibetan-Llama2-7B.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-GGUF/resolve/main/Tibetan-Llama2-7B.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-GGUF/resolve/main/Tibetan-Llama2-7B.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-GGUF/resolve/main/Tibetan-Llama2-7B.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-GGUF/resolve/main/Tibetan-Llama2-7B.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-GGUF/resolve/main/Tibetan-Llama2-7B.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-GGUF/resolve/main/Tibetan-Llama2-7B.Q8_0.gguf) | Q8_0 | 7.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-GGUF/resolve/main/Tibetan-Llama2-7B.f16.gguf) | f16 | 14.0 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Breeze-7B-32k-Base-v1_0-GGUF | mradermacher | 2025-01-26T09:32:23Z | 67 | 0 | transformers | [
"transformers",
"gguf",
"zh",
"en",
"base_model:MediaTek-Research/Breeze-7B-32k-Base-v1_0",
"base_model:quantized:MediaTek-Research/Breeze-7B-32k-Base-v1_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-29T00:47:18Z | ---
base_model: MediaTek-Research/Breeze-7B-32k-Base-v1_0
language:
- zh
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/MediaTek-Research/Breeze-7B-32k-Base-v1_0
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Breeze-7B-32k-Base-v1_0-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
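For the multi-part case mentioned above, the parts just need to be concatenated in order before use. A hypothetical sketch (the file names are placeholders; the quants in the table below are single files, so this step is not needed for them):
```python
from pathlib import Path

# Split uploads are typically named <model>.gguf.part1of2, <model>.gguf.part2of2, ...
# Lexicographic sorting is fine for single-digit part counts.
parts = sorted(Path(".").glob("Breeze-7B-32k-Base-v1_0.Q8_0.gguf.part*"))
with open("Breeze-7B-32k-Base-v1_0.Q8_0.gguf", "wb") as merged:
    for part in parts:
        merged.write(part.read_bytes())
```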
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-32k-Base-v1_0-GGUF/resolve/main/Breeze-7B-32k-Base-v1_0.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-32k-Base-v1_0-GGUF/resolve/main/Breeze-7B-32k-Base-v1_0.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-32k-Base-v1_0-GGUF/resolve/main/Breeze-7B-32k-Base-v1_0.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-32k-Base-v1_0-GGUF/resolve/main/Breeze-7B-32k-Base-v1_0.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-32k-Base-v1_0-GGUF/resolve/main/Breeze-7B-32k-Base-v1_0.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-32k-Base-v1_0-GGUF/resolve/main/Breeze-7B-32k-Base-v1_0.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-32k-Base-v1_0-GGUF/resolve/main/Breeze-7B-32k-Base-v1_0.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-32k-Base-v1_0-GGUF/resolve/main/Breeze-7B-32k-Base-v1_0.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-32k-Base-v1_0-GGUF/resolve/main/Breeze-7B-32k-Base-v1_0.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-32k-Base-v1_0-GGUF/resolve/main/Breeze-7B-32k-Base-v1_0.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-32k-Base-v1_0-GGUF/resolve/main/Breeze-7B-32k-Base-v1_0.Q8_0.gguf) | Q8_0 | 8.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-32k-Base-v1_0-GGUF/resolve/main/Breeze-7B-32k-Base-v1_0.f16.gguf) | f16 | 15.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Shay1309/Tester | Shay1309 | 2025-01-26T09:31:26Z | 16 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:quantized:unsloth/Phi-3.5-mini-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-26T09:30:32Z | ---
base_model: unsloth/Phi-3.5-mini-instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** Shay1309
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3.5-mini-instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/infinite-lemonade-SLERP-7B-GGUF | mradermacher | 2025-01-26T09:30:54Z | 91 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:grimjim/infinite-lemonade-SLERP-7B",
"base_model:quantized:grimjim/infinite-lemonade-SLERP-7B",
"endpoints_compatible",
"region:us"
] | null | 2024-12-29T07:28:46Z | ---
base_model: grimjim/infinite-lemonade-SLERP-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/grimjim/infinite-lemonade-SLERP-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/infinite-lemonade-SLERP-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
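For a quick start, a minimal Python sketch using the llama-cpp-python bindings might look like the following; the chosen quant is taken from the table below, while the context size, prompt, and sampling settings are only examples:

```python
# Minimal sketch: download one quant from this repo and run it with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`; settings are illustrative only.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/infinite-lemonade-SLERP-7B-GGUF",
    filename="infinite-lemonade-SLERP-7B.Q4_K_M.gguf",  # any quant from the table below works
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Write one sentence about lemonade.", max_tokens=64)
print(out["choices"][0]["text"])
```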
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/infinite-lemonade-SLERP-7B-GGUF/resolve/main/infinite-lemonade-SLERP-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/infinite-lemonade-SLERP-7B-GGUF/resolve/main/infinite-lemonade-SLERP-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/infinite-lemonade-SLERP-7B-GGUF/resolve/main/infinite-lemonade-SLERP-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/infinite-lemonade-SLERP-7B-GGUF/resolve/main/infinite-lemonade-SLERP-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/infinite-lemonade-SLERP-7B-GGUF/resolve/main/infinite-lemonade-SLERP-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/infinite-lemonade-SLERP-7B-GGUF/resolve/main/infinite-lemonade-SLERP-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/infinite-lemonade-SLERP-7B-GGUF/resolve/main/infinite-lemonade-SLERP-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/infinite-lemonade-SLERP-7B-GGUF/resolve/main/infinite-lemonade-SLERP-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/infinite-lemonade-SLERP-7B-GGUF/resolve/main/infinite-lemonade-SLERP-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/infinite-lemonade-SLERP-7B-GGUF/resolve/main/infinite-lemonade-SLERP-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/infinite-lemonade-SLERP-7B-GGUF/resolve/main/infinite-lemonade-SLERP-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/infinite-lemonade-SLERP-7B-GGUF/resolve/main/infinite-lemonade-SLERP-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
daniel40/300a49dd-e59c-4c82-9ae2-bca7472e1df2 | daniel40 | 2025-01-26T09:30:30Z | 7 | 0 | peft | [
"peft",
"safetensors",
"gpt_neo",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:adapter:EleutherAI/gpt-neo-125m",
"license:mit",
"region:us"
] | null | 2025-01-26T09:27:12Z | ---
library_name: peft
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 300a49dd-e59c-4c82-9ae2-bca7472e1df2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/gpt-neo-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b1643630c3c18b7c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1643630c3c18b7c_train_data.json
type:
field_input: selected_word
field_instruction: original
field_output: perturbed
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/300a49dd-e59c-4c82-9ae2-bca7472e1df2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/b1643630c3c18b7c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a918c65d-c22f-44cf-830d-7a641192ea86
wandb_project: Birthday-SN56-27-Gradients-On-Demand
wandb_run: your_name
wandb_runid: a918c65d-c22f-44cf-830d-7a641192ea86
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 300a49dd-e59c-4c82-9ae2-bca7472e1df2
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6912
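Because this repository contains only a LoRA adapter, it has to be loaded on top of the GPT-Neo base model; a minimal sketch with transformers and peft (the prompt and generation settings are only illustrative) is:

```python
# Minimal sketch: apply this LoRA adapter on top of the GPT-Neo base model it was trained from.
# Assumes `pip install transformers peft`; the prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
model = PeftModel.from_pretrained(base, "daniel40/300a49dd-e59c-4c82-9ae2-bca7472e1df2")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```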
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9552 | 0.0001 | 1 | 0.6996 |
| 1.8172 | 0.0002 | 3 | 0.6996 |
| 2.9968 | 0.0005 | 6 | 0.6979 |
| 5.0159 | 0.0007 | 9 | 0.6912 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF | mradermacher | 2025-01-26T09:29:48Z | 104 | 0 | transformers | [
"transformers",
"gguf",
"cy",
"dataset:techiaith/cofnodycynulliad_en-cy",
"dataset:BangorAI/hysbysiadau-llyw-cymru-1",
"base_model:BangorAI/cyfieithydd-7b-fersiwn-3",
"base_model:quantized:BangorAI/cyfieithydd-7b-fersiwn-3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-29T10:42:47Z | ---
base_model: BangorAI/cyfieithydd-7b-fersiwn-3
datasets:
- techiaith/cofnodycynulliad_en-cy
- BangorAI/hysbysiadau-llyw-cymru-1
language:
- cy
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/BangorAI/cyfieithydd-7b-fersiwn-3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
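The multi-part case mentioned above just means joining the parts back into a single file in order; a generic Python sketch (the file names are placeholders, and a shell `cat` of the parts achieves the same thing) could be:

```python
# Generic sketch: join a split GGUF (e.g. model.gguf.part1of3, model.gguf.part2of3, ...)
# back into one file. File names are placeholders; adjust them to what the repo actually ships.
import glob
import shutil

parts = sorted(glob.glob("model.gguf.part*of*"))  # lexicographic order; sort numerically if >9 parts
with open("model.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
print(f"Merged {len(parts)} parts into model.gguf")
```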
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/cyfieithydd-7b-fersiwn-3-i1-GGUF/resolve/main/cyfieithydd-7b-fersiwn-3.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/japanese-stablelm-3b-4e1t-instruct-GGUF | mradermacher | 2025-01-26T09:29:33Z | 82 | 0 | transformers | [
"transformers",
"gguf",
"japanese-stablelm",
"causal-lm",
"ja",
"base_model:stabilityai/japanese-stablelm-3b-4e1t-instruct",
"base_model:quantized:stabilityai/japanese-stablelm-3b-4e1t-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-29T11:17:03Z | ---
base_model: stabilityai/japanese-stablelm-3b-4e1t-instruct
extra_gated_fields:
Country: text
Email: text
I allow Stability AI to contact me about information related to its models and research: checkbox
Name: text
Organization or Affiliation: text
language:
- ja
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- japanese-stablelm
- causal-lm
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-instruct-GGUF/resolve/main/japanese-stablelm-3b-4e1t-instruct.Q2_K.gguf) | Q2_K | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-instruct-GGUF/resolve/main/japanese-stablelm-3b-4e1t-instruct.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-instruct-GGUF/resolve/main/japanese-stablelm-3b-4e1t-instruct.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-instruct-GGUF/resolve/main/japanese-stablelm-3b-4e1t-instruct.Q3_K_L.gguf) | Q3_K_L | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-instruct-GGUF/resolve/main/japanese-stablelm-3b-4e1t-instruct.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-instruct-GGUF/resolve/main/japanese-stablelm-3b-4e1t-instruct.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-instruct-GGUF/resolve/main/japanese-stablelm-3b-4e1t-instruct.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-instruct-GGUF/resolve/main/japanese-stablelm-3b-4e1t-instruct.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-instruct-GGUF/resolve/main/japanese-stablelm-3b-4e1t-instruct.Q5_K_M.gguf) | Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-instruct-GGUF/resolve/main/japanese-stablelm-3b-4e1t-instruct.Q6_K.gguf) | Q6_K | 2.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-instruct-GGUF/resolve/main/japanese-stablelm-3b-4e1t-instruct.Q8_0.gguf) | Q8_0 | 3.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-instruct-GGUF/resolve/main/japanese-stablelm-3b-4e1t-instruct.f16.gguf) | f16 | 5.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha-GGUF | mradermacher | 2025-01-26T09:29:25Z | 67 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Locutusque/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha",
"base_model:quantized:Locutusque/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-29T11:31:41Z | ---
base_model: Locutusque/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Locutusque/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha-GGUF/resolve/main/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha-GGUF/resolve/main/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha-GGUF/resolve/main/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha-GGUF/resolve/main/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha-GGUF/resolve/main/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha-GGUF/resolve/main/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha-GGUF/resolve/main/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha-GGUF/resolve/main/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha-GGUF/resolve/main/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha-GGUF/resolve/main/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha-GGUF/resolve/main/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha-GGUF/resolve/main/OpenCerebrum-1.5-Mistral-7b-v0.2-alpha.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Viking-7B-GGUF | mradermacher | 2025-01-26T09:29:02Z | 77 | 0 | transformers | [
"transformers",
"gguf",
"fi",
"en",
"da",
"sv",
"no",
"nn",
"is",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:mc4",
"base_model:LumiOpen/Viking-7B",
"base_model:quantized:LumiOpen/Viking-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-29T12:25:57Z | ---
base_model: LumiOpen/Viking-7B
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- mc4
language:
- fi
- en
- da
- sv
- no
- nn
- is
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LumiOpen/Viking-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Viking-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Viking-7B-GGUF/resolve/main/Viking-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Viking-7B-GGUF/resolve/main/Viking-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Viking-7B-GGUF/resolve/main/Viking-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Viking-7B-GGUF/resolve/main/Viking-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Viking-7B-GGUF/resolve/main/Viking-7B.IQ4_XS.gguf) | IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Viking-7B-GGUF/resolve/main/Viking-7B.Q4_K_S.gguf) | Q4_K_S | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Viking-7B-GGUF/resolve/main/Viking-7B.Q4_K_M.gguf) | Q4_K_M | 4.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Viking-7B-GGUF/resolve/main/Viking-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Viking-7B-GGUF/resolve/main/Viking-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Viking-7B-GGUF/resolve/main/Viking-7B.Q6_K.gguf) | Q6_K | 6.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Viking-7B-GGUF/resolve/main/Viking-7B.Q8_0.gguf) | Q8_0 | 8.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Viking-7B-GGUF/resolve/main/Viking-7B.f16.gguf) | f16 | 15.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/sabia-7b-GGUF | mradermacher | 2025-01-26T09:28:59Z | 78 | 0 | transformers | [
"transformers",
"gguf",
"pt",
"base_model:maritaca-ai/sabia-7b",
"base_model:quantized:maritaca-ai/sabia-7b",
"endpoints_compatible",
"region:us"
] | null | 2024-12-29T12:29:54Z | ---
base_model: maritaca-ai/sabia-7b
language:
- pt
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/maritaca-ai/sabia-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/sabia-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/sabia-7b-GGUF/resolve/main/sabia-7b.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/sabia-7b-GGUF/resolve/main/sabia-7b.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/sabia-7b-GGUF/resolve/main/sabia-7b.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/sabia-7b-GGUF/resolve/main/sabia-7b.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/sabia-7b-GGUF/resolve/main/sabia-7b.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/sabia-7b-GGUF/resolve/main/sabia-7b.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sabia-7b-GGUF/resolve/main/sabia-7b.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sabia-7b-GGUF/resolve/main/sabia-7b.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/sabia-7b-GGUF/resolve/main/sabia-7b.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/sabia-7b-GGUF/resolve/main/sabia-7b.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/sabia-7b-GGUF/resolve/main/sabia-7b.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/sabia-7b-GGUF/resolve/main/sabia-7b.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/TinyMixtral_4x220M-UniversalNER-GGUF | mradermacher | 2025-01-26T09:28:04Z | 122 | 0 | transformers | [
"transformers",
"gguf",
"NER",
"Token Classification",
"en",
"dataset:Universal-NER/Pile-NER-definition",
"dataset:Universal-NER/Pile-NER-type",
"dataset:Isotonic/Universal_ner_chatml",
"base_model:Isotonic/TinyMixtral_4x220M-UniversalNER",
"base_model:quantized:Isotonic/TinyMixtral_4x220M-UniversalNER",
"endpoints_compatible",
"region:us"
] | null | 2024-12-29T13:39:57Z | ---
base_model: Isotonic/TinyMixtral_4x220M-UniversalNER
datasets:
- Universal-NER/Pile-NER-definition
- Universal-NER/Pile-NER-type
- Isotonic/Universal_ner_chatml
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- NER
- Token Classification
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Isotonic/TinyMixtral_4x220M-UniversalNER
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/TinyMixtral_4x220M-UniversalNER-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TinyMixtral_4x220M-UniversalNER-GGUF/resolve/main/TinyMixtral_4x220M-UniversalNER.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/TinyMixtral_4x220M-UniversalNER-GGUF/resolve/main/TinyMixtral_4x220M-UniversalNER.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/TinyMixtral_4x220M-UniversalNER-GGUF/resolve/main/TinyMixtral_4x220M-UniversalNER.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TinyMixtral_4x220M-UniversalNER-GGUF/resolve/main/TinyMixtral_4x220M-UniversalNER.Q3_K_L.gguf) | Q3_K_L | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/TinyMixtral_4x220M-UniversalNER-GGUF/resolve/main/TinyMixtral_4x220M-UniversalNER.IQ4_XS.gguf) | IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/TinyMixtral_4x220M-UniversalNER-GGUF/resolve/main/TinyMixtral_4x220M-UniversalNER.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyMixtral_4x220M-UniversalNER-GGUF/resolve/main/TinyMixtral_4x220M-UniversalNER.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyMixtral_4x220M-UniversalNER-GGUF/resolve/main/TinyMixtral_4x220M-UniversalNER.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/TinyMixtral_4x220M-UniversalNER-GGUF/resolve/main/TinyMixtral_4x220M-UniversalNER.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/TinyMixtral_4x220M-UniversalNER-GGUF/resolve/main/TinyMixtral_4x220M-UniversalNER.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TinyMixtral_4x220M-UniversalNER-GGUF/resolve/main/TinyMixtral_4x220M-UniversalNER.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TinyMixtral_4x220M-UniversalNER-GGUF/resolve/main/TinyMixtral_4x220M-UniversalNER.f16.gguf) | f16 | 1.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
JacksonBrune/bb372c15-a216-41a9-a081-a68a71c35158 | JacksonBrune | 2025-01-26T09:27:53Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T09:07:56Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bb372c15-a216-41a9-a081-a68a71c35158
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fb74d07584199815_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb74d07584199815_train_data.json
type:
field_input: my_solu
field_instruction: prompt
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/bb372c15-a216-41a9-a081-a68a71c35158
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/fb74d07584199815_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4c1c1215-65d4-42d2-985c-d9d272adff15
wandb_project: birthdya-sn56-18-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4c1c1215-65d4-42d2-985c-d9d272adff15
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bb372c15-a216-41a9-a081-a68a71c35158
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9860
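As with other LoRA-only repositories, the adapter can also be merged into the Qwen2.5-0.5B-Instruct base model for standalone use; a small sketch (the output directory name is arbitrary):

```python
# Sketch: merge this LoRA adapter into Qwen2.5-0.5B-Instruct and save a standalone model.
# Assumes `pip install transformers peft`; the output directory name is arbitrary.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = PeftModel.from_pretrained(base, "JacksonBrune/bb372c15-a216-41a9-a081-a68a71c35158")

merged = model.merge_and_unload()  # folds the LoRA weights into the base weights
merged.save_pretrained("qwen2.5-0.5b-bb372c15-merged")
AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct").save_pretrained("qwen2.5-0.5b-bb372c15-merged")
```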
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8943 | 0.0000 | 1 | 1.1122 |
| 0.9854 | 0.0001 | 3 | 1.1077 |
| 0.8193 | 0.0002 | 6 | 1.0597 |
| 0.8947 | 0.0003 | 9 | 0.9860 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Best000/2f051ffe-4ad3-495a-8e9a-019db52d16c2 | Best000 | 2025-01-26T09:27:51Z | 7 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | 2025-01-26T09:27:13Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2f051ffe-4ad3-495a-8e9a-019db52d16c2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a81446d4442a33f3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a81446d4442a33f3_train_data.json
type:
field_input: source
field_instruction: instruction
field_output: q&a
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/2f051ffe-4ad3-495a-8e9a-019db52d16c2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/a81446d4442a33f3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 06dd0c8c-4fbb-4087-a031-e690941dfc43
wandb_project: Birthday-SN56-15-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 06dd0c8c-4fbb-4087-a031-e690941dfc43
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2f051ffe-4ad3-495a-8e9a-019db52d16c2
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0095 | 1 | nan |
| 0.0 | 0.0285 | 3 | nan |
| 0.0 | 0.0570 | 6 | nan |
| 0.0 | 0.0855 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/japanese-stablelm-3b-4e1t-base-GGUF | mradermacher | 2025-01-26T09:27:41Z | 83 | 0 | transformers | [
"transformers",
"gguf",
"japanese-stablelm",
"causal-lm",
"ja",
"dataset:wikipedia",
"dataset:mc4",
"dataset:cc100",
"dataset:oscar-corpus/OSCAR-2301",
"dataset:oscar-corpus/OSCAR-2201",
"dataset:cerebras/SlimPajama-627B",
"base_model:stabilityai/japanese-stablelm-3b-4e1t-base",
"base_model:quantized:stabilityai/japanese-stablelm-3b-4e1t-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-29T14:18:15Z | ---
base_model: stabilityai/japanese-stablelm-3b-4e1t-base
datasets:
- wikipedia
- mc4
- cc100
- oscar-corpus/OSCAR-2301
- oscar-corpus/OSCAR-2201
- cerebras/SlimPajama-627B
extra_gated_fields:
Country: text
Email: text
I allow Stability AI to contact me about information related to its models and research: checkbox
Name: text
Organization or Affiliation: text
language:
- ja
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- japanese-stablelm
- causal-lm
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-base
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-base-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-base-GGUF/resolve/main/japanese-stablelm-3b-4e1t-base.Q2_K.gguf) | Q2_K | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-base-GGUF/resolve/main/japanese-stablelm-3b-4e1t-base.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-base-GGUF/resolve/main/japanese-stablelm-3b-4e1t-base.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-base-GGUF/resolve/main/japanese-stablelm-3b-4e1t-base.Q3_K_L.gguf) | Q3_K_L | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-base-GGUF/resolve/main/japanese-stablelm-3b-4e1t-base.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-base-GGUF/resolve/main/japanese-stablelm-3b-4e1t-base.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-base-GGUF/resolve/main/japanese-stablelm-3b-4e1t-base.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-base-GGUF/resolve/main/japanese-stablelm-3b-4e1t-base.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-base-GGUF/resolve/main/japanese-stablelm-3b-4e1t-base.Q5_K_M.gguf) | Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-base-GGUF/resolve/main/japanese-stablelm-3b-4e1t-base.Q6_K.gguf) | Q6_K | 2.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-base-GGUF/resolve/main/japanese-stablelm-3b-4e1t-base.Q8_0.gguf) | Q8_0 | 3.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/japanese-stablelm-3b-4e1t-base-GGUF/resolve/main/japanese-stablelm-3b-4e1t-base.f16.gguf) | f16 | 5.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/llama-3-8b-1m-PoSE-GGUF | mradermacher | 2025-01-26T09:26:09Z | 95 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"axolotl",
"en",
"base_model:winglian/llama-3-8b-1m-PoSE",
"base_model:quantized:winglian/llama-3-8b-1m-PoSE",
"endpoints_compatible",
"region:us"
] | null | 2024-12-29T16:21:14Z | ---
base_model: winglian/llama-3-8b-1m-PoSE
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- axolotl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/winglian/llama-3-8b-1m-PoSE
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/llama-3-8b-1m-PoSE-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-1m-PoSE-GGUF/resolve/main/llama-3-8b-1m-PoSE.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-1m-PoSE-GGUF/resolve/main/llama-3-8b-1m-PoSE.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-1m-PoSE-GGUF/resolve/main/llama-3-8b-1m-PoSE.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-1m-PoSE-GGUF/resolve/main/llama-3-8b-1m-PoSE.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-1m-PoSE-GGUF/resolve/main/llama-3-8b-1m-PoSE.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-1m-PoSE-GGUF/resolve/main/llama-3-8b-1m-PoSE.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-1m-PoSE-GGUF/resolve/main/llama-3-8b-1m-PoSE.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-1m-PoSE-GGUF/resolve/main/llama-3-8b-1m-PoSE.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-1m-PoSE-GGUF/resolve/main/llama-3-8b-1m-PoSE.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-1m-PoSE-GGUF/resolve/main/llama-3-8b-1m-PoSE.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-1m-PoSE-GGUF/resolve/main/llama-3-8b-1m-PoSE.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-1m-PoSE-GGUF/resolve/main/llama-3-8b-1m-PoSE.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/ConspLLM-7b-i1-GGUF | mradermacher | 2025-01-26T09:25:50Z | 121 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:lzw1008/ConspLLM-7b",
"base_model:quantized:lzw1008/ConspLLM-7b",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-29T18:38:25Z | ---
base_model: lzw1008/ConspLLM-7b
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/lzw1008/ConspLLM-7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ConspLLM-7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-Q4_1.gguf) | i1-Q4_1 | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/ConspLLM-7b-i1-GGUF/resolve/main/ConspLLM-7b.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
shrabani0708/test-trainer-sd-123 | shrabani0708 | 2025-01-26T09:24:48Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-26T09:19:09Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: test-trainer-sd-123
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer-sd-123
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6958
- Accuracy: 0.8603
- F1: 0.9055
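A quick way to try the classifier is the transformers pipeline API; since the training data is not documented above, the label meanings in the output are unknown here and the example text is arbitrary:

```python
# Minimal sketch: load the fine-tuned classifier and score a piece of text.
# The training data is not documented above, so the label meanings are unknown here.
from transformers import pipeline

classifier = pipeline("text-classification", model="shrabani0708/test-trainer-sd-123")
print(classifier("This is a sample sentence to classify."))
# -> e.g. [{'label': 'LABEL_0', 'score': 0.97}]  (labels depend on the training data)
```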
## Model description
More information needed
## Intended uses & limitations
More information needed
Pending fuller documentation, a minimal inference sketch is shown below; note that the training dataset and label mapping are not documented, so predictions surface generic `LABEL_*` ids.
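```python
# Minimal sketch: run the fine-tuned checkpoint with the transformers pipeline API.
# Labels come back as generic LABEL_0 / LABEL_1 ids because the label mapping
# is not documented in this card.
from transformers import pipeline

clf = pipeline("text-classification", model="shrabani0708/test-trainer-sd-123")
print(clf("Example sentence to classify."))
```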
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.4489 | 0.8260 | 0.8849 |
| 0.5612 | 2.0 | 918 | 0.3560 | 0.8578 | 0.8990 |
| 0.3362 | 3.0 | 1377 | 0.6958 | 0.8603 | 0.9055 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.5.1
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mradermacher/finemath-ablation-infiwebmath-i1-GGUF | mradermacher | 2025-01-26T09:23:44Z | 493 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:HuggingFaceTB/finemath",
"base_model:HuggingFaceTB/finemath-ablation-infiwebmath",
"base_model:quantized:HuggingFaceTB/finemath-ablation-infiwebmath",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-29T23:48:33Z | ---
base_model: HuggingFaceTB/finemath-ablation-infiwebmath
datasets:
- HuggingFaceTB/finemath
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/HuggingFaceTB/finemath-ablation-infiwebmath
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
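For a programmatic route, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python` (both tool choices are assumptions of this example, not requirements of the repo); it fetches the Q4_K_M quant listed below and runs a short completion:

```python
# Minimal sketch: download one quant from this repo and run it locally.
# The chosen quant and generation settings are illustrative, not prescriptive.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/finemath-ablation-infiwebmath-i1-GGUF",
    filename="finemath-ablation-infiwebmath.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("The derivative of x^2 is", max_tokens=32)
print(out["choices"][0]["text"])
```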
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-IQ2_M.gguf) | i1-IQ2_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-Q4_1.gguf) | i1-Q4_1 | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/finemath-ablation-infiwebmath-i1-GGUF/resolve/main/finemath-ablation-infiwebmath.i1-Q6_K.gguf) | i1-Q6_K | 2.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF | mradermacher | 2025-01-26T09:23:17Z | 184 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jaspionjader/Kosmos-EVAA-PRP-light-8B",
"base_model:quantized:jaspionjader/Kosmos-EVAA-PRP-light-8B",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-30T00:42:49Z | ---
base_model: jaspionjader/Kosmos-EVAA-PRP-light-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jaspionjader/Kosmos-EVAA-PRP-light-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-light-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-light-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nathanialhunt/56d82666-8d5e-44fc-bbf9-7d8eca01d933 | nathanialhunt | 2025-01-26T09:22:59Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T09:21:51Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 56d82666-8d5e-44fc-bbf9-7d8eca01d933
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1aa78909d4a8478f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1aa78909d4a8478f_train_data.json
type:
field_input: authors
field_instruction: bibtext
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/56d82666-8d5e-44fc-bbf9-7d8eca01d933
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/1aa78909d4a8478f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b9ebf6d0-6fd4-49e9-a309-27f30a2c515b
wandb_project: Birthday-SN56-5-Gradients-On-Demand
wandb_run: your_name
wandb_runid: b9ebf6d0-6fd4-49e9-a309-27f30a2c515b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 56d82666-8d5e-44fc-bbf9-7d8eca01d933
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0038 | 1 | nan |
| 0.0 | 0.0500 | 13 | nan |
| 0.0 | 0.1001 | 26 | nan |
| 0.0 | 0.1501 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Kosmos-EVAA-PRP-8B-GGUF | mradermacher | 2025-01-26T09:22:49Z | 62 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jaspionjader/Kosmos-EVAA-PRP-8B",
"base_model:quantized:jaspionjader/Kosmos-EVAA-PRP-8B",
"endpoints_compatible",
"region:us"
] | null | 2024-12-30T01:45:39Z | ---
base_model: jaspionjader/Kosmos-EVAA-PRP-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jaspionjader/Kosmos-EVAA-PRP-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
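As a convenience, a single quant file from the table below can also be fetched programmatically with `huggingface_hub` (an assumption of this example; any GGUF-capable runtime can then load the downloaded file):

```python
# Minimal sketch: fetch one quant file from this repo to a local path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Kosmos-EVAA-PRP-8B-GGUF",
    filename="Kosmos-EVAA-PRP-8B.Q4_K_M.gguf",
)
print(path)  # local path to pass to your GGUF runtime of choice
```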
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Luongdzung/hoa-1b4-sft-his-olora | Luongdzung | 2025-01-26T09:22:45Z | 9 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:vlsp-2023-vllm/hoa-1b4",
"base_model:adapter:vlsp-2023-vllm/hoa-1b4",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-01-26T09:22:41Z | ---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: vlsp-2023-vllm/hoa-1b4
tags:
- generated_from_trainer
model-index:
- name: hoa-1b4-sft-his-olora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hoa-1b4-sft-his-olora
This model is a fine-tuned version of [vlsp-2023-vllm/hoa-1b4](https://huggingface.co/vlsp-2023-vllm/hoa-1b4) on an unknown dataset.
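Since this repository ships a PEFT adapter rather than full model weights, a minimal loading sketch follows (the usage details are an assumption of this example, not documented by the author):

```python
# Minimal sketch: attach the adapter in this repo to its base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("vlsp-2023-vllm/hoa-1b4")
model = PeftModel.from_pretrained(base, "Luongdzung/hoa-1b4-sft-his-olora")
tokenizer = AutoTokenizer.from_pretrained("vlsp-2023-vllm/hoa-1b4")
```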
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1 |
mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF | mradermacher | 2025-01-26T09:22:36Z | 131 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jaspionjader/Kosmos-EVAA-PRP-8B",
"base_model:quantized:jaspionjader/Kosmos-EVAA-PRP-8B",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-30T02:27:22Z | ---
base_model: jaspionjader/Kosmos-EVAA-PRP-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jaspionjader/Kosmos-EVAA-PRP-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kosmos-EVAA-PRP-8B-i1-GGUF/resolve/main/Kosmos-EVAA-PRP-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
DitDahDitDit/ppo-LunarLander-v2 | DitDahDitDit | 2025-01-26T09:21:57Z | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-01-26T09:21:37Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 227.62 +/- 18.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed from the course's default naming convention; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it into a PPO agent.
# The filename below is an assumption; verify it against the repo's files.
checkpoint = load_from_hub(repo_id="DitDahDitDit/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mradermacher/Tibetan-Llama2-7B-i1-GGUF | mradermacher | 2025-01-26T09:20:51Z | 142 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ymaoj/Tibetan-Llama2-7B",
"base_model:quantized:ymaoj/Tibetan-Llama2-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-30T08:55:46Z | ---
base_model: ymaoj/Tibetan-Llama2-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ymaoj/Tibetan-Llama2-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Tibetan-Llama2-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-Q4_1.gguf) | i1-Q4_1 | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Tibetan-Llama2-7B-i1-GGUF/resolve/main/Tibetan-Llama2-7B.i1-Q6_K.gguf) | i1-Q6_K | 5.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
visdata/pi9 | visdata | 2025-01-26T09:19:14Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-26T09:13:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sok-fm/crypt_Labse_v2 | sok-fm | 2025-01-26T09:18:54Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-26T09:17:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/MathCoder-L-7B-GGUF | mradermacher | 2025-01-26T09:18:50Z | 69 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:MathLLMs/MathCoder-L-7B",
"base_model:quantized:MathLLMs/MathCoder-L-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-30T13:58:48Z | ---
base_model: MathLLMs/MathCoder-L-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MathLLMs/MathCoder-L-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MathCoder-L-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MathCoder-L-7B-GGUF/resolve/main/MathCoder-L-7B.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-L-7B-GGUF/resolve/main/MathCoder-L-7B.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-L-7B-GGUF/resolve/main/MathCoder-L-7B.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-L-7B-GGUF/resolve/main/MathCoder-L-7B.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-L-7B-GGUF/resolve/main/MathCoder-L-7B.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-L-7B-GGUF/resolve/main/MathCoder-L-7B.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-L-7B-GGUF/resolve/main/MathCoder-L-7B.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-L-7B-GGUF/resolve/main/MathCoder-L-7B.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-L-7B-GGUF/resolve/main/MathCoder-L-7B.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-L-7B-GGUF/resolve/main/MathCoder-L-7B.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-L-7B-GGUF/resolve/main/MathCoder-L-7B.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-L-7B-GGUF/resolve/main/MathCoder-L-7B.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mrHungddddh/eeb1ac35-9151-4a39-9ceb-de0f51c4f648 | mrHungddddh | 2025-01-26T09:18:44Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/GPT4-x-Vicuna-13b-fp16",
"base_model:adapter:NousResearch/GPT4-x-Vicuna-13b-fp16",
"license:gpl",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T08:38:30Z | ---
library_name: peft
license: gpl
base_model: NousResearch/GPT4-x-Vicuna-13b-fp16
tags:
- axolotl
- generated_from_trainer
model-index:
- name: eeb1ac35-9151-4a39-9ceb-de0f51c4f648
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/GPT4-x-Vicuna-13b-fp16
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e52b680221744693_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e52b680221744693_train_data.json
type:
field_instruction: Context
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHungddddh/eeb1ac35-9151-4a39-9ceb-de0f51c4f648
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e52b680221744693_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bdb465f5-8f34-4b10-be4d-8f69f9d27469
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bdb465f5-8f34-4b10-be4d-8f69f9d27469
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# eeb1ac35-9151-4a39-9ceb-de0f51c4f648
This model is a fine-tuned version of [NousResearch/GPT4-x-Vicuna-13b-fp16](https://huggingface.co/NousResearch/GPT4-x-Vicuna-13b-fp16) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6849 | 0.7319 | 200 | 1.6280 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/marathi-gpt-gemma-2b-i1-GGUF | mradermacher | 2025-01-26T09:18:42Z | 199 | 0 | transformers | [
"transformers",
"gguf",
"mr",
"base_model:l3cube-pune/marathi-gpt-gemma-2b",
"base_model:quantized:l3cube-pune/marathi-gpt-gemma-2b",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-30T14:04:30Z | ---
base_model: l3cube-pune/marathi-gpt-gemma-2b
language: mr
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/l3cube-pune/marathi-gpt-gemma-2b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-IQ1_M.gguf) | i1-IQ1_M | 0.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-IQ2_M.gguf) | i1-IQ2_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-Q2_K.gguf) | i1-Q2_K | 1.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-IQ3_S.gguf) | i1-IQ3_S | 1.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-IQ3_M.gguf) | i1-IQ3_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-Q4_1.gguf) | i1-Q4_1 | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/marathi-gpt-gemma-2b-i1-GGUF/resolve/main/marathi-gpt-gemma-2b.i1-Q6_K.gguf) | i1-Q6_K | 2.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
havinash-ai/07c14bcd-4435-4eb7-bef8-a5f3f2c92c61 | havinash-ai | 2025-01-26T09:18:16Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T09:01:08Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 07c14bcd-4435-4eb7-bef8-a5f3f2c92c61
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - fb74d07584199815_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/fb74d07584199815_train_data.json
  type:
    field_input: my_solu
    field_instruction: prompt
    field_output: solution
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/07c14bcd-4435-4eb7-bef8-a5f3f2c92c61
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/fb74d07584199815_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4c1c1215-65d4-42d2-985c-d9d272adff15
wandb_project: Mine-SN56-2-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4c1c1215-65d4-42d2-985c-d9d272adff15
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 07c14bcd-4435-4eb7-bef8-a5f3f2c92c61
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on an unnamed dataset (the JSON data file referenced in the config above).
It achieves the following results on the evaluation set:
- Loss: 0.9852
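As a hedged usage sketch (not part of the generated card), the adapter can be attached to the base model with PEFT; the repository ids are taken from this card, while the prompt and generation settings are placeholders.
```python
# Hedged sketch: attach this LoRA adapter to its base model with PEFT.
# Assumes `pip install transformers peft`; the prompt is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-0.5B-Instruct"                            # base model from this card
adapter_id = "havinash-ai/07c14bcd-4435-4eb7-bef8-a5f3f2c92c61"   # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # wraps the base with the adapter weights

inputs = tokenizer("Solve: 2 + 2 =", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```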
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8943 | 0.0000 | 1 | 1.1122 |
| 0.9857 | 0.0001 | 3 | 1.1074 |
| 0.8194 | 0.0002 | 6 | 1.0583 |
| 0.8958 | 0.0003 | 9 | 0.9852 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-GGUF | mradermacher | 2025-01-26T09:17:34Z | 80 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mpasila/Kunoichi-DPO-v2-Instruct-32k-7B",
"base_model:quantized:mpasila/Kunoichi-DPO-v2-Instruct-32k-7B",
"endpoints_compatible",
"region:us"
] | null | 2024-12-30T16:37:58Z | ---
base_model: mpasila/Kunoichi-DPO-v2-Instruct-32k-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mpasila/Kunoichi-DPO-v2-Instruct-32k-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
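As an illustrative sketch of the concatenation step mentioned above: split quants are rejoined by plain byte concatenation in part order. The part file names below are placeholders; none of the files in this repo are necessarily split.
```python
# Illustrative sketch only: rejoin a split GGUF by byte concatenation in part
# order. The part file names are placeholders; adjust them to the actual parts.
import shutil

parts = [
    "Kunoichi-DPO-v2-Instruct-32k-7B.Q8_0.gguf.part1of2",  # placeholder name
    "Kunoichi-DPO-v2-Instruct-32k-7B.Q8_0.gguf.part2of2",  # placeholder name
]

with open("Kunoichi-DPO-v2-Instruct-32k-7B.Q8_0.gguf", "wb") as combined:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, combined)  # stream each part into the combined file
```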
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF | mradermacher | 2025-01-26T09:17:21Z | 184 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mpasila/Kunoichi-DPO-v2-Instruct-32k-7B",
"base_model:quantized:mpasila/Kunoichi-DPO-v2-Instruct-32k-7B",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-30T16:53:03Z | ---
base_model: mpasila/Kunoichi-DPO-v2-Instruct-32k-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mpasila/Kunoichi-DPO-v2-Instruct-32k-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kunoichi-DPO-v2-Instruct-32k-7B-i1-GGUF/resolve/main/Kunoichi-DPO-v2-Instruct-32k-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/LLaMA-2-7B-32K-GGUF | mradermacher | 2025-01-26T09:16:44Z | 87 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:togethercomputer/RedPajama-Data-Instruct",
"dataset:EleutherAI/pile",
"dataset:togethercomputer/Long-Data-Collections",
"base_model:togethercomputer/LLaMA-2-7B-32K",
"base_model:quantized:togethercomputer/LLaMA-2-7B-32K",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-12-30T20:12:42Z | ---
base_model: togethercomputer/LLaMA-2-7B-32K
datasets:
- togethercomputer/RedPajama-Data-1T
- togethercomputer/RedPajama-Data-Instruct
- EleutherAI/pile
- togethercomputer/Long-Data-Collections
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/togethercomputer/LLaMA-2-7B-32K
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-GGUF/resolve/main/LLaMA-2-7B-32K.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-GGUF/resolve/main/LLaMA-2-7B-32K.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-GGUF/resolve/main/LLaMA-2-7B-32K.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-GGUF/resolve/main/LLaMA-2-7B-32K.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-GGUF/resolve/main/LLaMA-2-7B-32K.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-GGUF/resolve/main/LLaMA-2-7B-32K.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-GGUF/resolve/main/LLaMA-2-7B-32K.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-GGUF/resolve/main/LLaMA-2-7B-32K.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-GGUF/resolve/main/LLaMA-2-7B-32K.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-GGUF/resolve/main/LLaMA-2-7B-32K.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-GGUF/resolve/main/LLaMA-2-7B-32K.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-GGUF/resolve/main/LLaMA-2-7B-32K.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
prxy5606/f2cb2b81-5744-49cf-990b-1931613a1cc2 | prxy5606 | 2025-01-26T09:16:41Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:trl-internal-testing/tiny-random-LlamaForCausalLM",
"base_model:adapter:trl-internal-testing/tiny-random-LlamaForCausalLM",
"region:us"
] | null | 2025-01-26T09:14:12Z | ---
library_name: peft
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f2cb2b81-5744-49cf-990b-1931613a1cc2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
  - 5f3fb26c99847c1d_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/5f3fb26c99847c1d_train_data.json
  type:
    field_input: post
    field_instruction: title
    field_output: summary
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5606/f2cb2b81-5744-49cf-990b-1931613a1cc2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
  0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/5f3fb26c99847c1d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 821c1640-29f7-45fe-90e6-e51d46a553fe
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 821c1640-29f7-45fe-90e6-e51d46a553fe
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
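As a rough illustration of how the custom `format` fields in the config above turn one record into a training prompt (the exact templating axolotl applies, e.g. the llama3 chat template, may differ; the record values below are made up):
```python
# Rough illustration only: how the custom `format` fields in the config above
# map one record onto a prompt/target pair. Axolotl's real templating may wrap
# this further; the record values are made up.
record = {
    "title": "TIL about gradient checkpointing",  # -> field_instruction
    "post": "Long post body goes here...",        # -> field_input
    "summary": "Trades compute for memory.",      # -> field_output (training target)
}

fmt = "{instruction} {input}"   # `format` used when an input is present
no_input_fmt = "{instruction}"  # `no_input_format` used when it is not

if record.get("post"):
    prompt = fmt.format(instruction=record["title"], input=record["post"])
else:
    prompt = no_input_fmt.format(instruction=record["title"])
target = record["summary"]

print(prompt)
print(target)
```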
# f2cb2b81-5744-49cf-990b-1931613a1cc2
This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on an unnamed dataset (the JSON data file referenced in the config above).
It achieves the following results on the evaluation set:
- Loss: 10.3462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08, and optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3731 | 0.0003 | 1 | 10.3739 |
| 10.3556 | 0.0130 | 50 | 10.3561 |
| 10.3491 | 0.0260 | 100 | 10.3490 |
| 10.3531 | 0.0390 | 150 | 10.3464 |
| 10.3448 | 0.0520 | 200 | 10.3462 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/LLaMA-2-7B-32K-i1-GGUF | mradermacher | 2025-01-26T09:16:40Z | 295 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:togethercomputer/RedPajama-Data-Instruct",
"dataset:EleutherAI/pile",
"dataset:togethercomputer/Long-Data-Collections",
"base_model:togethercomputer/LLaMA-2-7B-32K",
"base_model:quantized:togethercomputer/LLaMA-2-7B-32K",
"license:llama2",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-30T20:31:12Z | ---
base_model: togethercomputer/LLaMA-2-7B-32K
datasets:
- togethercomputer/RedPajama-Data-1T
- togethercomputer/RedPajama-Data-Instruct
- EleutherAI/pile
- togethercomputer/Long-Data-Collections
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/togethercomputer/LLaMA-2-7B-32K
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/LLaMA-2-7B-32K-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-Q4_1.gguf) | i1-Q4_1 | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMA-2-7B-32K-i1-GGUF/resolve/main/LLaMA-2-7B-32K.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
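Individual files from the table above can also be fetched programmatically; a small sketch with `huggingface_hub`, using the repo id and file name from the i1-Q4_K_S row:
```python
# Sketch: fetch one specific quant from this repository with huggingface_hub.
# Repo id and file name are copied from the i1-Q4_K_S row of the table above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/LLaMA-2-7B-32K-i1-GGUF",
    filename="LLaMA-2-7B-32K.i1-Q4_K_S.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```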
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/TinySolar-248m-4k-py-instruct-GGUF | mradermacher | 2025-01-26T09:16:09Z | 81 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:upstage/TinySolar-248m-4k-py-instruct",
"base_model:quantized:upstage/TinySolar-248m-4k-py-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-30T22:23:44Z | ---
base_model: upstage/TinySolar-248m-4k-py-instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/upstage/TinySolar-248m-4k-py-instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/TinySolar-248m-4k-py-instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TinySolar-248m-4k-py-instruct-GGUF/resolve/main/TinySolar-248m-4k-py-instruct.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TinySolar-248m-4k-py-instruct-GGUF/resolve/main/TinySolar-248m-4k-py-instruct.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TinySolar-248m-4k-py-instruct-GGUF/resolve/main/TinySolar-248m-4k-py-instruct.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TinySolar-248m-4k-py-instruct-GGUF/resolve/main/TinySolar-248m-4k-py-instruct.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TinySolar-248m-4k-py-instruct-GGUF/resolve/main/TinySolar-248m-4k-py-instruct.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TinySolar-248m-4k-py-instruct-GGUF/resolve/main/TinySolar-248m-4k-py-instruct.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinySolar-248m-4k-py-instruct-GGUF/resolve/main/TinySolar-248m-4k-py-instruct.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinySolar-248m-4k-py-instruct-GGUF/resolve/main/TinySolar-248m-4k-py-instruct.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/TinySolar-248m-4k-py-instruct-GGUF/resolve/main/TinySolar-248m-4k-py-instruct.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/TinySolar-248m-4k-py-instruct-GGUF/resolve/main/TinySolar-248m-4k-py-instruct.Q6_K.gguf) | Q6_K | 0.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TinySolar-248m-4k-py-instruct-GGUF/resolve/main/TinySolar-248m-4k-py-instruct.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TinySolar-248m-4k-py-instruct-GGUF/resolve/main/TinySolar-248m-4k-py-instruct.f16.gguf) | f16 | 0.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
lesso07/b99e49d9-fe3b-4793-9814-9f3c75d6e4c9 | lesso07 | 2025-01-26T09:15:38Z | 7 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b-it",
"base_model:adapter:unsloth/gemma-2-9b-it",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T09:10:48Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b99e49d9-fe3b-4793-9814-9f3c75d6e4c9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b-it
bf16: true
chat_template: llama3
datasets:
- data_files:
  - fc0e058f4946c2d4_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/fc0e058f4946c2d4_train_data.json
  type:
    field_instruction: prompt
    field_output: chosen
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso07/b99e49d9-fe3b-4793-9814-9f3c75d6e4c9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/fc0e058f4946c2d4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 661e8058-e07d-4d32-92e9-9549011511db
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 661e8058-e07d-4d32-92e9-9549011511db
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
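The config above loads the base model in 8-bit (`load_in_8bit: true`). A hedged sketch of the equivalent standalone Transformers call follows; the device placement is an assumption and the snippet is not taken from this card.
```python
# Hedged sketch: load the base model in 8-bit with bitsandbytes, roughly
# matching `load_in_8bit: true` in the config above. Assumes a CUDA GPU and
# `pip install transformers accelerate bitsandbytes`.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "unsloth/gemma-2-9b-it"  # base model from this card
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,  # 8-bit weights via bitsandbytes
    device_map="auto",               # let accelerate place layers automatically
)
print(model.get_memory_footprint())  # rough check of the quantized footprint
```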
# b99e49d9-fe3b-4793-9814-9f3c75d6e4c9
This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on an unnamed dataset (the JSON data file referenced in the config above).
It achieves the following results on the evaluation set:
- Loss: 1.1306
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4617 | 0.0036 | 1 | 3.1741 |
| 1.9123 | 0.0179 | 5 | 2.3461 |
| 2.6044 | 0.0358 | 10 | 1.3319 |
| 1.2787 | 0.0538 | 15 | 1.1911 |
| 1.1416 | 0.0717 | 20 | 1.1370 |
| 1.5961 | 0.0896 | 25 | 1.1306 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
cunghoctienganh/6c8b2d56-2b09-4ea2-a746-e072aff13953 | cunghoctienganh | 2025-01-26T09:15:13Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b-chat",
"base_model:adapter:unsloth/llama-2-7b-chat",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T08:58:06Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/llama-2-7b-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6c8b2d56-2b09-4ea2-a746-e072aff13953
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-2-7b-chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - d54b8bbf3f45bb00_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/d54b8bbf3f45bb00_train_data.json
  type:
    field_input: reply
    field_instruction: question
    field_output: answer
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/6c8b2d56-2b09-4ea2-a746-e072aff13953
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d54b8bbf3f45bb00_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f573a5a1-33e7-4cca-af15-6e4e2e847f12
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f573a5a1-33e7-4cca-af15-6e4e2e847f12
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6c8b2d56-2b09-4ea2-a746-e072aff13953
This model is a fine-tuned version of [unsloth/llama-2-7b-chat](https://huggingface.co/unsloth/llama-2-7b-chat) on an unnamed dataset (the JSON data file referenced in the config above).
It achieves the following results on the evaluation set:
- Loss: 0.6069
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
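The schedule above (cosine decay with 5 warmup steps over 200 training steps at a peak learning rate of 5e-05) can be reproduced with the standard Transformers helper; a small sketch, using plain AdamW as a stand-in for the 8-bit optimizer actually used:
```python
# Sketch of the learning-rate schedule implied by the hyperparameters above:
# cosine decay, 5 warmup steps, 200 training steps, peak LR 5e-05. Plain AdamW
# stands in for the 8-bit AdamW actually used; the dummy parameter only gives
# the optimizer something to step.
import torch
from transformers import get_cosine_schedule_with_warmup

param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.AdamW([param], lr=5e-05)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=5, num_training_steps=200
)

lrs = []
for _ in range(200):
    optimizer.step()
    scheduler.step()
    lrs.append(scheduler.get_last_lr()[0])

print(lrs[0], max(lrs), lrs[-1])  # ramps up over 5 steps, then decays toward 0
```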
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6635 | 0.4978 | 200 | 0.6069 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF | mradermacher | 2025-01-26T09:14:17Z | 297 | 0 | transformers | [
"transformers",
"gguf",
"nlp",
"math",
"en",
"base_model:microsoft/rho-math-7b-interpreter-v0.1",
"base_model:quantized:microsoft/rho-math-7b-interpreter-v0.1",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-31T07:35:34Z | ---
base_model: microsoft/rho-math-7b-interpreter-v0.1
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- nlp
- math
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/rho-math-7b-interpreter-v0.1-i1-GGUF/resolve/main/rho-math-7b-interpreter-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/NeuralLlama-3-ORPO-GGUF | mradermacher | 2025-01-26T09:13:21Z | 69 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"orpo",
"en",
"base_model:cookinai/NeuralLlama-3-ORPO",
"base_model:quantized:cookinai/NeuralLlama-3-ORPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-31T10:12:32Z | ---
base_model: cookinai/NeuralLlama-3-ORPO
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- orpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cookinai/NeuralLlama-3-ORPO
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-GGUF/resolve/main/NeuralLlama-3-ORPO.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-GGUF/resolve/main/NeuralLlama-3-ORPO.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-GGUF/resolve/main/NeuralLlama-3-ORPO.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-GGUF/resolve/main/NeuralLlama-3-ORPO.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-GGUF/resolve/main/NeuralLlama-3-ORPO.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-GGUF/resolve/main/NeuralLlama-3-ORPO.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-GGUF/resolve/main/NeuralLlama-3-ORPO.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-GGUF/resolve/main/NeuralLlama-3-ORPO.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-GGUF/resolve/main/NeuralLlama-3-ORPO.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-GGUF/resolve/main/NeuralLlama-3-ORPO.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-GGUF/resolve/main/NeuralLlama-3-ORPO.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-GGUF/resolve/main/NeuralLlama-3-ORPO.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/NeuralLlama-3-ORPO-i1-GGUF | mradermacher | 2025-01-26T09:13:14Z | 308 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"orpo",
"en",
"base_model:cookinai/NeuralLlama-3-ORPO",
"base_model:quantized:cookinai/NeuralLlama-3-ORPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-31T10:18:40Z | ---
base_model: cookinai/NeuralLlama-3-ORPO
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- orpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/cookinai/NeuralLlama-3-ORPO
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralLlama-3-ORPO-i1-GGUF/resolve/main/NeuralLlama-3-ORPO.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
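If you would rather fetch one of these quants programmatically than click the links above, the sketch below uses the `huggingface_hub` Python package (an assumption; this card does not itself depend on it) to download the Q4_K_M file named in the table:
```python
# Minimal sketch (assumes the huggingface_hub package is installed):
# download the Q4_K_M quant listed in the table above to the local cache.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/NeuralLlama-3-ORPO-i1-GGUF",
    filename="NeuralLlama-3-ORPO.i1-Q4_K_M.gguf",
)
print(gguf_path)  # local filesystem path of the downloaded .gguf file
```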
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF | mradermacher | 2025-01-26T09:12:49Z | 145 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuralMaxime-7B-slerp",
"Kukedlc/NeuralGlitch-Yam-Peleg-7B-DT",
"Kukedlc/Neural4gsm8k",
"en",
"base_model:Kukedlc/NeuralExperiment-7b-dare-ties",
"base_model:quantized:Kukedlc/NeuralExperiment-7b-dare-ties",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-31T11:29:34Z | ---
base_model: Kukedlc/NeuralExperiment-7b-dare-ties
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralMaxime-7B-slerp
- Kukedlc/NeuralGlitch-Yam-Peleg-7B-DT
- Kukedlc/Neural4gsm8k
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Kukedlc/NeuralExperiment-7b-dare-ties
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
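As a concrete, hedged example of that workflow, the sketch below assumes the optional `huggingface_hub` and `llama-cpp-python` packages (neither is required by this card) and runs a short local completion against the Q4_K_M quant listed further down:
```python
# Minimal sketch (assumes huggingface_hub and llama-cpp-python are installed):
# download the Q4_K_M quant from this repo and run a short local completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF",
    filename="NeuralExperiment-7b-dare-ties.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```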
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralExperiment-7b-dare-ties-i1-GGUF/resolve/main/NeuralExperiment-7b-dare-ties.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
chauhoang/ed84ebb0-6bf9-484d-a116-7e1c4190adaa | chauhoang | 2025-01-26T09:08:43Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b-chat",
"base_model:adapter:unsloth/llama-2-7b-chat",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T08:57:47Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/llama-2-7b-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ed84ebb0-6bf9-484d-a116-7e1c4190adaa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-2-7b-chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d54b8bbf3f45bb00_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d54b8bbf3f45bb00_train_data.json
type:
field_input: reply
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: chauhoang/ed84ebb0-6bf9-484d-a116-7e1c4190adaa
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/d54b8bbf3f45bb00_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f573a5a1-33e7-4cca-af15-6e4e2e847f12
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f573a5a1-33e7-4cca-af15-6e4e2e847f12
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ed84ebb0-6bf9-484d-a116-7e1c4190adaa
This model is a fine-tuned version of [unsloth/llama-2-7b-chat](https://huggingface.co/unsloth/llama-2-7b-chat) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6749
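Because this repository holds a LoRA adapter (see `adapter: lora` in the config above) rather than full model weights, a minimal usage sketch, not part of the original card, is to attach the adapter to the stated base model with PEFT:
```python
# Sketch: attach the LoRA adapter in this repo to its base model.
# Loading in full precision is an assumption; quantized loading also works.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-2-7b-chat"
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, "chauhoang/ed84ebb0-6bf9-484d-a116-7e1c4190adaa")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```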
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0025 | 1 | 0.8898 |
| 0.8429 | 0.0249 | 10 | 0.8405 |
| 0.7508 | 0.0498 | 20 | 0.7405 |
| 0.7041 | 0.0747 | 30 | 0.6970 |
| 0.6825 | 0.0996 | 40 | 0.6783 |
| 0.6719 | 0.1245 | 50 | 0.6749 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
arash-rasouli/bert-base-uncased-idiom-classification | arash-rasouli | 2025-01-26T09:08:39Z | 264 | 0 | null | [
"safetensors",
"bert",
"text-classification",
"en",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | text-classification | 2025-01-26T08:59:58Z | ---
license: apache-2.0
language:
- en
base_model:
- google-bert/bert-base-uncased
pipeline_tag: text-classification
---
|
winnieyangwannan/Yi-6B-Chat_honest_lying_sft_to_lie_lora_False | winnieyangwannan | 2025-01-26T09:08:33Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"Yi-6B-Chat",
"honest_lying",
"sft_to_lie",
"lora_False",
"trl",
"sft",
"conversational",
"base_model:01-ai/Yi-6B-Chat",
"base_model:finetune:01-ai/Yi-6B-Chat",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-25T13:07:50Z | ---
base_model: 01-ai/Yi-6B-Chat
library_name: transformers
model_name: Yi-6B-Chat_honest_lying_sft_to_lie_lora_False
tags:
- generated_from_trainer
- Yi-6B-Chat
- honest_lying
- sft_to_lie
- lora_False
- trl
- sft
licence: license
---
# Model Card for Yi-6B-Chat_honest_lying_sft_to_lie_lora_False
This model is a fine-tuned version of [01-ai/Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="winnieyangwannan/Yi-6B-Chat_honest_lying_sft_to_lie_lora_False", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/winnie96/huggingface/runs/8qrszu0f)
This model was trained with SFT.
### Framework versions
- TRL: 0.14.0.dev0
- Transformers: 4.47.1
- Pytorch: 2.3.1+cu118
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
datlaaaaaaa/db66bb84-9297-4517-a271-1bc6e304b4ad | datlaaaaaaa | 2025-01-26T09:07:33Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/GPT4-x-Vicuna-13b-fp16",
"base_model:adapter:NousResearch/GPT4-x-Vicuna-13b-fp16",
"license:gpl",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T08:36:31Z | ---
library_name: peft
license: gpl
base_model: NousResearch/GPT4-x-Vicuna-13b-fp16
tags:
- axolotl
- generated_from_trainer
model-index:
- name: db66bb84-9297-4517-a271-1bc6e304b4ad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/GPT4-x-Vicuna-13b-fp16
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e52b680221744693_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e52b680221744693_train_data.json
type:
field_instruction: Context
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/db66bb84-9297-4517-a271-1bc6e304b4ad
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e52b680221744693_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bdb465f5-8f34-4b10-be4d-8f69f9d27469
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bdb465f5-8f34-4b10-be4d-8f69f9d27469
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# db66bb84-9297-4517-a271-1bc6e304b4ad
This model is a fine-tuned version of [NousResearch/GPT4-x-Vicuna-13b-fp16](https://huggingface.co/NousResearch/GPT4-x-Vicuna-13b-fp16) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6276
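Since the training config above loads the base model in 8-bit, a matching inference sketch (assuming the optional `bitsandbytes` and `accelerate` packages, which this card does not document) is to quantize the base the same way before attaching the adapter:
```python
# Sketch: load the base in 8-bit, as in the training config, then attach this LoRA adapter.
# bitsandbytes and accelerate are assumed to be installed.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "NousResearch/GPT4-x-Vicuna-13b-fp16"
bnb = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(model, "datlaaaaaaa/db66bb84-9297-4517-a271-1bc6e304b4ad")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```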
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6847 | 0.7319 | 200 | 1.6276 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhoxinh/35c71a61-945b-48b5-9f41-f6bd4d4ea4b0 | nhoxinh | 2025-01-26T09:04:54Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/GPT4-x-Vicuna-13b-fp16",
"base_model:adapter:NousResearch/GPT4-x-Vicuna-13b-fp16",
"license:gpl",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T08:38:43Z | ---
library_name: peft
license: gpl
base_model: NousResearch/GPT4-x-Vicuna-13b-fp16
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 35c71a61-945b-48b5-9f41-f6bd4d4ea4b0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/GPT4-x-Vicuna-13b-fp16
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e52b680221744693_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e52b680221744693_train_data.json
type:
field_instruction: Context
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/35c71a61-945b-48b5-9f41-f6bd4d4ea4b0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e52b680221744693_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bdb465f5-8f34-4b10-be4d-8f69f9d27469
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bdb465f5-8f34-4b10-be4d-8f69f9d27469
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 35c71a61-945b-48b5-9f41-f6bd4d4ea4b0
This model is a fine-tuned version of [NousResearch/GPT4-x-Vicuna-13b-fp16](https://huggingface.co/NousResearch/GPT4-x-Vicuna-13b-fp16) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6267
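If standalone weights are more convenient than a base-plus-adapter pair, PEFT can fold the LoRA deltas into the base model; the sketch below is not part of the original card and assumes enough memory for the fp16 13B base:
```python
# Sketch: merge this LoRA adapter into its base model and save standalone weights.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("NousResearch/GPT4-x-Vicuna-13b-fp16")
model = PeftModel.from_pretrained(base, "nhoxinh/35c71a61-945b-48b5-9f41-f6bd4d4ea4b0")
merged = model.merge_and_unload()  # folds the LoRA weights into the base layers
merged.save_pretrained("gpt4-x-vicuna-13b-merged")  # hypothetical output directory
```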
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6829 | 0.7319 | 200 | 1.6267 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gvo1112/task-1-microsoft-Phi-3-mini-4k-instruct-1737882221 | gvo1112 | 2025-01-26T09:04:32Z | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"region:us"
] | null | 2025-01-26T09:04:21Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
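The card leaves this section blank. Since the metadata identifies the repo as a PEFT adapter for `microsoft/Phi-3-mini-4k-instruct`, one plausible starting point, offered here as an assumption rather than the author's documented usage, is:
```python
# Sketch only: load the adapter together with its base model via PEFT's auto class.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "gvo1112/task-1-microsoft-Phi-3-mini-4k-instruct-1737882221"
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
```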
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
THU-KEG/OpenSAE-LLaMA-3.1-Layer_01 | THU-KEG | 2025-01-26T09:04:21Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-01-26T08:51:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
philip-hightech/6ab73dde-501d-4cc2-ad5c-504df19abb39 | philip-hightech | 2025-01-26T09:03:38Z | 11 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T08:44:28Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6ab73dde-501d-4cc2-ad5c-504df19abb39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fb74d07584199815_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb74d07584199815_train_data.json
type:
field_input: my_solu
field_instruction: prompt
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/6ab73dde-501d-4cc2-ad5c-504df19abb39
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/fb74d07584199815_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4c1c1215-65d4-42d2-985c-d9d272adff15
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4c1c1215-65d4-42d2-985c-d9d272adff15
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6ab73dde-501d-4cc2-ad5c-504df19abb39
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8943 | 0.0000 | 1 | 1.1122 |
| 0.9835 | 0.0001 | 3 | 1.1074 |
| 0.8215 | 0.0002 | 6 | 1.0586 |
| 0.8926 | 0.0003 | 9 | 0.9833 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
daniel40/57950896-2375-4fa7-8ad1-d07d0df672fb | daniel40 | 2025-01-26T09:02:35Z | 7 | 0 | peft | [
"peft",
"safetensors",
"dbrx",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-dbrx",
"base_model:adapter:katuni4ka/tiny-random-dbrx",
"region:us"
] | null | 2025-01-26T09:01:55Z | ---
library_name: peft
base_model: katuni4ka/tiny-random-dbrx
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 57950896-2375-4fa7-8ad1-d07d0df672fb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-dbrx
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1ec521f976f6a750_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1ec521f976f6a750_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/57950896-2375-4fa7-8ad1-d07d0df672fb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/1ec521f976f6a750_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0c734ece-7ae1-4e12-a8ca-c4bd260d197a
wandb_project: Birthday-SN56-31-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0c734ece-7ae1-4e12-a8ca-c4bd260d197a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 57950896-2375-4fa7-8ad1-d07d0df672fb
This model is a fine-tuned version of [katuni4ka/tiny-random-dbrx](https://huggingface.co/katuni4ka/tiny-random-dbrx) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 11.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 46.0 | 0.0005 | 1 | 11.5 |
| 46.0 | 0.0014 | 3 | 11.5 |
| 46.0 | 0.0028 | 6 | 11.5 |
| 46.0 | 0.0043 | 9 | 11.5 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
myhaaaaaaa/5d34f3e8-982e-438b-b6ca-c4f941ff9a17 | myhaaaaaaa | 2025-01-26T09:02:27Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/GPT4-x-Vicuna-13b-fp16",
"base_model:adapter:NousResearch/GPT4-x-Vicuna-13b-fp16",
"license:gpl",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T08:36:31Z | ---
library_name: peft
license: gpl
base_model: NousResearch/GPT4-x-Vicuna-13b-fp16
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5d34f3e8-982e-438b-b6ca-c4f941ff9a17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/GPT4-x-Vicuna-13b-fp16
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e52b680221744693_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e52b680221744693_train_data.json
type:
field_instruction: Context
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: myhaaaaaaa/5d34f3e8-982e-438b-b6ca-c4f941ff9a17
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e52b680221744693_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bdb465f5-8f34-4b10-be4d-8f69f9d27469
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bdb465f5-8f34-4b10-be4d-8f69f9d27469
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5d34f3e8-982e-438b-b6ca-c4f941ff9a17
This model is a fine-tuned version of [NousResearch/GPT4-x-Vicuna-13b-fp16](https://huggingface.co/NousResearch/GPT4-x-Vicuna-13b-fp16) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6828 | 0.7319 | 200 | 1.6278 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
trenden/3d7591d8-5fc3-45a0-86a6-ae43e34bf30c | trenden | 2025-01-26T09:02:18Z | 6 | 0 | peft | [
"peft",
"safetensors",
"dbrx",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-dbrx",
"base_model:adapter:katuni4ka/tiny-random-dbrx",
"region:us"
] | null | 2025-01-26T09:01:37Z | ---
library_name: peft
base_model: katuni4ka/tiny-random-dbrx
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3d7591d8-5fc3-45a0-86a6-ae43e34bf30c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-dbrx
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1ec521f976f6a750_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1ec521f976f6a750_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/3d7591d8-5fc3-45a0-86a6-ae43e34bf30c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/1ec521f976f6a750_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0c734ece-7ae1-4e12-a8ca-c4bd260d197a
wandb_project: Birthday-SN56-3-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0c734ece-7ae1-4e12-a8ca-c4bd260d197a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3d7591d8-5fc3-45a0-86a6-ae43e34bf30c
This model is a fine-tuned version of [katuni4ka/tiny-random-dbrx](https://huggingface.co/katuni4ka/tiny-random-dbrx) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 11.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 46.0 | 0.0005 | 1 | 11.5 |
| 46.0 | 0.0062 | 13 | 11.5 |
| 46.0 | 0.0123 | 26 | 11.5 |
| 46.0 | 0.0185 | 39 | 11.5 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thdihan/Gemma_9Bit_ftoPsych8k_GGUF | thdihan | 2025-01-26T09:02:13Z | 33 | 0 | transformers | [
"transformers",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-2-9b-it-bnb-4bit",
"base_model:quantized:unsloth/gemma-2-9b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-26T08:58:20Z | ---
base_model: unsloth/gemma-2-9b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thdihan
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-9b-it-bnb-4bit
This Gemma 2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hks1444/xlm_hate_span_detection_final | hks1444 | 2025-01-26T09:02:07Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:dbmdz/bert-base-turkish-cased",
"base_model:finetune:dbmdz/bert-base-turkish-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-01-26T07:41:22Z | ---
library_name: transformers
license: mit
base_model: dbmdz/bert-base-turkish-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm_hate_span_detection_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm_hate_span_detection_final
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1929
- Precision: 0.4481
- Recall: 0.6142
- F1: 0.5182
- Accuracy: 0.9517
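For span extraction at inference time, a minimal sketch, not part of the original card, is to run the checkpoint through the token-classification pipeline; the aggregation strategy below is an assumption, chosen only to group sub-word tokens into whole spans:
```python
# Sketch: tag hate-speech spans in a Turkish sentence with the fine-tuned checkpoint.
from transformers import pipeline

span_tagger = pipeline(
    "token-classification",
    model="hks1444/xlm_hate_span_detection_final",
    aggregation_strategy="simple",  # groups sub-word predictions into whole spans
)
print(span_tagger("Bu bir örnek cümledir."))  # placeholder input sentence
```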
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 75 | 0.2048 | 0.0 | 0.0 | 0.0 | 0.9476 |
| 0.4588 | 2.0 | 150 | 0.1353 | 0.5556 | 0.4748 | 0.5120 | 0.9600 |
| 0.1302 | 3.0 | 225 | 0.1309 | 0.5541 | 0.5163 | 0.5346 | 0.9614 |
| 0.0744 | 4.0 | 300 | 0.1342 | 0.5825 | 0.5341 | 0.5573 | 0.9625 |
| 0.0744 | 5.0 | 375 | 0.1495 | 0.6047 | 0.5312 | 0.5656 | 0.9637 |
| 0.0433 | 6.0 | 450 | 0.1733 | 0.5385 | 0.5608 | 0.5494 | 0.9578 |
| 0.0283 | 7.0 | 525 | 0.1675 | 0.5497 | 0.5905 | 0.5694 | 0.9596 |
| 0.019 | 8.0 | 600 | 0.1749 | 0.5360 | 0.6409 | 0.5838 | 0.9591 |
| 0.019 | 9.0 | 675 | 0.1938 | 0.5363 | 0.4599 | 0.4952 | 0.9599 |
| 0.0117 | 10.0 | 750 | 0.2017 | 0.5417 | 0.5401 | 0.5409 | 0.9590 |
| 0.0087 | 11.0 | 825 | 0.2162 | 0.5435 | 0.5935 | 0.5674 | 0.9574 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
robiual-awal/1327334f-fd9d-4421-a442-f1946a84013e | robiual-awal | 2025-01-26T09:01:56Z | 6 | 0 | peft | [
"peft",
"safetensors",
"dbrx",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-dbrx",
"base_model:adapter:katuni4ka/tiny-random-dbrx",
"region:us"
] | null | 2025-01-26T09:01:18Z | ---
library_name: peft
base_model: katuni4ka/tiny-random-dbrx
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1327334f-fd9d-4421-a442-f1946a84013e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-dbrx
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1ec521f976f6a750_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1ec521f976f6a750_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/1327334f-fd9d-4421-a442-f1946a84013e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/1ec521f976f6a750_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0c734ece-7ae1-4e12-a8ca-c4bd260d197a
wandb_project: Birthday-SN56-29-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0c734ece-7ae1-4e12-a8ca-c4bd260d197a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1327334f-fd9d-4421-a442-f1946a84013e
This model is a fine-tuned version of [katuni4ka/tiny-random-dbrx](https://huggingface.co/katuni4ka/tiny-random-dbrx) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 11.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 46.0 | 0.0005 | 1 | 11.5 |
| 46.0 | 0.0014 | 3 | 11.5 |
| 46.0 | 0.0028 | 6 | 11.5 |
| 46.0 | 0.0043 | 9 | 11.5 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
John6666/jaim-just-another-illustrious-merge-v2-sdxl | John6666 | 2025-01-26T09:01:54Z | 171 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"realistic",
"2.5D",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:finetune:Laxhar/noobai-XL-1.1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-01-26T08:54:22Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- realistic
- 2.5D
- illustrious
base_model: Laxhar/noobai-XL-1.1
---
Original model is [here](https://civitai.com/models/1165105?modelVersionId=1331502).
This model was created by [infamous__fish](https://civitai.com/user/infamous__fish).
|
ClarenceDan/43f4ed27-97bf-4b03-a6c3-f5ced7578983 | ClarenceDan | 2025-01-26T09:01:38Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T08:43:28Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 43f4ed27-97bf-4b03-a6c3-f5ced7578983
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fb74d07584199815_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb74d07584199815_train_data.json
type:
field_input: my_solu
field_instruction: prompt
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/43f4ed27-97bf-4b03-a6c3-f5ced7578983
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/fb74d07584199815_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4c1c1215-65d4-42d2-985c-d9d272adff15
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4c1c1215-65d4-42d2-985c-d9d272adff15
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 43f4ed27-97bf-4b03-a6c3-f5ced7578983
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9855
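A minimal loading sketch is shown below; it attaches the adapter to the listed base model and generates with the base model's chat template. The repo IDs come from the config above, and the prompt and dtype are only illustrative.

```python
# Minimal sketch: attach the LoRA adapter to the base model with PeftModel,
# then generate via the instruct model's chat template.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-0.5B-Instruct"
adapter_id = "ClarenceDan/43f4ed27-97bf-4b03-a6c3-f5ced7578983"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "What is 12 * 7?"}]  # illustrative prompt
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```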
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8943 | 0.0000 | 1 | 1.1122 |
| 0.9833 | 0.0001 | 3 | 1.1074 |
| 0.8187 | 0.0002 | 6 | 1.0593 |
| 0.8942 | 0.0003 | 9 | 0.9855 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso/7a1ad27e-8298-408e-9f09-983fecc34aa7 | lesso | 2025-01-26T09:01:30Z | 6 | 0 | peft | [
"peft",
"safetensors",
"dbrx",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-dbrx",
"base_model:adapter:katuni4ka/tiny-random-dbrx",
"region:us"
] | null | 2025-01-26T09:00:50Z | ---
library_name: peft
base_model: katuni4ka/tiny-random-dbrx
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7a1ad27e-8298-408e-9f09-983fecc34aa7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-dbrx
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1ec521f976f6a750_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1ec521f976f6a750_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso/7a1ad27e-8298-408e-9f09-983fecc34aa7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/1ec521f976f6a750_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0c734ece-7ae1-4e12-a8ca-c4bd260d197a
wandb_project: lesso18
wandb_run: your_name
wandb_runid: 0c734ece-7ae1-4e12-a8ca-c4bd260d197a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7a1ad27e-8298-408e-9f09-983fecc34aa7
This model is a fine-tuned version of [katuni4ka/tiny-random-dbrx](https://huggingface.co/katuni4ka/tiny-random-dbrx) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.5
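If a standalone checkpoint is preferred over a base-plus-adapter pair, the LoRA weights can be merged into the base model. The sketch below is a minimal example; the output directory name is hypothetical.

```python
# Minimal sketch: merge the LoRA weights into the base model and save a standalone checkpoint.
from peft import AutoPeftModelForCausalLM

adapter_id = "lesso/7a1ad27e-8298-408e-9f09-983fecc34aa7"

model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, trust_remote_code=True)
merged = model.merge_and_unload()        # fold the LoRA deltas into the base weights
merged.save_pretrained("merged-model")   # hypothetical output directory; loads without PEFT afterwards
```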
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 46.0 | 0.0946 | 200 | 11.5 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
subhamiiita1/t5-model | subhamiiita1 | 2025-01-26T09:00:22Z | 25 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-01-26T08:59:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
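In the meantime, a minimal sketch is shown below. It assumes the checkpoint loads with the standard Transformers auto classes for text2text-generation; the prefixed prompt is only an illustration of T5-style usage, since the actual training task is not documented here.

```python
# Minimal sketch: run the checkpoint as a text2text-generation model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "subhamiiita1/t5-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative T5-style prefixed prompt; the model's intended task is not documented above.
inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```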
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nathanialhunt/368e740a-f8e0-448a-8452-1c82cf05c622 | nathanialhunt | 2025-01-26T09:00:09Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b-chat",
"base_model:adapter:unsloth/llama-2-7b-chat",
"license:apache-2.0",
"region:us"
] | null | 2025-01-26T08:58:12Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/llama-2-7b-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 368e740a-f8e0-448a-8452-1c82cf05c622
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-2-7b-chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d54b8bbf3f45bb00_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d54b8bbf3f45bb00_train_data.json
type:
field_input: reply
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/368e740a-f8e0-448a-8452-1c82cf05c622
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/d54b8bbf3f45bb00_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f573a5a1-33e7-4cca-af15-6e4e2e847f12
wandb_project: Birthday-SN56-5-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f573a5a1-33e7-4cca-af15-6e4e2e847f12
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 368e740a-f8e0-448a-8452-1c82cf05c622
This model is a fine-tuned version of [unsloth/llama-2-7b-chat](https://huggingface.co/unsloth/llama-2-7b-chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
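Because the reported validation loss is `nan`, it is worth inspecting the adapter configuration and sampling a few outputs before relying on this checkpoint. The sketch below is a minimal example: the printed fields are expected to mirror the axolotl config above, and the prompt is illustrative.

```python
# Minimal sketch: read the LoRA configuration from the Hub and load the adapter for a spot check.
from peft import AutoPeftModelForCausalLM, PeftConfig
from transformers import AutoTokenizer

adapter_id = "nathanialhunt/368e740a-f8e0-448a-8452-1c82cf05c622"

cfg = PeftConfig.from_pretrained(adapter_id)
print(cfg.base_model_name_or_path, cfg.r, cfg.lora_alpha)  # expected to mirror the axolotl config above

tokenizer = AutoTokenizer.from_pretrained(cfg.base_model_name_or_path)
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```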
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0025 | 1 | nan |
| 0.0 | 0.0324 | 13 | nan |
| 0.0 | 0.0647 | 26 | nan |
| 0.0 | 0.0971 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thangla01/98eae68c-3ece-46c7-ab65-96eecac90486 | thangla01 | 2025-01-26T08:59:50Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-7B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-26T08:25:23Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 98eae68c-3ece-46c7-ab65-96eecac90486
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3b1817e1a326e619_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3b1817e1a326e619_train_data.json
type:
field_instruction: data
field_output: criteria
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thangla01/98eae68c-3ece-46c7-ab65-96eecac90486
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3b1817e1a326e619_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a5b61cfd-85d2-4880-97d8-24759f842d7d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a5b61cfd-85d2-4880-97d8-24759f842d7d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 98eae68c-3ece-46c7-ab65-96eecac90486
This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2712
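The config above enables both `load_in_8bit` and `load_in_4bit`; the minimal sketch below assumes 8-bit loading with bitsandbytes and then attaches the adapter. Repo IDs come from the config, and the prompt is illustrative.

```python
# Minimal sketch: load the base model in 8-bit with bitsandbytes and attach the LoRA adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "Qwen/Qwen2-7B-Instruct"
adapter_id = "thangla01/98eae68c-3ece-46c7-ab65-96eecac90486"

bnb_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("List three evaluation criteria for a dataset:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```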
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3802 | 0.0424 | 200 | 1.2712 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |