| Column | Type | Range / values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-26 18:27:55 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 499 classes |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-26 18:27:32 |
| card | string | length 11 to 1.01M |

minhtrannnn/93f34ccb-4359-4558-aa70-097a6d651c99 | minhtrannnn | 2025-01-29T08:01:38Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T07:16:11Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 93f34ccb-4359-4558-aa70-097a6d651c99
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 425476553ab111b0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/425476553ab111b0_train_data.json
type:
field_input: Content
field_instruction: Title
field_output: Summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: minhtrannnn/93f34ccb-4359-4558-aa70-097a6d651c99
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/425476553ab111b0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6972c938-4c63-447c-ab05-b15cf2af5926
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6972c938-4c63-447c-ab05-b15cf2af5926
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
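Under this `type` block, axolotl fills `{instruction}` from the `Title` field and `{input}` from the `Content` field, and uses `Summary` as the completion target. A rough sketch of the mapping (illustration only, not axolotl's actual code):

```py
record = {"Title": "...", "Content": "...", "Summary": "..."}

# field_instruction -> {instruction}, field_input -> {input}, field_output -> label
prompt = "{instruction} {input}".format(instruction=record["Title"], input=record["Content"])
target = record["Summary"]
```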
</details><br>
# 93f34ccb-4359-4558-aa70-097a6d651c99
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6902
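Because this repository contains only the LoRA adapter, it has to be attached to the base model at load time. A minimal loading sketch, assuming the standard `transformers`/`peft` APIs (the prompt and generation settings are placeholders):

```py
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen2.5-Coder-1.5B-Instruct"
adapter_id = "minhtrannnn/93f34ccb-4359-4558-aa70-097a6d651c99"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```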
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9832 | 0.0233 | 200 | 1.6902 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
NalDice/askvox-1.3 | NalDice | 2025-01-29T08:00:46Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T07:55:58Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** NalDice
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.1-8B-Instruct
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
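The repository hosts full model weights (tagged `transformers`/`safetensors`) rather than an adapter, so it loads like any causal LM. A quick sketch using the `transformers` text-generation pipeline (the prompt and generation settings are placeholders):

```py
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NalDice/askvox-1.3",
    torch_dtype="auto",
    device_map="auto",
)
print(generator("Explain retrieval-augmented generation in two sentences.", max_new_tokens=128)[0]["generated_text"])
```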
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF | mradermacher | 2025-01-29T08:00:16Z | 115 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bruhzair/Behemoth-Magnum-v4-SLERP-123b",
"base_model:quantized:bruhzair/Behemoth-Magnum-v4-SLERP-123b",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-29T00:43:58Z | ---
base_model: bruhzair/Behemoth-Magnum-v4-SLERP-123b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/bruhzair/Behemoth-Magnum-v4-SLERP-123b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
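The larger quants below are split into `.partXofY` files and need to be joined back into a single `.gguf` before loading. A small sketch of that concatenation (file names are examples taken from the table below):

```py
import shutil

parts = [
    "Behemoth-Magnum-v4-SLERP-123b.i1-Q4_K_M.gguf.part1of2",
    "Behemoth-Magnum-v4-SLERP-123b.i1-Q4_K_M.gguf.part2of2",
]
with open("Behemoth-Magnum-v4-SLERP-123b.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream copy; avoids holding ~70 GB in memory
```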
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-IQ1_S.gguf) | i1-IQ1_S | 26.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-IQ1_M.gguf) | i1-IQ1_M | 28.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-IQ2_S.gguf) | i1-IQ2_S | 38.5 | |
| [GGUF](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 41.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-IQ2_M.gguf) | i1-IQ2_M | 41.7 | |
| [GGUF](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q2_K.gguf) | i1-Q2_K | 45.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 47.1 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 50.2 | |
| [PART 1](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 52.9 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 53.1 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 55.4 | |
| [PART 1](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 59.2 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 64.7 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 65.5 | |
| [PART 1](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 69.4 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 69.7 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 73.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q4_1.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q4_1.gguf.part2of2) | i1-Q4_1 | 76.8 | |
| [PART 1](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 84.5 | |
| [PART 1](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 86.6 | |
| [PART 1](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Behemoth-Magnum-v4-SLERP-123b-i1-GGUF/resolve/main/Behemoth-Magnum-v4-SLERP-123b.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 100.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nat-hunt/75eb649a-a9fb-4ee5-86ca-d9762e8c3e38 | nat-hunt | 2025-01-29T07:59:59Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T07:58:24Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 75eb649a-a9fb-4ee5-86ca-d9762e8c3e38
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-Math-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c9e29dc819e749df_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c9e29dc819e749df_train_data.json
type:
field_instruction: question
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nat-hunt/75eb649a-a9fb-4ee5-86ca-d9762e8c3e38
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/c9e29dc819e749df_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: afe39e60-71c0-4e45-bd5f-eb3ea571cc42
wandb_project: Birthday-SN56-4-Gradients-On-Demand
wandb_run: your_name
wandb_runid: afe39e60-71c0-4e45-bd5f-eb3ea571cc42
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 75eb649a-a9fb-4ee5-86ca-d9762e8c3e38
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4198
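This repository holds only a LoRA adapter; for standalone deployment it can be merged back into the base model. A sketch assuming the standard `peft`/`transformers` APIs (the output directory name is a placeholder):

```py
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Math-7B-Instruct", torch_dtype="auto")
merged = PeftModel.from_pretrained(base, "nat-hunt/75eb649a-a9fb-4ee5-86ca-d9762e8c3e38").merge_and_unload()

# Write a standalone checkpoint that no longer needs peft at inference time
merged.save_pretrained("qwen2.5-math-merged")
AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Math-7B-Instruct").save_pretrained("qwen2.5-math-merged")
```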
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0022 | 1 | 0.7618 |
| 0.6419 | 0.0288 | 13 | 0.5557 |
| 0.4787 | 0.0576 | 26 | 0.4413 |
| 0.4391 | 0.0864 | 39 | 0.4198 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
daniel40/424d7de6-e9cf-4f1c-91c1-0a71050e5d95 | daniel40 | 2025-01-29T07:59:55Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | 2025-01-29T07:59:29Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 424d7de6-e9cf-4f1c-91c1-0a71050e5d95
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 70b74cfb5fc6b710_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/70b74cfb5fc6b710_train_data.json
type:
field_input: provided_answer
field_instruction: question
field_output: reference_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/424d7de6-e9cf-4f1c-91c1-0a71050e5d95
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/70b74cfb5fc6b710_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 630a89cd-b8d4-4e00-a067-68d12cb2361e
wandb_project: Birthday-SN56-31-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 630a89cd-b8d4-4e00-a067-68d12cb2361e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 424d7de6-e9cf-4f1c-91c1-0a71050e5d95
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9368
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.9369 | 0.0036 | 1 | 11.9376 |
| 11.9375 | 0.0474 | 13 | 11.9374 |
| 11.937 | 0.0949 | 26 | 11.9370 |
| 11.9358 | 0.1423 | 39 | 11.9368 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
trenden/e1ea3a15-242b-45ff-86cb-34d56b81e954 | trenden | 2025-01-29T07:59:12Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | 2025-01-29T07:58:45Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e1ea3a15-242b-45ff-86cb-34d56b81e954
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 70b74cfb5fc6b710_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/70b74cfb5fc6b710_train_data.json
type:
field_input: provided_answer
field_instruction: question
field_output: reference_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/e1ea3a15-242b-45ff-86cb-34d56b81e954
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/70b74cfb5fc6b710_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 630a89cd-b8d4-4e00-a067-68d12cb2361e
wandb_project: Birthday-SN56-26-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 630a89cd-b8d4-4e00-a067-68d12cb2361e
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e1ea3a15-242b-45ff-86cb-34d56b81e954
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0036 | 1 | 11.9376 |
| 11.9381 | 0.0474 | 13 | 11.9373 |
| 11.9372 | 0.0949 | 26 | 11.9369 |
| 11.9365 | 0.1423 | 39 | 11.9367 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Best000/66c5900c-d44d-4065-83eb-0be8f4bec9c1 | Best000 | 2025-01-29T07:59:01Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | 2025-01-29T07:58:34Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 66c5900c-d44d-4065-83eb-0be8f4bec9c1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 70b74cfb5fc6b710_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/70b74cfb5fc6b710_train_data.json
type:
field_input: provided_answer
field_instruction: question
field_output: reference_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/66c5900c-d44d-4065-83eb-0be8f4bec9c1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/70b74cfb5fc6b710_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 630a89cd-b8d4-4e00-a067-68d12cb2361e
wandb_project: Birthday-SN56-32-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 630a89cd-b8d4-4e00-a067-68d12cb2361e
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 66c5900c-d44d-4065-83eb-0be8f4bec9c1
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9372
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0036 | 1 | 11.9376 |
| 11.9381 | 0.0474 | 13 | 11.9376 |
| 11.9375 | 0.0949 | 26 | 11.9374 |
| 11.937 | 0.1423 | 39 | 11.9372 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
daniel40/be00ff20-4c88-491e-a941-8fed010baafe | daniel40 | 2025-01-29T07:59:01Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | 2025-01-29T07:58:35Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: be00ff20-4c88-491e-a941-8fed010baafe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 70b74cfb5fc6b710_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/70b74cfb5fc6b710_train_data.json
type:
field_input: provided_answer
field_instruction: question
field_output: reference_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/be00ff20-4c88-491e-a941-8fed010baafe
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/70b74cfb5fc6b710_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 630a89cd-b8d4-4e00-a067-68d12cb2361e
wandb_project: Birthday-SN56-27-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 630a89cd-b8d4-4e00-a067-68d12cb2361e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# be00ff20-4c88-491e-a941-8fed010baafe
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9368
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0036 | 1 | 11.9376 |
| 11.9381 | 0.0474 | 13 | 11.9374 |
| 11.9373 | 0.0949 | 26 | 11.9370 |
| 11.9366 | 0.1423 | 39 | 11.9368 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nadejdatarabukina/6cc8f6cc-c87a-4dfc-99f5-45a9367cb99a | nadejdatarabukina | 2025-01-29T07:58:57Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | 2025-01-29T07:58:36Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6cc8f6cc-c87a-4dfc-99f5-45a9367cb99a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 70b74cfb5fc6b710_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/70b74cfb5fc6b710_train_data.json
type:
field_input: provided_answer
field_instruction: question
field_output: reference_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: nadejdatarabukina/6cc8f6cc-c87a-4dfc-99f5-45a9367cb99a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/70b74cfb5fc6b710_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 630a89cd-b8d4-4e00-a067-68d12cb2361e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 630a89cd-b8d4-4e00-a067-68d12cb2361e
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6cc8f6cc-c87a-4dfc-99f5-45a9367cb99a
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_TORCH (PyTorch AdamW) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0036 | 1 | 11.9373 |
| 11.9368 | 0.0182 | 5 | 11.9372 |
| 11.936 | 0.0365 | 10 | 11.9370 |
| 11.9364 | 0.0547 | 15 | 11.9367 |
| 11.9356 | 0.0730 | 20 | 11.9364 |
| 11.9358 | 0.0912 | 25 | 11.9363 |
| 11.936 | 0.1095 | 30 | 11.9362 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso09/2c629e4c-15a1-44c5-95ec-c69efcfae813 | lesso09 | 2025-01-29T07:58:16Z | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:24:58Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2c629e4c-15a1-44c5-95ec-c69efcfae813
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: true
chat_template: llama3
datasets:
- data_files:
- 8091ecea1323ab3c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8091ecea1323ab3c_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso09/2c629e4c-15a1-44c5-95ec-c69efcfae813
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/8091ecea1323ab3c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b9916096-d50d-4acf-9c1a-53873dbe493a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b9916096-d50d-4acf-9c1a-53873dbe493a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2c629e4c-15a1-44c5-95ec-c69efcfae813
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 5 | nan |
| 0.0 | 0.0003 | 10 | nan |
| 0.0 | 0.0004 | 15 | nan |
| 0.0 | 0.0005 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kartikgupta373/e3-ad15570-705525-olive-green | kartikgupta373 | 2025-01-29T07:55:58Z | 7 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T07:55:57Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# E3 Ad15570 705525 Olive Green
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kartikgupta373/e3-ad15570-705525-olive-green', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Theros/Qwen2.5-ColdBrew-R1-test5 | Theros | 2025-01-29T07:55:21Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Theros/Qwen2.5-ColdBrew-R1-test3",
"base_model:merge:Theros/Qwen2.5-ColdBrew-R1-test3",
"base_model:bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2",
"base_model:merge:bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T07:50:05Z | ---
base_model:
- Theros/Qwen2.5-ColdBrew-R1-test3
- bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
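For reference, SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line. With $\theta$ the angle between the flattened weight vectors $w_0$ and $w_1$, and $t$ the per-layer interpolation factor set in the configuration below, the interpolated weight is

$$\operatorname{slerp}(w_0, w_1; t) = \frac{\sin\big((1-t)\,\theta\big)}{\sin\theta}\, w_0 + \frac{\sin(t\,\theta)}{\sin\theta}\, w_1,$$

falling back to ordinary linear interpolation when $\theta$ is near zero (mergekit's implementation handles such edge cases; this is only the underlying formula).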
### Models Merged
The following models were included in the merge:
* [Theros/Qwen2.5-ColdBrew-R1-test3](https://huggingface.co/Theros/Qwen2.5-ColdBrew-R1-test3)
* [bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2](https://huggingface.co/bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Theros/Qwen2.5-ColdBrew-R1-test3
layer_range: [0, 28]
- model: bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2
layer_range: [0, 28]
merge_method: slerp
base_model: Theros/Qwen2.5-ColdBrew-R1-test3
parameters:
t:
- filter: self_attn
value: [0.3, 0.5, 0.6, 0.6, 0.7] # Avoids extreme low/high fluctuations
- filter: mlp
value: [0.7, 0.6, 0.5, 0.4, 0.3] # Gradual shift, avoiding an early MLP spike
- value: 0.5
dtype: bfloat16
tokenizer_source: union
```
|
lesso07/b679df98-e3ce-41be-ac81-986ed1d85cee | lesso07 | 2025-01-29T07:54:37Z | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:24:37Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b679df98-e3ce-41be-ac81-986ed1d85cee
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: true
chat_template: llama3
datasets:
- data_files:
- 8091ecea1323ab3c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8091ecea1323ab3c_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso07/b679df98-e3ce-41be-ac81-986ed1d85cee
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/8091ecea1323ab3c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b9916096-d50d-4acf-9c1a-53873dbe493a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b9916096-d50d-4acf-9c1a-53873dbe493a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b679df98-e3ce-41be-ac81-986ed1d85cee
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 5 | nan |
| 0.0 | 0.0003 | 10 | nan |
| 0.0 | 0.0004 | 15 | nan |
| 0.0 | 0.0005 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kartikgupta373/e2-ad15572-705523-beige | kartikgupta373 | 2025-01-29T07:54:28Z | 7 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T07:54:26Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# E2 Ad15572 705523 Beige
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kartikgupta373/e2-ad15572-705523-beige', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
kartikgupta373/c17-as15617-508804-blue | kartikgupta373 | 2025-01-29T07:54:12Z | 7 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T07:54:09Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# C17 As15617 508804 Blue
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kartikgupta373/c17-as15617-508804-blue', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
lesso17/065a3f75-f659-4f9f-aedd-b61bc6248916 | lesso17 | 2025-01-29T07:52:20Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-7b-hf-flash",
"base_model:adapter:NousResearch/CodeLlama-7b-hf-flash",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T07:31:46Z | ---
library_name: peft
base_model: NousResearch/CodeLlama-7b-hf-flash
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 065a3f75-f659-4f9f-aedd-b61bc6248916
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-7b-hf-flash
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 682a834cc2a59bd6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/682a834cc2a59bd6_train_data.json
type:
field_input: context
field_instruction: question
field_output: cleaned_atom
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso17/065a3f75-f659-4f9f-aedd-b61bc6248916
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/682a834cc2a59bd6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 313417c2-c5dc-47a4-9b02-d2be42090d8e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 313417c2-c5dc-47a4-9b02-d2be42090d8e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 065a3f75-f659-4f9f-aedd-b61bc6248916
This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-7b-hf-flash) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2752
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9544 | 0.0513 | 200 | 0.2752 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso01/79fdca8c-f2d9-497c-b20c-2b20f113a10c | lesso01 | 2025-01-29T07:52:10Z | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T07:42:46Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 79fdca8c-f2d9-497c-b20c-2b20f113a10c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: true
chat_template: llama3
datasets:
- data_files:
- f04259c91cb5f8b9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f04259c91cb5f8b9_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso01/79fdca8c-f2d9-497c-b20c-2b20f113a10c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/f04259c91cb5f8b9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aac7786a-015b-44a1-9c8e-ad88dd9f945c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: aac7786a-015b-44a1-9c8e-ad88dd9f945c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 79fdca8c-f2d9-497c-b20c-2b20f113a10c
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0006 | 1 | nan |
| 0.0 | 0.0030 | 5 | nan |
| 0.0 | 0.0060 | 10 | nan |
| 0.0 | 0.0090 | 15 | nan |
| 0.0 | 0.0121 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kartikgupta373/c16-as15619-508803-blue | kartikgupta373 | 2025-01-29T07:51:29Z | 7 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T07:51:28Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# C16 As15619 508803 Blue
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kartikgupta373/c16-as15619-508803-blue', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
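If you prefer to bake the adapter into the base weights before generating (for example to avoid per-step LoRA overhead), a short sketch continuing from the snippet above; fusing is optional and assumes the default adapter loaded by `load_lora_weights`:
```py
# Optional: fuse the loaded LoRA into the transformer weights, then generate as usual.
pipeline.fuse_lora()
image = pipeline('TOK, a product photo on a plain background').images[0]  # illustrative prompt
image.save('output.png')
```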
|
kartikgupta373/c15-as15616-608091-white | kartikgupta373 | 2025-01-29T07:51:17Z | 7 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T07:51:16Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# C15 As15616 608091 White
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kartikgupta373/c15-as15616-608091-white', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mrferr3t/0681d514-17be-4b02-9e4d-74cc17a75330 | mrferr3t | 2025-01-29T07:50:52Z | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T07:44:37Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0681d514-17be-4b02-9e4d-74cc17a75330
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f04259c91cb5f8b9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f04259c91cb5f8b9_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/0681d514-17be-4b02-9e4d-74cc17a75330
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 12
micro_batch_size: 2
mlflow_experiment_name: /tmp/f04259c91cb5f8b9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aac7786a-015b-44a1-9c8e-ad88dd9f945c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: aac7786a-015b-44a1-9c8e-ad88dd9f945c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0681d514-17be-4b02-9e4d-74cc17a75330
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3891 | 0.0006 | 1 | 0.6590 |
| 1.2638 | 0.0018 | 3 | 0.6189 |
| 2.3539 | 0.0036 | 6 | 0.5214 |
| 1.8702 | 0.0054 | 9 | 0.4780 |
| 2.0651 | 0.0072 | 12 | 0.4419 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
WUw0/596601857-1 | WUw0 | 2025-01-29T07:49:49Z | 19 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-28T21:10:12Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: 596601857-1
---
# 596601857 1
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `596601857-1` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('WUw0/596601857-1', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
kartikgupta373/as15833-509072-evergreen | kartikgupta373 | 2025-01-29T07:49:29Z | 7 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T07:49:27Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# As15833 509072 Evergreen
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kartikgupta373/as15833-509072-evergreen', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
shibajustfor/3c068898-1d16-488d-993e-8f9a6c3a7f85 | shibajustfor | 2025-01-29T07:47:38Z | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T07:43:35Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3c068898-1d16-488d-993e-8f9a6c3a7f85
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f04259c91cb5f8b9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f04259c91cb5f8b9_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/3c068898-1d16-488d-993e-8f9a6c3a7f85
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/f04259c91cb5f8b9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aac7786a-015b-44a1-9c8e-ad88dd9f945c
wandb_project: Birthday-SN56-39-Gradients-On-Demand
wandb_run: your_name
wandb_runid: aac7786a-015b-44a1-9c8e-ad88dd9f945c
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3c068898-1d16-488d-993e-8f9a6c3a7f85
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | nan |
| 0.0 | 0.0078 | 13 | nan |
| 0.0 | 0.0157 | 26 | nan |
| 0.0 | 0.0235 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
trenden/ed22e003-10c3-425d-a914-2b5063c64906 | trenden | 2025-01-29T07:47:29Z | 7 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:fxmarty/really-tiny-falcon-testing",
"base_model:adapter:fxmarty/really-tiny-falcon-testing",
"license:mit",
"region:us"
] | null | 2025-01-29T07:46:50Z | ---
library_name: peft
license: mit
base_model: fxmarty/really-tiny-falcon-testing
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ed22e003-10c3-425d-a914-2b5063c64906
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/really-tiny-falcon-testing
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0df5b3e9787ca7a4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0df5b3e9787ca7a4_train_data.json
type:
field_input: Genre
field_instruction: Title
field_output: Overview
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/ed22e003-10c3-425d-a914-2b5063c64906
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/0df5b3e9787ca7a4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 354c2a43-076f-4bcc-92cb-a8316275eb69
wandb_project: Birthday-SN56-3-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 354c2a43-076f-4bcc-92cb-a8316275eb69
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ed22e003-10c3-425d-a914-2b5063c64906
This model is a fine-tuned version of [fxmarty/really-tiny-falcon-testing](https://huggingface.co/fxmarty/really-tiny-falcon-testing) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.0802
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 11.0880 |
| 44.3578 | 0.0023 | 13 | 11.0854 |
| 44.3371 | 0.0045 | 26 | 11.0818 |
| 44.3264 | 0.0068 | 39 | 11.0802 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
beingbatman/CTMAE-P2-V2-S2 | beingbatman | 2025-01-29T07:46:17Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-large-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-large-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-01-29T04:09:56Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-large-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: CTMAE-P2-V2-S2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CTMAE-P2-V2-S2
This model is a fine-tuned version of [MCG-NJU/videomae-large-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-large-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5069
- Accuracy: 0.7333
## Model description
More information needed
## Intended uses & limitations
More information needed
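Pending details from the author, a hedged inference sketch for a VideoMAE classification checkpoint (the processor class, 16-frame clip length, and dummy input below are standard assumptions for this architecture, not confirmed by this card):
```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

processor = VideoMAEImageProcessor.from_pretrained("beingbatman/CTMAE-P2-V2-S2")
model = VideoMAEForVideoClassification.from_pretrained("beingbatman/CTMAE-P2-V2-S2")

# Dummy clip: 16 RGB frames; replace with frames sampled from a real video.
frames = [np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(frames, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```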
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 6500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.6135 | 0.0202 | 131 | 0.7942 | 0.5556 |
| 0.4654 | 1.0202 | 262 | 2.2124 | 0.5556 |
| 1.1122 | 2.0202 | 393 | 1.8386 | 0.5556 |
| 0.7797 | 3.0202 | 524 | 0.9344 | 0.5556 |
| 1.379 | 4.0202 | 655 | 1.5755 | 0.5556 |
| 0.7305 | 5.0202 | 786 | 1.4677 | 0.5556 |
| 0.9115 | 6.0202 | 917 | 1.5456 | 0.5556 |
| 1.6622 | 7.0202 | 1048 | 1.2113 | 0.5556 |
| 0.6868 | 8.0202 | 1179 | 1.8451 | 0.5556 |
| 1.199 | 9.0202 | 1310 | 1.3622 | 0.5556 |
| 0.7459 | 10.0202 | 1441 | 1.4034 | 0.5556 |
| 0.5574 | 11.0202 | 1572 | 0.9836 | 0.5556 |
| 0.3742 | 12.0202 | 1703 | 0.6934 | 0.6889 |
| 0.3303 | 13.0202 | 1834 | 0.7161 | 0.6889 |
| 0.8856 | 14.0202 | 1965 | 1.5608 | 0.5556 |
| 0.186 | 15.0202 | 2096 | 0.7782 | 0.6 |
| 0.7263 | 16.0202 | 2227 | 1.4438 | 0.5778 |
| 1.552 | 17.0202 | 2358 | 1.2117 | 0.6222 |
| 0.1031 | 18.0202 | 2489 | 1.2174 | 0.6667 |
| 1.193 | 19.0202 | 2620 | 1.2043 | 0.6444 |
| 0.322 | 20.0202 | 2751 | 1.3639 | 0.6444 |
| 0.3791 | 21.0202 | 2882 | 1.3107 | 0.6444 |
| 0.6201 | 22.0202 | 3013 | 1.2797 | 0.6889 |
| 0.9547 | 23.0202 | 3144 | 1.1654 | 0.6444 |
| 1.4286 | 24.0202 | 3275 | 1.4078 | 0.6667 |
| 0.6023 | 25.0202 | 3406 | 1.5069 | 0.7333 |
| 0.2925 | 26.0202 | 3537 | 1.4529 | 0.6889 |
| 0.1445 | 27.0202 | 3668 | 1.4417 | 0.7333 |
| 0.2717 | 28.0202 | 3799 | 2.1237 | 0.6444 |
| 0.411 | 29.0202 | 3930 | 1.5399 | 0.6889 |
| 0.6632 | 30.0202 | 4061 | 1.6289 | 0.7333 |
| 0.3 | 31.0202 | 4192 | 1.9944 | 0.6222 |
| 0.386 | 32.0202 | 4323 | 1.9271 | 0.6889 |
| 0.1569 | 33.0202 | 4454 | 1.8172 | 0.6889 |
| 0.2135 | 34.0202 | 4585 | 1.7862 | 0.6889 |
| 0.3142 | 35.0202 | 4716 | 1.6904 | 0.7111 |
| 0.2179 | 36.0202 | 4847 | 1.9549 | 0.7111 |
| 0.7634 | 37.0202 | 4978 | 1.9367 | 0.6889 |
| 0.0008 | 38.0202 | 5109 | 1.9890 | 0.6667 |
| 0.1467 | 39.0202 | 5240 | 1.9472 | 0.6889 |
| 0.6641 | 40.0202 | 5371 | 2.2295 | 0.6889 |
| 0.3125 | 41.0202 | 5502 | 1.8309 | 0.7111 |
| 0.1987 | 42.0202 | 5633 | 2.1643 | 0.6889 |
| 0.067 | 43.0202 | 5764 | 2.1776 | 0.6667 |
| 0.1513 | 44.0202 | 5895 | 2.1978 | 0.6667 |
| 0.0032 | 45.0202 | 6026 | 1.9291 | 0.7333 |
| 0.2596 | 46.0202 | 6157 | 2.0961 | 0.6889 |
| 0.0006 | 47.0202 | 6288 | 2.0126 | 0.7111 |
| 0.0305 | 48.0202 | 6419 | 2.0029 | 0.7333 |
| 0.0004 | 49.0125 | 6500 | 2.0025 | 0.7333 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.0.1+cu117
- Datasets 3.0.1
- Tokenizers 0.20.0
|
krowiemlekommm/PJN_moondream2 | krowiemlekommm | 2025-01-29T07:46:00Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"moondream1",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-01-29T07:44:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
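Pending details from the authors, a minimal loading sketch; the repository ships custom model code, so `trust_remote_code=True` is assumed, and the exact generation API for this vision–language architecture may differ:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("krowiemlekommm/PJN_moondream2", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("krowiemlekommm/PJN_moondream2", trust_remote_code=True)

# Plain text generation shown for illustration; image-conditioned prompting (if supported)
# follows the custom code bundled with the checkpoint.
inputs = tokenizer("Describe the picture in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```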
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nathanialhunt/54be0fc1-5f35-4ada-b449-48347a20051f | nathanialhunt | 2025-01-29T07:45:55Z | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T07:41:59Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 54be0fc1-5f35-4ada-b449-48347a20051f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f04259c91cb5f8b9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f04259c91cb5f8b9_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/54be0fc1-5f35-4ada-b449-48347a20051f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/f04259c91cb5f8b9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aac7786a-015b-44a1-9c8e-ad88dd9f945c
wandb_project: Birthday-SN56-5-Gradients-On-Demand
wandb_run: your_name
wandb_runid: aac7786a-015b-44a1-9c8e-ad88dd9f945c
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 54be0fc1-5f35-4ada-b449-48347a20051f
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | nan |
| 0.0 | 0.0078 | 13 | nan |
| 0.0 | 0.0157 | 26 | nan |
| 0.0 | 0.0235 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TweedleDeepLearnings/29ec791c-168b-4f34-acee-3161602a6154 | TweedleDeepLearnings | 2025-01-29T07:45:44Z | 251 | 0 | peft | [
"peft",
"safetensors",
"axolotl",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"region:us"
] | null | 2025-01-29T05:08:20Z |
---
library_name: peft
license: other
base_model: huggyllama/llama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c4b201cf-0eeb-4380-a91f-cd6329614a81
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
bf16: auto
chat_template: llama3
dataset_prepared_path: null
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
gradient_clipping: 0.1
group_by_length: false
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-04
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: linear
max_steps: 200
micro_batch_size: 128
mlflow_experiment_name: /tmp/aed51b8e2c089967_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 4096
special_tokens:
pad_token: </PAD>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6a8f76dd-7262-490a-905c-7b83c0f56891
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6a8f76dd-7262-490a-905c-7b83c0f56891
warmup_steps: 5
weight_decay: 0.1
xformers_attention: true
```
</details><br>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 128
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 2048
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
prxy5604/16e1abfb-e0bf-41b2-813c-51c3105e4cc1 | prxy5604 | 2025-01-29T07:45:03Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-135M",
"base_model:adapter:unsloth/SmolLM2-135M",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T07:41:05Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 16e1abfb-e0bf-41b2-813c-51c3105e4cc1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-135M
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- fb3f054252ee5303_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb3f054252ee5303_train_data.json
type:
field_input: premise
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/16e1abfb-e0bf-41b2-813c-51c3105e4cc1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/fb3f054252ee5303_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aa00008e-67c1-4447-afe6-ef69d7aebe9e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: aa00008e-67c1-4447-afe6-ef69d7aebe9e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 16e1abfb-e0bf-41b2-813c-51c3105e4cc1
This model is a fine-tuned version of [unsloth/SmolLM2-135M](https://huggingface.co/unsloth/SmolLM2-135M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.8549 | 0.0048 | 1 | 3.9853 |
| 1.563 | 0.2415 | 50 | 1.3595 |
| 1.2227 | 0.4831 | 100 | 1.2786 |
| 0.9032 | 0.7246 | 150 | 1.1970 |
| 1.2691 | 0.9662 | 200 | 1.1563 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhungphammmmm/4dada709-a18d-42bf-9bbf-1358a29e405b | nhungphammmmm | 2025-01-29T07:44:28Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M",
"base_model:adapter:unsloth/SmolLM-360M",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T07:03:08Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4dada709-a18d-42bf-9bbf-1358a29e405b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ac004a2a3ec8e832_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ac004a2a3ec8e832_train_data.json
type:
field_input: title
field_instruction: content
field_output: summary1
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhungphammmmm/4dada709-a18d-42bf-9bbf-1358a29e405b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ac004a2a3ec8e832_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 77344871-dc6c-43c2-89a7-28217f41b23c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 77344871-dc6c-43c2-89a7-28217f41b23c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4dada709-a18d-42bf-9bbf-1358a29e405b
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9078
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.879 | 0.0027 | 200 | 1.9078 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
SSethisak/xlsr-khmer-fleur | SSethisak | 2025-01-29T07:43:15Z | 172 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"km",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-01-16T15:44:36Z | ---
library_name: transformers
language:
- km
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Fine-tuned wav2vec2 ASR model on a Khmer dataset.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
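Until an official snippet is added, a hedged inference sketch for a CTC-based wav2vec2 checkpoint (the processor class, 16 kHz sampling rate, and input file are standard assumptions for this architecture, not confirmed by the card):
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("SSethisak/xlsr-khmer-fleur")
model = Wav2Vec2ForCTC.from_pretrained("SSethisak/xlsr-khmer-fleur")

# Hypothetical input file; XLSR wav2vec2 models expect 16 kHz mono audio.
waveform, sample_rate = torchaudio.load("khmer_sample.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```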
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
robiual-awal/af590153-3994-402a-918c-4c7af9d54083 | robiual-awal | 2025-01-29T07:43:15Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-7b-hf-flash",
"base_model:adapter:NousResearch/CodeLlama-7b-hf-flash",
"region:us"
] | null | 2025-01-29T07:38:16Z | ---
library_name: peft
base_model: NousResearch/CodeLlama-7b-hf-flash
tags:
- axolotl
- generated_from_trainer
model-index:
- name: af590153-3994-402a-918c-4c7af9d54083
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-7b-hf-flash
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 682a834cc2a59bd6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/682a834cc2a59bd6_train_data.json
type:
field_input: context
field_instruction: question
field_output: cleaned_atom
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/af590153-3994-402a-918c-4c7af9d54083
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/682a834cc2a59bd6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 313417c2-c5dc-47a4-9b02-d2be42090d8e
wandb_project: Birthday-SN56-30-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 313417c2-c5dc-47a4-9b02-d2be42090d8e
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# af590153-3994-402a-918c-4c7af9d54083
This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-7b-hf-flash) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 1.4773 |
| 4.9829 | 0.0033 | 13 | 0.4564 |
| 1.9428 | 0.0067 | 26 | 0.3447 |
| 1.3989 | 0.0100 | 39 | 0.3213 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso16/7bdf206b-d218-447d-9628-3b3bba87cdc5 | lesso16 | 2025-01-29T07:42:54Z | 7 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:fxmarty/really-tiny-falcon-testing",
"base_model:adapter:fxmarty/really-tiny-falcon-testing",
"license:mit",
"region:us"
] | null | 2025-01-29T07:42:04Z | ---
library_name: peft
license: mit
base_model: fxmarty/really-tiny-falcon-testing
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7bdf206b-d218-447d-9628-3b3bba87cdc5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/really-tiny-falcon-testing
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0df5b3e9787ca7a4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0df5b3e9787ca7a4_train_data.json
type:
field_input: Genre
field_instruction: Title
field_output: Overview
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso16/7bdf206b-d218-447d-9628-3b3bba87cdc5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/0df5b3e9787ca7a4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 354c2a43-076f-4bcc-92cb-a8316275eb69
wandb_project: multi
wandb_run: your_name
wandb_runid: 354c2a43-076f-4bcc-92cb-a8316275eb69
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7bdf206b-d218-447d-9628-3b3bba87cdc5
This model is a fine-tuned version of [fxmarty/really-tiny-falcon-testing](https://huggingface.co/fxmarty/really-tiny-falcon-testing) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.0712
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 44.2882 | 0.2789 | 200 | 11.0712 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Minerva-14b-V0.1-GGUF | mradermacher | 2025-01-29T07:42:08Z | 297 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Triangle104/Minerva-14b-V0.1",
"base_model:quantized:Triangle104/Minerva-14b-V0.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-27T13:23:45Z | ---
base_model: Triangle104/Minerva-14b-V0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Triangle104/Minerva-14b-V0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
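As a concrete starting point, here is a minimal sketch using the `llama-cpp-python` bindings to pull one of the quants listed below straight from this repo (the choice of Q4_K_M and the context size are illustrative assumptions; any GGUF-capable runtime works just as well):

```python
# Minimal sketch with llama-cpp-python (assumes: pip install llama-cpp-python huggingface_hub)
from llama_cpp import Llama

# Downloads and loads the Q4_K_M quant listed in the table below
llm = Llama.from_pretrained(
    repo_id="mradermacher/Minerva-14b-V0.1-GGUF",
    filename="Minerva-14b-V0.1.Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```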
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-GGUF/resolve/main/Minerva-14b-V0.1.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-GGUF/resolve/main/Minerva-14b-V0.1.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-GGUF/resolve/main/Minerva-14b-V0.1.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-GGUF/resolve/main/Minerva-14b-V0.1.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-GGUF/resolve/main/Minerva-14b-V0.1.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-GGUF/resolve/main/Minerva-14b-V0.1.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-GGUF/resolve/main/Minerva-14b-V0.1.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-GGUF/resolve/main/Minerva-14b-V0.1.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-GGUF/resolve/main/Minerva-14b-V0.1.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-GGUF/resolve/main/Minerva-14b-V0.1.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-GGUF/resolve/main/Minerva-14b-V0.1.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Minerva-14b-V0.1-i1-GGUF | mradermacher | 2025-01-29T07:42:08Z | 649 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Triangle104/Minerva-14b-V0.1",
"base_model:quantized:Triangle104/Minerva-14b-V0.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-29T00:05:43Z | ---
base_model: Triangle104/Minerva-14b-V0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Triangle104/Minerva-14b-V0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Minerva-14b-V0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Minerva-14b-V0.1-i1-GGUF/resolve/main/Minerva-14b-V0.1.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
lesso06/6469a921-b91b-42a3-a0b8-5de95f0ba723 | lesso06 | 2025-01-29T07:40:51Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-7b-hf-flash",
"base_model:adapter:NousResearch/CodeLlama-7b-hf-flash",
"region:us"
] | null | 2025-01-29T07:28:55Z | ---
library_name: peft
base_model: NousResearch/CodeLlama-7b-hf-flash
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6469a921-b91b-42a3-a0b8-5de95f0ba723
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-7b-hf-flash
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 682a834cc2a59bd6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/682a834cc2a59bd6_train_data.json
type:
field_input: context
field_instruction: question
field_output: cleaned_atom
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso06/6469a921-b91b-42a3-a0b8-5de95f0ba723
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/682a834cc2a59bd6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 313417c2-c5dc-47a4-9b02-d2be42090d8e
wandb_project: multi
wandb_run: your_name
wandb_runid: 313417c2-c5dc-47a4-9b02-d2be42090d8e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6469a921-b91b-42a3-a0b8-5de95f0ba723
This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-7b-hf-flash) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9214 | 0.4107 | 200 | 0.2535 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
prxy5604/b225d555-ab72-4134-9a1c-d31b506b8bab | prxy5604 | 2025-01-29T07:40:20Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B",
"base_model:adapter:unsloth/Qwen2-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T07:13:45Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b225d555-ab72-4134-9a1c-d31b506b8bab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- c710dbacd1baf82d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c710dbacd1baf82d_train_data.json
type:
field_instruction: prompt
field_output: story
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/b225d555-ab72-4134-9a1c-d31b506b8bab
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/c710dbacd1baf82d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 96aa06fc-7593-4da9-898b-b6eb1b530143
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 96aa06fc-7593-4da9-898b-b6eb1b530143
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b225d555-ab72-4134-9a1c-d31b506b8bab
This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08, plus optimizer args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6271 | 0.0120 | 1 | 2.8036 |
| 2.6876 | 0.6006 | 50 | 2.6502 |
| 2.6007 | 1.2012 | 100 | 2.6470 |
| 2.5217 | 1.8018 | 150 | 2.6516 |
| 2.4413 | 2.4024 | 200 | 2.6540 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhung01/e88f772c-9042-44b4-92e9-087a69d265aa | nhung01 | 2025-01-29T07:39:40Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T07:16:32Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e88f772c-9042-44b4-92e9-087a69d265aa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 425476553ab111b0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/425476553ab111b0_train_data.json
type:
field_input: Content
field_instruction: Title
field_output: Summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/e88f772c-9042-44b4-92e9-087a69d265aa
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/425476553ab111b0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6972c938-4c63-447c-ab05-b15cf2af5926
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6972c938-4c63-447c-ab05-b15cf2af5926
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e88f772c-9042-44b4-92e9-087a69d265aa
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6897
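For reference, the `type` block in the config above stitches each record into a prompt as `'{instruction} {input}'`, i.e. the article Title followed by its Content, with Summary as the training target. A rough sketch of that mapping (record contents are hypothetical, and the llama3 chat-template wrapping applied during training is omitted):

```python
# Sketch of the field mapping described by the axolotl config above:
# field_instruction=Title, field_input=Content, field_output=Summary, format='{instruction} {input}'
record = {  # hypothetical example record
    "Title": "New battery prototype announced",
    "Content": "Researchers described a cell chemistry that ...",
    "Summary": "A new prototype battery cell was announced.",
}

prompt = f"{record['Title']} {record['Content']}"  # what the model sees as input
target = record["Summary"]                         # what it is trained to produce

print(prompt)
print(target)
```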
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9891 | 0.0233 | 200 | 1.6897 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Lil-R/BLYMM-Qwen-DareTies-V1 | Lil-R | 2025-01-29T07:38:44Z | 227 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-21T09:25:07Z | ---
library_name: transformers
license: apache-2.0
---
# **BLYMM-Qwen-DareTies-V1**
This model has been produced by:
- **ROBERGE Marial**, engineering student at French Engineering School ECE
- **ESCRIVA Mathis**, engineering student at French Engineering School ECE
- **LALAIN Youri**, engineering student at French Engineering School ECE
- **RAGE Lilian**, engineering student at French Engineering School ECE
- **HUVELLE Baptiste**, engineering student at French Engineering School ECE
Under the supervision of:
- **Andre-Louis Rochet**, Lecturer at ECE & Co-Founder of TW3 Partners
- **Paul Lemaistre**, CTO of TW3 Partners
With the contribution of:
- **ECE engineering school** as sponsor and financial contributor
- **François STEPHAN** as director of ECE
- **Gérard REUS** as acting director of iLAB
- **Matthieu JOLLARD**, ECE alumnus
- **Louis GARCIA**, ECE alumnus
### Supervisory structure
The iLab (Intelligence Lab) is a structure created by ECE and dedicated to artificial intelligence.
### About ECE
ECE, a multi-program, multi-campus, and multi-sector engineering school specializing in digital engineering, trains engineers and technology experts for the 21st century, capable of meeting the challenges of the dual digital and sustainable development revolutions.
## **Characteristics**
- **Merge method:** Dare_ties
- **Source models:**
- [newsbang/Homer-v1.0-Qwen2.5-72B](https://huggingface.co/newsbang/Homer-v1.0-Qwen2.5-72B)
- [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) |
Kromtao/01_Kromtao_07 | Kromtao | 2025-01-29T07:37:51Z | 25 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-01-29T07:37:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nhung03/736d0c2e-26ce-451c-9230-5862cee5cb26 | nhung03 | 2025-01-29T07:37:39Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T07:16:09Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 736d0c2e-26ce-451c-9230-5862cee5cb26
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 425476553ab111b0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/425476553ab111b0_train_data.json
type:
field_input: Content
field_instruction: Title
field_output: Summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/736d0c2e-26ce-451c-9230-5862cee5cb26
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/425476553ab111b0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6972c938-4c63-447c-ab05-b15cf2af5926
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6972c938-4c63-447c-ab05-b15cf2af5926
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 736d0c2e-26ce-451c-9230-5862cee5cb26
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9905 | 0.0233 | 200 | 1.6911 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
robiual-awal/4834c128-a0af-4e5a-ba84-ea4d1c20ba91 | robiual-awal | 2025-01-29T07:36:27Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-7b-hf-flash",
"base_model:adapter:NousResearch/CodeLlama-7b-hf-flash",
"region:us"
] | null | 2025-01-29T07:31:37Z | ---
library_name: peft
base_model: NousResearch/CodeLlama-7b-hf-flash
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4834c128-a0af-4e5a-ba84-ea4d1c20ba91
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-7b-hf-flash
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 682a834cc2a59bd6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/682a834cc2a59bd6_train_data.json
type:
field_input: context
field_instruction: question
field_output: cleaned_atom
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/4834c128-a0af-4e5a-ba84-ea4d1c20ba91
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/682a834cc2a59bd6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 313417c2-c5dc-47a4-9b02-d2be42090d8e
wandb_project: Birthday-SN56-29-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 313417c2-c5dc-47a4-9b02-d2be42090d8e
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4834c128-a0af-4e5a-ba84-ea4d1c20ba91
This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-7b-hf-flash) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 1.4773 |
| 4.9795 | 0.0033 | 13 | 0.4543 |
| 1.9429 | 0.0067 | 26 | 0.3398 |
| 1.3899 | 0.0100 | 39 | 0.3185 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
trenden/75262155-049f-44bb-915f-8c5d9f31d576 | trenden | 2025-01-29T07:36:17Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-7b-hf-flash",
"base_model:adapter:NousResearch/CodeLlama-7b-hf-flash",
"region:us"
] | null | 2025-01-29T07:31:36Z | ---
library_name: peft
base_model: NousResearch/CodeLlama-7b-hf-flash
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 75262155-049f-44bb-915f-8c5d9f31d576
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-7b-hf-flash
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 682a834cc2a59bd6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/682a834cc2a59bd6_train_data.json
type:
field_input: context
field_instruction: question
field_output: cleaned_atom
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/75262155-049f-44bb-915f-8c5d9f31d576
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/682a834cc2a59bd6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 313417c2-c5dc-47a4-9b02-d2be42090d8e
wandb_project: Birthday-SN56-26-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 313417c2-c5dc-47a4-9b02-d2be42090d8e
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 75262155-049f-44bb-915f-8c5d9f31d576
This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-7b-hf-flash) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3192
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 1.4773 |
| 4.9838 | 0.0033 | 13 | 0.4583 |
| 1.9594 | 0.0067 | 26 | 0.3408 |
| 1.3953 | 0.0100 | 39 | 0.3192 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
John6666/llama-joycaption-alpha-two-hf-llava-nf4 | John6666 | 2025-01-29T07:35:37Z | 571 | 14 | transformers | [
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"captioning",
"conversational",
"en",
"base_model:fancyfeast/llama-joycaption-alpha-two-hf-llava",
"base_model:quantized:fancyfeast/llama-joycaption-alpha-two-hf-llava",
"license:llama3.1",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | 2024-10-13T08:26:31Z | ---
language:
- en
license: llama3.1
library_name: transformers
base_model: fancyfeast/llama-joycaption-alpha-two-hf-llava
tags:
- captioning
- transformers
---
bitsandbytes NF4 quants of [fancyfeast/llama-joycaption-alpha-two-hf-llava](https://huggingface.co/fancyfeast/llama-joycaption-alpha-two-hf-llava).
The following is taken almost verbatim from the original model card.
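For context, NF4 checkpoints like this one are produced by loading the full-precision model with a bitsandbytes 4-bit config and saving the result; the sketch below shows the typical `BitsAndBytesConfig` involved (the exact settings used for this repo are an assumption on my part, not confirmed):

```python
# Sketch: load the original full-precision JoyCaption checkpoint with on-the-fly NF4 quantization
# (settings are illustrative; this repo already ships pre-quantized weights, so loading it directly also works)
import torch
from transformers import BitsAndBytesConfig, LlavaForConditionalGeneration

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # normal-float 4-bit
    bnb_4bit_compute_dtype=torch.bfloat16,  # bfloat16 compute, matching the card's usage example
    bnb_4bit_use_double_quant=True,
)

llava_model = LlavaForConditionalGeneration.from_pretrained(
    "fancyfeast/llama-joycaption-alpha-two-hf-llava",
    quantization_config=bnb_config,
    device_map=0,
)
```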
# Model Card for Llama JoyCaption Alpha Two
[Github](https://github.com/fpgaminer/joycaption)
JoyCaption is an image captioning Visual Language Model (VLM) being built from the ground up as a free, open, and uncensored model for the community to use in training Diffusion models.
Key Features:
- **Free and Open**: It will be released for free, open weights, no restrictions, and just like [bigASP](https://www.reddit.com/r/StableDiffusion/comments/1dbasvx/the_gory_details_of_finetuning_sdxl_for_30m/), will come with training scripts and lots of juicy details on how it gets built.
- **Uncensored**: Equal coverage of SFW and NSFW concepts. No "cylindrical shaped object with a white substance coming out on it" here.
- **Diversity**: All are welcome here. Do you like digital art? Photoreal? Anime? Furry? JoyCaption is for everyone. Pains are being taken to ensure broad coverage of image styles, content, ethnicity, gender, orientation, etc.
- **Minimal Filtering**: JoyCaption is trained on large swathes of images so that it can understand almost all aspects of our world. Almost. Illegal content will never be tolerated in JoyCaption's training.
## Motivation
Automated descriptive captions enable the training and finetuning of diffusion models on a wider range of images, since trainers are no longer required to either find images with already associated text or write the descriptions themselves. They also improve the quality of generations produced by Text-to-Image models trained on them (ref: DALL-E 3 paper). But to date, the community has been stuck with ChatGPT, which is expensive and heavily censored, or with alternative models, like CogVLM, which are weaker than ChatGPT and have abysmal performance outside of the SFW domain.
I'm building JoyCaption to help fill this gap by performing near or on par with GPT4o in captioning images, while being free, unrestricted, and open.
## How to Get Started with the Model
Please see the [Github](https://github.com/fpgaminer/joycaption) for more details.
Example usage:
```py
import torch
import torch.amp
import torchvision.transforms.functional as TVF
from PIL import Image
from transformers import AutoTokenizer, LlavaForConditionalGeneration
IMAGE_PATH = "image.jpg"
PROMPT = "Write a long descriptive caption for this image in a formal tone."
MODEL_NAME = "John6666/llama-joycaption-alpha-two-hf-llava-nf4"
# Load JoyCaption
# bfloat16 is the native dtype of the LLM used in JoyCaption (Llama 3.1)
# device_map=0 loads the model into the first GPU
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True)
llava_model = LlavaForConditionalGeneration.from_pretrained(MODEL_NAME, torch_dtype="bfloat16", device_map=0)
llava_model.eval()
with torch.no_grad():
# Load and preprocess image
# Normally you would use the Processor here, but the image module's processor
# has some buggy behavior and a simple resize in Pillow yields higher quality results
image = Image.open(IMAGE_PATH)
if image.size != (384, 384):
image = image.resize((384, 384), Image.LANCZOS)
image = image.convert("RGB")
pixel_values = TVF.pil_to_tensor(image)
# Normalize the image
pixel_values = pixel_values / 255.0
pixel_values = TVF.normalize(pixel_values, [0.5], [0.5])
pixel_values = pixel_values.to(torch.bfloat16).unsqueeze(0)
# Build the conversation
convo = [
{
"role": "system",
"content": "You are a helpful image captioner.",
},
{
"role": "user",
"content": PROMPT,
},
]
# Format the conversation
convo_string = tokenizer.apply_chat_template(convo, tokenize=False, add_generation_prompt=True)
# Tokenize the conversation
convo_tokens = tokenizer.encode(convo_string, add_special_tokens=False, truncation=False)
# Repeat the image tokens
input_tokens = []
for token in convo_tokens:
if token == llava_model.config.image_token_index:
input_tokens.extend([llava_model.config.image_token_index] * llava_model.config.image_seq_length)
else:
input_tokens.append(token)
input_ids = torch.tensor(input_tokens, dtype=torch.long).unsqueeze(0)
attention_mask = torch.ones_like(input_ids)
# Generate the caption
generate_ids = llava_model.generate(input_ids=input_ids.to('cuda'), pixel_values=pixel_values.to('cuda'), attention_mask=attention_mask.to('cuda'), max_new_tokens=300, do_sample=True, suppress_tokens=None, use_cache=True)[0]
# Trim off the prompt
generate_ids = generate_ids[input_ids.shape[1]:]
# Decode the caption
caption = tokenizer.decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
caption = caption.strip()
print(caption)
``` |
facu1321/geno1 | facu1321 | 2025-01-29T07:34:42Z | 39 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T07:21:18Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: geno1
---
# Geno1
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `geno1` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('facu1321/geno1', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mrHunghddddd/ecb7817c-1340-42a2-b8c1-de49acd161c3 | mrHunghddddd | 2025-01-29T07:34:03Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-135M",
"base_model:adapter:unsloth/SmolLM2-135M",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T07:23:57Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ecb7817c-1340-42a2-b8c1-de49acd161c3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-135M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fb3f054252ee5303_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb3f054252ee5303_train_data.json
type:
field_input: premise
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHunghddddd/ecb7817c-1340-42a2-b8c1-de49acd161c3
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/fb3f054252ee5303_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aa00008e-67c1-4447-afe6-ef69d7aebe9e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: aa00008e-67c1-4447-afe6-ef69d7aebe9e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ecb7817c-1340-42a2-b8c1-de49acd161c3
This model is a fine-tuned version of [unsloth/SmolLM2-135M](https://huggingface.co/unsloth/SmolLM2-135M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1166
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1727 | 0.2415 | 200 | 2.1166 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/MN-12B-Mimicore-WhiteSnake-Q4_K_M-GGUF | Triangle104 | 2025-01-29T07:33:41Z | 35 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:DoppelReflEx/MN-12B-Mimicore-WhiteSnake",
"base_model:quantized:DoppelReflEx/MN-12B-Mimicore-WhiteSnake",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T07:13:42Z | ---
license: cc-by-nc-4.0
base_model: DoppelReflEx/MN-12B-Mimicore-WhiteSnake
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/MN-12B-Mimicore-WhiteSnake-Q4_K_M-GGUF
This model was converted to GGUF format from [`DoppelReflEx/MN-12B-Mimicore-WhiteSnake`](https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-WhiteSnake) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-WhiteSnake) for more details on the model.
---
Model details:
-
A better version of GreenSnake, though not much different in OpenLLM Leaderboard scores. It is merged with cgato/Nemo-12b-Humanize-KTO-Experimental-Latest so that the model can produce more human-sounding responses.
This merge is a gift for the Lunar New Year, haha. Enjoy it.
Good for RP, ERP, and storytelling.
PS: It does not have the cgato/Nemo-12b-Humanize-KTO-Experimental-Latest tokenization issue.
Update: it does still have the cgato/Nemo-12b-Humanize-KTO-Experimental-Latest tokenization issue, but it occurs randomly and rarely. If you run into it, just press regenerate to reroll the message/response.
Chat format? ChatML, of course!
Models Merged
The following models were included in the merge:
- cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
- DoppelReflEx/MN-12B-Mimicore-GreenSnake
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
    parameters:
      density: 0.9
      weight: 1
  - model: DoppelReflEx/MN-12B-Mimicore-GreenSnake
    parameters:
      density: 0.6
      weight: 0.8
merge_method: dare_ties
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
tokenizer_source: base
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q4_K_M-GGUF --hf-file mn-12b-mimicore-whitesnake-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q4_K_M-GGUF --hf-file mn-12b-mimicore-whitesnake-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q4_K_M-GGUF --hf-file mn-12b-mimicore-whitesnake-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/MN-12B-Mimicore-WhiteSnake-Q4_K_M-GGUF --hf-file mn-12b-mimicore-whitesnake-q4_k_m.gguf -c 2048
```
|
mrferr3t/a6178fc7-d53d-4063-a386-18062781d83c | mrferr3t | 2025-01-29T07:32:42Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B",
"base_model:adapter:unsloth/Qwen2-1.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T07:31:59Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a6178fc7-d53d-4063-a386-18062781d83c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 08df1ebc5b3fbd74_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/08df1ebc5b3fbd74_train_data.json
type:
field_instruction: source
field_output: hyp1
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/a6178fc7-d53d-4063-a386-18062781d83c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 8
micro_batch_size: 2
mlflow_experiment_name: /tmp/08df1ebc5b3fbd74_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5c870557-3dbf-46fd-a40c-ee656b727226
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5c870557-3dbf-46fd-a40c-ee656b727226
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a6178fc7-d53d-4063-a386-18062781d83c
This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3765
## Model description
More information needed
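No usage snippet was provided with this card; as a minimal sketch, the adapter could be loaded with 🤗 transformers and peft along the lines below. The repo id and base model come from the config above; the prompt and generation settings are purely illustrative.

```python
# Minimal sketch: load the base model, then attach this LoRA adapter with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen2-1.5B"                                # base model from the config above
adapter_id = "mrferr3t/a6178fc7-d53d-4063-a386-18062781d83c"  # this repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Hello, how are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```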
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5773 | 0.2353 | 1 | 1.3842 |
| 1.9126 | 0.4706 | 2 | 1.3859 |
| 1.7345 | 0.9412 | 4 | 1.3765 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
reds0510/npo_gdr_1e-6_ckpt75 | reds0510 | 2025-01-29T07:32:34Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T07:14:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso01/4a057ddc-fe5a-4d52-a0bd-d4ed15e456e0 | lesso01 | 2025-01-29T07:30:19Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-135M",
"base_model:adapter:unsloth/SmolLM2-135M",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T07:27:21Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4a057ddc-fe5a-4d52-a0bd-d4ed15e456e0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-135M
bf16: true
chat_template: llama3
datasets:
- data_files:
- fb3f054252ee5303_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb3f054252ee5303_train_data.json
type:
field_input: premise
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso01/4a057ddc-fe5a-4d52-a0bd-d4ed15e456e0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/fb3f054252ee5303_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: aa00008e-67c1-4447-afe6-ef69d7aebe9e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: aa00008e-67c1-4447-afe6-ef69d7aebe9e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4a057ddc-fe5a-4d52-a0bd-d4ed15e456e0
This model is a fine-tuned version of [unsloth/SmolLM2-135M](https://huggingface.co/unsloth/SmolLM2-135M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0012 | 1 | nan |
| 0.0 | 0.0060 | 5 | nan |
| 0.0 | 0.0121 | 10 | nan |
| 0.0 | 0.0181 | 15 | nan |
| 0.0 | 0.0242 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrferr3t/a5f54f29-dde0-4519-8ea1-9e0b173b0558 | mrferr3t | 2025-01-29T07:30:16Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T07:17:54Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a5f54f29-dde0-4519-8ea1-9e0b173b0558
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 425476553ab111b0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/425476553ab111b0_train_data.json
type:
field_input: Content
field_instruction: Title
field_output: Summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/a5f54f29-dde0-4519-8ea1-9e0b173b0558
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 24
micro_batch_size: 2
mlflow_experiment_name: /tmp/425476553ab111b0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6972c938-4c63-447c-ab05-b15cf2af5926
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6972c938-4c63-447c-ab05-b15cf2af5926
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a5f54f29-dde0-4519-8ea1-9e0b173b0558
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9133 | 0.0001 | 1 | 2.2337 |
| 2.463 | 0.0007 | 6 | 2.2105 |
| 2.3493 | 0.0014 | 12 | 2.0745 |
| 1.9768 | 0.0021 | 18 | 2.0276 |
| 1.7809 | 0.0028 | 24 | 2.0030 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrferr3t/567e266b-33d8-46a8-933f-19f89ea6e377 | mrferr3t | 2025-01-29T07:29:16Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Base-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T05:19:31Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Base-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 567e266b-33d8-46a8-933f-19f89ea6e377
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Base-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e11d3af61284289e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e11d3af61284289e_train_data.json
type:
field_input: ''
field_instruction: prompt
field_output: reference_completion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/567e266b-33d8-46a8-933f-19f89ea6e377
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/e11d3af61284289e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 33053983-d2d7-46cd-86bd-33b197e4dd4c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 33053983-d2d7-46cd-86bd-33b197e4dd4c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 567e266b-33d8-46a8-933f-19f89ea6e377
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1819 | 0.0001 | 1 | 0.8374 |
| 2.9609 | 0.0007 | 5 | 0.8297 |
| 3.4358 | 0.0014 | 10 | 0.8031 |
| 2.8805 | 0.0021 | 15 | 0.7771 |
| 3.0493 | 0.0028 | 20 | 0.7716 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Sarveshj/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M-GGUF | Sarveshj | 2025-01-29T07:28:31Z | 229 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T07:26:54Z | ---
license: mit
library_name: transformers
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
tags:
- llama-cpp
- gguf-my-repo
---
# Sarveshj/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-Distill-Qwen-32B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Sarveshj/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Sarveshj/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-32b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Sarveshj/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Sarveshj/DeepSeek-R1-Distill-Qwen-32B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-32b-q4_k_m.gguf -c 2048
```
|
lautaflase/lofaan2026 | lautaflase | 2025-01-29T07:28:00Z | 8 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2025-01-29T07:27:49Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "Version 1.0.0"
output:
url: images/71nL6kZW51L.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: lofaan2025
---
# lofaan2025
<Gallery />
## Trigger words
You should use `lofaan2025` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/lautaflase/lofaan2026/tree/main) them in the Files & versions tab.
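The card does not include a usage example; as a rough sketch, the LoRA could be applied to the SDXL base model with 🧨 diffusers as below. The weight filename is an assumption — use the actual filename listed in the Files & versions tab.

```python
# Sketch: apply the lofaan2025 LoRA on top of the SDXL base pipeline.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# "lora.safetensors" is a placeholder; check the repo for the real filename.
pipeline.load_lora_weights("lautaflase/lofaan2026", weight_name="lora.safetensors")

image = pipeline("lofaan2025, high quality photo").images[0]
image.save("lofaan2025.png")
```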
|
kartikgupta373/as15671-509038-white | kartikgupta373 | 2025-01-29T07:27:15Z | 7 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T07:27:13Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# As15671 509038 White
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kartikgupta373/as15671-509038-white', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Best000/f68ac3ab-8d68-403d-88cc-8a3069d29f91 | Best000 | 2025-01-29T07:26:13Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T07:23:59Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f68ac3ab-8d68-403d-88cc-8a3069d29f91
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dcef816926ec2838_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dcef816926ec2838_train_data.json
type:
field_input: activity
field_instruction: topic
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/f68ac3ab-8d68-403d-88cc-8a3069d29f91
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/dcef816926ec2838_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d997858c-edf3-49a2-a1d9-29c48b4b7819
wandb_project: Birthday-SN56-15-Gradients-On-Demand
wandb_run: your_name
wandb_runid: d997858c-edf3-49a2-a1d9-29c48b4b7819
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f68ac3ab-8d68-403d-88cc-8a3069d29f91
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7154
## Model description
More information needed
## Intended uses & limitations
More information needed
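As a rough sketch of inference with this adapter (assuming the instruct base model's chat template; the prompt and generation settings are purely illustrative):

```python
# Sketch: chat-style generation with the Qwen2.5-1.5B-Instruct base plus this LoRA adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-1.5B-Instruct"
adapter_id = "Best000/f68ac3ab-8d68-403d-88cc-8a3069d29f91"  # this repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)

messages = [{"role": "user", "content": "Suggest a short warm-up activity for a writing class."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```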
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 2.1771 |
| 2.0815 | 0.0062 | 13 | 1.8749 |
| 1.892 | 0.0123 | 26 | 1.7472 |
| 1.7838 | 0.0185 | 39 | 1.7154 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Jrinky/model4 | Jrinky | 2025-01-29T07:26:00Z | 84 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:20816",
"loss:Infonce",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-01-29T07:20:44Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:20816
- loss:Infonce
base_model: BAAI/bge-m3
widget:
- source_sentence: What do studies show about the configurations of eukaryotic polysomes
sentences:
- "Eukaryotic\n\nIn cells \nin situ (in cell) studies have shown that eukaryotic\
\ polysomes exhibit linear configurations. Densely packed 3-D helices and planar\
\ double-row polysomes were found with variable packing including “top-to-top”\
\ contacts similar to prokaryotic polysomes."
- Carlo Dante Rota (born 17 April 1961) is a British-born Canadian actor. He has
appeared in Little Mosque on the Prairie and as systems analyst Morris O'Brian
on the Fox series 24.
- 'Ronnie & Jo Wood Still ‘Close Friends’ Despite Joint Property Auction
Celebrity auctioneer Darren Julien is gearing up for a massive sale of over 600
items belonging to Rolling Stones guitarist Ronnie Wood’s and his ex-wife Jo Wood.
Much like many of Julian’s Auctions past collections, this auction has created
some controversy because Ronnie has recently come out as opposed to the sale of
his personal belongings, denying his involvement in the ‘joint’ sale. In response
to those recent statements coming out Ronnie Wood’s camp saying he’s “shocked
and disappointed” at the auctioning off his personal belongings, and that the
auction has been “misrepresented as a joint sale,” Julien claims Ronnie has known
about the auction since its start.'
- source_sentence: What was Mike Holober's role at the BMI Jazz Composer’s Workshop
from 2007 to 2015
sentences:
- '- Establishing a named ''link’ person within an organisation with a liaison role
between service users and the organisation. This can help to reduce the problems
that can occur with personnel changes or restructuring.'
- 'A professor of obstetrics from 1895 at Kraków''s Jagiellonian University, Jordan
became best known for organizing children’s playgrounds, called "Jordan''s parks"
after him. Life
Henryk Jordan was born into an impoverished noble family from the village of Zakliczyn,
which over time moved to other places in Polish Galicia (for example Przemyśl).
His father, Bonifacy Jordan, gave private lessons. His mother, Salomea Wędrychowska,
was a homemaker. Jordan received his high-school education in Tarnopol and Tarnów.
In 1861, however, he took part in pro-Polish demonstrations for which he was threatened
with expulsion from school. In 1862 he moved to Trieste and a year later passed
his high-school examinations, in Italian, with honors. Jordan began his university
studies in Vienna, and from 1863 continued them at Kraków''s Jagiellonian University.
He passed his science examinations in 1867 but did not receive his master''s degree
due to pneumonia.'
- "From 2007 - 2015 he served as Associate Director of the BMI Jazz Composer’s Workshop,\
\ where he taught with Musical Director Jim McNeely. Discography \n The Mike Holober\
\ Quintet, Canyon (Sons of Sound, 2003)\n The Gotham Jazz Orchestra, T Thought\
\ Trains (Sons of Sound, 2004)\n The Mike Holober Quintet, Wish List (Sons of\
\ Sound, 2006)\n The Gotham Jazz Orchestra, Quake (Sunnyside, 2009)\n Mike Holober\
\ & Balancing Act, Balancing Act (Palmetto, 2015)\n The Gotham Jazz Orchestra,\
\ Hiding Out (Zoho Music, 2019)\n\nReferences\n\nExternal links \n Artist's official\
\ website\n Sons of Sound, Label page for Mike Holober\n Manhattan School of Music\
\ faculty profile\n CCNY Stuart Katz Professorship announcement\n Interview with\
\ WBGO's Gary Walker\n\nVideos\n Westchester Jazz Orchestra - promotional video\
\ written and directed by Darryl Estrine 2013\n \"Oh No\" - hr-Bigband plays Frank\
\ Zappa; Deutsches Jazzfestival Frankfurt 2015\n \"We Are Not Alone\" - hr-Bigband\
\ plays Frank Zappa; Deutsches Jazzfestival Frankfurt 2015\n \"G-Spot Tornado\
\ - hr-Bigband plays Frank Zappa; Deutsches Jazzfestival Frankfurt 2015\n\n\"\
Star of Jupiter\" - Kurt Rosenwinkel & hr-Bigband; Kurt Rosenwinkel & hr-Bigband\
\ im hr-Sendesaal 12.06.2015\n \"Heavenly Bodies\" - Kurt Rosenwinkel & hr-Bigband;\
\ Kurt Rosenwinkel & hr-Bigband im hr-Sendesaal 12.06.2015\n \"East Coast Love\
\ Affair\" - Kurt Rosenwinkel & hr-Bigband; Kurt Rosenwinkel & hr-Bigband im hr-Sendesaal\
\ 12.06.2015\n \"Brooklyn Sometimes\" - Kurt Rosenwinkel & hr-Bigband; Kurt Rosenwinkel\
\ & hr-Bigband im hr-Sendesaal 12.06.2015\n Al Foster feat. by WDR BIG BAND -\
\ Douglas (Rehearsal) - WDR rehearsal featuring Al Foster; 04.14.2016\n Al Foster\
\ feat."
- source_sentence: What problems does Alice encounter due to her roommate Merv's TV
watching habits
sentences:
- Roommate from hell Merv (Jeremy Strong) is an unrepentant yogurt-pilferer and,
far worse, the kind of TV addict who likes to "interact" by loudly critiquing
the very junk he's mainlining. The overwhelming blaring of the television rankles
Alice (Katie Kreisler), who starts out musing about a part of Vermont that's cut
off from TV -- and then ends up furiously plotting Merv's ouster.
- And it does help a bit in public places--there are a few people who will hold
open doors for me, or offer me other courtesies, as a result of my using the cane.
It's a real ego-killer to occasionally catch sight of myself, reflected in a plate-glass
window, stumping along with the cane and lurching from side-to-side.
- 'That''s an important step in literacy development. Why you''ll like it: I love
reading this book aloud at story hours.'
- source_sentence: What was the role of the Sri Lankan High Commissioner in Pretoria,
South Africa
sentences:
- "As the Sri Lankan High Commissioner, he functioned as the executive head of the\
\ Sri Lankan diplomatic mission in Pretoria, South Africa. Secretary to the Prime\
\ Minister \nFollowing the Appointment of the new prime minister D.M."
- xv + 191 pp. + 1 plate.
- "Winters are generally mild in Alabama, as they are throughout most of the southeastern\
\ United States, with average January low temperatures around in Mobile, around\
\ in Huntsville, around in Montgomery, and around in Birmingham. Extremes\n\
\nPrecipitation\nThe amount of precipitation is greatest along the coast (62 inches/1,574 mm)\
\ and evenly distributed through the rest of the state (about 52 inches/1,320 mm).\
\ Much of the rainfall is produced by thunderstorms and, occasionally, by hurricanes\
\ and other tropical disturbances. In central and northern Alabama, average monthly\
\ precipitation amounts are highest from November to April, typically peaking\
\ in December or March, as at Huntsville (December maximum) or Birmingham (March\
\ maximum), with August to October the driest months. Along the coast, summer\
\ thunderstorm rains are markedly more frequent and tropical weather systems are\
\ a threat from July to October. Accordingly, at Mobile, virtually the wettest\
\ city annually anywhere in the eastern United States (wetter than even Miami,\
\ FL with its drier winters), monthly average precipitation peaks in July and\
\ August, but virtually the entire year is wet, with October a slightly drier\
\ month. Although snow is a rare event in much of Alabama, areas of the state\
\ north of Montgomery may receive a dusting of snow a few times every winter,\
\ with an occasional moderately heavy snowfall every few years. Historic heavy\
\ snowfall events include the New Year's Eve 1963 snowstorm and the 1993 Storm\
\ of the Century. The annual average snowfall for the Birmingham area is per\
\ year. In the southern Gulf coast, snowfall is less frequent, sometimes going\
\ several years without any snowfall. El Niño and La Niña\nDuring El Niño, Alabama\
\ receives colder than average winter temperatures with wetter than average conditions\
\ along the southern parts of the state and drier than average conditions in the\
\ northern parts. La Niña brings warmer than average temperatures with the drier\
\ weather in the southern parts of the state due to a northern storm track. Hazards\n\
\nAlabama is also prone to tropical storms and even hurricanes. Areas of the state\
\ far away from the Gulf are not immune to the effects of the storms, which often\
\ dump tremendous amounts of rain as they move inland and weaken. Thunderstorms\
\ are common during the summer throughout Alabama and also occur during other\
\ times of the year including winter. South Alabama reports many thunderstorms.\
\ The Gulf Coast, around Mobile Bay, averages between 100 and 110 days per year\
\ with thunder reported, which eastern and northwest Alabama have 70 to 80 thunderstorm\
\ days per year. Occasionally, thunderstorms are severe with frequent lightning\
\ and large hail – the central and northern parts of the state are most vulnerable\
\ to this type of storm, the northern and central regions of Alabama are especially\
\ prone to tornadoes. Alabama ranks seventh in the number of deaths from lightning\
\ and ninth in the number of deaths from lightning strikes per capita. Tornadoes\
\ occur frequently in Alabama during the spring and fall months, these tornadoes\
\ can be devastating and even deadly.– these are common throughout the state,\
\ although the peak season for tornadoes varies from the northern to southern\
\ parts of the state. Alabama, along with Kansas, has the most reported F5/EF5\
\ tornadoes than any other state – according to statistics from the National Climatic\
\ Data Center for the period January 1, 1950, to October 31, 2006. An F5 tornado\
\ is the most powerful of its kind. Several long – tracked F5 tornadoes have contributed\
\ to Alabama reporting more tornado fatalities than any other state except for\
\ Texas and Mississippi. The Super Outbreaks of April 1974 and April 2011 both\
\ badly affected Alabama. The northern part of the state – along the Tennessee\
\ Valley – is one of the areas in the US most vulnerable to violent tornadoes\
\ . The area of Alabama and Mississippi most affected by tornadoes is sometimes\
\ referred to as Dixie Alley, as distinct from the Tornado Alley of the Southern\
\ Plains. Alabama is one of the few places in the world that has a secondary tornado\
\ season (November and December) along with the spring severe weather season.\
\ See also\nClimate change in Alabama\n\nReferences\n\n \nGeography of Alabama"
- source_sentence: What is the significance of the first written mention of Metylovice,
and in which year did it occur
sentences:
- Users could also get discounts when they bought the coins in bulk and earn coins
through certain apps on the Appstore. In 2014, with the release of the Fire Phone,
Amazon offered app developers 500,000 Amazon Coins for each paid app or app with
in-app purchasing developed and optimized for the Fire Phone.
- 'Contents
Hard Times moves the Traveller universe forward into a time where the galaxy is
riven by economic stagnation and collapse of the empire. Rick Swan wrote, "Planets
are gasping for life like guppies flung from a fish bowl, and the luckless survivors
face a future of staggering adversity."'
- 'The Olešná Stream flows through the municipality. History
The first written mention of Metylovice is in a deed of Bishop Dětřich from 1299.
From the second half of the 17th century, tanning developed in the village, thanks
to which the originally agricultural village began to prosper and grow. Brick
houses began to replace the original wooden ones and the education and cultural
life of the inhabitants increased. Sights
The most important monument is the Church of All Saints.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Jrinky/model4")
# Run inference
sentences = [
'What is the significance of the first written mention of Metylovice, and in which year did it occur',
'The Olešná Stream flows through the municipality. History\nThe first written mention of Metylovice is in a deed of Bishop Dětřich from 1299. From the second half of the 17th century, tanning developed in the village, thanks to which the originally agricultural village began to prosper and grow. Brick houses began to replace the original wooden ones and the education and cultural life of the inhabitants increased. Sights\nThe most important monument is the Church of All Saints.',
'Users could also get discounts when they bought the coins in bulk and earn coins through certain apps on the Appstore. In 2014, with the release of the Fire Phone, Amazon offered app developers 500,000 Amazon Coins for each paid app or app with in-app purchasing developed and optimized for the Fire Phone.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 20,816 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.92 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 168.82 tokens</li><li>max: 1024 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What was the birth date and place of Helena Binder, also known as Blanche Blotto</code> | <code>Born June 13, 1955 in Batavia, New York. Helena Binder, aka Blanche Blotto (keyboards, vocals; 1978-1980).</code> |
| <code>What incidents involving Israeli soldiers occurred in the occupied West Bank on Tuesday</code> | <code>Also Tuesday, Israeli soldiers fired a barrage of gas bombs and concussion grenades at a Palestinian home in the Masafer Yatta area, south of Hebron, in the southern part of the occupied West Bank, wounding an entire family, including children. On Tuesday evening, Israeli soldiers invaded the al-Maghayir village northeast of Ramallah, in the central West Bank, after many illegal colonizers attacked Palestinian cars. In related news, the soldiers shot three Palestinian construction workers near the illegal Annexation Wall, west of Hebron, in the southern part of the occupied West Bank, and abducted them.</code> |
| <code>How was the Mosbrucher Maar formed, and when did it occur</code> | <code>The Mosbrucher Weiher, also called the Mosbrucher Maar, is a silted up maar east of the municipal boundary of the village of Mosbruch in the county Vulkaneifel in Germany. It is located immediately at the foot of the 675-metre-high Hochkelberg, a former volcano. The floor of the maar is in the shape of an elongated oval and is about 700×500 metres in size, its upper boundary has a diameter of about 1,300 × 1,050 metres. This makes the Mosbrucher Maar the third largest of the maars in the western Eifel region. The Üßbach stream flows past and close to the Mosbrucher Weiher. Origin <br>According to pollen analysis studies, the crater was formed about 11,000 years ago by a volcanic eruption. In the area around the maar there are very few volcanic tuffs in comparison to other Eifel maars; only in two places are there greater accumulations of tuff; the rest of the surrounding area is covered only by a thin layer.</code> |
* Loss: <code>selfloss.Infonce</code> with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
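`selfloss.Infonce` is a custom loss module rather than a built-in sentence-transformers loss. Given the parameters above (scaled cosine similarity over anchor/positive pairs), it presumably behaves like a standard InfoNCE objective with in-batch negatives, roughly as sketched below — an illustration of the idea, not the actual implementation:

```python
# Illustrative InfoNCE over in-batch negatives: each anchor should match its own
# positive (diagonal of the scaled cosine-similarity matrix) against all others.
import torch
import torch.nn.functional as F

def infonce(anchor_emb: torch.Tensor, positive_emb: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    anchor = F.normalize(anchor_emb, dim=-1)
    positive = F.normalize(positive_emb, dim=-1)
    scores = scale * anchor @ positive.T          # (batch, batch) cosine similarities
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

loss = infonce(torch.randn(8, 1024), torch.randn(8, 1024))
print(loss.item())
```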
### Evaluation Dataset
#### Unnamed Dataset
* Size: 1,096 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.26 tokens</li><li>max: 574 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 189.9 tokens</li><li>max: 1024 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What architectural features are present on the front and southern sides of the Martínez Adobe house</code> | <code>The front and southern sides of the house have wooden wrap-around porches at each level. Wood shingles of either cedar or redwood originally covered the roof. The Martínez Adobe is now part of the John Muir National Historic Site and is open to the public. See also<br>California Historical Landmarks in Contra Costa County<br>National Register of Historic Places listings in Contra Costa County, California<br><br>References<br><br>Further reading<br>Feasibility Report John Muir Home and Vicente Martinez Adobe, Martinez, California. (1963). United States: National Park Service, U.S. Department of the Interior. Western Regional Office. Vincent, G., Mariotti, J., Rubin, J. (2009). Pinole. United States: Arcadia Publishing.</code> |
| <code>What are the cognitive aspects being assessed in relation to TBI, and how do they impact the rehabilitation services for individuals, including warfighters with hearing problems</code> | <code>“Within AASC, we’ve been very proactive as part of interdisciplinary teams assessing TBI. Another area we’re looking at involves cognitive aspects associated with TBI and mild TBI and the best approach to providing rehabilitative services.”<br>As with warfighters who return to duty – including combat – with prosthetic feet or legs, many with hearing problems also want to continue serving rather than accept medical discharges.</code> |
| <code>What are the benefits mentioned by BIO President & CEO Jim Greenwood regarding the energy title programs in rural America</code> | <code>BIO President & CEO Jim Greenwood said, “The important energy title programs authorized and funded in this bill are just beginning to have a positive impact in revitalizing rural America, fueling economic growth and creating well-paying opportunities where we need it most -- in manufacturing, energy, agriculture and forestry. These programs can also help meet our responsibilities to revitalize rural areas, reduce dependence on foreign oil, and renew economic growth.</code> |
* Loss: <code>selfloss.Infonce</code> with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0961 | 100 | 0.2849 | 0.0915 |
| 0.1921 | 200 | 0.0963 | 0.0511 |
| 0.2882 | 300 | 0.069 | 0.0459 |
| 0.3842 | 400 | 0.0622 | 0.0445 |
| 0.4803 | 500 | 0.0544 | 0.0441 |
| 0.5764 | 600 | 0.0615 | 0.0418 |
| 0.6724 | 700 | 0.0573 | 0.0416 |
| 0.7685 | 800 | 0.0524 | 0.0435 |
| 0.8646 | 900 | 0.0523 | 0.0398 |
### Framework Versions
- Python: 3.12.3
- Sentence Transformers: 3.4.0
- Transformers: 4.42.4
- PyTorch: 2.2.0+cu121
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### Infonce
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
kartikgupta373/as15669-mustardyellow | kartikgupta373 | 2025-01-29T07:25:47Z | 7 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T07:25:43Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# As15669 Mustardyellow
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kartikgupta373/as15669-mustardyellow', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
lesso11/3457c5f0-b44d-4efa-beb9-ff95835195d6 | lesso11 | 2025-01-29T07:25:18Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T07:21:42Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3457c5f0-b44d-4efa-beb9-ff95835195d6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 775410f20973b41e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/775410f20973b41e_train_data.json
type:
field_input: rejected
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso11/3457c5f0-b44d-4efa-beb9-ff95835195d6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/775410f20973b41e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2a9c3890-5cf7-4888-91af-b81ebd4af89f
wandb_project: multi
wandb_run: your_name
wandb_runid: 2a9c3890-5cf7-4888-91af-b81ebd4af89f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 3457c5f0-b44d-4efa-beb9-ff95835195d6
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1886 | 0.3384 | 200 | 2.1742 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Best000/e93356a9-aa11-4850-89e0-e6831b6962c8 | Best000 | 2025-01-29T07:25:06Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T07:22:00Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e93356a9-aa11-4850-89e0-e6831b6962c8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 775410f20973b41e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/775410f20973b41e_train_data.json
type:
field_input: rejected
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/e93356a9-aa11-4850-89e0-e6831b6962c8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/775410f20973b41e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2a9c3890-5cf7-4888-91af-b81ebd4af89f
wandb_project: Birthday-SN56-32-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2a9c3890-5cf7-4888-91af-b81ebd4af89f
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e93356a9-aa11-4850-89e0-e6831b6962c8
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 2.7100 |
| 2.7609 | 0.0028 | 13 | 2.6856 |
| 2.722 | 0.0055 | 26 | 2.5662 |
| 2.5732 | 0.0083 | 39 | 2.4285 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso15/b288b863-f9b2-4c1a-9587-7c62df850262 | lesso15 | 2025-01-29T07:23:39Z | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T06:20:00Z | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b288b863-f9b2-4c1a-9587-7c62df850262
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 75ea8b2b0ce0747b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/75ea8b2b0ce0747b_train_data.json
type:
field_input: Resume_str
field_instruction: Category
field_output: Resume_html
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso15/b288b863-f9b2-4c1a-9587-7c62df850262
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/75ea8b2b0ce0747b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 09b31402-03d6-4e52-b0bc-a10763cac165
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 09b31402-03d6-4e52-b0bc-a10763cac165
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b288b863-f9b2-4c1a-9587-7c62df850262
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8504 | 0.7373 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
SandLogicTechnologies/DeepSeek-R1-Distill-Qwen-1.5B-GGUF | SandLogicTechnologies | 2025-01-29T07:23:25Z | 221 | 2 | null | [
"gguf",
"Qwen2",
"Conversational",
"EdgeAI",
"en",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T06:56:05Z | ---
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
tags:
- Qwen2
- Conversational
- EdgeAI
---
# DeepSeek-R1-Distill-Qwen-1.5B Quantized Models
This repository contains Q4_KM and Q5_KM quantized versions of the [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) model, optimized for efficient deployment while maintaining strong performance.
Discover our full range of quantized language models by visiting our [SandLogic Lexicon HuggingFace](https://huggingface.co/SandLogicTechnologies). To learn more about our company and services, check out our website at [SandLogic](https://www.sandlogic.com/).
## Model Description
These models are quantized versions of DeepSeek-R1-Distill-Qwen-1.5B, which is a highly efficient distilled 1.5B parameter model based on the Qwen architecture. This lightweight model demonstrates that reasoning patterns from larger models can be effectively distilled into much smaller architectures, making it ideal for resource-constrained deployments.
### Key Features
- Ultra-lightweight model with only 1.5B parameters
- Fine-tuned on reasoning data generated by DeepSeek-R1
- Modified configurations and tokenizer optimized for performance
- Excellent balance of performance and resource efficiency
- Perfect for edge devices and limited compute environments
### Available Quantized Versions
1. **Q4_KM Version**
- 4-bit quantization using the K-means method
- Approximately 1.12GB model size
- Exceptional efficiency for deployment
- Ideal for mobile and edge devices
2. **Q5_KM Version**
- 5-bit quantization using the K-means method
- Approximately 1.30GB model size
- Higher precision while maintaining small size
- Recommended for balanced performance requirements
## Usage
```bash
pip install llama-cpp-python
```
Please refer to the llama-cpp-python [documentation](https://llama-cpp-python.readthedocs.io/en/latest/) to install with GPU support.
### Basic Text Completion
Here's an example demonstrating how to use the high-level API for basic text completion:
```python
from llama_cpp import Llama
llm = Llama(
model_path="model/path/",
verbose=False,
# n_gpu_layers=-1, # Uncomment to use GPU acceleration
# n_ctx=2048, # Uncomment to increase the context window
)
# Example of a simple task
output = llm(
"Q: What are the benefits of using smaller language models? A: ",
max_tokens=128,
stop=["Q:", "\n\n"],
echo=False
)
print(output["choices"][0]["text"])
```
## Model Configuration Changes
Please note that DeepSeek has made slight modifications to the original Qwen-1.5B configurations and tokenizer to optimize performance. When using these models, ensure you're using the provided settings rather than the original Qwen-1.5B configurations.
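For chat-style use, a minimal sketch is shown below (the path and prompt are illustrative); the GGUF files already embed the modified tokenizer and chat template, so no separate Qwen-1.5B configuration needs to be supplied:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="model/path/",  # path to the downloaded Q4_KM or Q5_KM GGUF file
    verbose=False,
)

# The chat template baked into the GGUF file is applied automatically.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain model distillation in one sentence."}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```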
## Deployment Benefits
- Minimal RAM requirements (< 2GB)
- Fast inference speed
- Suitable for CPU-only environments
- Excellent for edge computing applications
- Efficient batching capabilities
## License
This model inherits the license of the original DeepSeek-R1-Distill-Qwen-1.5B model. Please refer to the original model's license for usage terms and conditions.
## Acknowledgments
We thank the DeepSeek AI team for open-sourcing their distilled models and demonstrating that even very small models can achieve impressive performance through effective distillation techniques. Special thanks also to the Qwen team for providing the base model architecture. |
kartikgupta373/as15662-509032-pastel-green | kartikgupta373 | 2025-01-29T07:21:52Z | 25 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T07:21:51Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# As15662 509032 Pastel Green
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kartikgupta373/as15662-509032-pastel-green', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
kartikgupta373/as15661-509023-caramine-pink | kartikgupta373 | 2025-01-29T07:21:18Z | 10 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T07:21:16Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# As15661 509023 Caramine Pink
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kartikgupta373/as15661-509023-caramine-pink', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
AIFunOver/DeepSeek-R1-Distill-Llama-8B-openvino-4bit | AIFunOver | 2025-01-29T07:21:12Z | 38 | 0 | transformers | [
"transformers",
"safetensors",
"openvino",
"llama",
"text-generation",
"nncf",
"4-bit",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T07:04:45Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
library_name: transformers
license: mit
tags:
- openvino
- nncf
- 4-bit
base_model_relation: quantized
---
This model is a quantized version of [`deepseek-ai/DeepSeek-R1-Distill-Llama-8B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) and has been converted to the OpenVINO format. This model was obtained via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space with [optimum-intel](https://github.com/huggingface/optimum-intel).
First make sure you have `optimum-intel` installed:
```bash
pip install optimum[openvino]
```
To load your model you can do as follows:
```python
from optimum.intel import OVModelForCausalLM
model_id = "AIFunOver/DeepSeek-R1-Distill-Llama-8B-openvino-4bit"
model = OVModelForCausalLM.from_pretrained(model_id)
```
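A short follow-up sketch (the prompt and generation length are illustrative; the tokenizer is assumed to be bundled with the quantized repository) showing text generation once the model is loaded:
```python
from transformers import AutoTokenizer

# Load the tokenizer shipped alongside the quantized weights.
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```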
|
YOYO-AI/Qwen2.5-14B-YOYO-1005-v2 | YOYO-AI | 2025-01-29T07:20:50Z | 11 | 0 | null | [
"safetensors",
"qwen2",
"merge",
"text-generation",
"conversational",
"en",
"zh",
"base_model:YOYO-AI/Qwen2.5-14B-YOYO-1005",
"base_model:finetune:YOYO-AI/Qwen2.5-14B-YOYO-1005",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-01-29T02:30:38Z | ---
license: apache-2.0
language:
- en
- zh
base_model:
- YOYO-AI/Qwen2.5-14B-YOYO-1005
pipeline_tag: text-generation
tags:
- merge
---
I will release the second-generation versions of **1010**, **1005**, **0510**, and **0505**, test them on the **open_llm_leaderboard**, and select the model with the highest average score as the **latest** version for this iteration. Finally, I will generalize this merging methodology for broader application. |
ardaspear/9d7c64a0-a477-4ddb-be0d-253947673083 | ardaspear | 2025-01-29T07:20:00Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-135M",
"base_model:adapter:unsloth/SmolLM-135M",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T06:39:34Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d7c64a0-a477-4ddb-be0d-253947673083
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-135M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f13f8c7f24d1c82b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f13f8c7f24d1c82b_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: ardaspear/9d7c64a0-a477-4ddb-be0d-253947673083
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/f13f8c7f24d1c82b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: fe5a2fbf-53c6-40d5-bfc2-dd765f3feb4e
wandb_project: Gradients-On-Five
wandb_run: your_name
wandb_runid: fe5a2fbf-53c6-40d5-bfc2-dd765f3feb4e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9d7c64a0-a477-4ddb-be0d-253947673083
This model is a fine-tuned version of [unsloth/SmolLM-135M](https://huggingface.co/unsloth/SmolLM-135M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 1.1209 |
| 1.1429 | 0.0012 | 9 | 1.1171 |
| 1.114 | 0.0024 | 18 | 1.1026 |
| 1.0849 | 0.0036 | 27 | 1.0858 |
| 1.0434 | 0.0048 | 36 | 1.0698 |
| 1.0651 | 0.0060 | 45 | 1.0571 |
| 1.1012 | 0.0072 | 54 | 1.0479 |
| 1.0737 | 0.0084 | 63 | 1.0407 |
| 1.0229 | 0.0096 | 72 | 1.0360 |
| 1.0336 | 0.0108 | 81 | 1.0332 |
| 0.9747 | 0.0120 | 90 | 1.0321 |
| 1.0246 | 0.0132 | 99 | 1.0319 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tryingpro/bd50aa04-2634-48bd-9154-e083b9863b7f | tryingpro | 2025-01-29T07:19:30Z | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"license:llama3",
"region:us"
] | null | 2025-01-29T02:19:57Z | ---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bd50aa04-2634-48bd-9154-e083b9863b7f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 93a2807477853fd7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/93a2807477853fd7_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 256
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: false
hub_model_id: tryingpro/bd50aa04-2634-48bd-9154-e083b9863b7f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- down_proj
- up_proj
lr_scheduler: cosine
max_grad_norm: 2
max_steps: 90
micro_batch_size: 2
mlflow_experiment_name: /tmp/93a2807477853fd7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: tryingpro-unicourt
wandb_mode: online
wandb_name: 1baad95d-3392-4bf7-aae8-e00a80f185c4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1baad95d-3392-4bf7-aae8-e00a80f185c4
warmup_steps: 20
weight_decay: 0.02
xformers_attention: false
```
</details><br>
# bd50aa04-2634-48bd-9154-e083b9863b7f
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 90
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | nan |
| 0.0 | 0.0027 | 8 | nan |
| 0.0 | 0.0053 | 16 | nan |
| 0.0 | 0.0080 | 24 | nan |
| 0.0 | 0.0107 | 32 | nan |
| 0.0 | 0.0133 | 40 | nan |
| 0.0 | 0.0160 | 48 | nan |
| 0.0 | 0.0187 | 56 | nan |
| 0.0 | 0.0213 | 64 | nan |
| 0.0 | 0.0240 | 72 | nan |
| 0.0 | 0.0266 | 80 | nan |
| 0.0 | 0.0293 | 88 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
memevis/p12 | memevis | 2025-01-29T07:16:39Z | 34 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T07:11:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
outlookAi/emRGMxaKX5 | outlookAi | 2025-01-29T07:13:37Z | 12 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-29T06:51:35Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: NidtaP
---
# Emrgmxakx5
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `NidtaP` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/emRGMxaKX5', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
duyphu/015a3dba-1747-4578-b959-b3877f3beec8 | duyphu | 2025-01-29T07:11:59Z | 11 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B",
"base_model:adapter:unsloth/Qwen2-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T07:00:52Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 015a3dba-1747-4578-b959-b3877f3beec8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c710dbacd1baf82d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c710dbacd1baf82d_train_data.json
type:
field_instruction: prompt
field_output: story
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/015a3dba-1747-4578-b959-b3877f3beec8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/c710dbacd1baf82d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 96aa06fc-7593-4da9-898b-b6eb1b530143
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 96aa06fc-7593-4da9-898b-b6eb1b530143
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 015a3dba-1747-4578-b959-b3877f3beec8
This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0030 | 1 | nan |
| 0.0 | 0.0301 | 10 | nan |
| 0.0 | 0.0602 | 20 | nan |
| 0.0 | 0.0902 | 30 | nan |
| 0.0 | 0.1203 | 40 | nan |
| 0.0 | 0.1504 | 50 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
reds0510/npo_gdr_1e-6_ckpt50 | reds0510 | 2025-01-29T07:11:10Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T06:59:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mrferr3t/795c5b88-53f5-4b67-b3f4-e18696e1879d | mrferr3t | 2025-01-29T07:09:08Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-135M",
"base_model:adapter:unsloth/SmolLM-135M",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T06:39:55Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 795c5b88-53f5-4b67-b3f4-e18696e1879d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-135M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f13f8c7f24d1c82b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f13f8c7f24d1c82b_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/795c5b88-53f5-4b67-b3f4-e18696e1879d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 19
micro_batch_size: 2
mlflow_experiment_name: /tmp/f13f8c7f24d1c82b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fe5a2fbf-53c6-40d5-bfc2-dd765f3feb4e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fe5a2fbf-53c6-40d5-bfc2-dd765f3feb4e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 795c5b88-53f5-4b67-b3f4-e18696e1879d
This model is a fine-tuned version of [unsloth/SmolLM-135M](https://huggingface.co/unsloth/SmolLM-135M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 19
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1519 | 0.0000 | 1 | 1.1686 |
| 1.1649 | 0.0002 | 5 | 1.1682 |
| 1.0092 | 0.0003 | 10 | 1.1638 |
| 1.1278 | 0.0005 | 15 | 1.1569 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
prxy5604/5c25f201-61c2-4c24-b99c-71e35882361e | prxy5604 | 2025-01-29T07:07:01Z | 9 | 0 | peft | [
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-01-29T06:43:18Z | ---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5c25f201-61c2-4c24-b99c-71e35882361e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- f65209fd2b79f576_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f65209fd2b79f576_train_data.json
type:
field_instruction: text
field_output: code
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/5c25f201-61c2-4c24-b99c-71e35882361e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/f65209fd2b79f576_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7fba0349-cbce-4a47-81c7-be27ce53fcc2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7fba0349-cbce-4a47-81c7-be27ce53fcc2
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5c25f201-61c2-4c24-b99c-71e35882361e
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2736
## Model description
More information needed
## Intended uses & limitations
More information needed
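The card does not document a usage recipe. A minimal loading sketch, assuming the standard PEFT workflow for a LoRA adapter on `bigcode/starcoder2-3b` (the prompt below is only illustrative):
```python
# Sketch: attach the LoRA adapter to the base model with peft + transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigcode/starcoder2-3b"
adapter_id = "prxy5604/5c25f201-61c2-4c24-b99c-71e35882361e"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # loads the fine-tuned LoRA weights

# Illustrative prompt; the card does not state an expected prompt format.
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```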
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.55 | 0.0006 | 1 | 0.7866 |
| 2.4345 | 0.0316 | 50 | 0.3219 |
| 1.9095 | 0.0632 | 100 | 0.2910 |
| 2.0765 | 0.0947 | 150 | 0.2781 |
| 2.0481 | 0.1263 | 200 | 0.2736 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kk-aivio/1f13060f-2597-4e29-89e7-320382c88449 | kk-aivio | 2025-01-29T07:06:49Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/gemma-2-9b-it-SimPO",
"base_model:adapter:princeton-nlp/gemma-2-9b-it-SimPO",
"license:mit",
"region:us"
] | null | 2025-01-29T07:04:59Z | ---
library_name: peft
license: mit
base_model: princeton-nlp/gemma-2-9b-it-SimPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1f13060f-2597-4e29-89e7-320382c88449
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: princeton-nlp/gemma-2-9b-it-SimPO
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 349dac9ba163f0a5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/349dac9ba163f0a5_train_data.json
type:
field_instruction: question
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/1f13060f-2597-4e29-89e7-320382c88449
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/349dac9ba163f0a5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0f1b7d9e-507c-4d19-8049-642ebf7e0fb6
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0f1b7d9e-507c-4d19-8049-642ebf7e0fb6
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1f13060f-2597-4e29-89e7-320382c88449
This model is a fine-tuned version of [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0036 | 1 | 4.2997 |
| 2.79 | 0.0466 | 13 | 1.2274 |
| 1.1457 | 0.0933 | 26 | 1.0730 |
| 1.047 | 0.1399 | 39 | 1.0469 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso05/8b59edcc-7cfa-41c7-b687-e5698d7da29d | lesso05 | 2025-01-29T07:03:30Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B",
"base_model:adapter:unsloth/Qwen2-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T07:00:36Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8b59edcc-7cfa-41c7-b687-e5698d7da29d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B
bf16: true
chat_template: llama3
datasets:
- data_files:
- c710dbacd1baf82d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c710dbacd1baf82d_train_data.json
type:
field_instruction: prompt
field_output: story
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso05/8b59edcc-7cfa-41c7-b687-e5698d7da29d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/c710dbacd1baf82d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 96aa06fc-7593-4da9-898b-b6eb1b530143
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 96aa06fc-7593-4da9-898b-b6eb1b530143
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8b59edcc-7cfa-41c7-b687-e5698d7da29d
This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0030 | 1 | nan |
| 0.0 | 0.0150 | 5 | nan |
| 0.0 | 0.0301 | 10 | nan |
| 0.0 | 0.0451 | 15 | nan |
| 0.0 | 0.0602 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gavrilstep/ea5cc700-5d72-458d-a2fa-14d39fe0f3e8 | gavrilstep | 2025-01-29T07:02:41Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B",
"base_model:adapter:unsloth/Qwen2-7B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T07:00:09Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ea5cc700-5d72-458d-a2fa-14d39fe0f3e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c710dbacd1baf82d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c710dbacd1baf82d_train_data.json
type:
field_instruction: prompt
field_output: story
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: gavrilstep/ea5cc700-5d72-458d-a2fa-14d39fe0f3e8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/c710dbacd1baf82d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 96aa06fc-7593-4da9-898b-b6eb1b530143
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 96aa06fc-7593-4da9-898b-b6eb1b530143
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ea5cc700-5d72-458d-a2fa-14d39fe0f3e8
This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0030 | 1 | nan |
| 0.0 | 0.0150 | 5 | nan |
| 0.0 | 0.0301 | 10 | nan |
| 0.0 | 0.0451 | 15 | nan |
| 0.0 | 0.0602 | 20 | nan |
| 0.0 | 0.0752 | 25 | nan |
| 0.0 | 0.0902 | 30 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
NalDice/askvox-1.2 | NalDice | 2025-01-29T07:01:56Z | 32 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T06:49:45Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** NalDice
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
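A hedged usage sketch, assuming the repository holds merged causal-LM weights (as the `pytorch`/`text-generation` tags suggest) and that the Llama 3.1 chat template is preserved:
```python
# Sketch only: load the model and chat with it (device_map="auto" needs accelerate installed)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NalDice/askvox-1.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Hello, what can you do?"}]  # illustrative prompt
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```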
|
mradermacher/Lora2025-01-27-GGUF | mradermacher | 2025-01-29T07:00:07Z | 279 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ianfoster/Lora2025-01-27",
"base_model:quantized:ianfoster/Lora2025-01-27",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T05:58:09Z | ---
base_model: ianfoster/Lora2025-01-27
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/ianfoster/Lora2025-01-27
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
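As a concrete, hedged illustration, one common route is to download a single quant file with `huggingface_hub` and load it with `llama-cpp-python`; the filename is taken from the table below, while the prompt and context size are arbitrary:
```python
# Sketch: fetch one quant and run it locally (pip install huggingface_hub llama-cpp-python)
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Lora2025-01-27-GGUF",
    filename="Lora2025-01-27.Q4_K_M.gguf",  # one of the single-file quants listed below
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Explain what a GGUF file is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```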
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lora2025-01-27-GGUF/resolve/main/Lora2025-01-27.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Lora2025-01-27-GGUF/resolve/main/Lora2025-01-27.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Lora2025-01-27-GGUF/resolve/main/Lora2025-01-27.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lora2025-01-27-GGUF/resolve/main/Lora2025-01-27.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lora2025-01-27-GGUF/resolve/main/Lora2025-01-27.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Lora2025-01-27-GGUF/resolve/main/Lora2025-01-27.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lora2025-01-27-GGUF/resolve/main/Lora2025-01-27.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lora2025-01-27-GGUF/resolve/main/Lora2025-01-27.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Lora2025-01-27-GGUF/resolve/main/Lora2025-01-27.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Lora2025-01-27-GGUF/resolve/main/Lora2025-01-27.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Lora2025-01-27-GGUF/resolve/main/Lora2025-01-27.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Lora2025-01-27-GGUF/resolve/main/Lora2025-01-27.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
naijavoices/mms-tts-hau-finetuned-AQ87U | naijavoices | 2025-01-29T06:58:46Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-01-29T06:58:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
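The card leaves this section empty. As a hedged sketch, an MMS-style VITS checkpoint is normally loadable with the `transformers` VITS classes (the Hausa sentence below is only an illustrative input):
```python
# Sketch only: assumes this is a standard MMS-TTS (VITS) checkpoint compatible with transformers
import torch
import scipy.io.wavfile
from transformers import VitsModel, AutoTokenizer

model_id = "naijavoices/mms-tts-hau-finetuned-AQ87U"
model = VitsModel.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Sannu da zuwa.", return_tensors="pt")  # illustrative Hausa text
with torch.no_grad():
    waveform = model(**inputs).waveform

scipy.io.wavfile.write("output.wav", rate=model.config.sampling_rate, data=waveform.squeeze().numpy())
```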
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Moustapha91/TTS_WOLOF_FINAL | Moustapha91 | 2025-01-29T06:55:09Z | 28 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-01-29T06:54:49Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: TTS_WOLOF_FINAL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TTS_WOLOF_FINAL
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3705
## Model description
More information needed
## Intended uses & limitations
More information needed
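No inference snippet is provided. A minimal sketch, assuming the standard SpeechT5 recipe applies; the HiFi-GAN vocoder and the zero speaker embedding are placeholders, since the card does not say which speaker embedding was used in training:
```python
# Sketch: standard SpeechT5 TTS inference; the speaker embedding here is a placeholder
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "Moustapha91/TTS_WOLOF_FINAL"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Na nga def?", return_tensors="pt")  # illustrative Wolof input
speaker_embeddings = torch.zeros(1, 512)  # replace with a real x-vector for the training speaker
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```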
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 20000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.4017 | 6.2706 | 5000 | 0.3795 |
| 0.3821 | 12.5412 | 10000 | 0.3702 |
| 0.3708 | 18.8117 | 15000 | 0.3769 |
| 0.3605 | 25.0823 | 20000 | 0.3705 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
beingbatman/CTMAE-P2-V2-S5 | beingbatman | 2025-01-29T06:53:48Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-large-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-large-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-01-29T04:11:04Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-large-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: CTMAE-P2-V2-S5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CTMAE-P2-V2-S5
This model is a fine-tuned version of [MCG-NJU/videomae-large-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-large-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2006
- Accuracy: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
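A hedged inference sketch, assuming the checkpoint loads with the standard VideoMAE classes and expects 16 sampled frames, as Kinetics-finetuned VideoMAE models usually do:
```python
# Sketch: classify a clip; replace the random frames with frames decoded from a real video
import torch
import numpy as np
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

model_id = "beingbatman/CTMAE-P2-V2-S5"
processor = VideoMAEImageProcessor.from_pretrained(model_id)
model = VideoMAEForVideoClassification.from_pretrained(model_id)

video = [np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8) for _ in range(16)]  # 16 RGB frames
inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```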
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 13050
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.5874 | 0.02 | 261 | 2.2577 | 0.5682 |
| 0.581 | 1.02 | 522 | 2.4954 | 0.5682 |
| 1.5552 | 2.02 | 783 | 2.2144 | 0.5682 |
| 0.7597 | 3.02 | 1044 | 2.1388 | 0.5682 |
| 1.8176 | 4.02 | 1305 | 1.5857 | 0.5682 |
| 0.9596 | 5.02 | 1566 | 1.9454 | 0.5682 |
| 0.8402 | 6.02 | 1827 | 2.0550 | 0.5682 |
| 1.0823 | 7.02 | 2088 | 1.7864 | 0.5682 |
| 1.0229 | 8.02 | 2349 | 1.8592 | 0.5682 |
| 0.7113 | 9.02 | 2610 | 1.4045 | 0.5682 |
| 1.3068 | 10.02 | 2871 | 1.4536 | 0.5682 |
| 1.7964 | 11.02 | 3132 | 1.8695 | 0.5682 |
| 1.6925 | 12.02 | 3393 | 0.7860 | 0.5682 |
| 0.3966 | 13.02 | 3654 | 2.1610 | 0.5682 |
| 0.0112 | 14.02 | 3915 | 2.7138 | 0.5682 |
| 0.5847 | 15.02 | 4176 | 0.8433 | 0.7045 |
| 0.6547 | 16.02 | 4437 | 1.7384 | 0.6136 |
| 0.7854 | 17.02 | 4698 | 1.3477 | 0.6818 |
| 1.0052 | 18.02 | 4959 | 1.4197 | 0.7045 |
| 1.4927 | 19.02 | 5220 | 2.2046 | 0.6136 |
| 0.5386 | 20.02 | 5481 | 1.2006 | 0.75 |
| 0.7256 | 21.02 | 5742 | 1.5015 | 0.7273 |
| 0.8462 | 22.02 | 6003 | 1.6405 | 0.6591 |
| 0.64 | 23.02 | 6264 | 2.2160 | 0.5682 |
| 1.0358 | 24.02 | 6525 | 2.6674 | 0.5682 |
| 0.0003 | 25.02 | 6786 | 3.2237 | 0.5682 |
| 1.449 | 26.02 | 7047 | 2.9910 | 0.5455 |
| 0.6425 | 27.02 | 7308 | 2.9668 | 0.5682 |
| 0.0038 | 28.02 | 7569 | 3.2074 | 0.5455 |
| 0.4198 | 29.02 | 7830 | 3.4554 | 0.5455 |
| 0.0002 | 30.02 | 8091 | 2.2222 | 0.6591 |
| 0.0087 | 31.02 | 8352 | 2.7093 | 0.5455 |
| 0.2823 | 32.02 | 8613 | 2.8994 | 0.5909 |
| 0.0009 | 33.02 | 8874 | 2.9261 | 0.5909 |
| 0.0064 | 34.02 | 9135 | 2.4037 | 0.6818 |
| 0.7506 | 35.02 | 9396 | 2.8436 | 0.6364 |
| 0.6686 | 36.02 | 9657 | 3.1198 | 0.5682 |
| 0.0089 | 37.02 | 9918 | 2.2353 | 0.6591 |
| 0.6753 | 38.02 | 10179 | 3.0288 | 0.6364 |
| 0.0003 | 39.02 | 10440 | 2.4052 | 0.6591 |
| 0.295 | 40.02 | 10701 | 3.7579 | 0.5682 |
| 0.0002 | 41.02 | 10962 | 3.3831 | 0.5909 |
| 0.5379 | 42.02 | 11223 | 3.5119 | 0.5455 |
| 0.0001 | 43.02 | 11484 | 3.3207 | 0.5909 |
| 0.0001 | 44.02 | 11745 | 3.1331 | 0.6136 |
| 0.0002 | 45.02 | 12006 | 3.1938 | 0.5909 |
| 0.0001 | 46.02 | 12267 | 3.2387 | 0.5909 |
| 0.6632 | 47.02 | 12528 | 3.3889 | 0.5909 |
| 0.2849 | 48.02 | 12789 | 3.3584 | 0.6364 |
| 0.0001 | 49.02 | 13050 | 3.2970 | 0.6136 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.0.1+cu117
- Datasets 3.0.1
- Tokenizers 0.20.0
|
kiranpantha/whisper-large-v3-nepali-peft-dora-speaker2-rank128-targetxckv-epochs3 | kiranpantha | 2025-01-29T06:53:34Z | 7 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"ne",
"dataset:kiranpantha/OpenSLR54-Balanced-Nepali",
"base_model:kiranpantha/whisper-large-v3-nepali",
"base_model:adapter:kiranpantha/whisper-large-v3-nepali",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T06:48:01Z | ---
library_name: peft
language:
- ne
license: apache-2.0
base_model: kiranpantha/whisper-large-v3-nepali
tags:
- generated_from_trainer
datasets:
- kiranpantha/OpenSLR54-Balanced-Nepali
model-index:
- name: kiranpantha/whisper-large-v3-nepali
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kiranpantha/whisper-large-v3-nepali
This model is a fine-tuned version of [kiranpantha/whisper-large-v3-nepali](https://huggingface.co/kiranpantha/whisper-large-v3-nepali) on the OpenSLR54 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3029
## Model description
More information needed
## Intended uses & limitations
More information needed
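A hedged loading sketch, assuming this repository is a PEFT adapter that attaches to the `kiranpantha/whisper-large-v3-nepali` base for Nepali transcription:
```python
# Sketch: attach the adapter to the base Whisper model and transcribe a 16 kHz clip
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from peft import PeftModel

base_id = "kiranpantha/whisper-large-v3-nepali"
adapter_id = "kiranpantha/whisper-large-v3-nepali-peft-dora-speaker2-rank128-targetxckv-epochs3"

processor = WhisperProcessor.from_pretrained(base_id)
model = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)

audio, _ = librosa.load("sample.wav", sr=16000)  # any 16 kHz mono clip
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
predicted_ids = model.generate(input_features=inputs.input_features, language="ne", task="transcribe")
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```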
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 0.6995 |
| No log | 2.0 | 12 | 0.3506 |
| No log | 3.0 | 18 | 0.3029 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.1
- Pytorch 2.5.1+cxx11.abi
- Datasets 3.2.0
- Tokenizers 0.21.0 |
gunchoi/json-pair-hwasan | gunchoi | 2025-01-29T06:50:47Z | 869 | 0 | diffusers | [
"diffusers",
"sd3",
"sd3-diffusers",
"text-to-image",
"simpletuner",
"lora",
"template:sd-lora",
"standard",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"license:other",
"region:us"
] | text-to-image | 2025-01-20T23:51:30Z | ---
license: other
base_model: stabilityai/stable-diffusion-3.5-large
tags:
- sd3
- sd3-diffusers
- text-to-image
- diffusers
- simpletuner
- lora
- template:sd-lora
- standard
inference: true
widget:
- text: unconditional (blank prompt)
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_0_0.png
- text: >-
k4s4,
{"scene_id":34,"characters":[{"character_id":"Unknown122","action":"speakingwithagesture","emotion":"concerned","position":"top-right","appearance":"darkhairtiedback,mustacheandgoatee,wearingaredrobewithyellowaccentsandadecorativehat"},{"character_id":"Unknown121","action":"listening","emotion":"focused","position":"bottom-center","appearance":"brownhairtiedup,wearingagreenrobewithacollar"}],"dialogue":[{"character_id":"Unknown122","dialogue_type":"exclamation","original_text":"하면...!","translated_text":"Then...!","position":"top-left"},{"character_id":"Unknown121","dialogue_type":"normalspeech","original_text":"하면대체이게무슨병이란말입니까!","translated_text":"Thenwhatillnessarewedealingwith?","position":"bottom-center"}],"description":"Unknown122,appearinganimatedandconcerned,questionsthenatureoftheillness,whileUnknown121listensintently,standingcloseinaninteriorroom.","setting":{"location":"interiorroomwithlatticewindows","time_of_the_day":"n/a"},"purpose_of_the_scene":"Toportraytheurgentsearchforananswertothemysteriousillnesstroublingthecharacters,addingintensitytothedilemma.","camera_angle":"high-angleshotcapturingbothcharacters","continuity_note":"MaintainUnknown122'sconcerneddemeanorandattire,consistentwithhispreviousscenes.","focal_points":["Unknown122'sanimatedexpression","dialoguebubbles"]}
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_1_0.png
---
# simpletuner-lora
This is a standard PEFT LoRA derived from [stabilityai/stable-diffusion-3.5-large](https://huggingface.co/stabilityai/stable-diffusion-3.5-large).
The main validation prompt used during training was:
```
k4s4, {"scene_id":34,"characters":[{"character_id":"Unknown122","action":"speakingwithagesture","emotion":"concerned","position":"top-right","appearance":"darkhairtiedback,mustacheandgoatee,wearingaredrobewithyellowaccentsandadecorativehat"},{"character_id":"Unknown121","action":"listening","emotion":"focused","position":"bottom-center","appearance":"brownhairtiedup,wearingagreenrobewithacollar"}],"dialogue":[{"character_id":"Unknown122","dialogue_type":"exclamation","original_text":"하면...!","translated_text":"Then...!","position":"top-left"},{"character_id":"Unknown121","dialogue_type":"normalspeech","original_text":"하면대체이게무슨병이란말입니까!","translated_text":"Thenwhatillnessarewedealingwith?","position":"bottom-center"}],"description":"Unknown122,appearinganimatedandconcerned,questionsthenatureoftheillness,whileUnknown121listensintently,standingcloseinaninteriorroom.","setting":{"location":"interiorroomwithlatticewindows","time_of_the_day":"n/a"},"purpose_of_the_scene":"Toportraytheurgentsearchforananswertothemysteriousillnesstroublingthecharacters,addingintensitytothedilemma.","camera_angle":"high-angleshotcapturingbothcharacters","continuity_note":"MaintainUnknown122'sconcerneddemeanorandattire,consistentwithhispreviousscenes.","focal_points":["Unknown122'sanimatedexpression","dialoguebubbles"]}
```
## Validation settings
- CFG: `7.5`
- CFG Rescale: `0.0`
- Steps: `30`
- Sampler: `FlowMatchEulerDiscreteScheduler`
- Seed: `42`
- Resolution: `512x512`
- Skip-layer guidance:
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 7
- Training steps: 10768
- Learning rate: 1e-05
- Learning rate schedule: cosine
- Warmup steps: 2500
- Max grad norm: 0.1
- Effective batch size: 6
- Micro-batch size: 6
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow-matching (extra parameters=['shift=3'])
- Optimizer: adamw_bf16
- Trainable parameter precision: Pure BF16
- Caption dropout probability: 20.0%
- LoRA Rank: 16
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
## Datasets
### webtoon-storyboard
- Repeats: 2
- Total number of images: 2692
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
## Inference
```python
import torch
from diffusers import DiffusionPipeline
model_id = 'stabilityai/stable-diffusion-3.5-large'
adapter_id = 'gunchoi/simpletuner-lora'
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16) # loading directly in bf16
pipeline.load_lora_weights(adapter_id)
prompt = "k4s4, {"scene_id":34,"characters":[{"character_id":"Unknown122","action":"speakingwithagesture","emotion":"concerned","position":"top-right","appearance":"darkhairtiedback,mustacheandgoatee,wearingaredrobewithyellowaccentsandadecorativehat"},{"character_id":"Unknown121","action":"listening","emotion":"focused","position":"bottom-center","appearance":"brownhairtiedup,wearingagreenrobewithacollar"}],"dialogue":[{"character_id":"Unknown122","dialogue_type":"exclamation","original_text":"하면...!","translated_text":"Then...!","position":"top-left"},{"character_id":"Unknown121","dialogue_type":"normalspeech","original_text":"하면대체이게무슨병이란말입니까!","translated_text":"Thenwhatillnessarewedealingwith?","position":"bottom-center"}],"description":"Unknown122,appearinganimatedandconcerned,questionsthenatureoftheillness,whileUnknown121listensintently,standingcloseinaninteriorroom.","setting":{"location":"interiorroomwithlatticewindows","time_of_the_day":"n/a"},"purpose_of_the_scene":"Toportraytheurgentsearchforananswertothemysteriousillnesstroublingthecharacters,addingintensitytothedilemma.","camera_angle":"high-angleshotcapturingbothcharacters","continuity_note":"MaintainUnknown122'sconcerneddemeanorandattire,consistentwithhispreviousscenes.","focal_points":["Unknown122'sanimatedexpression","dialoguebubbles"]}"
negative_prompt = 'blurry, cropped, ugly'
## Optional: quantise the model to save on vram.
## Note: The model was not quantised during training, so it is not necessary to quantise it during inference time.
#from optimum.quanto import quantize, freeze, qint8
#quantize(pipeline.transformer, weights=qint8)
#freeze(pipeline.transformer)
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu') # the pipeline is already in its target precision level
image = pipeline(
prompt=prompt,
negative_prompt=negative_prompt,
num_inference_steps=30,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(42),
width=512,
height=512,
guidance_scale=7.5,
).images[0]
image.save("output.png", format="PNG")
``` |
Kuongan/xlm-roberta-base-esp-finetuned | Kuongan | 2025-01-29T06:49:13Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-29T06:24:40Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xlm-roberta-base-esp-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-esp-finetuned
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3345
- F1: 0.7715
- Roc Auc: 0.8559
- Accuracy: 0.6033
## Model description
More information needed
## Intended uses & limitations
More information needed
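The card does not describe the label set; the F1/ROC-AUC metrics suggest multi-label classification, so the sketch below applies a sigmoid with an assumed 0.5 threshold:
```python
# Sketch: multi-label inference; threshold and label semantics are assumptions
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Kuongan/xlm-roberta-base-esp-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Estoy muy feliz con el resultado."  # illustrative Spanish input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```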
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.556 | 1.0 | 98 | 0.4924 | 0.1173 | 0.5484 | 0.125 |
| 0.3942 | 2.0 | 196 | 0.3753 | 0.6280 | 0.7772 | 0.4293 |
| 0.3051 | 3.0 | 294 | 0.3283 | 0.7282 | 0.8250 | 0.5380 |
| 0.2566 | 4.0 | 392 | 0.3234 | 0.7277 | 0.8307 | 0.5380 |
| 0.2077 | 5.0 | 490 | 0.3109 | 0.7502 | 0.8392 | 0.5652 |
| 0.1646 | 6.0 | 588 | 0.3135 | 0.7383 | 0.8336 | 0.5435 |
| 0.1524 | 7.0 | 686 | 0.3132 | 0.7456 | 0.8359 | 0.5707 |
| 0.1346 | 8.0 | 784 | 0.3253 | 0.7427 | 0.8341 | 0.5380 |
| 0.1076 | 9.0 | 882 | 0.3272 | 0.7549 | 0.8457 | 0.5924 |
| 0.0963 | 10.0 | 980 | 0.3384 | 0.7671 | 0.8528 | 0.5978 |
| 0.0888 | 11.0 | 1078 | 0.3381 | 0.7620 | 0.8485 | 0.5870 |
| 0.0762 | 12.0 | 1176 | 0.3345 | 0.7715 | 0.8559 | 0.6033 |
| 0.0528 | 13.0 | 1274 | 0.3566 | 0.7683 | 0.8577 | 0.5924 |
| 0.0512 | 14.0 | 1372 | 0.3522 | 0.7643 | 0.8534 | 0.5924 |
| 0.0435 | 15.0 | 1470 | 0.3595 | 0.7635 | 0.8517 | 0.5978 |
| 0.0415 | 16.0 | 1568 | 0.3651 | 0.7646 | 0.8550 | 0.5870 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
lesso02/44f62377-57e0-48f6-bb52-b4c07682bfbc | lesso02 | 2025-01-29T06:49:07Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-135M",
"base_model:adapter:unsloth/SmolLM-135M",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T06:39:36Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 44f62377-57e0-48f6-bb52-b4c07682bfbc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-135M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f13f8c7f24d1c82b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f13f8c7f24d1c82b_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso02/44f62377-57e0-48f6-bb52-b4c07682bfbc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/f13f8c7f24d1c82b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fe5a2fbf-53c6-40d5-bfc2-dd765f3feb4e
wandb_project: multi
wandb_run: your_name
wandb_runid: fe5a2fbf-53c6-40d5-bfc2-dd765f3feb4e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 44f62377-57e0-48f6-bb52-b4c07682bfbc
This model is a fine-tuned version of [unsloth/SmolLM-135M](https://huggingface.co/unsloth/SmolLM-135M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0534 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso04/6801f452-bf72-4dd4-bb87-8d7f1ece48c3 | lesso04 | 2025-01-29T06:48:17Z | 9 | 0 | peft | [
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-01-29T06:36:26Z | ---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6801f452-bf72-4dd4-bb87-8d7f1ece48c3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f65209fd2b79f576_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f65209fd2b79f576_train_data.json
type:
field_instruction: text
field_output: code
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso04/6801f452-bf72-4dd4-bb87-8d7f1ece48c3
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/f65209fd2b79f576_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7fba0349-cbce-4a47-81c7-be27ce53fcc2
wandb_project: multi
wandb_run: your_name
wandb_runid: 7fba0349-cbce-4a47-81c7-be27ce53fcc2
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6801f452-bf72-4dd4-bb87-8d7f1ece48c3
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8426 | 0.2526 | 200 | 0.3059 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thalllsssss/8521530e-949b-412d-a088-9b8575ff5f89 | thalllsssss | 2025-01-29T06:46:28Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T06:45:36Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8521530e-949b-412d-a088-9b8575ff5f89
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 47d54f36be91dd39_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/47d54f36be91dd39_train_data.json
type:
field_input: choices
field_instruction: question_eng
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thalllsssss/8521530e-949b-412d-a088-9b8575ff5f89
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/47d54f36be91dd39_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f1df40a9-a29a-4e64-9bf4-df4241b29729
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f1df40a9-a29a-4e64-9bf4-df4241b29729
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8521530e-949b-412d-a088-9b8575ff5f89
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 13
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7952 | 0.96 | 12 | 2.6245 |
| 4.6313 | 1.04 | 13 | 2.6001 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
denbeo/5c806032-c3fd-4436-88de-de1f4ddbc97e | denbeo | 2025-01-29T06:46:27Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T06:45:36Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5c806032-c3fd-4436-88de-de1f4ddbc97e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 47d54f36be91dd39_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/47d54f36be91dd39_train_data.json
type:
field_input: choices
field_instruction: question_eng
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: denbeo/5c806032-c3fd-4436-88de-de1f4ddbc97e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/47d54f36be91dd39_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f1df40a9-a29a-4e64-9bf4-df4241b29729
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f1df40a9-a29a-4e64-9bf4-df4241b29729
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5c806032-c3fd-4436-88de-de1f4ddbc97e
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on an unnamed dataset (see the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 2.5995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (ADAMW_BNB) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 13
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7811 | 0.96 | 12 | 2.6076 |
| 4.5854 | 1.04 | 13 | 2.5995 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
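The dataset section of the config above maps JSON fields onto a simple prompt template. A rough sketch of the intended mapping (not Axolotl's actual implementation; the field names follow the config, while the sample record is made up):
```python
# Rough sketch of how the custom dataset format in the config above templates a record.
# This mirrors the declared fields, not Axolotl's internal code.
record = {  # hypothetical example record from 47d54f36be91dd39_train_data.json
    "question_eng": "What is the capital of France?",
    "choices": "A) Paris B) Lyon C) Nice",
    "question": "Quelle est la capitale de la France ?",
}

def build_prompt(rec: dict) -> str:
    instruction = rec["question_eng"]          # field_instruction
    extra_input = rec.get("choices", "")       # field_input
    if extra_input:
        return f"{instruction} {extra_input}"  # format: '{instruction} {input}'
    return instruction                         # no_input_format: '{instruction}'

prompt = build_prompt(record)
target = record["question"]                    # field_output is the training target
print(prompt, "->", target)
```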
lesso10/68209fb2-f5a7-4e04-b19f-a0c3db0119bd | lesso10 | 2025-01-29T06:46:10Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-14B-Chat",
"base_model:adapter:Qwen/Qwen1.5-14B-Chat",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T03:11:47Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-14B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 68209fb2-f5a7-4e04-b19f-a0c3db0119bd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-14B-Chat
bf16: true
chat_template: llama3
datasets:
- data_files:
- ab9f66717531643e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ab9f66717531643e_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso10/68209fb2-f5a7-4e04-b19f-a0c3db0119bd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/ab9f66717531643e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 99226ce4-70ae-47e9-94ba-26f819deda4a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 99226ce4-70ae-47e9-94ba-26f819deda4a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 68209fb2-f5a7-4e04-b19f-a0c3db0119bd
This model is a fine-tuned version of [Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) on an unnamed dataset (see the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 2.1023
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (ADAMW_BNB) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3075 | 0.0002 | 1 | 2.5874 |
| 2.5536 | 0.0008 | 5 | 2.5263 |
| 2.1837 | 0.0016 | 10 | 2.2839 |
| 2.1648 | 0.0024 | 15 | 2.1811 |
| 1.9718 | 0.0033 | 20 | 2.1177 |
| 2.0016 | 0.0041 | 25 | 2.1023 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
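Unlike the other runs above, this configuration enables early stopping (`early_stopping_patience: 2` with `eval_steps: 5`). In Hugging Face Trainer terms that roughly corresponds to attaching an early-stopping callback (a sketch of the mapping, not the code Axolotl actually runs):
```python
from transformers import EarlyStoppingCallback

# Stop training if the validation metric fails to improve for 2 consecutive evaluations,
# which here happen every 5 optimizer steps (eval_steps: 5 in the config above).
early_stopping = EarlyStoppingCallback(early_stopping_patience=2)

# The callback would then be passed to the Trainer, e.g.:
# trainer = Trainer(..., callbacks=[early_stopping])
```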
nghiatrannnnnn/9907a2d9-244d-4bc1-a282-1cef43daf6db | nghiatrannnnnn | 2025-01-29T06:45:53Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T06:45:23Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9907a2d9-244d-4bc1-a282-1cef43daf6db
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 47d54f36be91dd39_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/47d54f36be91dd39_train_data.json
type:
field_input: choices
field_instruction: question_eng
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nghiatrannnnnn/9907a2d9-244d-4bc1-a282-1cef43daf6db
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/47d54f36be91dd39_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f1df40a9-a29a-4e64-9bf4-df4241b29729
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f1df40a9-a29a-4e64-9bf4-df4241b29729
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9907a2d9-244d-4bc1-a282-1cef43daf6db
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on an unnamed dataset (see the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 2.5733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (ADAMW_BNB) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 13
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7951 | 0.96 | 12 | 2.5965 |
| 4.5822 | 1.04 | 13 | 2.5733 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
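LoRA adapters like this one are typically applied on top of the listed base model at inference time. A minimal sketch using PEFT and Transformers (assuming the adapter weights load cleanly onto the base model; untested against this specific checkpoint, and the prompt is only an example):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen2.5-0.5B-Instruct"
adapter_id = "nghiatrannnnnn/9907a2d9-244d-4bc1-a282-1cef43daf6db"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("What is the capital of France? A) Paris B) Lyon C) Nice",
                   return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```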
mrferr3t/424e8751-0f86-4399-980d-db8080299df0 | mrferr3t | 2025-01-29T06:45:40Z | 8 | 0 | peft | [
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-01-29T06:36:54Z | ---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 424e8751-0f86-4399-980d-db8080299df0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f65209fd2b79f576_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f65209fd2b79f576_train_data.json
type:
field_instruction: text
field_output: code
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/424e8751-0f86-4399-980d-db8080299df0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 16
micro_batch_size: 2
mlflow_experiment_name: /tmp/f65209fd2b79f576_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7fba0349-cbce-4a47-81c7-be27ce53fcc2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7fba0349-cbce-4a47-81c7-be27ce53fcc2
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 424e8751-0f86-4399-980d-db8080299df0
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on an unnamed dataset (see the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.6521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.2854 | 0.0002 | 1 | 0.7521 |
| 8.0356 | 0.0006 | 4 | 0.7514 |
| 8.5858 | 0.0013 | 8 | 0.7271 |
| 7.7803 | 0.0019 | 12 | 0.6742 |
| 9.4474 | 0.0025 | 16 | 0.6521 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q6_K-GGUF | Triangle104 | 2025-01-29T06:45:33Z | 314 | 1 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated",
"base_model:quantized:huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-01-29T06:45:04Z | ---
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
library_name: transformers
---
# Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q6_K-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated`](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp via brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-7B-Instruct-1M-abliterated-Q6_K-GGUF --hf-file qwen2.5-7b-instruct-1m-abliterated-q6_k.gguf -c 2048
```
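If you prefer Python over the llama.cpp binaries, the same GGUF file can be loaded with the llama-cpp-python bindings (a sketch; the local file path is an assumption and must point at the downloaded quant):
```python
from llama_cpp import Llama

# Path to the downloaded GGUF file (hypothetical local path).
llm = Llama(model_path="qwen2.5-7b-instruct-1m-abliterated-q6_k.gguf", n_ctx=2048)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```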
|
calico-1226/video-cost-model-1216 | calico-1226 | 2025-01-29T06:43:21Z | 7 | 0 | null | [
"safetensors",
"llava_score",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-01-29T03:37:48Z | ---
license: cc-by-nc-4.0
---
|
great0001/74b9ee60-2bb7-4c7a-bdba-d42fbbb84c5d | great0001 | 2025-01-29T06:40:52Z | 8 | 0 | peft | [
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-01-29T06:35:17Z | ---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 74b9ee60-2bb7-4c7a-bdba-d42fbbb84c5d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f65209fd2b79f576_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f65209fd2b79f576_train_data.json
type:
field_instruction: text
field_output: code
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/74b9ee60-2bb7-4c7a-bdba-d42fbbb84c5d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/f65209fd2b79f576_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7fba0349-cbce-4a47-81c7-be27ce53fcc2
wandb_project: Birthday-SN56-33-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7fba0349-cbce-4a47-81c7-be27ce53fcc2
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 74b9ee60-2bb7-4c7a-bdba-d42fbbb84c5d
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on an unnamed dataset (see the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.3819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (ADAMW_BNB) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.6943 | 0.0002 | 1 | 0.7522 |
| 9.3066 | 0.0021 | 13 | 0.6642 |
| 3.4034 | 0.0041 | 26 | 0.4466 |
| 3.346 | 0.0062 | 39 | 0.3819 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Theros/Qwen2.5-ColdBrew-R1-test4 | Theros | 2025-01-29T06:40:27Z | 46 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Theros/Qwen2.5-ColdBrew-R1-test2",
"base_model:merge:Theros/Qwen2.5-ColdBrew-R1-test2",
"base_model:bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2",
"base_model:merge:bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T06:35:09Z | ---
base_model:
- bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2
- Theros/Qwen2.5-ColdBrew-R1-test2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2](https://huggingface.co/bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2)
* [Theros/Qwen2.5-ColdBrew-R1-test2](https://huggingface.co/Theros/Qwen2.5-ColdBrew-R1-test2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Theros/Qwen2.5-ColdBrew-R1-test2
layer_range: [0, 28]
- model: bunnycore/Qwen-2.5-7B-Stock-Deep-Bespoke-v2
layer_range: [0, 28]
merge_method: slerp
base_model: Theros/Qwen2.5-ColdBrew-R1-test2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
tokenizer_source: Theros/Qwen2.5-ColdBrew-R1-test2
```
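For intuition, SLERP interpolates between two weight tensors along the arc between them rather than along a straight line. A simplified per-tensor illustration (not mergekit's actual implementation, which also applies the filter-specific `t` schedules shown above):
```python
import numpy as np

def slerp(t: float, p: np.ndarray, q: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between flattened weight tensors p and q."""
    p_unit = p / (np.linalg.norm(p) + eps)
    q_unit = q / (np.linalg.norm(q) + eps)
    dot = float(np.clip(np.dot(p_unit, q_unit), -1.0, 1.0))
    theta = np.arccos(dot)
    if theta < eps:                      # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * p + t * q
    return (np.sin((1.0 - t) * theta) * p + np.sin(t * theta) * q) / np.sin(theta)

# t = 0 keeps the base model's tensor, t = 1 takes the other model's tensor.
merged = slerp(0.5, np.random.randn(768), np.random.randn(768))
```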
|
Prisma-Multimodal/imagenet-sweep-vanilla-x64-CLS_8-hook_resid_post-635.018737792969-76 | Prisma-Multimodal | 2025-01-29T06:40:24Z | 19 | 0 | null | [
"region:us"
] | null | 2025-01-29T06:40:14Z | # CLIP Sparse Autoencoder Checkpoint
This model is a sparse autoencoder trained on CLIP's internal representations.
## Model Details
### Architecture
- **Layer**: 8
- **Layer Type**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: True
### Training
- **Training Images**: 1298432
- **Learning Rate**: 0.0028
- **L1 Coefficient**: 0.0000
- **Batch Size**: 4096
- **Context Size**: 1
## Performance Metrics
### Sparsity
- **L0 (Active Features)**: 635.0187
- **Dead Features**: 0
- **Mean Passes Since Fired**: 45.8548
### Reconstruction
- **Explained Variance**: 0.7672
- **Explained Variance Std**: 0.2072
- **MSE Loss**: 0.0015
- **L1 Loss**: 230.6383
- **Overall Loss**: 0.0015
## Training Details
- **Training Duration**: 360 seconds
- **Final Learning Rate**: 0.0000
- **Warm Up Steps**: 200
- **Gradient Clipping**: 1
## Additional Information
- **Original Checkpoint Path**: /network/scratch/p/praneet.suresh/imgnet_checkpoints/c0dcb7e7-tinyclip_sae_16_hyperparam_sweep_lr/n_images_1302528.pt
- **Wandb Run**: https://wandb.ai/perceptual-alignment/vanilla-imagenet-CLS_only-sweep/runs/ii5o7h2h
- **Random Seed**: 42
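Concretely, a sparse autoencoder of this shape expands the 768-dimensional residual-stream activation into a 49152-entry dictionary and reconstructs the input from the few features that fire. A minimal sketch of the usual encode/decode computation (the actual checkpoint's architecture details, such as bias handling and normalization, may differ):
```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: overcomplete dictionary with ReLU sparsity and an L1 penalty."""

    def __init__(self, d_in: int = 768, d_dict: int = 49152):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_dict)
        self.decoder = nn.Linear(d_dict, d_in)

    def forward(self, x: torch.Tensor):
        features = torch.relu(self.encoder(x))     # sparse feature activations
        recon = self.decoder(features)             # reconstruction of the input
        mse = torch.mean((recon - x) ** 2)         # reported above as "MSE Loss"
        l1 = features.abs().sum(dim=-1).mean()     # reported above as "L1 Loss"
        return recon, features, mse, l1

sae = SparseAutoencoder()
cls_activation = torch.randn(4, 768)               # stand-in for CLIP CLS activations
recon, features, mse, l1 = sae(cls_activation)
```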
|
Prisma-Multimodal/imagenet-sweep-vanilla-x64-CLS_7-hook_resid_post-492.959381103516-88 | Prisma-Multimodal | 2025-01-29T06:40:13Z | 12 | 0 | null | [
"region:us"
] | null | 2025-01-29T06:40:05Z | # CLIP Sparse Autoencoder Checkpoint
This model is a sparse autoencoder trained on CLIP's internal representations.
## Model Details
### Architecture
- **Layer**: 7
- **Layer Type**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: True
### Training
- **Training Images**: 1298432
- **Learning Rate**: 0.0036
- **L1 Coefficient**: 0.0000
- **Batch Size**: 4096
- **Context Size**: 1
## Performance Metrics
### Sparsity
- **L0 (Active Features)**: 492.9594
- **Dead Features**: 0
- **Mean Passes Since Fired**: 121.3178
### Reconstruction
- **Explained Variance**: 0.8836
- **Explained Variance Std**: 0.0215
- **MSE Loss**: 0.0005
- **L1 Loss**: 215.2193
- **Overall Loss**: 0.0010
## Training Details
- **Training Duration**: 252 seconds
- **Final Learning Rate**: 0.0000
- **Warm Up Steps**: 200
- **Gradient Clipping**: 1
## Additional Information
- **Original Checkpoint Path**: /network/scratch/p/praneet.suresh/imgnet_checkpoints/21aa4c67-tinyclip_sae_16_hyperparam_sweep_lr/n_images_1302528.pt
- **Wandb Run**: https://wandb.ai/perceptual-alignment/vanilla-imagenet-CLS_only-sweep/runs/5tdstmwv
- **Random Seed**: 42
|
Prisma-Multimodal/imagenet-sweep-vanilla-x64-CLS_6-hook_resid_post-430.556243896484-92 | Prisma-Multimodal | 2025-01-29T06:40:04Z | 13 | 0 | null | [
"region:us"
] | null | 2025-01-29T06:39:53Z | # CLIP Sparse Autoencoder Checkpoint
This model is a sparse autoencoder trained on CLIP's internal representations.
## Model Details
### Architecture
- **Layer**: 6
- **Layer Type**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: True
### Training
- **Training Images**: 1298432
- **Learning Rate**: 0.0061
- **L1 Coefficient**: 0.0000
- **Batch Size**: 4096
- **Context Size**: 1
## Performance Metrics
### Sparsity
- **L0 (Active Features)**: 430.5562
- **Dead Features**: 0
- **Mean Passes Since Fired**: 179.1497
### Reconstruction
- **Explained Variance**: 0.9292
- **Explained Variance Std**: 0.0209
- **MSE Loss**: 0.0003
- **L1 Loss**: 342.2079
- **Overall Loss**: 0.0003
## Training Details
- **Training Duration**: 254 seconds
- **Final Learning Rate**: 0.0000
- **Warm Up Steps**: 200
- **Gradient Clipping**: 1
## Additional Information
- **Original Checkpoint Path**: /network/scratch/p/praneet.suresh/imgnet_checkpoints/a4f2874e-tinyclip_sae_16_hyperparam_sweep_lr/n_images_1302528.pt
- **Wandb Run**: https://wandb.ai/perceptual-alignment/vanilla-imagenet-CLS_only-sweep/runs/lqwere3b
- **Random Seed**: 42
|
Prisma-Multimodal/imagenet-sweep-vanilla-x64-CLS_4-hook_resid_post-682.543762207031-95 | Prisma-Multimodal | 2025-01-29T06:39:43Z | 13 | 0 | null | [
"region:us"
] | null | 2025-01-29T06:39:34Z | # CLIP Sparse Autoencoder Checkpoint
This model is a sparse autoencoder trained on CLIP's internal representations.
## Model Details
### Architecture
- **Layer**: 4
- **Layer Type**: hook_resid_post
- **Model**: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- **Dictionary Size**: 49152
- **Input Dimension**: 768
- **Expansion Factor**: 64
- **CLS Token Only**: True
### Training
- **Training Images**: 1298432
- **Learning Rate**: 0.0076
- **L1 Coefficient**: 0.0000
- **Batch Size**: 4096
- **Context Size**: 1
## Performance Metrics
### Sparsity
- **L0 (Active Features)**: 682.5438
- **Dead Features**: 0
- **Mean Passes Since Fired**: 232.3228
### Reconstruction
- **Explained Variance**: 0.9544
- **Explained Variance Std**: 0.0125
- **MSE Loss**: 0.0001
- **L1 Loss**: 318.7141
- **Overall Loss**: 0.0001
## Training Details
- **Training Duration**: 249 seconds
- **Final Learning Rate**: 0.0000
- **Warm Up Steps**: 200
- **Gradient Clipping**: 1
## Additional Information
- **Original Checkpoint Path**: /network/scratch/p/praneet.suresh/imgnet_checkpoints/f2bb5300-tinyclip_sae_16_hyperparam_sweep_lr/n_images_1302528.pt
- **Wandb Run**: https://wandb.ai/perceptual-alignment/vanilla-imagenet-CLS_only-sweep/runs/9qbjy580
- **Random Seed**: 42
|