modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-26 12:28:48) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 498 distinct values) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 54 distinct values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-26 12:28:16) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
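Each row below is one entry of this dump: the model's metadata fields followed by its full model card. As a minimal sketch (the dataset ID `"your-namespace/model-cards"` is a hypothetical placeholder; the real identifier is not given anywhere in this dump), the same columns can be read with the `datasets` library:
```python
# Hypothetical example: reading a Hub dataset with the columns shown above.
# "your-namespace/model-cards" is a placeholder ID, not the real dataset name.
from datasets import load_dataset

ds = load_dataset("your-namespace/model-cards", split="train")
row = ds[0]
print(row["modelId"], row["author"], row["downloads"], row["likes"])
print(row["card"][:200])  # the raw model-card markdown, truncated for display
```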
nhunglaaaaaaa/fbbbf01c-3291-42a8-b954-285a7708768d | nhunglaaaaaaa | 2025-01-28T11:16:28Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B",
"base_model:adapter:unsloth/Qwen2-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T11:00:12Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fbbbf01c-3291-42a8-b954-285a7708768d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3d1574132bffb371_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3d1574132bffb371_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhunglaaaaaaa/fbbbf01c-3291-42a8-b954-285a7708768d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/3d1574132bffb371_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b9b171e8-52ec-4f29-ac24-4094f9180312
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b9b171e8-52ec-4f29-ac24-4094f9180312
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fbbbf01c-3291-42a8-b954-285a7708768d
This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7664
## Model description
More information needed
## Intended uses & limitations
More information needed
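Since the usage sections above are still placeholders, the following is a minimal sketch (not part of the original card) of how a PEFT LoRA adapter such as this one is typically loaded on top of its stated base model; the dtype, device placement, and prompt are illustrative assumptions:
```python
# Sketch only: load the unsloth/Qwen2-7B base model and apply this LoRA adapter with PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2-7B", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-7B")
model = PeftModel.from_pretrained(base, "nhunglaaaaaaa/fbbbf01c-3291-42a8-b954-285a7708768d")

prompt = "Write a question about the following context: ..."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```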
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9862 | 0.9122 | 200 | 0.7664 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thakkkkkk/fbbfadd8-dbd0-48d6-8633-a50ea3fe4f7b | thakkkkkk | 2025-01-28T11:16:11Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B",
"base_model:adapter:unsloth/Qwen2-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T11:00:34Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fbbfadd8-dbd0-48d6-8633-a50ea3fe4f7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3d1574132bffb371_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3d1574132bffb371_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thakkkkkk/fbbfadd8-dbd0-48d6-8633-a50ea3fe4f7b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/3d1574132bffb371_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b9b171e8-52ec-4f29-ac24-4094f9180312
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b9b171e8-52ec-4f29-ac24-4094f9180312
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fbbfadd8-dbd0-48d6-8633-a50ea3fe4f7b
This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 110
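For reference, the total train batch size of 16 listed above is simply the per-device micro-batch size multiplied by the gradient-accumulation steps (assuming a single training process, as implied by the config):
```python
# Restating the values above; not additional configuration.
micro_batch_size = 4              # train_batch_size / micro_batch_size above
gradient_accumulation_steps = 4
total_train_batch_size = micro_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 16
```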
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7959 | 0.9932 | 109 | 0.7977 |
| 1.1792 | 1.0068 | 110 | 0.7960 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ClimatePolicyRadar/national-climate-targets | ClimatePolicyRadar | 2025-01-28T11:15:24Z | 259 | 4 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"climate",
"en",
"dataset:ClimatePolicyRadar/national-climate-targets",
"arxiv:2404.02822",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-14T09:20:26Z | ---
license: apache-2.0
datasets:
- ClimatePolicyRadar/national-climate-targets
language:
- en
pipeline_tag: text-classification
tags:
- climate
widget:
- text: "The Net Zero Strategy, published in October 2021, was the first document of its kind for a major economy. It set out the government’s vision for a market-led, technology-driven transition to decarbonise the UK economy and reach net zero by 2050."
inference:
parameters:
function_to_apply: "sigmoid"
---
## National Climate Targets Classifier - Climate Policy Radar
A multi-label text-classifier trained on the National Climate Targets dataset by Climate Policy Radar.
Using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) model as a starting point, this classifier is trained on the [ClimatePolicyRadar/national-climate-targets](https://huggingface.co/datasets/ClimatePolicyRadar/national-climate-targets) dataset to predict Net Zero ("NZT"), "Reduction" and "Other" targets in a multi-label setting. The training data is an expert-annotated subset of national laws, policies and UNFCCC submissions.
For more information on the annotation methodology and classifier training [see our paper](https://arxiv.org/abs/2404.02822).
## Getting started
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "ClimatePolicyRadar/national-climate-targets"
example = "The Net Zero Strategy, published in October 2021, was the first "\
"document of its kind for a major economy. It set out the government’s "\
"vision for a market-led, technology-driven transition to decarbonise "\
"the UK economy and reach net zero by 2050."
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# using sigmoid because the model is multi-label
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, function_to_apply="sigmoid")
pipe(example, padding=True, truncation=True, return_all_scores=True)
>>> [[{'label': 'NZT', 'score': 0.9142044186592102},
{'label': 'Reduction', 'score': 0.04552844911813736},
{'label': 'Other', 'score': 0.07590094953775406}]]
```
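To turn the per-label sigmoid scores into discrete target labels, one option is a simple threshold (0.5 below is an illustrative choice, not a value prescribed by the card or the paper):
```python
# Sketch: convert the multi-label sigmoid scores above into binary predictions.
scores = pipe(example, padding=True, truncation=True, return_all_scores=True)[0]
predicted_labels = [s["label"] for s in scores if s["score"] >= 0.5]  # threshold is an assumption
print(predicted_labels)  # e.g. ['NZT'] for the example text above
```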
## Licence
Our classifier is licensed as [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
Please read our [Terms of Use](https://app.climatepolicyradar.org/terms-of-use), including any specific terms relevant to commercial use. Contact [email protected] with any questions.
## Links
- [Paper](https://arxiv.org/abs/2404.02822)
## Citation
```
@misc{juhasz2024identifying,
title={Identifying Climate Targets in National Laws and Policies using Machine Learning},
author={Matyas Juhasz and Tina Marchand and Roshan Melwani and Kalyan Dutia and Sarah Goodenough and Harrison Pim and Henry Franks},
year={2024},
eprint={2404.02822},
archivePrefix={arXiv},
primaryClass={cs.CY}
}
```
## Authors & Contact
Climate Policy Radar team: Matyas Juhasz, Tina Marchand, Roshan Melwani, Kalyan Dutia, Sarah Goodenough, Harrison Pim, and Henry Franks.
[email protected]
https://climatepolicyradar.org
|
KoichiYasuoka/modernbert-large-thai-wikipedia | KoichiYasuoka | 2025-01-28T11:14:57Z | 12 | 0 | null | [
"pytorch",
"modernbert",
"thai",
"masked-lm",
"fill-mask",
"custom_code",
"th",
"dataset:wikimedia/wikipedia",
"license:apache-2.0",
"region:us"
] | fill-mask | 2025-01-25T05:08:50Z | ---
language:
- "th"
tags:
- "thai"
- "masked-lm"
- "modernbert"
datasets:
- "wikimedia/wikipedia"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
---
# modernbert-large-thai-wikipedia
## Model Description
This is a ModernBERT model pre-trained on Thai Wikipedia texts. Training took 5 hours 30 minutes on eight NVIDIA A100-SXM4-40GB GPUs. You can fine-tune `modernbert-large-thai-wikipedia` for downstream tasks such as [POS-tagging](https://huggingface.co/KoichiYasuoka/modernbert-large-thai-wikipedia-upos) and [dependency-parsing](https://huggingface.co/KoichiYasuoka/modernbert-large-thai-wikipedia-ud-embeds).
## How to Use
```py
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/modernbert-large-thai-wikipedia")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/modernbert-large-thai-wikipedia", trust_remote_code=True)
```
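As the card only shows how to load the tokenizer and model, here is a minimal sketch (not from the original card; the Thai sentence is an illustrative example) of running masked prediction with the `fill-mask` pipeline:
```py
# Sketch only: masked-token prediction with this model via the fill-mask pipeline.
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="KoichiYasuoka/modernbert-large-thai-wikipedia",
    trust_remote_code=True,
)
# "Bangkok is the [MASK] of Thailand" -- words like "capital" should rank highly.
print(unmasker("กรุงเทพมหานครเป็น[MASK]ของประเทศไทย"))
```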
|
nhoxinh/f5f148af-9110-418a-8a43-e830075b7837 | nhoxinh | 2025-01-28T11:14:00Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T10:57:08Z | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f5f148af-9110-418a-8a43-e830075b7837
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 899a846bf6acb565_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/899a846bf6acb565_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/f5f148af-9110-418a-8a43-e830075b7837
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/899a846bf6acb565_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fddebea0-c86c-4bbf-a72d-ee20bd33886d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fddebea0-c86c-4bbf-a72d-ee20bd33886d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f5f148af-9110-418a-8a43-e830075b7837
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
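The axolotl `type` block in the config above assembles each training prompt with `format: '{instruction} {input}'` when a `context` field is present, falling back to `no_input_format: '{instruction}'` otherwise. A minimal illustrative sketch (the example strings are made up, not taken from the training data):
```python
# Illustration of the prompt template declared in the config above.
def build_prompt(instruction: str, context: str = "") -> str:
    # format: '{instruction} {input}' / no_input_format: '{instruction}'
    return f"{instruction} {context}".strip() if context else instruction

print(build_prompt("Answer the question using the context.",
                   "Qwen2.5-3B is a 3B-parameter language model."))
```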
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2128 | 0.2244 | 200 | 1.7437 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso/26a2299d-c78a-44da-9a2b-956371dc6925 | lesso | 2025-01-28T11:06:13Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B",
"base_model:adapter:unsloth/Qwen2-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T11:00:58Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 26a2299d-c78a-44da-9a2b-956371dc6925
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3d1574132bffb371_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3d1574132bffb371_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso/26a2299d-c78a-44da-9a2b-956371dc6925
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/3d1574132bffb371_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b9b171e8-52ec-4f29-ac24-4094f9180312
wandb_project: lesso18
wandb_run: your_name
wandb_runid: b9b171e8-52ec-4f29-ac24-4094f9180312
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 26a2299d-c78a-44da-9a2b-956371dc6925
This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.9122 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jfreiter/kidsfit1 | jfreiter | 2025-01-28T11:05:19Z | 15 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-28T10:39:05Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: KIDSFIT1
---
# Kidsfit1
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `KIDSFIT1` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jfreiter/kidsfit1', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]  # include the trigger word KIDSFIT1 in your prompt
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
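As a quick illustration of the weighting and fusing mentioned above (a sketch that assumes a recent diffusers release with the PEFT backend; the scale value and prompt are arbitrary):
```py
# Sketch: fuse the loaded LoRA into the base weights, optionally down-weighting it.
pipeline.fuse_lora(lora_scale=0.9)   # scale < 1.0 reduces the LoRA's influence
image = pipeline('KIDSFIT1 wearing a superhero costume').images[0]  # prompt is illustrative
pipeline.unfuse_lora()               # restore the original base weights
```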
|
mrferr3t/5358d6ea-f837-4c0c-a8c9-eb0756361431 | mrferr3t | 2025-01-28T11:03:25Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"region:us"
] | null | 2025-01-28T11:01:32Z | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5358d6ea-f837-4c0c-a8c9-eb0756361431
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 899a846bf6acb565_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/899a846bf6acb565_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/5358d6ea-f837-4c0c-a8c9-eb0756361431
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 24
micro_batch_size: 2
mlflow_experiment_name: /tmp/899a846bf6acb565_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fddebea0-c86c-4bbf-a72d-ee20bd33886d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fddebea0-c86c-4bbf-a72d-ee20bd33886d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5358d6ea-f837-4c0c-a8c9-eb0756361431
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8049
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1705 | 0.0011 | 1 | 2.0811 |
| 2.366 | 0.0067 | 6 | 2.0609 |
| 2.2914 | 0.0135 | 12 | 1.8999 |
| 1.8087 | 0.0202 | 18 | 1.8163 |
| 2.0879 | 0.0269 | 24 | 1.8049 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Best000/c4c5c6be-33f9-4938-9c2b-6a9db436515d | Best000 | 2025-01-28T11:02:48Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"region:us"
] | null | 2025-01-28T11:01:38Z | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c4c5c6be-33f9-4938-9c2b-6a9db436515d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 899a846bf6acb565_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/899a846bf6acb565_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/c4c5c6be-33f9-4938-9c2b-6a9db436515d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/899a846bf6acb565_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fddebea0-c86c-4bbf-a72d-ee20bd33886d
wandb_project: Birthday-SN56-32-Gradients-On-Demand
wandb_run: your_name
wandb_runid: fddebea0-c86c-4bbf-a72d-ee20bd33886d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c4c5c6be-33f9-4938-9c2b-6a9db436515d
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0011 | 1 | nan |
| 0.0 | 0.0034 | 3 | nan |
| 0.0 | 0.0067 | 6 | nan |
| 0.0 | 0.0101 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
great0001/e4c2439d-7b13-489b-b78d-6ddb0e259a53 | great0001 | 2025-01-28T11:02:46Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B",
"base_model:adapter:unsloth/Qwen2-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T11:01:08Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e4c2439d-7b13-489b-b78d-6ddb0e259a53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3d1574132bffb371_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3d1574132bffb371_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/e4c2439d-7b13-489b-b78d-6ddb0e259a53
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/3d1574132bffb371_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b9b171e8-52ec-4f29-ac24-4094f9180312
wandb_project: Mine-SN56-20-Gradients-On-Demand
wandb_run: your_name
wandb_runid: b9b171e8-52ec-4f29-ac24-4094f9180312
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e4c2439d-7b13-489b-b78d-6ddb0e259a53
This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0023 | 1 | nan |
| 0.0 | 0.0296 | 13 | nan |
| 0.0 | 0.0593 | 26 | nan |
| 0.0 | 0.0889 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
fahd200581/AIEGYPOSTERAI | fahd200581 | 2025-01-28T11:01:27Z | 12 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-28T10:33:46Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AIEGYPOSTERAI
---
# Aiegyposterai
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AIEGYPOSTERAI` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('fahd200581/AIEGYPOSTERAI', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]  # include the trigger word AIEGYPOSTERAI in your prompt
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
ClarenceDan/37747312-56b7-472e-b03a-62c6e3c71971 | ClarenceDan | 2025-01-28T11:01:08Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B",
"base_model:adapter:unsloth/Qwen2-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T11:00:14Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 37747312-56b7-472e-b03a-62c6e3c71971
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3d1574132bffb371_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3d1574132bffb371_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/37747312-56b7-472e-b03a-62c6e3c71971
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/3d1574132bffb371_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b9b171e8-52ec-4f29-ac24-4094f9180312
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b9b171e8-52ec-4f29-ac24-4094f9180312
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 37747312-56b7-472e-b03a-62c6e3c71971
This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0046 | 1 | nan |
| 0.0 | 0.0137 | 3 | nan |
| 0.0 | 0.0274 | 6 | nan |
| 0.0 | 0.0410 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso01/300a07a9-190a-4706-a273-b5989d5c00bd | lesso01 | 2025-01-28T11:00:55Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T10:57:03Z | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 300a07a9-190a-4706-a273-b5989d5c00bd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: true
chat_template: llama3
datasets:
- data_files:
- 899a846bf6acb565_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/899a846bf6acb565_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso01/300a07a9-190a-4706-a273-b5989d5c00bd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/899a846bf6acb565_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fddebea0-c86c-4bbf-a72d-ee20bd33886d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fddebea0-c86c-4bbf-a72d-ee20bd33886d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 300a07a9-190a-4706-a273-b5989d5c00bd
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0011 | 1 | nan |
| 0.0 | 0.0056 | 5 | nan |
| 0.0 | 0.0112 | 10 | nan |
| 0.0 | 0.0168 | 15 | nan |
| 0.0 | 0.0224 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shibajustfor/fac149b9-ac37-4ed0-8e7d-9ac803d3081d | shibajustfor | 2025-01-28T10:59:16Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"region:us"
] | null | 2025-01-28T10:57:45Z | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fac149b9-ac37-4ed0-8e7d-9ac803d3081d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 899a846bf6acb565_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/899a846bf6acb565_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/fac149b9-ac37-4ed0-8e7d-9ac803d3081d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/899a846bf6acb565_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fddebea0-c86c-4bbf-a72d-ee20bd33886d
wandb_project: Birthday-SN56-38-Gradients-On-Demand
wandb_run: your_name
wandb_runid: fddebea0-c86c-4bbf-a72d-ee20bd33886d
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fac149b9-ac37-4ed0-8e7d-9ac803d3081d
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0011 | 1 | nan |
| 0.0 | 0.0146 | 13 | nan |
| 0.0 | 0.0292 | 26 | nan |
| 0.0 | 0.0438 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
havinash-ai/66ef2158-27bc-4fc2-9bfe-0fa668c6d1dd | havinash-ai | 2025-01-28T10:59:08Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"region:us"
] | null | 2025-01-28T10:57:37Z | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 66ef2158-27bc-4fc2-9bfe-0fa668c6d1dd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 899a846bf6acb565_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/899a846bf6acb565_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/66ef2158-27bc-4fc2-9bfe-0fa668c6d1dd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/899a846bf6acb565_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fddebea0-c86c-4bbf-a72d-ee20bd33886d
wandb_project: Birthday-SN56-9-Gradients-On-Demand
wandb_run: your_name
wandb_runid: fddebea0-c86c-4bbf-a72d-ee20bd33886d
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 66ef2158-27bc-4fc2-9bfe-0fa668c6d1dd
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0011 | 1 | nan |
| 0.0 | 0.0146 | 13 | nan |
| 0.0 | 0.0292 | 26 | nan |
| 0.0 | 0.0438 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
daniel40/825624dd-0964-4e19-a5f2-35636f6a3a79 | daniel40 | 2025-01-28T10:59:07Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"region:us"
] | null | 2025-01-28T10:57:31Z | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 825624dd-0964-4e19-a5f2-35636f6a3a79
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 899a846bf6acb565_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/899a846bf6acb565_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/825624dd-0964-4e19-a5f2-35636f6a3a79
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/899a846bf6acb565_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fddebea0-c86c-4bbf-a72d-ee20bd33886d
wandb_project: Birthday-SN56-27-Gradients-On-Demand
wandb_run: your_name
wandb_runid: fddebea0-c86c-4bbf-a72d-ee20bd33886d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 825624dd-0964-4e19-a5f2-35636f6a3a79
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0011 | 1 | nan |
| 0.0 | 0.0146 | 13 | nan |
| 0.0 | 0.0292 | 26 | nan |
| 0.0 | 0.0438 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Best000/0a4ff6b2-dade-4d2b-8bb2-0e147f5095c7 | Best000 | 2025-01-28T10:58:25Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"region:us"
] | null | 2025-01-28T10:57:13Z | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0a4ff6b2-dade-4d2b-8bb2-0e147f5095c7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 899a846bf6acb565_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/899a846bf6acb565_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/0a4ff6b2-dade-4d2b-8bb2-0e147f5095c7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/899a846bf6acb565_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fddebea0-c86c-4bbf-a72d-ee20bd33886d
wandb_project: Birthday-SN56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: fddebea0-c86c-4bbf-a72d-ee20bd33886d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0a4ff6b2-dade-4d2b-8bb2-0e147f5095c7
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0011 | 1 | nan |
| 0.0 | 0.0034 | 3 | nan |
| 0.0 | 0.0067 | 6 | nan |
| 0.0 | 0.0101 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
KenLi315/Conan-embedding-v1-Q4_K_M-GGUF | KenLi315 | 2025-01-28T10:57:31Z | 565 | 1 | sentence-transformers | [
"sentence-transformers",
"gguf",
"mteb",
"llama-cpp",
"gguf-my-repo",
"zh",
"base_model:TencentBAC/Conan-embedding-v1",
"base_model:quantized:TencentBAC/Conan-embedding-v1",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-01-28T10:57:29Z | ---
tags:
- mteb
- llama-cpp
- gguf-my-repo
language:
- zh
license: cc-by-nc-4.0
library_name: sentence-transformers
base_model: TencentBAC/Conan-embedding-v1
model-index:
- name: conan-embedding
results:
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 56.613572467148856
- type: cos_sim_spearman
value: 60.66446211824284
- type: euclidean_pearson
value: 58.42080485872613
- type: euclidean_spearman
value: 59.82750030458164
- type: manhattan_pearson
value: 58.39885271199772
- type: manhattan_spearman
value: 59.817749720366734
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 56.60530380552331
- type: cos_sim_spearman
value: 58.63822441736707
- type: euclidean_pearson
value: 62.18551665180664
- type: euclidean_spearman
value: 58.23168804495912
- type: manhattan_pearson
value: 62.17191480770053
- type: manhattan_spearman
value: 58.22556219601401
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 50.308
- type: f1
value: 46.927458607895126
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 72.6472074172711
- type: cos_sim_spearman
value: 74.50748447236577
- type: euclidean_pearson
value: 72.51833296451854
- type: euclidean_spearman
value: 73.9898922606105
- type: manhattan_pearson
value: 72.50184948939338
- type: manhattan_spearman
value: 73.97797921509638
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 60.63545326048343
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 52.64834762325994
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: None
metrics:
- type: map
value: 91.38528814655234
- type: mrr
value: 93.35857142857144
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: None
metrics:
- type: map
value: 89.72084678877096
- type: mrr
value: 91.74380952380953
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.987
- type: map_at_10
value: 40.675
- type: map_at_100
value: 42.495
- type: map_at_1000
value: 42.596000000000004
- type: map_at_3
value: 36.195
- type: map_at_5
value: 38.704
- type: mrr_at_1
value: 41.21
- type: mrr_at_10
value: 49.816
- type: mrr_at_100
value: 50.743
- type: mrr_at_1000
value: 50.77700000000001
- type: mrr_at_3
value: 47.312
- type: mrr_at_5
value: 48.699999999999996
- type: ndcg_at_1
value: 41.21
- type: ndcg_at_10
value: 47.606
- type: ndcg_at_100
value: 54.457
- type: ndcg_at_1000
value: 56.16100000000001
- type: ndcg_at_3
value: 42.108000000000004
- type: ndcg_at_5
value: 44.393
- type: precision_at_1
value: 41.21
- type: precision_at_10
value: 10.593
- type: precision_at_100
value: 1.609
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 23.881
- type: precision_at_5
value: 17.339
- type: recall_at_1
value: 26.987
- type: recall_at_10
value: 58.875
- type: recall_at_100
value: 87.023
- type: recall_at_1000
value: 98.328
- type: recall_at_3
value: 42.265
- type: recall_at_5
value: 49.334
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 85.91701743836441
- type: cos_sim_ap
value: 92.53650618807644
- type: cos_sim_f1
value: 86.80265975431082
- type: cos_sim_precision
value: 83.79025239338556
- type: cos_sim_recall
value: 90.039747486556
- type: dot_accuracy
value: 77.17378232110643
- type: dot_ap
value: 85.40244368166546
- type: dot_f1
value: 79.03038001481951
- type: dot_precision
value: 72.20502901353966
- type: dot_recall
value: 87.2808043020809
- type: euclidean_accuracy
value: 84.65423932651834
- type: euclidean_ap
value: 91.47775530034588
- type: euclidean_f1
value: 85.64471499723298
- type: euclidean_precision
value: 81.31567885666246
- type: euclidean_recall
value: 90.46060322656068
- type: manhattan_accuracy
value: 84.58208057726999
- type: manhattan_ap
value: 91.46228709402014
- type: manhattan_f1
value: 85.6631626034444
- type: manhattan_precision
value: 82.10075026795283
- type: manhattan_recall
value: 89.5487491232172
- type: max_accuracy
value: 85.91701743836441
- type: max_ap
value: 92.53650618807644
- type: max_f1
value: 86.80265975431082
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 83.693
- type: map_at_10
value: 90.098
- type: map_at_100
value: 90.145
- type: map_at_1000
value: 90.146
- type: map_at_3
value: 89.445
- type: map_at_5
value: 89.935
- type: mrr_at_1
value: 83.878
- type: mrr_at_10
value: 90.007
- type: mrr_at_100
value: 90.045
- type: mrr_at_1000
value: 90.046
- type: mrr_at_3
value: 89.34
- type: mrr_at_5
value: 89.835
- type: ndcg_at_1
value: 84.089
- type: ndcg_at_10
value: 92.351
- type: ndcg_at_100
value: 92.54599999999999
- type: ndcg_at_1000
value: 92.561
- type: ndcg_at_3
value: 91.15299999999999
- type: ndcg_at_5
value: 91.968
- type: precision_at_1
value: 84.089
- type: precision_at_10
value: 10.011000000000001
- type: precision_at_100
value: 1.009
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 32.28
- type: precision_at_5
value: 19.789
- type: recall_at_1
value: 83.693
- type: recall_at_10
value: 99.05199999999999
- type: recall_at_100
value: 99.895
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 95.917
- type: recall_at_5
value: 97.893
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.924
- type: map_at_10
value: 81.392
- type: map_at_100
value: 84.209
- type: map_at_1000
value: 84.237
- type: map_at_3
value: 56.998000000000005
- type: map_at_5
value: 71.40100000000001
- type: mrr_at_1
value: 91.75
- type: mrr_at_10
value: 94.45
- type: mrr_at_100
value: 94.503
- type: mrr_at_1000
value: 94.505
- type: mrr_at_3
value: 94.258
- type: mrr_at_5
value: 94.381
- type: ndcg_at_1
value: 91.75
- type: ndcg_at_10
value: 88.53
- type: ndcg_at_100
value: 91.13900000000001
- type: ndcg_at_1000
value: 91.387
- type: ndcg_at_3
value: 87.925
- type: ndcg_at_5
value: 86.461
- type: precision_at_1
value: 91.75
- type: precision_at_10
value: 42.05
- type: precision_at_100
value: 4.827
- type: precision_at_1000
value: 0.48900000000000005
- type: precision_at_3
value: 78.55
- type: precision_at_5
value: 65.82000000000001
- type: recall_at_1
value: 26.924
- type: recall_at_10
value: 89.338
- type: recall_at_100
value: 97.856
- type: recall_at_1000
value: 99.11
- type: recall_at_3
value: 59.202999999999996
- type: recall_at_5
value: 75.642
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 54.800000000000004
- type: map_at_10
value: 65.613
- type: map_at_100
value: 66.185
- type: map_at_1000
value: 66.191
- type: map_at_3
value: 62.8
- type: map_at_5
value: 64.535
- type: mrr_at_1
value: 54.800000000000004
- type: mrr_at_10
value: 65.613
- type: mrr_at_100
value: 66.185
- type: mrr_at_1000
value: 66.191
- type: mrr_at_3
value: 62.8
- type: mrr_at_5
value: 64.535
- type: ndcg_at_1
value: 54.800000000000004
- type: ndcg_at_10
value: 70.991
- type: ndcg_at_100
value: 73.434
- type: ndcg_at_1000
value: 73.587
- type: ndcg_at_3
value: 65.324
- type: ndcg_at_5
value: 68.431
- type: precision_at_1
value: 54.800000000000004
- type: precision_at_10
value: 8.790000000000001
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 24.2
- type: precision_at_5
value: 16.02
- type: recall_at_1
value: 54.800000000000004
- type: recall_at_10
value: 87.9
- type: recall_at_100
value: 98.6
- type: recall_at_1000
value: 99.8
- type: recall_at_3
value: 72.6
- type: recall_at_5
value: 80.10000000000001
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 51.94305502116199
- type: f1
value: 39.82197338426721
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 90.31894934333957
- type: ap
value: 63.89821836499594
- type: f1
value: 85.93687177603624
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 73.18906216730208
- type: cos_sim_spearman
value: 79.44570226735877
- type: euclidean_pearson
value: 78.8105072242798
- type: euclidean_spearman
value: 79.15605680863212
- type: manhattan_pearson
value: 78.80576507484064
- type: manhattan_spearman
value: 79.14625534068364
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 41.58107192600853
- type: mrr
value: 41.37063492063492
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 68.33
- type: map_at_10
value: 78.261
- type: map_at_100
value: 78.522
- type: map_at_1000
value: 78.527
- type: map_at_3
value: 76.236
- type: map_at_5
value: 77.557
- type: mrr_at_1
value: 70.602
- type: mrr_at_10
value: 78.779
- type: mrr_at_100
value: 79.00500000000001
- type: mrr_at_1000
value: 79.01
- type: mrr_at_3
value: 77.037
- type: mrr_at_5
value: 78.157
- type: ndcg_at_1
value: 70.602
- type: ndcg_at_10
value: 82.254
- type: ndcg_at_100
value: 83.319
- type: ndcg_at_1000
value: 83.449
- type: ndcg_at_3
value: 78.46
- type: ndcg_at_5
value: 80.679
- type: precision_at_1
value: 70.602
- type: precision_at_10
value: 9.989
- type: precision_at_100
value: 1.05
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 29.598999999999997
- type: precision_at_5
value: 18.948
- type: recall_at_1
value: 68.33
- type: recall_at_10
value: 94.00800000000001
- type: recall_at_100
value: 98.589
- type: recall_at_1000
value: 99.60799999999999
- type: recall_at_3
value: 84.057
- type: recall_at_5
value: 89.32900000000001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 78.13718897108272
- type: f1
value: 74.07613180855328
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.20040349697376
- type: f1
value: 85.05282136519973
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 56.8
- type: map_at_10
value: 64.199
- type: map_at_100
value: 64.89
- type: map_at_1000
value: 64.917
- type: map_at_3
value: 62.383
- type: map_at_5
value: 63.378
- type: mrr_at_1
value: 56.8
- type: mrr_at_10
value: 64.199
- type: mrr_at_100
value: 64.89
- type: mrr_at_1000
value: 64.917
- type: mrr_at_3
value: 62.383
- type: mrr_at_5
value: 63.378
- type: ndcg_at_1
value: 56.8
- type: ndcg_at_10
value: 67.944
- type: ndcg_at_100
value: 71.286
- type: ndcg_at_1000
value: 71.879
- type: ndcg_at_3
value: 64.163
- type: ndcg_at_5
value: 65.96600000000001
- type: precision_at_1
value: 56.8
- type: precision_at_10
value: 7.9799999999999995
- type: precision_at_100
value: 0.954
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 23.1
- type: precision_at_5
value: 14.74
- type: recall_at_1
value: 56.8
- type: recall_at_10
value: 79.80000000000001
- type: recall_at_100
value: 95.39999999999999
- type: recall_at_1000
value: 99.8
- type: recall_at_3
value: 69.3
- type: recall_at_5
value: 73.7
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 78.57666666666667
- type: f1
value: 78.23373528202681
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 85.43584190579317
- type: cos_sim_ap
value: 90.76665640338129
- type: cos_sim_f1
value: 86.5021770682148
- type: cos_sim_precision
value: 79.82142857142858
- type: cos_sim_recall
value: 94.40337909186906
- type: dot_accuracy
value: 78.66811044937737
- type: dot_ap
value: 85.84084363880804
- type: dot_f1
value: 80.10075566750629
- type: dot_precision
value: 76.58959537572254
- type: dot_recall
value: 83.9493136219641
- type: euclidean_accuracy
value: 84.46128857606931
- type: euclidean_ap
value: 88.62351100230491
- type: euclidean_f1
value: 85.7709469509172
- type: euclidean_precision
value: 80.8411214953271
- type: euclidean_recall
value: 91.34107708553326
- type: manhattan_accuracy
value: 84.51543042772063
- type: manhattan_ap
value: 88.53975607870393
- type: manhattan_f1
value: 85.75697211155378
- type: manhattan_precision
value: 81.14985862393968
- type: manhattan_recall
value: 90.91869060190075
- type: max_accuracy
value: 85.43584190579317
- type: max_ap
value: 90.76665640338129
- type: max_f1
value: 86.5021770682148
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 95.06999999999998
- type: ap
value: 93.45104559324996
- type: f1
value: 95.06036329426092
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 40.01998290519605
- type: cos_sim_spearman
value: 46.5989769986853
- type: euclidean_pearson
value: 45.37905883182924
- type: euclidean_spearman
value: 46.22213849806378
- type: manhattan_pearson
value: 45.40925124776211
- type: manhattan_spearman
value: 46.250705124226386
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 42.719516197112526
- type: cos_sim_spearman
value: 44.57507789581106
- type: euclidean_pearson
value: 35.73062264160721
- type: euclidean_spearman
value: 40.473523909913695
- type: manhattan_pearson
value: 35.69868964086357
- type: manhattan_spearman
value: 40.46349925372903
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.340118285801104
- type: cos_sim_spearman
value: 67.72781908620632
- type: euclidean_pearson
value: 63.161965746091596
- type: euclidean_spearman
value: 67.36825684340769
- type: manhattan_pearson
value: 63.089863788261425
- type: manhattan_spearman
value: 67.40868898995384
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 79.1646360962365
- type: cos_sim_spearman
value: 81.24426700767087
- type: euclidean_pearson
value: 79.43826409936123
- type: euclidean_spearman
value: 79.71787965300125
- type: manhattan_pearson
value: 79.43377784961737
- type: manhattan_spearman
value: 79.69348376886967
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 68.35595092507496
- type: mrr
value: 79.00244892585788
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.588
- type: map_at_10
value: 75.327
- type: map_at_100
value: 79.095
- type: map_at_1000
value: 79.163
- type: map_at_3
value: 52.637
- type: map_at_5
value: 64.802
- type: mrr_at_1
value: 88.103
- type: mrr_at_10
value: 91.29899999999999
- type: mrr_at_100
value: 91.408
- type: mrr_at_1000
value: 91.411
- type: mrr_at_3
value: 90.801
- type: mrr_at_5
value: 91.12700000000001
- type: ndcg_at_1
value: 88.103
- type: ndcg_at_10
value: 83.314
- type: ndcg_at_100
value: 87.201
- type: ndcg_at_1000
value: 87.83999999999999
- type: ndcg_at_3
value: 84.408
- type: ndcg_at_5
value: 83.078
- type: precision_at_1
value: 88.103
- type: precision_at_10
value: 41.638999999999996
- type: precision_at_100
value: 5.006
- type: precision_at_1000
value: 0.516
- type: precision_at_3
value: 73.942
- type: precision_at_5
value: 62.056
- type: recall_at_1
value: 26.588
- type: recall_at_10
value: 82.819
- type: recall_at_100
value: 95.334
- type: recall_at_1000
value: 98.51299999999999
- type: recall_at_3
value: 54.74
- type: recall_at_5
value: 68.864
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 55.029
- type: f1
value: 53.043617905026764
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 77.83675116835911
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 74.19701455865277
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 64.7
- type: map_at_10
value: 75.593
- type: map_at_100
value: 75.863
- type: map_at_1000
value: 75.863
- type: map_at_3
value: 73.63300000000001
- type: map_at_5
value: 74.923
- type: mrr_at_1
value: 64.7
- type: mrr_at_10
value: 75.593
- type: mrr_at_100
value: 75.863
- type: mrr_at_1000
value: 75.863
- type: mrr_at_3
value: 73.63300000000001
- type: mrr_at_5
value: 74.923
- type: ndcg_at_1
value: 64.7
- type: ndcg_at_10
value: 80.399
- type: ndcg_at_100
value: 81.517
- type: ndcg_at_1000
value: 81.517
- type: ndcg_at_3
value: 76.504
- type: ndcg_at_5
value: 78.79899999999999
- type: precision_at_1
value: 64.7
- type: precision_at_10
value: 9.520000000000001
- type: precision_at_100
value: 1
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 28.266999999999996
- type: precision_at_5
value: 18.060000000000002
- type: recall_at_1
value: 64.7
- type: recall_at_10
value: 95.19999999999999
- type: recall_at_100
value: 100
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 84.8
- type: recall_at_5
value: 90.3
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.69999999999999
- type: ap
value: 75.91371640164184
- type: f1
value: 88.34067777698694
---
# KenLi315/Conan-embedding-v1-Q4_K_M-GGUF
This model was converted to GGUF format from [`TencentBAC/Conan-embedding-v1`](https://huggingface.co/TencentBAC/Conan-embedding-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TencentBAC/Conan-embedding-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo KenLi315/Conan-embedding-v1-Q4_K_M-GGUF --hf-file conan-embedding-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo KenLi315/Conan-embedding-v1-Q4_K_M-GGUF --hf-file conan-embedding-v1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo KenLi315/Conan-embedding-v1-Q4_K_M-GGUF --hf-file conan-embedding-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo KenLi315/Conan-embedding-v1-Q4_K_M-GGUF --hf-file conan-embedding-v1-q4_k_m.gguf -c 2048
```
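Since Conan-embedding-v1 is an embedding model, the text-generation prompts above mainly confirm that the GGUF file loads; to obtain sentence embeddings from Python, the `llama-cpp-python` bindings can be used. The following is a minimal sketch under the assumption that `llama-cpp-python` (with `Llama.from_pretrained` and embedding mode) is installed; it is not part of the original instructions above:
```python
# Minimal sketch (assumes `pip install llama-cpp-python huggingface-hub`).
# The repo id and file name match this card; everything else is illustrative.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="KenLi315/Conan-embedding-v1-Q4_K_M-GGUF",
    filename="conan-embedding-v1-q4_k_m.gguf",
    embedding=True,  # run in embedding mode rather than text generation
)

sentences = ["今天天气不错", "明天可能会下雨"]
vectors = [llm.embed(s) for s in sentences]
print(len(vectors), len(vectors[0]))  # number of sentences, embedding dimension
```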
|
earnxus/1cc0087b-f4f8-47ea-9f8e-54ec62e33dbb | earnxus | 2025-01-28T10:57:15Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M-Instruct",
"base_model:adapter:unsloth/SmolLM2-360M-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T10:43:55Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1cc0087b-f4f8-47ea-9f8e-54ec62e33dbb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22b70be0f94320a3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22b70be0f94320a3_train_data.json
type:
field_instruction: sentence1
field_output: sentence2
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: earnxus/1cc0087b-f4f8-47ea-9f8e-54ec62e33dbb
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/22b70be0f94320a3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: d02f8361-84b9-4479-bc3c-c6ea227f1563
wandb_project: Gradients-On-Nine
wandb_run: your_name
wandb_runid: d02f8361-84b9-4479-bc3c-c6ea227f1563
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 1cc0087b-f4f8-47ea-9f8e-54ec62e33dbb
This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1274
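For quick experimentation, the LoRA adapter can be attached to the listed base model with PEFT. The sketch below is a minimal example under the assumption that this repository contains standard PEFT adapter weights for `unsloth/SmolLM2-360M-Instruct`; the prompt and generation settings are illustrative:
```python
# Minimal sketch: load the base model, then attach this LoRA adapter with PEFT.
# Repo ids come from this card; the prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/SmolLM2-360M-Instruct"
adapter_id = "earnxus/1cc0087b-f4f8-47ea-9f8e-54ec62e33dbb"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("A quick test sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```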
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5945 | 0.0484 | 200 | 2.1274 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrferr3t/f7a237aa-36c9-4c0e-88df-d841e330c7c0 | mrferr3t | 2025-01-28T10:52:21Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M-Instruct",
"base_model:adapter:unsloth/SmolLM2-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T10:48:15Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f7a237aa-36c9-4c0e-88df-d841e330c7c0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22b70be0f94320a3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22b70be0f94320a3_train_data.json
type:
field_instruction: sentence1
field_output: sentence2
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/f7a237aa-36c9-4c0e-88df-d841e330c7c0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 11
micro_batch_size: 2
mlflow_experiment_name: /tmp/22b70be0f94320a3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d02f8361-84b9-4479-bc3c-c6ea227f1563
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d02f8361-84b9-4479-bc3c-c6ea227f1563
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f7a237aa-36c9-4c0e-88df-d841e330c7c0
This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7493
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.6161 | 0.0002 | 1 | 3.8398 |
| 3.3484 | 0.0007 | 3 | 3.8375 |
| 3.9161 | 0.0015 | 6 | 3.8177 |
| 3.1883 | 0.0022 | 9 | 3.7493 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso16/3ac4c30c-b6a7-48fd-9f22-686522698f93 | lesso16 | 2025-01-28T10:48:01Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M-Instruct",
"base_model:adapter:unsloth/SmolLM2-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T10:44:45Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3ac4c30c-b6a7-48fd-9f22-686522698f93
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22b70be0f94320a3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22b70be0f94320a3_train_data.json
type:
field_instruction: sentence1
field_output: sentence2
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso16/3ac4c30c-b6a7-48fd-9f22-686522698f93
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/22b70be0f94320a3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d02f8361-84b9-4479-bc3c-c6ea227f1563
wandb_project: multi
wandb_run: your_name
wandb_runid: d02f8361-84b9-4479-bc3c-c6ea227f1563
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 3ac4c30c-b6a7-48fd-9f22-686522698f93
This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.3870 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
philip-hightech/58e1cd4a-109a-4ea8-9da7-c27348f07094 | philip-hightech | 2025-01-28T10:47:22Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M-Instruct",
"base_model:adapter:unsloth/SmolLM2-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T10:44:09Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 58e1cd4a-109a-4ea8-9da7-c27348f07094
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22b70be0f94320a3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22b70be0f94320a3_train_data.json
type:
field_instruction: sentence1
field_output: sentence2
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/58e1cd4a-109a-4ea8-9da7-c27348f07094
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/22b70be0f94320a3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d02f8361-84b9-4479-bc3c-c6ea227f1563
wandb_project: Mine-SN56-21-Gradients-On-Demand
wandb_run: your_name
wandb_runid: d02f8361-84b9-4479-bc3c-c6ea227f1563
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 58e1cd4a-109a-4ea8-9da7-c27348f07094
This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0016 | 13 | nan |
| 0.0 | 0.0031 | 26 | nan |
| 0.0 | 0.0047 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
robiual-awal/d1dfd873-d789-4251-87f2-22df3994c074 | robiual-awal | 2025-01-28T10:47:16Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M-Instruct",
"base_model:adapter:unsloth/SmolLM2-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T10:44:07Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d1dfd873-d789-4251-87f2-22df3994c074
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22b70be0f94320a3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22b70be0f94320a3_train_data.json
type:
field_instruction: sentence1
field_output: sentence2
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/d1dfd873-d789-4251-87f2-22df3994c074
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/22b70be0f94320a3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d02f8361-84b9-4479-bc3c-c6ea227f1563
wandb_project: Birthday-SN56-29-Gradients-On-Demand
wandb_run: your_name
wandb_runid: d02f8361-84b9-4479-bc3c-c6ea227f1563
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d1dfd873-d789-4251-87f2-22df3994c074
This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 0.0 | 0.0031 | 13 | nan |
| 0.0 | 0.0063 | 26 | nan |
| 0.0 | 0.0094 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shibajustfor/8bf810b1-4e1a-45cc-8fee-33e85c04ec4f | shibajustfor | 2025-01-28T10:47:00Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M-Instruct",
"base_model:adapter:unsloth/SmolLM2-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T10:44:02Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8bf810b1-4e1a-45cc-8fee-33e85c04ec4f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 22b70be0f94320a3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/22b70be0f94320a3_train_data.json
type:
field_instruction: sentence1
field_output: sentence2
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/8bf810b1-4e1a-45cc-8fee-33e85c04ec4f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/22b70be0f94320a3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d02f8361-84b9-4479-bc3c-c6ea227f1563
wandb_project: Birthday-SN56-39-Gradients-On-Demand
wandb_run: your_name
wandb_runid: d02f8361-84b9-4479-bc3c-c6ea227f1563
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8bf810b1-4e1a-45cc-8fee-33e85c04ec4f
This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 0.0 | 0.0031 | 13 | nan |
| 0.0 | 0.0063 | 26 | nan |
| 0.0 | 0.0094 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ahsanf/lexia-finetuned-DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit-v.0.0.2 | ahsanf | 2025-01-28T10:44:41Z | 975 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-23T18:07:10Z | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Fine-Tuned DeepSeek R1 Model
This repository contains a fine-tuned version of the DeepSeek-R1-Distill-Llama-8B language model. The fine-tuning was performed on a dataset derived from a CSV file, enabling the model to specialize in tasks tied to the specific context of that dataset.
## Model Details
- **Base Model**: DeepSeek-R1-Distill-Llama-8B (unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit)
- **Fine-Tuning Framework**: [Unsloth](https://github.com/UnslothAI) and [Hugging Face Transformers](https://huggingface.co/docs/transformers/index)
- **Dataset**: 141 rows of input-output pairs derived from a CSV file
- **Objective**: Enhance the model's capability to generate accurate and contextually appropriate responses for tasks specific to the provided dataset.
## Dataset
The dataset used for fine-tuning contains conversational data structured as follows:
- **Input**: User queries or prompts
- **Output**: Model-generated responses or target answers
### Example Entry
```json
{
"conversations": [
{ "from": "human", "value": "<input-text>" },
{ "from": "gpt", "value": "<output-text>" }
]
}
```
## Fine-Tuning Process
1. **Preprocessing**:
   - Converted the CSV file into a JSON format compatible with the base model, using the ShareGPT template (see the sketch after this list).
   - Applied tokenization and ensured compatibility with the model's chat template.
2. **Training Configuration**:
- **Epochs**: 30
- **Batch Size**: 2 (per device)
- **Gradient Accumulation**: 4 steps
- **Optimizer**: AdamW with 8-bit precision
- **Learning Rate**: 2e-4
3. **Hardware**:
- Training was conducted on a single GPU.
4. **Frameworks**:
- [Unsloth](https://github.com/UnslothAI) for chat template handling and training
- [Hugging Face Transformers](https://huggingface.co/docs/transformers) for model fine-tuning
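The conversion described in the preprocessing step can be reproduced with a few lines of Python. The sketch below is a rough illustration, assuming the CSV has `input` and `output` columns; the column names and file paths are placeholders, not the actual schema:
```python
# Rough sketch of the CSV -> ShareGPT-style JSON conversion described above.
# Column names ("input", "output") and file paths are assumptions, not the real schema.
import csv
import json

rows = []
with open("dataset.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        rows.append({
            "conversations": [
                {"from": "human", "value": row["input"]},
                {"from": "gpt", "value": row["output"]},
            ]
        })

with open("dataset_sharegpt.json", "w", encoding="utf-8") as f:
    json.dump(rows, f, ensure_ascii=False, indent=2)
```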
## Installation and Setup
### Prerequisites
- Python 3.8+
- Install dependencies:
```bash
pip install torch transformers datasets unsloth
```
### Usage
To use the fine-tuned model, load it with the Hugging Face Transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("path_to_your_finetuned_model")
tokenizer = AutoTokenizer.from_pretrained("path_to_your_finetuned_model")
# Generate a response
input_text = "<your input>"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Inference Example
```python
# get_response wraps the tokenize -> generate -> decode steps from the snippet above
def get_response(text: str) -> str:
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

response = get_response("What is the weather like today?")
print(response)
```
## Results
The fine-tuned model achieved:
- **Improved Response Quality**: The model generates responses closely aligned with the target dataset.
- **Faster Convergence**: Optimized for a small dataset with minimal overfitting.
## Limitations
- **Dataset Size**: The model was fine-tuned on a small dataset (141 rows), which may limit generalization to other tasks.
- **Domain-Specific**: Performance is optimal for the domain represented by the dataset.
## Acknowledgments
Special thanks to the open-source AI community for providing tools like Unsloth and Hugging Face Transformers. Their contributions make fine-tuning large language models accessible to all.
# Fine-Tuned DeepSeek R1 Model
## Example Chat Conversation
Below is an example conversation showcasing the capabilities of the fine-tuned model. It is optimized for customer service in the FMCG industry and supports use cases such as recommending healthy snacks.
### Input Data Example
| Field | Value |
|---------------------------------|---------------------------------------------|
| **doc_id** | cs001 |
| **last_update** | 2024-12-06 |
| **use_case** | customer service |
| **topic** | FMCG |
| **stage** | awareness |
| **input** | Halo, aku cari camilan sehat nih. |
| **language** | bahasa |
| **intent** | menyapa dan memulai pencarian produk |
| **input_emotion** | penasaran |
| **instruction** | Sapa pelanggan dengan hangat, tawarkan beberapa rekomendasi camilan sehat, dan ajak mereka untuk memilih kategori tertentu. |
| **output** | Hai Kak! Aku [ai_agent_name] 😊. Lagi cari camilan sehat ya? Kita punya banyak pilihan nih, mulai dari granola bar, kacang panggang, sampai keripik sayur. Kakak lagi butuh yang praktis buat dibawa atau buat ngemil di rumah? 🛍️🍴 |
| **output_tone** | ramah, santai |
| **metadata/product_category/0** | snacks |
| **metadata/dynamic_placeholders/0** | [ai_agent_name] |
---
### Example Conversation
#### **Human**:
> Halo, aku cari camilan sehat nih.
#### **Assistant**:
> Hai Kak! Aku **SnackBot** 😊. Lagi cari camilan sehat ya? Kita punya banyak pilihan nih, mulai dari granola bar, kacang panggang, sampai keripik sayur. Kakak lagi butuh yang praktis buat dibawa atau buat ngemil di rumah? 🛍️🍴
---
This demonstrates the model's ability to:
1. **Understand intent**: Identify the user's need for healthy snacks.
2. **Generate relevant responses**: Provide a list of options in a friendly tone.
3. **Utilize placeholders dynamically**: Replace `[ai_agent_name]` with a suitable agent name, e.g., **SnackBot** (see the sketch below).
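Placeholder handling (point 3 above) is a simple post-processing step. A minimal sketch follows; the placeholder-to-value mapping is illustrative:
```python
# Minimal sketch of dynamic placeholder substitution; the mapping is illustrative.
placeholders = {"[ai_agent_name]": "SnackBot"}

def fill_placeholders(text: str) -> str:
    for key, value in placeholders.items():
        text = text.replace(key, value)
    return text

print(fill_placeholders("Hai Kak! Aku [ai_agent_name] 😊."))
```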
For more usage examples, refer to the instructions in the dataset or try interacting with the model directly!
## License
This project is licensed under the [MIT License](LICENSE).
---
Feel free to raise any issues or contribute improvements to this repository!
# Uploaded model
- **Developed by:** ahsanf
- **License:** apache-2.0
- **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
adammandic87/06f69ed1-71af-4694-aa4d-5a03c02d1f06 | adammandic87 | 2025-01-28T10:36:06Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | 2025-01-28T10:34:59Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 06f69ed1-71af-4694-aa4d-5a03c02d1f06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fbb569b9308b1364_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fbb569b9308b1364_train_data.json
type:
field_instruction: en
field_output: ko
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/06f69ed1-71af-4694-aa4d-5a03c02d1f06
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/fbb569b9308b1364_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 56ada706-7796-4d50-99a6-f689b2188837
wandb_project: birthday-sn56-19-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 56ada706-7796-4d50-99a6-f689b2188837
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 06f69ed1-71af-4694-aa4d-5a03c02d1f06
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3661
## Model description
More information needed
## Intended uses & limitations
More information needed
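No usage snippet ships with this card; as a minimal sketch, the LoRA adapter can be stacked on the base model with 🤗 PEFT. The axolotl config above sets `chat_template: llama3` and a bare `'{instruction}'` prompt format, neither of which is reproduced exactly here, so treat the raw prompt below as illustrative only.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-2-2b"
adapter_id = "adammandic87/06f69ed1-71af-4694-aa4d-5a03c02d1f06"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Training fed the English sentence as the instruction and the Korean sentence as the target (en -> ko).
prompt = "The weather is lovely today."  # example input, not from the training set
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```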
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0022 | 1 | 2.1773 |
| 1.9553 | 0.0287 | 13 | 1.4756 |
| 1.5205 | 0.0573 | 26 | 1.3874 |
| 1.4332 | 0.0860 | 39 | 1.3661 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thakkkkkk/c8e7ad48-df02-494c-a5cf-c9b1bf0a69b1 | thakkkkkk | 2025-01-28T10:35:59Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T09:19:43Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c8e7ad48-df02-494c-a5cf-c9b1bf0a69b1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dc20aa83d25b3ceb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dc20aa83d25b3ceb_train_data.json
type:
field_input: rejected
field_instruction: chosen
field_output: chosen_feedback
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thakkkkkk/c8e7ad48-df02-494c-a5cf-c9b1bf0a69b1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/dc20aa83d25b3ceb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ad7557e0-b545-425a-9916-a596d5073e2d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ad7557e0-b545-425a-9916-a596d5073e2d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c8e7ad48-df02-494c-a5cf-c9b1bf0a69b1
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4983
## Model description
More information needed
## Intended uses & limitations
More information needed
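The card itself does not include an inference example. A plausible sketch is to load the base model in 8-bit (matching `load_in_8bit: true` in the config above) and attach this adapter with 🤗 PEFT. Training used the prompt format `'{instruction} {input}'` with the chosen and rejected texts as instruction and input, so the example prompt below is only a stand-in.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "unsloth/Meta-Llama-3.1-8B-Instruct"
adapter_id = "thakkkkkk/c8e7ad48-df02-494c-a5cf-c9b1bf0a69b1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "<chosen answer text> <rejected answer text>"  # stand-in for '{instruction} {input}'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```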
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4899 | 0.0273 | 200 | 0.4983 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kk-aivio/test | kk-aivio | 2025-01-28T10:35:48Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-135M-Instruct",
"base_model:adapter:unsloth/SmolLM-135M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T10:20:59Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-135M-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8f98792a60ccddb6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8f98792a60ccddb6_train_data.json
type:
field_instruction: text
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: kk-aivio/test
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/8f98792a60ccddb6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ec775fcf-a2f3-4d2e-b927-b54ca72381d3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ec775fcf-a2f3-4d2e-b927-b54ca72381d3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# test
This model is a fine-tuned version of [unsloth/SmolLM-135M-Instruct](https://huggingface.co/unsloth/SmolLM-135M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.5690 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sercetexam9/afro-xlmr-base-amh-finetuned-augmentation-LUNAR | sercetexam9 | 2025-01-28T10:35:38Z | 29 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:Davlan/afro-xlmr-base",
"base_model:finetune:Davlan/afro-xlmr-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-28T10:11:59Z | ---
library_name: transformers
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: afro-xlmr-base-amh-finetuned-augmentation-LUNAR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-base-amh-finetuned-augmentation-LUNAR
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3598
- F1: 0.7180
- Roc Auc: 0.8216
- Accuracy: 0.5711
## Model description
More information needed
## Intended uses & limitations
More information needed
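The reported F1 / ROC-AUC / accuracy combination points to a multi-label classification head, where logits are passed through a sigmoid and thresholded per label rather than argmax-ed. A generic sketch along those lines follows; the 0.5 threshold and the placeholder input are assumptions, not values from this card.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "sercetexam9/afro-xlmr-base-amh-finetuned-augmentation-LUNAR"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "placeholder Amharic sentence goes here"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits)[0]   # one probability per label
threshold = 0.5                    # arbitrary choice; tune on validation data
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= threshold]
print(predicted)
```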
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3732 | 1.0 | 238 | 0.3390 | 0.3871 | 0.6361 | 0.3741 |
| 0.3078 | 2.0 | 476 | 0.2957 | 0.4993 | 0.6893 | 0.4573 |
| 0.255 | 3.0 | 714 | 0.3038 | 0.4991 | 0.6918 | 0.4489 |
| 0.2146 | 4.0 | 952 | 0.2743 | 0.5751 | 0.7355 | 0.4889 |
| 0.1812 | 5.0 | 1190 | 0.2834 | 0.5907 | 0.7533 | 0.4932 |
| 0.1566 | 6.0 | 1428 | 0.2816 | 0.6564 | 0.7837 | 0.5153 |
| 0.1454 | 7.0 | 1666 | 0.2748 | 0.6717 | 0.7939 | 0.5427 |
| 0.1151 | 8.0 | 1904 | 0.2930 | 0.6693 | 0.8009 | 0.5469 |
| 0.0807 | 9.0 | 2142 | 0.3085 | 0.6799 | 0.7997 | 0.5458 |
| 0.0643 | 10.0 | 2380 | 0.3011 | 0.6978 | 0.8078 | 0.5574 |
| 0.0626 | 11.0 | 2618 | 0.3296 | 0.6945 | 0.8138 | 0.5522 |
| 0.0461 | 12.0 | 2856 | 0.3366 | 0.6896 | 0.8001 | 0.5564 |
| 0.0342 | 13.0 | 3094 | 0.3503 | 0.6893 | 0.8178 | 0.5522 |
| 0.0301 | 14.0 | 3332 | 0.3453 | 0.7036 | 0.8136 | 0.5669 |
| 0.0209 | 15.0 | 3570 | 0.3575 | 0.7135 | 0.8176 | 0.5680 |
| 0.0171 | 16.0 | 3808 | 0.3632 | 0.7042 | 0.8158 | 0.5616 |
| 0.017 | 17.0 | 4046 | 0.3598 | 0.7180 | 0.8216 | 0.5711 |
| 0.0223 | 18.0 | 4284 | 0.3610 | 0.7065 | 0.8170 | 0.5701 |
| 0.0207 | 19.0 | 4522 | 0.3622 | 0.7153 | 0.8212 | 0.5680 |
| 0.0179 | 20.0 | 4760 | 0.3629 | 0.7117 | 0.8213 | 0.5669 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
swinakk/mistral-7b-inst-v22 | swinakk | 2025-01-28T10:34:35Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-01-28T10:29:09Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
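Until the author fills this in, a generic text-generation call along the following lines should be a reasonable starting point. It is an untested sketch, and because the checkpoint is stored in 4-bit bitsandbytes format, `bitsandbytes` must be installed.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swinakk/mistral-7b-inst-v22"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what you were fine-tuned for in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```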
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF | mradermacher | 2025-01-28T10:33:33Z | 580 | 1 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"en",
"base_model:mlfoundations-dev/DCFT-Stratos-Verified-114k-7B-4gpus",
"base_model:quantized:mlfoundations-dev/DCFT-Stratos-Verified-114k-7B-4gpus",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-28T07:02:51Z | ---
base_model: mlfoundations-dev/DCFT-Stratos-Verified-114k-7B-4gpus
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mlfoundations-dev/DCFT-Stratos-Verified-114k-7B-4gpus
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
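As one concrete route (not part of the original card), a single quant file can be fetched programmatically with `huggingface_hub` and then passed to a llama.cpp binary; the quant picked below is just one example from the table.
```python
from huggingface_hub import hf_hub_download

# Download one imatrix quant from this repo; the filename comes from the table below.
path = hf_hub_download(
    repo_id="mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF",
    filename="DCFT-Stratos-Verified-114k-7B-4gpus.i1-Q4_K_M.gguf",
)
print(path)  # point llama.cpp at this file, e.g. `llama-cli -m <path> -p "..."`
```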
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-GGUF | mradermacher | 2025-01-28T10:33:32Z | 289 | 1 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"en",
"base_model:mlfoundations-dev/DCFT-Stratos-Verified-114k-7B-4gpus",
"base_model:quantized:mlfoundations-dev/DCFT-Stratos-Verified-114k-7B-4gpus",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-27T17:01:56Z | ---
base_model: mlfoundations-dev/DCFT-Stratos-Verified-114k-7B-4gpus
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mlfoundations-dev/DCFT-Stratos-Verified-114k-7B-4gpus
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DCFT-Stratos-Verified-114k-7B-4gpus-GGUF/resolve/main/DCFT-Stratos-Verified-114k-7B-4gpus.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nathanialhunt/ffdd7358-97d3-45f6-b261-554f3830421a | nathanialhunt | 2025-01-28T10:33:05Z | 9 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | 2025-01-28T10:31:50Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ffdd7358-97d3-45f6-b261-554f3830421a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fbb569b9308b1364_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fbb569b9308b1364_train_data.json
type:
field_instruction: en
field_output: ko
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/ffdd7358-97d3-45f6-b261-554f3830421a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/fbb569b9308b1364_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 56ada706-7796-4d50-99a6-f689b2188837
wandb_project: Birthday-SN56-24-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 56ada706-7796-4d50-99a6-f689b2188837
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ffdd7358-97d3-45f6-b261-554f3830421a
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0022 | 1 | 2.1773 |
| 1.956 | 0.0287 | 13 | 1.4769 |
| 1.521 | 0.0573 | 26 | 1.3889 |
| 1.4314 | 0.0860 | 39 | 1.3666 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nadejdatarabukina/ad7a0660-1d83-4864-8b2e-ace1c54c7aca | nadejdatarabukina | 2025-01-28T10:29:04Z | 9 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | 2025-01-28T10:26:11Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ad7a0660-1d83-4864-8b2e-ace1c54c7aca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fbb569b9308b1364_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fbb569b9308b1364_train_data.json
type:
field_instruction: en
field_output: ko
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: nadejdatarabukina/ad7a0660-1d83-4864-8b2e-ace1c54c7aca
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/fbb569b9308b1364_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 56ada706-7796-4d50-99a6-f689b2188837
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 56ada706-7796-4d50-99a6-f689b2188837
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ad7a0660-1d83-4864-8b2e-ace1c54c7aca
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0022 | 1 | 3.0584 |
| 2.9035 | 0.0110 | 5 | 2.5737 |
| 2.5319 | 0.0221 | 10 | 2.2829 |
| 2.4052 | 0.0331 | 15 | 2.1636 |
| 2.1313 | 0.0441 | 20 | 2.1002 |
| 2.123 | 0.0551 | 25 | 2.0776 |
| 2.1695 | 0.0662 | 30 | 2.0732 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
raimundsz/Llama-3.2-3B-Instruct_4TINA2-Q2_K-GGUF | raimundsz | 2025-01-28T10:28:43Z | 24 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"de",
"fr",
"it",
"es",
"pt",
"hi",
"th",
"base_model:raimundsz/Llama-3.2-3B-Instruct_4TINA2",
"base_model:quantized:raimundsz/Llama-3.2-3B-Instruct_4TINA2",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-28T10:25:26Z | ---
license: llama3.2
language:
- en
- de
- fr
- it
- es
- pt
- hi
- th
base_model: raimundsz/Llama-3.2-3B-Instruct_4TINA2
tags:
- llama-cpp
- gguf-my-repo
---
# raimundsz/Llama-3.2-3B-Instruct_4TINA2-Q2_K-GGUF
This model was converted to GGUF format from [`raimundsz/Llama-3.2-3B-Instruct_4TINA2`](https://huggingface.co/raimundsz/Llama-3.2-3B-Instruct_4TINA2) using llama.cpp.
Refer to the [original model card](https://huggingface.co/raimundsz/Llama-3.2-3B-Instruct_4TINA2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo raimundsz/Llama-3.2-3B-Instruct_4TINA2-Q2_K-GGUF --hf-file llama-3.2-3b-instruct_4tina2-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo raimundsz/Llama-3.2-3B-Instruct_4TINA2-Q2_K-GGUF --hf-file llama-3.2-3b-instruct_4tina2-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo raimundsz/Llama-3.2-3B-Instruct_4TINA2-Q2_K-GGUF --hf-file llama-3.2-3b-instruct_4tina2-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo raimundsz/Llama-3.2-3B-Instruct_4TINA2-Q2_K-GGUF --hf-file llama-3.2-3b-instruct_4tina2-q2_k.gguf -c 2048
```
|
mrferr3t/56a43b71-88a8-49a2-a8a6-e1ba57df7849 | mrferr3t | 2025-01-28T10:28:30Z | 9 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | 2025-01-28T10:26:54Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 56a43b71-88a8-49a2-a8a6-e1ba57df7849
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fbb569b9308b1364_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fbb569b9308b1364_train_data.json
type:
field_instruction: en
field_output: ko
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/56a43b71-88a8-49a2-a8a6-e1ba57df7849
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/fbb569b9308b1364_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 56ada706-7796-4d50-99a6-f689b2188837
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 56ada706-7796-4d50-99a6-f689b2188837
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 56a43b71-88a8-49a2-a8a6-e1ba57df7849
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9244 | 0.0022 | 1 | 2.1779 |
| 2.4485 | 0.0154 | 7 | 1.7891 |
| 1.5459 | 0.0309 | 14 | 1.4860 |
| 1.4846 | 0.0463 | 21 | 1.4270 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso06/a0705743-df8c-451f-8a7b-5aa45e5d1ccf | lesso06 | 2025-01-28T10:28:08Z | 9 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | 2025-01-28T10:26:54Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a0705743-df8c-451f-8a7b-5aa45e5d1ccf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fbb569b9308b1364_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fbb569b9308b1364_train_data.json
type:
field_instruction: en
field_output: ko
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso06/a0705743-df8c-451f-8a7b-5aa45e5d1ccf
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/fbb569b9308b1364_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 56ada706-7796-4d50-99a6-f689b2188837
wandb_project: multi
wandb_run: your_name
wandb_runid: 56ada706-7796-4d50-99a6-f689b2188837
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a0705743-df8c-451f-8a7b-5aa45e5d1ccf
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3829
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 57
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3295 | 0.9868 | 56 | 1.3827 |
| 1.6657 | 1.0044 | 57 | 1.3829 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LucileFavero/AM_model_gem_T_Q_no_instr | LucileFavero | 2025-01-28T10:23:53Z | 22 | 0 | transformers | [
"transformers",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:quantized:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-01-28T10:22:42Z | ---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LucileFavero
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
abanm/Dubs_V_0_0_1_4_bit | abanm | 2025-01-28T10:22:28Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-01-28T10:22:09Z | ---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** abanm
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
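A minimal loading sketch with Unsloth's `FastLanguageModel` is given below; it assumes the checkpoint can be pulled directly from the Hub in 4-bit and is not an official snippet from the author.
```python
from unsloth import FastLanguageModel

# Assumed settings -- adjust max_seq_length to whatever the fine-tune actually used.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="abanm/Dubs_V_0_0_1_4_bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable inference mode

inputs = tokenizer("Hello, what can you help me with?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```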
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sercetexam9/afro-xlmr-base-arq-finetuned-augmentation-LUNAR | sercetexam9 | 2025-01-28T10:22:19Z | 25 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:Davlan/afro-xlmr-base",
"base_model:finetune:Davlan/afro-xlmr-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-28T10:12:56Z | ---
library_name: transformers
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: afro-xlmr-base-arq-finetuned-augmentation-LUNAR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-base-arq-finetuned-augmentation-LUNAR
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5136
- F1: 0.5725
- Roc Auc: 0.6987
- Accuracy: 0.2957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.6493 | 1.0 | 58 | 0.5823 | 0.0 | 0.4993 | 0.1087 |
| 0.5943 | 2.0 | 116 | 0.5618 | 0.1307 | 0.5245 | 0.1130 |
| 0.5728 | 3.0 | 174 | 0.5463 | 0.2063 | 0.5497 | 0.1348 |
| 0.5326 | 4.0 | 232 | 0.5335 | 0.2462 | 0.5581 | 0.1348 |
| 0.4913 | 5.0 | 290 | 0.5259 | 0.3849 | 0.6084 | 0.1652 |
| 0.4505 | 6.0 | 348 | 0.5067 | 0.4337 | 0.6319 | 0.1826 |
| 0.3857 | 7.0 | 406 | 0.5034 | 0.5164 | 0.6635 | 0.2 |
| 0.3759 | 8.0 | 464 | 0.5013 | 0.4906 | 0.6597 | 0.2 |
| 0.3219 | 9.0 | 522 | 0.5048 | 0.5114 | 0.6624 | 0.2087 |
| 0.2938 | 10.0 | 580 | 0.5037 | 0.5247 | 0.6744 | 0.2478 |
| 0.2736 | 11.0 | 638 | 0.5054 | 0.5363 | 0.6788 | 0.2478 |
| 0.2491 | 12.0 | 696 | 0.5079 | 0.5447 | 0.6863 | 0.2478 |
| 0.2412 | 13.0 | 754 | 0.5149 | 0.5502 | 0.6857 | 0.2609 |
| 0.2071 | 14.0 | 812 | 0.5159 | 0.5617 | 0.6905 | 0.2739 |
| 0.2084 | 15.0 | 870 | 0.5196 | 0.5573 | 0.6893 | 0.2609 |
| 0.1965 | 16.0 | 928 | 0.5136 | 0.5725 | 0.6987 | 0.2957 |
| 0.185 | 17.0 | 986 | 0.5141 | 0.5663 | 0.6924 | 0.2957 |
| 0.188 | 18.0 | 1044 | 0.5156 | 0.5651 | 0.6913 | 0.2783 |
| 0.1932 | 19.0 | 1102 | 0.5163 | 0.5640 | 0.6917 | 0.2826 |
| 0.184 | 20.0 | 1160 | 0.5165 | 0.5655 | 0.6928 | 0.2826 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
sercetexam9/afro-xlmr-base-ary-finetuned-augmentation-LUNAR | sercetexam9 | 2025-01-28T10:21:33Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:Davlan/afro-xlmr-base",
"base_model:finetune:Davlan/afro-xlmr-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-28T10:13:26Z | ---
library_name: transformers
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: afro-xlmr-base-ary-finetuned-augmentation-LUNAR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-base-ary-finetuned-augmentation-LUNAR
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3039
- F1: 0.5359
- Roc Auc: 0.7304
- Accuracy: 0.5105
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3909 | 1.0 | 108 | 0.4005 | 0.0 | 0.5 | 0.2471 |
| 0.3379 | 2.0 | 216 | 0.3598 | 0.0617 | 0.5153 | 0.2727 |
| 0.3195 | 3.0 | 324 | 0.3172 | 0.2951 | 0.6014 | 0.4336 |
| 0.2657 | 4.0 | 432 | 0.3034 | 0.3616 | 0.6395 | 0.4848 |
| 0.2547 | 5.0 | 540 | 0.2844 | 0.4671 | 0.6847 | 0.5035 |
| 0.2004 | 6.0 | 648 | 0.2903 | 0.4405 | 0.6738 | 0.5012 |
| 0.1695 | 7.0 | 756 | 0.2880 | 0.4687 | 0.6886 | 0.5221 |
| 0.1317 | 8.0 | 864 | 0.3039 | 0.5359 | 0.7304 | 0.5105 |
| 0.112 | 9.0 | 972 | 0.3076 | 0.4946 | 0.6997 | 0.5198 |
| 0.1025 | 10.0 | 1080 | 0.3090 | 0.4987 | 0.7032 | 0.5058 |
| 0.0946 | 11.0 | 1188 | 0.3187 | 0.5202 | 0.7145 | 0.5221 |
| 0.0848 | 12.0 | 1296 | 0.3326 | 0.4990 | 0.7012 | 0.5198 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
raimundsz/Llama-3.2-3B-Instruct_4TINA2-Q4_K_M-GGUF | raimundsz | 2025-01-28T10:18:26Z | 23 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"de",
"fr",
"it",
"es",
"pt",
"hi",
"th",
"base_model:raimundsz/Llama-3.2-3B-Instruct_4TINA2",
"base_model:quantized:raimundsz/Llama-3.2-3B-Instruct_4TINA2",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-28T10:17:17Z | ---
license: llama3.2
language:
- en
- de
- fr
- it
- es
- pt
- hi
- th
base_model: raimundsz/Llama-3.2-3B-Instruct_4TINA2
tags:
- llama-cpp
- gguf-my-repo
---
# raimundsz/Llama-3.2-3B-Instruct_4TINA2-Q4_K_M-GGUF
This model was converted to GGUF format from [`raimundsz/Llama-3.2-3B-Instruct_4TINA2`](https://huggingface.co/raimundsz/Llama-3.2-3B-Instruct_4TINA2) using llama.cpp.
Refer to the [original model card](https://huggingface.co/raimundsz/Llama-3.2-3B-Instruct_4TINA2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo raimundsz/Llama-3.2-3B-Instruct_4TINA2-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct_4tina2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo raimundsz/Llama-3.2-3B-Instruct_4TINA2-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct_4tina2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo raimundsz/Llama-3.2-3B-Instruct_4TINA2-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct_4tina2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo raimundsz/Llama-3.2-3B-Instruct_4TINA2-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct_4tina2-q4_k_m.gguf -c 2048
```
|
RyanYr/reflect_single_llm8B_SftT12 | RyanYr | 2025-01-28T10:17:35Z | 242 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-27T22:03:32Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: reflect_single_llm8B_SftT12
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for reflect_single_llm8B_SftT12
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/reflect_single_llm8B_SftT12", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/6tbqxl5o)
This model was trained with SFT.
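For reference, the sketch below shows the general shape of an SFT run with TRL. It is illustrative only: the dataset, chat formatting, and hyperparameters actually used for this checkpoint are not documented in this card, so every value here is a placeholder.

```python
# Illustrative SFT sketch only: dataset and hyperparameters are placeholders,
# not the settings used to train reflect_single_llm8B_SftT12.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # base model named in this card
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="reflect_single_llm8B_SftT12", max_seq_length=2048),
)
trainer.train()
```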
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sercetexam9/xlm-roberta-base-amh-finetuned-augmentation-LUNAR | sercetexam9 | 2025-01-28T10:14:17Z | 26 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-28T09:50:25Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xlm-roberta-base-amh-finetuned-augmentation-LUNAR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-amh-finetuned-augmentation-LUNAR
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3820
- F1: 0.6576
- Roc Auc: 0.7940
- Accuracy: 0.5110
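The F1/ROC-AUC/accuracy combination above suggests a multi-label classification head. Under that assumption, a minimal inference sketch looks like the following; the 0.5 threshold and the placeholder input text are illustrative, and the label names come from whatever `id2label` mapping the checkpoint defines.

```python
# Sketch assuming a multi-label head: apply a per-label sigmoid and threshold.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sercetexam9/xlm-roberta-base-amh-finetuned-augmentation-LUNAR"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Replace with an Amharic sentence"  # placeholder input
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```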
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4179 | 1.0 | 238 | 0.4173 | 0.0 | 0.5 | 0.1735 |
| 0.3969 | 2.0 | 476 | 0.3760 | 0.2222 | 0.5646 | 0.2808 |
| 0.3427 | 3.0 | 714 | 0.3236 | 0.4125 | 0.6565 | 0.3922 |
| 0.2814 | 4.0 | 952 | 0.2987 | 0.5339 | 0.7071 | 0.4448 |
| 0.2655 | 5.0 | 1190 | 0.2953 | 0.5704 | 0.7415 | 0.4658 |
| 0.2185 | 6.0 | 1428 | 0.3105 | 0.5699 | 0.7519 | 0.4448 |
| 0.2206 | 7.0 | 1666 | 0.3094 | 0.5764 | 0.7509 | 0.4669 |
| 0.161 | 8.0 | 1904 | 0.3088 | 0.5871 | 0.7524 | 0.4879 |
| 0.1473 | 9.0 | 2142 | 0.3198 | 0.6278 | 0.7683 | 0.4921 |
| 0.1237 | 10.0 | 2380 | 0.3405 | 0.6264 | 0.7680 | 0.4774 |
| 0.1032 | 11.0 | 2618 | 0.3341 | 0.6362 | 0.7720 | 0.5079 |
| 0.0857 | 12.0 | 2856 | 0.3452 | 0.6521 | 0.7771 | 0.5058 |
| 0.0695 | 13.0 | 3094 | 0.3604 | 0.6552 | 0.7872 | 0.5058 |
| 0.0626 | 14.0 | 3332 | 0.3686 | 0.6472 | 0.7815 | 0.5089 |
| 0.0481 | 15.0 | 3570 | 0.3666 | 0.6477 | 0.7763 | 0.5121 |
| 0.0516 | 16.0 | 3808 | 0.3820 | 0.6576 | 0.7940 | 0.5110 |
| 0.0469 | 17.0 | 4046 | 0.3752 | 0.6493 | 0.7846 | 0.5121 |
| 0.0473 | 18.0 | 4284 | 0.3817 | 0.6448 | 0.7821 | 0.5100 |
| 0.0405 | 19.0 | 4522 | 0.3830 | 0.6529 | 0.7871 | 0.5110 |
| 0.0474 | 20.0 | 4760 | 0.3827 | 0.6519 | 0.7874 | 0.5100 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
cunghoctienganh/aeebc410-b594-4253-acf0-dc95dd827704 | cunghoctienganh | 2025-01-28T10:10:28Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T09:18:50Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aeebc410-b594-4253-acf0-dc95dd827704
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dc20aa83d25b3ceb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dc20aa83d25b3ceb_train_data.json
type:
field_input: rejected
field_instruction: chosen
field_output: chosen_feedback
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/aeebc410-b594-4253-acf0-dc95dd827704
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/dc20aa83d25b3ceb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ad7557e0-b545-425a-9916-a596d5073e2d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ad7557e0-b545-425a-9916-a596d5073e2d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# aeebc410-b594-4253-acf0-dc95dd827704
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5168
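Because this repository holds a LoRA adapter rather than full model weights, inference normally means loading the base model and attaching the adapter with PEFT. The sketch below shows one way to do that; the prompt and generation settings are illustrative, and `device_map="auto"` assumes `accelerate` is installed.

```python
# Sketch: load the base model named in this card, then attach the LoRA adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Meta-Llama-3.1-8B-Instruct"
adapter_id = "cunghoctienganh/aeebc410-b594-4253-acf0-dc95dd827704"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Write a short feedback note:"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```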
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.454 | 0.0137 | 200 | 0.5168 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nbninh/95ad82ef-c1d5-464b-93e8-21a79ad6eb85 | nbninh | 2025-01-28T10:10:06Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T09:18:43Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 95ad82ef-c1d5-464b-93e8-21a79ad6eb85
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dc20aa83d25b3ceb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dc20aa83d25b3ceb_train_data.json
type:
field_input: rejected
field_instruction: chosen
field_output: chosen_feedback
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/95ad82ef-c1d5-464b-93e8-21a79ad6eb85
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/dc20aa83d25b3ceb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ad7557e0-b545-425a-9916-a596d5073e2d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ad7557e0-b545-425a-9916-a596d5073e2d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 95ad82ef-c1d5-464b-93e8-21a79ad6eb85
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4496 | 0.0137 | 200 | 0.5165 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
alexia-allal/ner-model-camembert | alexia-allal | 2025-01-28T10:01:34Z | 29 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"camembert",
"token-classification",
"generated_from_trainer",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-01-27T11:04:18Z | ---
library_name: transformers
license: mit
base_model: camembert-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner-model-camembert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-model-camembert
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1642
- Precision: 0.8721
- Recall: 0.7732
- F1: 0.8197
- Accuracy: 0.9571
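A minimal inference sketch using the standard token-classification pipeline is shown below. The entity label set is not documented in this card, so the tags in the output are whatever the fine-tuned head defines; the example sentence is just an illustration.

```python
# Sketch: run the fine-tuned CamemBERT NER model through the HF pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="alexia-allal/ner-model-camembert",
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)
print(ner("Victor Hugo est né à Besançon en 1802."))
```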
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 24 | 0.3640 | 0.0 | 0.0 | 0.0 | 0.8739 |
| No log | 2.0 | 48 | 0.2640 | 0.6884 | 0.4312 | 0.5303 | 0.9037 |
| No log | 3.0 | 72 | 0.2248 | 0.6976 | 0.6431 | 0.6692 | 0.9198 |
| No log | 4.0 | 96 | 0.2163 | 0.8182 | 0.6022 | 0.6938 | 0.9330 |
| No log | 5.0 | 120 | 0.1690 | 0.7336 | 0.8086 | 0.7692 | 0.9388 |
| No log | 6.0 | 144 | 0.1768 | 0.8558 | 0.6840 | 0.7603 | 0.9456 |
| No log | 7.0 | 168 | 0.1838 | 0.8578 | 0.6952 | 0.7680 | 0.9470 |
| No log | 8.0 | 192 | 0.1591 | 0.8158 | 0.8067 | 0.8112 | 0.9526 |
| No log | 9.0 | 216 | 0.1688 | 0.8571 | 0.7584 | 0.8047 | 0.9536 |
| No log | 10.0 | 240 | 0.1596 | 0.8431 | 0.7993 | 0.8206 | 0.9559 |
| No log | 11.0 | 264 | 0.1599 | 0.8563 | 0.7751 | 0.8137 | 0.9552 |
| No log | 12.0 | 288 | 0.1713 | 0.8515 | 0.7565 | 0.8012 | 0.9526 |
| No log | 13.0 | 312 | 0.1646 | 0.8394 | 0.7770 | 0.8069 | 0.9531 |
| No log | 14.0 | 336 | 0.1705 | 0.8367 | 0.7807 | 0.8077 | 0.9531 |
| No log | 15.0 | 360 | 0.1717 | 0.8236 | 0.7900 | 0.8065 | 0.9522 |
| No log | 16.0 | 384 | 0.1689 | 0.8631 | 0.7732 | 0.8157 | 0.9559 |
| No log | 17.0 | 408 | 0.1608 | 0.8835 | 0.7751 | 0.8257 | 0.9587 |
| No log | 18.0 | 432 | 0.1499 | 0.8849 | 0.7862 | 0.8327 | 0.9602 |
| No log | 19.0 | 456 | 0.1614 | 0.8846 | 0.7695 | 0.8231 | 0.9583 |
| No log | 20.0 | 480 | 0.1688 | 0.8448 | 0.7788 | 0.8104 | 0.9541 |
| 0.0983 | 21.0 | 504 | 0.1672 | 0.8482 | 0.7788 | 0.8120 | 0.9545 |
| 0.0983 | 22.0 | 528 | 0.1668 | 0.8563 | 0.7751 | 0.8137 | 0.9552 |
| 0.0983 | 23.0 | 552 | 0.1678 | 0.8545 | 0.7751 | 0.8129 | 0.9550 |
| 0.0983 | 24.0 | 576 | 0.1645 | 0.8703 | 0.7732 | 0.8189 | 0.9569 |
| 0.0983 | 25.0 | 600 | 0.1642 | 0.8721 | 0.7732 | 0.8197 | 0.9571 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
sercetexam9/xlm-roberta-base-arq-finetuned-augmentation-LUNAR | sercetexam9 | 2025-01-28T09:57:33Z | 25 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-28T09:48:24Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xlm-roberta-base-arq-finetuned-augmentation-LUNAR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-arq-finetuned-augmentation-LUNAR
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5420
- F1: 0.5356
- Roc Auc: 0.6725
- Accuracy: 0.2304
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.6462 | 1.0 | 58 | 0.6017 | 0.1036 | 0.5140 | 0.1522 |
| 0.5961 | 2.0 | 116 | 0.5969 | 0.0395 | 0.5107 | 0.1348 |
| 0.5849 | 3.0 | 174 | 0.5930 | 0.0240 | 0.5065 | 0.1261 |
| 0.5928 | 4.0 | 232 | 0.5806 | 0.1794 | 0.5377 | 0.1565 |
| 0.552 | 5.0 | 290 | 0.5605 | 0.2082 | 0.5577 | 0.1696 |
| 0.5449 | 6.0 | 348 | 0.5546 | 0.2651 | 0.5744 | 0.1957 |
| 0.541 | 7.0 | 406 | 0.5525 | 0.2456 | 0.5679 | 0.1739 |
| 0.4948 | 8.0 | 464 | 0.5311 | 0.3354 | 0.5972 | 0.2217 |
| 0.4643 | 9.0 | 522 | 0.5414 | 0.3664 | 0.6102 | 0.2087 |
| 0.4264 | 10.0 | 580 | 0.5234 | 0.4415 | 0.6267 | 0.1870 |
| 0.4062 | 11.0 | 638 | 0.5514 | 0.4091 | 0.6172 | 0.1826 |
| 0.3899 | 12.0 | 696 | 0.5293 | 0.4375 | 0.6264 | 0.2000 |
| 0.3657 | 13.0 | 754 | 0.5280 | 0.4947 | 0.6531 | 0.1913 |
| 0.3385 | 14.0 | 812 | 0.5429 | 0.5122 | 0.6641 | 0.2130 |
| 0.3162 | 15.0 | 870 | 0.5428 | 0.5211 | 0.6682 | 0.2043 |
| 0.2809 | 16.0 | 928 | 0.5431 | 0.5266 | 0.6682 | 0.2304 |
| 0.3018 | 17.0 | 986 | 0.5426 | 0.5309 | 0.6702 | 0.2304 |
| 0.2953 | 18.0 | 1044 | 0.5420 | 0.5356 | 0.6725 | 0.2304 |
| 0.2768 | 19.0 | 1102 | 0.5423 | 0.5240 | 0.6667 | 0.2217 |
| 0.269 | 20.0 | 1160 | 0.5416 | 0.5278 | 0.6687 | 0.2261 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
lesso13/7f758bdd-9295-4a45-af47-878b53f57e53 | lesso13 | 2025-01-28T09:56:32Z | 9 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.2",
"base_model:adapter:unsloth/mistral-7b-v0.2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T09:30:07Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7f758bdd-9295-4a45-af47-878b53f57e53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.2
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 58c39f4a37462112_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/58c39f4a37462112_train_data.json
type:
field_instruction: persona
field_output: summary_label
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso13/7f758bdd-9295-4a45-af47-878b53f57e53
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/58c39f4a37462112_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 63cadfa7-fc59-41d2-b1c2-106d49e2612d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 63cadfa7-fc59-41d2-b1c2-106d49e2612d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7f758bdd-9295-4a45-af47-878b53f57e53
This model is a fine-tuned version of [unsloth/mistral-7b-v0.2](https://huggingface.co/unsloth/mistral-7b-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.3509 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kostiantynk1205/31345906-d72c-4c00-811f-8a19db2a0b3b | kostiantynk1205 | 2025-01-28T09:56:09Z | 9 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.2",
"base_model:adapter:unsloth/mistral-7b-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T09:54:46Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 31345906-d72c-4c00-811f-8a19db2a0b3b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 58c39f4a37462112_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/58c39f4a37462112_train_data.json
type:
field_instruction: persona
field_output: summary_label
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/31345906-d72c-4c00-811f-8a19db2a0b3b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/58c39f4a37462112_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 63cadfa7-fc59-41d2-b1c2-106d49e2612d
wandb_project: Birthday-SN56-23-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 63cadfa7-fc59-41d2-b1c2-106d49e2612d
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 31345906-d72c-4c00-811f-8a19db2a0b3b
This model is a fine-tuned version of [unsloth/mistral-7b-v0.2](https://huggingface.co/unsloth/mistral-7b-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | nan |
| 0.0 | 0.0228 | 13 | nan |
| 0.0 | 0.0456 | 26 | nan |
| 0.0 | 0.0684 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
SzegedAI/huDeBERTa-aux-free-MLSM | SzegedAI | 2025-01-28T09:53:50Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"deberta",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-01-28T09:51:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ibm-research/materials.selfies-ted2m | ibm-research | 2025-01-28T09:52:43Z | 17 | 2 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"chemistry",
"feature-extraction",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-01-28T08:44:19Z | ---
license: apache-2.0
library_name: transformers
pipeline_tag: feature-extraction
tags:
- chemistry
- transformers
---
# selfies-ted2m
selfies-ted is a transformer-based encoder-decoder model for molecular representations using SELFIES. This is a 2.2M-parameter version of the model. For the full-sized version and more information on the architecture, see [selfies-ted](https://huggingface.co/ibm-research/materials.selfies-ted).
This version also includes a projection layer that converts the last hidden state of the BART model (a 256-dimensional vector per token) into a single 128-dimensional vector for the whole SELFIES sequence.
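A minimal feature-extraction sketch is given below. It assumes the checkpoint loads with the standard `transformers` auto classes and that SELFIES strings tokenize directly; mean pooling over the encoder states is used here only as a stand-in, since the exact entry point for the built-in 128-dimensional projection head may differ.

```python
# Rough sketch, not the official API: extract per-token encoder embeddings and
# mean-pool them into one sequence-level vector.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "ibm-research/materials.selfies-ted2m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

selfies = "[C][C][O]"  # SELFIES string for ethanol (illustrative input)
inputs = tokenizer(selfies, return_tensors="pt")

with torch.no_grad():
    encoder_out = model.get_encoder()(**inputs)

token_embeddings = encoder_out.last_hidden_state  # 256-dim vector per token
embedding = token_embeddings.mean(dim=1)          # simple pooled stand-in
print(embedding.shape)
```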
|
tuanna08go/b29e4e1e-78ad-42cf-81f9-4f4b5b7bd1e5 | tuanna08go | 2025-01-28T09:51:57Z | 12 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.2",
"base_model:adapter:unsloth/mistral-7b-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T09:41:30Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b29e4e1e-78ad-42cf-81f9-4f4b5b7bd1e5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 58c39f4a37462112_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/58c39f4a37462112_train_data.json
type:
field_instruction: persona
field_output: summary_label
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/b29e4e1e-78ad-42cf-81f9-4f4b5b7bd1e5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/58c39f4a37462112_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 63cadfa7-fc59-41d2-b1c2-106d49e2612d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 63cadfa7-fc59-41d2-b1c2-106d49e2612d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b29e4e1e-78ad-42cf-81f9-4f4b5b7bd1e5
This model is a fine-tuned version of [unsloth/mistral-7b-v0.2](https://huggingface.co/unsloth/mistral-7b-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 3.1715 |
| 8.0059 | 0.0175 | 10 | 1.0427 |
| 1.7577 | 0.0351 | 20 | 0.3701 |
| 1.2617 | 0.0526 | 30 | 0.3007 |
| 1.0334 | 0.0702 | 40 | 0.2667 |
| 0.7934 | 0.0877 | 50 | 0.2551 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
driwnet/mental-LongFormer-large-es | driwnet | 2025-01-28T09:51:41Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"longformer",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-01-28T09:49:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/LunarPass-1-i1-GGUF | mradermacher | 2025-01-28T09:49:11Z | 423 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Sakalti/LunarPass-1",
"base_model:quantized:Sakalti/LunarPass-1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-24T21:37:24Z | ---
base_model: Sakalti/LunarPass-1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Sakalti/LunarPass-1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/LunarPass-1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-IQ1_S.gguf) | i1-IQ1_S | 1.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-IQ2_M.gguf) | i1-IQ2_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-Q2_K.gguf) | i1-Q2_K | 2.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-IQ3_S.gguf) | i1-IQ3_S | 2.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-Q4_0.gguf) | i1-Q4_0 | 3.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 3.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-Q4_1.gguf) | i1-Q4_1 | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/LunarPass-1-i1-GGUF/resolve/main/LunarPass-1.i1-Q6_K.gguf) | i1-Q6_K | 5.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
swinakk/mistral-7b-inst-v21 | swinakk | 2025-01-28T09:48:29Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-01-28T09:44:40Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
trenden/c0cfda90-18ce-47cb-98ba-16e1211d4a3a | trenden | 2025-01-28T09:47:46Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B",
"base_model:adapter:unsloth/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T08:46:08Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c0cfda90-18ce-47cb-98ba-16e1211d4a3a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6b5a15de73f892a3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6b5a15de73f892a3_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/c0cfda90-18ce-47cb-98ba-16e1211d4a3a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/6b5a15de73f892a3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: baf473ee-1663-4c1b-b3a8-d4763ec805bd
wandb_project: Birthday-SN56-26-Gradients-On-Demand
wandb_run: your_name
wandb_runid: baf473ee-1663-4c1b-b3a8-d4763ec805bd
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c0cfda90-18ce-47cb-98ba-16e1211d4a3a
This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: nan
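This repository contains LoRA adapter weights rather than a full model, so inference requires attaching the adapter to the base checkpoint. A minimal sketch, assuming the standard 🤗 Transformers + PEFT loading path (the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen2-0.5B"
adapter_id = "trenden/c0cfda90-18ce-47cb-98ba-16e1211d4a3a"

# Load the base model and tokenizer, then attach the LoRA adapter trained above.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Question: What is low-rank adaptation?\nAnswer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```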
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 13 | nan |
| 0.0 | 0.0002 | 26 | nan |
| 0.0 | 0.0003 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ksych/salt-wav-ru-music-3 | ksych | 2025-01-28T09:47:33Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-28T09:44:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
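In the absence of an official snippet, the following is a minimal sketch that assumes the checkpoint exposes a standard causal-LM interface (the prompt and generation settings are illustrative and may not reflect the model's intended input format):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ksych/salt-wav-ru-music-3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```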
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
daniel40/c3a6fd36-921b-42f6-9cad-2c73765aa0a6 | daniel40 | 2025-01-28T09:46:27Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | 2025-01-28T09:43:58Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c3a6fd36-921b-42f6-9cad-2c73765aa0a6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b692e267d1262b06_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b692e267d1262b06_train_data.json
type:
field_input: seed
field_instruction: problem statement
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/c3a6fd36-921b-42f6-9cad-2c73765aa0a6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/b692e267d1262b06_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 93ce94e0-3bde-4717-a525-ec2438b40353
wandb_project: Birthday-SN56-27-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 93ce94e0-3bde-4717-a525-ec2438b40353
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c3a6fd36-921b-42f6-9cad-2c73765aa0a6
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 11.9315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 11.9319 |
| 11.9315 | 0.0009 | 13 | 11.9318 |
| 11.9319 | 0.0018 | 26 | 11.9316 |
| 11.9322 | 0.0027 | 39 | 11.9315 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso17/d59e6e43-63c8-4f23-a17e-cc47cd18eb3f | lesso17 | 2025-01-28T09:46:05Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b-it",
"base_model:adapter:unsloth/gemma-2-2b-it",
"license:gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T07:20:16Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d59e6e43-63c8-4f23-a17e-cc47cd18eb3f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b-it
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 8a4feb53d103165a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8a4feb53d103165a_train_data.json
type:
field_instruction: anchor
field_output: positive
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso17/d59e6e43-63c8-4f23-a17e-cc47cd18eb3f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/8a4feb53d103165a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 496a9c6e-509a-446a-9b5f-ca8b664b6e46
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 496a9c6e-509a-446a-9b5f-ca8b664b6e46
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d59e6e43-63c8-4f23-a17e-cc47cd18eb3f
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 2.0593
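The adapter was trained with the base model loaded in 8-bit, and inference can mirror that setup. A hedged sketch; quantization via bitsandbytes is an assumption about the deployment environment, not a requirement of the adapter:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "unsloth/gemma-2-2b-it"
adapter_id = "lesso17/d59e6e43-63c8-4f23-a17e-cc47cd18eb3f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # mirrors load_in_8bit: true above
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)
```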
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0767 | 0.0037 | 200 | 2.0593 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Trelis/SmolVLM-500M-Instruct-chess | Trelis | 2025-01-28T09:45:54Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"idefics3",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-01-28T09:45:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
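No official snippet is provided; the sketch below assumes the fine-tune keeps the base SmolVLM-500M-Instruct inference pattern (AutoProcessor plus AutoModelForVision2Seq with a chat template). The image path and question are illustrative:
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "Trelis/SmolVLM-500M-Instruct-chess"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)

image = Image.open("board.png")  # hypothetical chess-board screenshot
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is the best move for White?"},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```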
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Best000/8e5f442e-7c30-47be-80a4-167bb2f4fcd0 | Best000 | 2025-01-28T09:45:36Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B",
"base_model:adapter:unsloth/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T08:44:55Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8e5f442e-7c30-47be-80a4-167bb2f4fcd0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6b5a15de73f892a3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6b5a15de73f892a3_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/8e5f442e-7c30-47be-80a4-167bb2f4fcd0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/6b5a15de73f892a3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: baf473ee-1663-4c1b-b3a8-d4763ec805bd
wandb_project: Birthday-SN56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: baf473ee-1663-4c1b-b3a8-d4763ec805bd
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8e5f442e-7c30-47be-80a4-167bb2f4fcd0
This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0000 | 3 | nan |
| 0.0 | 0.0001 | 6 | nan |
| 0.0 | 0.0001 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
philip-hightech/62a16e8f-9c76-4733-bc75-f65689dcad35 | philip-hightech | 2025-01-28T09:43:23Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | 2025-01-28T09:40:54Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 62a16e8f-9c76-4733-bc75-f65689dcad35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b692e267d1262b06_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b692e267d1262b06_train_data.json
type:
field_input: seed
field_instruction: problem statement
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/62a16e8f-9c76-4733-bc75-f65689dcad35
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/b692e267d1262b06_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 93ce94e0-3bde-4717-a525-ec2438b40353
wandb_project: Mine-SN56-21-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 93ce94e0-3bde-4717-a525-ec2438b40353
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 62a16e8f-9c76-4733-bc75-f65689dcad35
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 11.9244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 11.9319 |
| 11.9305 | 0.0005 | 13 | 11.9307 |
| 11.9308 | 0.0009 | 26 | 11.9270 |
| 11.9274 | 0.0014 | 39 | 11.9244 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aleegis12/8363e907-2cf9-4110-885c-518d35d3e5c2 | aleegis12 | 2025-01-28T09:40:39Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | 2025-01-28T09:35:15Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8363e907-2cf9-4110-885c-518d35d3e5c2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- b692e267d1262b06_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b692e267d1262b06_train_data.json
type:
field_input: seed
field_instruction: problem statement
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/8363e907-2cf9-4110-885c-518d35d3e5c2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/b692e267d1262b06_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 93ce94e0-3bde-4717-a525-ec2438b40353
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 93ce94e0-3bde-4717-a525-ec2438b40353
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8363e907-2cf9-4110-885c-518d35d3e5c2
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 11.9206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.9329 | 0.0003 | 1 | 11.9319 |
| 11.9185 | 0.0139 | 50 | 11.9230 |
| 11.9172 | 0.0278 | 100 | 11.9218 |
| 11.9181 | 0.0416 | 150 | 11.9209 |
| 11.9191 | 0.0555 | 200 | 11.9206 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso17/7777ccc9-8116-4f40-b0d3-31e933def519 | lesso17 | 2025-01-28T09:40:11Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:adapter:EleutherAI/pythia-160m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T09:01:12Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7777ccc9-8116-4f40-b0d3-31e933def519
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-160m
bf16: auto
chat_template: llama3
datasets:
- data_files:
- baff8fc3dcf369b2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/baff8fc3dcf369b2_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso17/7777ccc9-8116-4f40-b0d3-31e933def519
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/baff8fc3dcf369b2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0144a1a9-447e-492a-8b56-028895fbacbc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0144a1a9-447e-492a-8b56-028895fbacbc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7777ccc9-8116-4f40-b0d3-31e933def519
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 3.0636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 13.4848 | 0.0030 | 200 | 3.0636 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
CALISTA-INDUSTRY/DeepSeek-R1-Distill-Llama-8B-FineTune | CALISTA-INDUSTRY | 2025-01-28T09:38:59Z | 556 | 1 | null | [
"pytorch",
"safetensors",
"gguf",
"llama",
"unsloth",
"deepseek_v3",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-28T08:58:11Z | ---
license: mit
tags:
- unsloth
- deepseek_v3
---
DeepSeek-R1 Release
__________________________________________________________________________________________
⚡ Performance on par with OpenAI-o1
📖 Fully open-source model & technical report
🏆 MIT licensed: Distill & commercialize freely!
🌐 Website & API are live now! Try DeepThink at chat.deepseek.com today!
__________________________________________________________________________________________
🔥 Bonus: Open-Source Distilled Models!
🔬 Distilled from DeepSeek-R1, 6 small models fully open-sourced
📏 32B & 70B models on par with OpenAI-o1-mini
🤝 Empowering the open-source community
🌍 Pushing the boundaries of open AI!
_____________________________________________________________________
🛠️ DeepSeek-R1: Technical Highlights
📈 Large-scale RL in post-training
🏆 Significant performance boost with minimal labeled data
🔢 Math, code, and reasoning tasks on par with OpenAI-o1
📄 More details: https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf
_____________________________________________________________________
🌐 API Access & Pricing
⚙️ Use DeepSeek-R1 by setting `model=deepseek-reasoner` (see the request sketch below)
💰 $0.14 / million input tokens (cache hit)
💰 $0.55 / million input tokens (cache miss)
💰 $2.19 / million output tokens
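A minimal request sketch, assuming the OpenAI-compatible client interface described in the linked API guide (the API key and prompt below are placeholders):
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",   # DeepSeek's OpenAI-compatible endpoint
    api_key="YOUR_DEEPSEEK_API_KEY",       # placeholder
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # selects DeepSeek-R1
    messages=[{"role": "user", "content": "How many prime numbers are there below 100?"}],
)
print(response.choices[0].message.content)
```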
📖 API guide: https://api-docs.deepseek.com/guides/reasoning_model |
mergekit-community/mergekit-sce-azzpiqv | mergekit-community | 2025-01-28T09:38:38Z | 10 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:mergekit-community/Deepseek-Distill-NSFW-visible-w-NSFW-FFS",
"base_model:merge:mergekit-community/Deepseek-Distill-NSFW-visible-w-NSFW-FFS",
"base_model:mergekit-community/NSFW-FFS-w-hidden-Deepseek-Distill-NSFW",
"base_model:merge:mergekit-community/NSFW-FFS-w-hidden-Deepseek-Distill-NSFW",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-28T09:34:55Z | ---
base_model:
- mergekit-community/Deepseek-Distill-NSFW-visible-w-NSFW-FFS
- mergekit-community/NSFW-FFS-w-hidden-Deepseek-Distill-NSFW
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with [mergekit-community/NSFW-FFS-w-hidden-Deepseek-Distill-NSFW](https://huggingface.co/mergekit-community/NSFW-FFS-w-hidden-Deepseek-Distill-NSFW) as the base.
### Models Merged
The following models were included in the merge:
* [mergekit-community/Deepseek-Distill-NSFW-visible-w-NSFW-FFS](https://huggingface.co/mergekit-community/Deepseek-Distill-NSFW-visible-w-NSFW-FFS)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: sce
models:
- model: mergekit-community/Deepseek-Distill-NSFW-visible-w-NSFW-FFS
base_model: mergekit-community/NSFW-FFS-w-hidden-Deepseek-Distill-NSFW
parameters:
select_topk: 1.0
dtype: bfloat16
normalize: true
```
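To reproduce the merge locally, the configuration above can be passed to mergekit. A hedged sketch, assuming the Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`) exposed by recent mergekit releases and used in community merge scripts; the `mergekit-yaml` CLI offers the same functionality if these differ in your version:
```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# The YAML block above, saved to disk as config.yaml.
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(cuda=False, copy_tokenizer=True, lazy_unpickle=True),
)
```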
|
fatihfauzan26/PEGASUS_medium | fatihfauzan26 | 2025-01-28T09:38:08Z | 33 | 1 | null | [
"safetensors",
"pegasus",
"summarization",
"id",
"dataset:fajrikoto/id_liputan6",
"base_model:google/pegasus-cnn_dailymail",
"base_model:finetune:google/pegasus-cnn_dailymail",
"license:mit",
"region:us"
] | summarization | 2025-01-28T08:20:48Z | ---
license: mit
datasets:
- fajrikoto/id_liputan6
language:
- id
metrics:
- rouge
base_model:
- google/pegasus-cnn_dailymail
pipeline_tag: summarization
---
PEGASUS Medium is a fine-tuned version of google/pegasus-cnn_dailymail, a PEGASUS checkpoint adapted to the CNN/Daily Mail dataset. This fine-tuning is tailored to abstractive summarization of Indonesian news articles using the Liputan6 dataset.
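A minimal usage sketch, assuming the standard 🤗 Transformers seq2seq API (the input article and generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "fatihfauzan26/PEGASUS_medium"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Liputan6.com, Jakarta: ..."  # an Indonesian news article
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```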
The model was fine-tuned on a subset of 100,000 samples from the Liputan6 dataset for 3 epochs, keeping training lightweight and efficient while maintaining strong summarization performance. |
shibajustfor/a943ba04-52b4-4919-b616-37aa34b19099 | shibajustfor | 2025-01-28T09:37:22Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | 2025-01-28T09:35:02Z | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a943ba04-52b4-4919-b616-37aa34b19099
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b692e267d1262b06_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b692e267d1262b06_train_data.json
type:
field_input: seed
field_instruction: problem statement
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/a943ba04-52b4-4919-b616-37aa34b19099
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/b692e267d1262b06_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 93ce94e0-3bde-4717-a525-ec2438b40353
wandb_project: Birthday-SN56-11-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 93ce94e0-3bde-4717-a525-ec2438b40353
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a943ba04-52b4-4919-b616-37aa34b19099
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 11.9315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 11.9319 |
| 11.9315 | 0.0009 | 13 | 11.9317 |
| 11.9319 | 0.0018 | 26 | 11.9316 |
| 11.9322 | 0.0027 | 39 | 11.9315 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrferr3t/b920dd70-6b56-4d48-a1de-fba8c4c2255a | mrferr3t | 2025-01-28T09:37:03Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.2",
"base_model:adapter:unsloth/mistral-7b-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T09:35:27Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b920dd70-6b56-4d48-a1de-fba8c4c2255a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 58c39f4a37462112_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/58c39f4a37462112_train_data.json
type:
field_instruction: persona
field_output: summary_label
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/b920dd70-6b56-4d48-a1de-fba8c4c2255a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 7
micro_batch_size: 2
mlflow_experiment_name: /tmp/58c39f4a37462112_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 63cadfa7-fc59-41d2-b1c2-106d49e2612d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 63cadfa7-fc59-41d2-b1c2-106d49e2612d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b920dd70-6b56-4d48-a1de-fba8c4c2255a
This model is a fine-tuned version of [unsloth/mistral-7b-v0.2](https://huggingface.co/unsloth/mistral-7b-v0.2) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: 1.7205
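For deployment without a runtime PEFT dependency, the LoRA weights can be folded into the base model. A hedged sketch using PEFT's `merge_and_unload`; the output directory is illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b-v0.2"
adapter_id = "mrferr3t/b920dd70-6b56-4d48-a1de-fba8c4c2255a"

base_model = AutoModelForCausalLM.from_pretrained(base_id)
merged = PeftModel.from_pretrained(base_model, adapter_id).merge_and_unload()

# Save a standalone checkpoint that no longer needs the adapter repo.
tokenizer = AutoTokenizer.from_pretrained(base_id)
merged.save_pretrained("./mistral-7b-b920dd70-merged")
tokenizer.save_pretrained("./mistral-7b-b920dd70-merged")
```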
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 12.333 | 0.0018 | 1 | 3.1715 |
| 12.2297 | 0.0035 | 2 | 3.1243 |
| 10.4054 | 0.0070 | 4 | 2.4358 |
| 8.0189 | 0.0105 | 6 | 1.7205 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mergekit-community/mergekit-model_stock-czbocwb | mergekit-community | 2025-01-28T09:34:41Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3",
"base_model:merge:ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3",
"base_model:EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2",
"base_model:merge:EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2",
"base_model:Sao10K/32B-Qwen2.5-Kunou-v1",
"base_model:merge:Sao10K/32B-Qwen2.5-Kunou-v1",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-28T09:18:47Z | ---
base_model:
- ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
- Sao10K/32B-Qwen2.5-Kunou-v1
- deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
- EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) as the base.
### Models Merged
The following models were included in the merge:
* [ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3](https://huggingface.co/ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3)
* [Sao10K/32B-Qwen2.5-Kunou-v1](https://huggingface.co/Sao10K/32B-Qwen2.5-Kunou-v1)
* [EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
- model: ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
- model: Sao10K/32B-Qwen2.5-Kunou-v1
merge_method: model_stock
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
parameters:
filter_wise: false
dtype: bfloat16
```
|
kostiantynk/a5b4b6ef-d349-4450-8c19-303da672823a | kostiantynk | 2025-01-28T09:30:55Z | 9 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.2",
"base_model:adapter:unsloth/mistral-7b-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T09:29:34Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a5b4b6ef-d349-4450-8c19-303da672823a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 58c39f4a37462112_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/58c39f4a37462112_train_data.json
type:
field_instruction: persona
field_output: summary_label
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/a5b4b6ef-d349-4450-8c19-303da672823a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/58c39f4a37462112_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 63cadfa7-fc59-41d2-b1c2-106d49e2612d
wandb_project: Birthday-SN56-7-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 63cadfa7-fc59-41d2-b1c2-106d49e2612d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a5b4b6ef-d349-4450-8c19-303da672823a
This model is a fine-tuned version of [unsloth/mistral-7b-v0.2](https://huggingface.co/unsloth/mistral-7b-v0.2) on the dataset described in the axolotl configuration above.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | nan |
| 0.0 | 0.0228 | 13 | nan |
| 0.0 | 0.0456 | 26 | nan |
| 0.0 | 0.0684 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ClarenceDan/5ff48cdc-fec7-43f6-9874-735621b2f9f6 | ClarenceDan | 2025-01-28T09:30:26Z | 9 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.2",
"base_model:adapter:unsloth/mistral-7b-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T09:29:32Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5ff48cdc-fec7-43f6-9874-735621b2f9f6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 58c39f4a37462112_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/58c39f4a37462112_train_data.json
type:
field_instruction: persona
field_output: summary_label
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/5ff48cdc-fec7-43f6-9874-735621b2f9f6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/58c39f4a37462112_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 63cadfa7-fc59-41d2-b1c2-106d49e2612d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 63cadfa7-fc59-41d2-b1c2-106d49e2612d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5ff48cdc-fec7-43f6-9874-735621b2f9f6
This model is a fine-tuned version of [unsloth/mistral-7b-v0.2](https://huggingface.co/unsloth/mistral-7b-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0018 | 1 | nan |
| 0.0 | 0.0053 | 3 | nan |
| 0.0 | 0.0105 | 6 | nan |
| 0.0 | 0.0158 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
laquythang/c2dca258-6fa6-4616-8a5b-7ea293e494ac | laquythang | 2025-01-28T09:29:16Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:adapter:Qwen/Qwen1.5-0.5B-Chat",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T09:28:36Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c2dca258-6fa6-4616-8a5b-7ea293e494ac
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ac959dbc2ffea936_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ac959dbc2ffea936_train_data.json
type:
field_input: choices
field_instruction: subject
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: laquythang/c2dca258-6fa6-4616-8a5b-7ea293e494ac
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ac959dbc2ffea936_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d4c59097-6bc2-449c-b8e8-c10a5e54ac40
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d4c59097-6bc2-449c-b8e8-c10a5e54ac40
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c2dca258-6fa6-4616-8a5b-7ea293e494ac
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7738
## Model description
More information needed
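Note that this repository contains a LoRA adapter trained with Axolotl (see `adapter: lora` and `base_model: Qwen/Qwen1.5-0.5B-Chat` in the config above) rather than full model weights. A minimal loading sketch with 🤗 Transformers and PEFT — only the repo id and base model come from this card; the prompt and generation settings are assumptions — might look like:
```python
# Hedged sketch: attach the LoRA adapter from this repo to its base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen1.5-0.5B-Chat"                              # base model from the Axolotl config
adapter_id = "laquythang/c2dca258-6fa6-4616-8a5b-7ea293e494ac"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)            # apply the adapter

prompt = "Ask me a question about world history."               # example prompt (assumption)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```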
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4542 | 0.9302 | 10 | 3.7745 |
| 6.9317 | 1.0698 | 11 | 3.7738 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhoxinh/7438e8e4-98a7-4cbe-b29f-3b918b478d1a | nhoxinh | 2025-01-28T09:29:13Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:adapter:Qwen/Qwen1.5-0.5B-Chat",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T09:28:34Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7438e8e4-98a7-4cbe-b29f-3b918b478d1a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ac959dbc2ffea936_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ac959dbc2ffea936_train_data.json
type:
field_input: choices
field_instruction: subject
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/7438e8e4-98a7-4cbe-b29f-3b918b478d1a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ac959dbc2ffea936_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d4c59097-6bc2-449c-b8e8-c10a5e54ac40
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d4c59097-6bc2-449c-b8e8-c10a5e54ac40
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7438e8e4-98a7-4cbe-b29f-3b918b478d1a
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4528 | 0.9302 | 10 | 3.7830 |
| 6.927 | 1.0698 | 11 | 3.7819 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrHungddddh/d89ff619-f3c7-43f1-a731-e60a82dc5309 | mrHungddddh | 2025-01-28T09:29:05Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:adapter:Qwen/Qwen1.5-0.5B-Chat",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T09:28:26Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d89ff619-f3c7-43f1-a731-e60a82dc5309
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ac959dbc2ffea936_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ac959dbc2ffea936_train_data.json
type:
field_input: choices
field_instruction: subject
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHungddddh/d89ff619-f3c7-43f1-a731-e60a82dc5309
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ac959dbc2ffea936_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d4c59097-6bc2-449c-b8e8-c10a5e54ac40
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d4c59097-6bc2-449c-b8e8-c10a5e54ac40
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d89ff619-f3c7-43f1-a731-e60a82dc5309
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7690
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4329 | 0.9302 | 10 | 3.7479 |
| 6.9507 | 1.0698 | 11 | 3.7690 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
minhnguyennnnnn/2e96e256-b4e3-4aa0-a853-00acb45f4136 | minhnguyennnnnn | 2025-01-28T09:28:54Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:adapter:Qwen/Qwen1.5-0.5B-Chat",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T09:28:29Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2e96e256-b4e3-4aa0-a853-00acb45f4136
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ac959dbc2ffea936_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ac959dbc2ffea936_train_data.json
type:
field_input: choices
field_instruction: subject
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: minhnguyennnnnn/2e96e256-b4e3-4aa0-a853-00acb45f4136
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ac959dbc2ffea936_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d4c59097-6bc2-449c-b8e8-c10a5e54ac40
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d4c59097-6bc2-449c-b8e8-c10a5e54ac40
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2e96e256-b4e3-4aa0-a853-00acb45f4136
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.451 | 0.9302 | 10 | 3.7762 |
| 6.9098 | 1.0698 | 11 | 3.7540 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso/ab4395a6-2134-45df-9cd2-98cade9ebaaf | lesso | 2025-01-28T09:28:46Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:adapter:Qwen/Qwen1.5-0.5B-Chat",
"license:other",
"region:us"
] | null | 2025-01-28T09:28:28Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ab4395a6-2134-45df-9cd2-98cade9ebaaf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ac959dbc2ffea936_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ac959dbc2ffea936_train_data.json
type:
field_input: choices
field_instruction: subject
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso/ab4395a6-2134-45df-9cd2-98cade9ebaaf
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/ac959dbc2ffea936_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d4c59097-6bc2-449c-b8e8-c10a5e54ac40
wandb_project: lesso18
wandb_run: your_name
wandb_runid: d4c59097-6bc2-449c-b8e8-c10a5e54ac40
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ab4395a6-2134-45df-9cd2-98cade9ebaaf
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4428 | 0.9302 | 10 | 3.7718 |
| 6.9159 | 1.0698 | 11 | 3.7680 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aleegis12/05ae3be5-ce09-4f81-a91a-7a0fbbc01f54 | aleegis12 | 2025-01-28T09:28:45Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:adapter:Qwen/Qwen1.5-0.5B-Chat",
"license:other",
"region:us"
] | null | 2025-01-28T09:28:18Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 05ae3be5-ce09-4f81-a91a-7a0fbbc01f54
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B-Chat
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- ac959dbc2ffea936_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ac959dbc2ffea936_train_data.json
type:
field_input: choices
field_instruction: subject
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/05ae3be5-ce09-4f81-a91a-7a0fbbc01f54
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/ac959dbc2ffea936_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d4c59097-6bc2-449c-b8e8-c10a5e54ac40
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d4c59097-6bc2-449c-b8e8-c10a5e54ac40
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 05ae3be5-ce09-4f81-a91a-7a0fbbc01f54
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.8261 | 0.3636 | 1 | 4.8928 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nttx/f24410eb-2193-44b5-a051-4acf27121d0b | nttx | 2025-01-28T09:28:37Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:adapter:Qwen/Qwen1.5-0.5B-Chat",
"license:other",
"region:us"
] | null | 2025-01-28T09:28:26Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f24410eb-2193-44b5-a051-4acf27121d0b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B-Chat
bf16: auto
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- ac959dbc2ffea936_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ac959dbc2ffea936_train_data.json
type:
field_input: choices
field_instruction: subject
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/f24410eb-2193-44b5-a051-4acf27121d0b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/ac959dbc2ffea936_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d4c59097-6bc2-449c-b8e8-c10a5e54ac40
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d4c59097-6bc2-449c-b8e8-c10a5e54ac40
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f24410eb-2193-44b5-a051-4acf27121d0b
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.3108 | 0.9091 | 5 | 4.1468 |
| 6.1148 | 1.1364 | 6 | 3.6463 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sleepdeprived3/Beepo-22B_EXL2_4bpw_H8 | sleepdeprived3 | 2025-01-28T09:26:49Z | 14 | 0 | null | [
"safetensors",
"mistral",
"en",
"base_model:mistralai/Mistral-Small-Instruct-2409",
"base_model:quantized:mistralai/Mistral-Small-Instruct-2409",
"4-bit",
"exl2",
"region:us"
] | null | 2025-01-28T08:45:35Z | ---
base_model:
- mistralai/Mistral-Small-Instruct-2409
language:
- en
---
<div align="center">
# Beepo-22B
</div>
This is a finetune done on top of https://huggingface.co/mistralai/Mistral-Small-Instruct-2409, making it less censored in general while attempting to maintain excellent instruct capabilities.

Key Features:
- **Retains Intelligence** - LR was kept low and dataset heavily pruned to avoid losing too much of the original model's intelligence.
- **Instruct prompt format supports Alpaca** - Honestly, I don't know why more models don't use it. If you are an Alpaca format lover like me, this should help. The original Mistral instruct format can still be used, but is not recommended.
- **Instruct Decensoring Applied** - You should **not** need a jailbreak for a model to obey the user. The model should always do what you tell it to. No need for weird `"Sure, I will"` or kitten-murdering-threat tricks. No abliteration was done, only finetuning. This model is not evil. It does not judge or moralize. Like a good tool, it simply obeys.
You can obtain the GGUF quantization of this model here: https://huggingface.co/concedo/Beepo-22B-GGUF
<!-- prompt-template start -->
## Prompt template: Alpaca
```
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
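For illustration, a tiny helper that wraps a request in this template (the function name and example instruction are hypothetical) could look like:
```python
# Hedged sketch: formats a user request with the Alpaca template shown above.
def build_alpaca_prompt(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n### Response:\n"

print(build_alpaca_prompt("Write a haiku about the sea."))
```
The resulting string is simply what you pass to whichever backend you run the quantized weights with.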
Please leave any feedback or issues that you may have. |
Anvitha369/Deepseek-Merged-Quantized-Model-7B | Anvitha369 | 2025-01-28T09:24:28Z | 123 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-01-28T09:18:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
demohong/7349e03e-67b6-4a91-943c-e6b6e45149db | demohong | 2025-01-28T09:23:37Z | 7 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:adapter:EleutherAI/pythia-160m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T09:01:35Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7349e03e-67b6-4a91-943c-e6b6e45149db
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- baff8fc3dcf369b2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/baff8fc3dcf369b2_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: demohong/7349e03e-67b6-4a91-943c-e6b6e45149db
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/baff8fc3dcf369b2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0144a1a9-447e-492a-8b56-028895fbacbc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0144a1a9-447e-492a-8b56-028895fbacbc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7349e03e-67b6-4a91-943c-e6b6e45149db
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 13.2035 | 0.0030 | 200 | 3.0921 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nblinh/101c7bec-38e4-4f49-94ed-2a28715fdd32 | nblinh | 2025-01-28T09:23:33Z | 7 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:adapter:EleutherAI/pythia-160m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-28T09:02:12Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 101c7bec-38e4-4f49-94ed-2a28715fdd32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- baff8fc3dcf369b2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/baff8fc3dcf369b2_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh/101c7bec-38e4-4f49-94ed-2a28715fdd32
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/baff8fc3dcf369b2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0144a1a9-447e-492a-8b56-028895fbacbc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0144a1a9-447e-492a-8b56-028895fbacbc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 101c7bec-38e4-4f49-94ed-2a28715fdd32
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1023
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 13.6375 | 0.0030 | 200 | 3.1023 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
great0001/4cde5bd3-9b1c-4189-b50f-f7559f4d1a27 | great0001 | 2025-01-28T09:22:34Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-128k",
"region:us"
] | null | 2025-01-28T09:18:02Z | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4cde5bd3-9b1c-4189-b50f-f7559f4d1a27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d6868704bda3c01e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d6868704bda3c01e_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/4cde5bd3-9b1c-4189-b50f-f7559f4d1a27
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/d6868704bda3c01e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 18c6d867-0419-462c-b0c2-2cd56ec89d17
wandb_project: Mine-SN56-20-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 18c6d867-0419-462c-b0c2-2cd56ec89d17
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4cde5bd3-9b1c-4189-b50f-f7559f4d1a27
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.9675 |
| 3.2052 | 0.0021 | 13 | 1.2318 |
| 2.4024 | 0.0042 | 26 | 1.1276 |
| 2.2512 | 0.0063 | 39 | 1.0864 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated | huihui-ai | 2025-01-28T09:22:18Z | 380 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"abliterated",
"uncensored",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-7B-Instruct-1M",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct-1M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-28T06:10:41Z | ---
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-7B-Instruct-1M
tags:
- chat
- abliterated
- uncensored
library_name: transformers
---
# huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated
This is an uncensored version of [Qwen/Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it).
This is a crude, proof-of-concept implementation to remove refusals from an LLM without using TransformerLens.
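If you prefer loading it directly with 🤗 Transformers, a minimal usage sketch — the example prompt and generation settings below are assumptions, not taken from this card — might look like:
```python
# Hedged sketch: standard Qwen2.5-style chat usage through transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "huihui-ai/Qwen2.5-7B-Instruct-1M-abliterated"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```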
## Use with ollama
You can use [huihui_ai/qwen2.5-1m-abliterated](https://ollama.com/huihui_ai/qwen2.5-1m-abliterated) directly:
```
ollama run huihui_ai/qwen2.5-1m-abliterated
``` |
huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated | huihui-ai | 2025-01-28T09:21:51Z | 864 | 8 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"abliterated",
"uncensored",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-14B-Instruct-1M",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct-1M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-28T02:50:03Z | ---
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-14B-Instruct-1M
tags:
- chat
- abliterated
- uncensored
library_name: transformers
---
# huihui-ai/Qwen2.5-14B-Instruct-1M-abliterated
This is an uncensored version of [Qwen/Qwen2.5-14B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct-1M) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it).
This is a crude, proof-of-concept implementation to remove refusals from an LLM without using TransformerLens.
## Use with ollama
You can use [huihui_ai/qwen2.5-1m-abliterated](https://ollama.com/huihui_ai/qwen2.5-1m-abliterated) directly:
```
ollama run huihui_ai/qwen2.5-1m-abliterated:14b
``` |
sercetexam9/xlnet-large-cased-finetuned-augmentation-LUNAR-TAPT | sercetexam9 | 2025-01-28T09:21:51Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"generated_from_trainer",
"base_model:xlnet/xlnet-large-cased",
"base_model:finetune:xlnet/xlnet-large-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-28T07:56:29Z | ---
library_name: transformers
license: mit
base_model: xlnet/xlnet-large-cased
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xlnet-large-cased-finetuned-augmentation-LUNAR-TAPT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-large-cased-finetuned-augmentation-LUNAR-TAPT
This model is a fine-tuned version of [xlnet/xlnet-large-cased](https://huggingface.co/xlnet/xlnet-large-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4904
- F1: 0.8291
- Roc Auc: 0.8646
- Accuracy: 0.6215
## Model description
More information needed
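The combination of F1, ROC AUC, and (subset) accuracy reported above suggests a multi-label classification head. A minimal, hypothetical inference sketch — the sigmoid activation and the 0.5 threshold are assumptions, not documented in this card — might look like:
```python
# Hedged sketch: assumes one sigmoid output per label (multi-label setup).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "sercetexam9/xlnet-large-cased-finetuned-augmentation-LUNAR-TAPT"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "Example sentence to classify."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]                                  # per-label probabilities
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```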
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3732 | 1.0 | 318 | 0.3677 | 0.6417 | 0.7249 | 0.4227 |
| 0.291 | 2.0 | 636 | 0.2986 | 0.7666 | 0.8238 | 0.5426 |
| 0.2296 | 3.0 | 954 | 0.2937 | 0.7774 | 0.8291 | 0.5552 |
| 0.1332 | 4.0 | 1272 | 0.3269 | 0.7980 | 0.8559 | 0.5797 |
| 0.0964 | 5.0 | 1590 | 0.3768 | 0.7977 | 0.8473 | 0.5505 |
| 0.0618 | 6.0 | 1908 | 0.4196 | 0.7833 | 0.8416 | 0.5552 |
| 0.0356 | 7.0 | 2226 | 0.4305 | 0.8041 | 0.8509 | 0.5726 |
| 0.0214 | 8.0 | 2544 | 0.4510 | 0.8112 | 0.8482 | 0.5883 |
| 0.0196 | 9.0 | 2862 | 0.4708 | 0.8118 | 0.8582 | 0.5970 |
| 0.0111 | 10.0 | 3180 | 0.4950 | 0.8174 | 0.8590 | 0.5994 |
| 0.0124 | 11.0 | 3498 | 0.5083 | 0.8094 | 0.8572 | 0.5852 |
| 0.0079 | 12.0 | 3816 | 0.4904 | 0.8291 | 0.8646 | 0.6215 |
| 0.0062 | 13.0 | 4134 | 0.5218 | 0.8155 | 0.8578 | 0.5954 |
| 0.001 | 14.0 | 4452 | 0.5225 | 0.8194 | 0.8636 | 0.6073 |
| 0.0024 | 15.0 | 4770 | 0.5248 | 0.8244 | 0.8646 | 0.6088 |
| 0.0012 | 16.0 | 5088 | 0.5259 | 0.8235 | 0.8652 | 0.6073 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
TArtx/parler-tts-mini-v1-finetuned-12 | TArtx | 2025-01-28T09:21:46Z | 73 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-15T20:19:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
THU-KEG/OpenSAE-LLaMA-3.1-Layer_05-shift_back | THU-KEG | 2025-01-28T09:20:39Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-01-28T09:08:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sakalti/SabaVL1-2B | Sakalti | 2025-01-28T09:17:39Z | 88 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2-VL-2B-Instruct",
"base_model:finetune:unsloth/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-12-16T22:09:59Z | ---
base_model: unsloth/Qwen2-VL-2B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
- sft
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
inference: true
---
# Uploaded model
- **Developed by:** Sakalti
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-VL-2B-Instruct
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
Yntec/AgainstTheWorld | Yntec | 2025-01-28T09:16:27Z | 5,821 | 0 | diffusers | [
"diffusers",
"safetensors",
"Base Model",
"Art",
"Realism",
"Photo",
"Photorealistic",
"Portrait",
"wildzzz",
"tin18688783",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"base_model:Yntec/IncredibleWorld2",
"base_model:merge:Yntec/IncredibleWorld2",
"base_model:digiplay/AgainMix_v2.0",
"base_model:merge:digiplay/AgainMix_v2.0",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-01-27T09:41:06Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Base Model
- Art
- Realism
- Photo
- Photorealistic
- Portrait
- wildzzz
- tin18688783
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
base_model:
- Yntec/IncredibleWorld2
- digiplay/AgainMix_v2.0
base_model_relation: merge
---
# Against the World
A mix of AgainMix V2 and Incredible World 3, with a bit of Incredible World 2! Showcase and prompts (all use seed 9119):

analog style 70s color photograph of young Bruce Willis as John McClane hugging princess Leia, star wars behind the scenes

Girl, sitting on a box of rockets, Pretty 90s EYES, background rocket, gorgeous detailed hair, Ponytail, Magazine ad, iconic, 1940, sharp focus. Illustration By KlaysMoji and artgerm and Clay Mann and and leyendecker and Dave Rapoza

Movie screenshot portrait. Dad with toddler girls. festive scene at a copper brewery with a wooden keg of cake. pretty little daughters in the center wearing jeans sitting with Santa Claus chef. Display mugs of dark beer accompanied by colorful halloween ingredients

a lemon themed hamburger, high quality
Original pages:
https://civitai.com/models/167100?modelVersionId=375171 (AgainMix 2)
https://civitai.com/models/143386?modelVersionId=177237 (Incredible World 3)
https://civitai.com/models/143386?modelVersionId=163019 (Incredible World 2)
# Recipes
- SuperMerger Weight Sum Use MBW 1,1,1,1,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0
Model A:
AgainMix V2
Model B:
IncredibleWorld3
Output:
AgainstTheWorldAlpha
- SuperMerger Weight Sum Use MBW 0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
Model A:
AgainstTheWorldAlpha
Model B:
IncredibleWorld2
Output:
AgainstTheWorld |
aleegis12/48879609-bb9a-4a9f-9983-a151d539930c | aleegis12 | 2025-01-28T09:16:13Z | 7 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:adapter:EleutherAI/pythia-160m",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T09:00:49Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 48879609-bb9a-4a9f-9983-a151d539930c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-160m
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- baff8fc3dcf369b2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/baff8fc3dcf369b2_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/48879609-bb9a-4a9f-9983-a151d539930c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/baff8fc3dcf369b2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0144a1a9-447e-492a-8b56-028895fbacbc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0144a1a9-447e-492a-8b56-028895fbacbc
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 48879609-bb9a-4a9f-9983-a151d539930c
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 16.5184 | 0.0001 | 1 | 4.8207 |
| 14.7135 | 0.0030 | 50 | 3.2014 |
| 13.9811 | 0.0059 | 100 | 3.1571 |
| 13.837 | 0.0089 | 150 | 3.0583 |
| 14.5914 | 0.0118 | 200 | 3.0535 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
robiulawaldev/b05faaba-a177-441d-8fd1-b7166016c766 | robiulawaldev | 2025-01-28T09:14:11Z | 9 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:adapter:EleutherAI/pythia-160m",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T09:02:08Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b05faaba-a177-441d-8fd1-b7166016c766
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- baff8fc3dcf369b2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/baff8fc3dcf369b2_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiulawaldev/b05faaba-a177-441d-8fd1-b7166016c766
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: constant
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/baff8fc3dcf369b2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0144a1a9-447e-492a-8b56-028895fbacbc
wandb_project: Birthday-SN56-36-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0144a1a9-447e-492a-8b56-028895fbacbc
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b05faaba-a177-441d-8fd1-b7166016c766
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 4.4916 |
| 8.2233 | 0.0001 | 13 | 3.3707 |
| 6.7905 | 0.0002 | 26 | 3.2726 |
| 6.74 | 0.0003 | 39 | 3.6076 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sabafallah/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF | sabafallah | 2025-01-28T09:12:19Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-28T09:12:11Z | ---
license: mit
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
---
# sabafallah/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sabafallah/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sabafallah/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sabafallah/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sabafallah/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf -c 2048
```
|
ericflo/Qwen2.5-7B-Think-KTO-v0.1 | ericflo | 2025-01-28T09:09:28Z | 167 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation",
"conversational",
"dataset:ericflo/Qwen2.5-7B-Base-Think-KTO",
"base_model:Qwen/Qwen2.5-7B",
"base_model:quantized:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-28T08:26:51Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B
library_name: transformers
datasets:
- ericflo/Qwen2.5-7B-Base-Think-KTO
---
# Qwen2.5-Think-KTO v0.1: A Reasoning-Enhanced Language Model
**NOTE**: This model is currently undertrained and needs some coaxing to output `<think>...</think>` tags.
## What's New in v0.1
This initial release enhances the base Qwen2.5-7B model's reasoning capabilities using Kahneman-Tversky Optimization (KTO). The model is trained on binary feedback signals indicating whether an output is desirable or undesirable for a given input.
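For illustration, the sketch below shows how this kind of binary feedback is commonly expressed with TRL's `KTOTrainer`; the example rows, the configuration values, and the `processing_class` argument are assumptions rather than the exact script used to train this checkpoint.
```python
# Hypothetical sketch of KTO training on binary feedback with TRL (not the original training script).
# Each row pairs a prompt with one sampled completion and a boolean label:
# True = desirable output, False = undesirable output.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer
train_dataset = Dataset.from_list([
    {"prompt": "What is 17 * 24?",
     "completion": "<think>17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408</think>\n408",
     "label": True},
    {"prompt": "What is 17 * 24?", "completion": "398", "label": False},
])
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")
args = KTOConfig(
    output_dir="qwen2.5-7b-think-kto",
    learning_rate=5e-6,               # values mirror the hyperparameters listed below
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    per_device_train_batch_size=5,
    num_train_epochs=3,
    max_length=3746,
    max_prompt_length=364,
)
# Depending on the TRL version, the tokenizer is passed as `processing_class` or `tokenizer`.
trainer = KTOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```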
## How It Works
The model generates responses using a simple thought-then-answer format:
```
<think>
Let me approach this step by step...
First, we need to consider X...
Then, looking at Y...
Finally, Z leads us to...
</think>
[final answer based on thought process]
```
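When consuming this format programmatically, it is convenient to split the thought from the final answer; a helper along these lines (an illustrative assumption, not something shipped with the model) handles responses with and without a think block:
```python
import re
def split_think(response: str) -> tuple[str, str]:
    """Split a '<think>...</think>' formatted response into (thought, answer)."""
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    if match is None:
        # No think block emitted; treat the whole response as the answer.
        return "", response.strip()
    thought = match.group(1).strip()
    answer = response[match.end():].strip()
    return thought, answer
thought, answer = split_think("<think>2 + 2 = 4</think>\n4")
print(answer)  # -> 4
```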
## Technical Details
### Base Architecture
- **Base Model**: Qwen2.5-7B
- **Training Approach**: Kahneman-Tversky Optimization (KTO)
- **Dataset**: Binary feedback signals (desirable/undesirable outputs)
- **Quality Control**: Programmatic validation
### Training Parameters
- **Optimization**:
- Learning Rate: 5e-6
- Scheduler: Cosine with 0.1 warmup ratio
- Optimizer: AdamW 8-bit
- Batch Size: 5 per device
- Gradient Accumulation Steps: 1
- Number of Epochs: 3
- **Model Config**:
- Max Length: 3746
- Max Prompt Length: 364
- Attention Implementation: Flash Attention 2
- Gradient Checkpointing: Enabled
- **Infrastructure**:
- Accelerate for distributed training
- Wandb logging
- LIGER optimization enabled
## What's It Good For?
✅ Tasks requiring natural thought processes
✅ Scenarios where binary feedback is available
✅ Problems benefiting from human-like reasoning
✅ Applications needing clear thought-to-answer progression
## Limitations
- Bounded by base Qwen2.5-7B capabilities
- May not generalize beyond training distribution
- First version with room for improvement
- Performance on non-reasoning tasks unchanged
- Limited by quality of binary feedback
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the published checkpoint (repo id matches this model card)
model = AutoModelForCausalLM.from_pretrained("ericflo/Qwen2.5-7B-Think-KTO-v0.1")
tokenizer = AutoTokenizer.from_pretrained("ericflo/Qwen2.5-7B-Think-KTO-v0.1")
prompt = "What are the implications of Moore's Law slowing down?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
# Generate; the model's reasoning appears inside <think>...</think> tags
output = model.generate(input_ids, max_length=512)
response = tokenizer.decode(output[0], skip_special_tokens=True)
```
## Citation
```bibtex
@misc{qwen25-think-kto,
title={Qwen2.5-Think-KTO: Enhanced Reasoning Through Human-Aware Learning},
author={Eric Florenzano},
year={2024},
howpublished={\url{https://huggingface.co/ericflo/Qwen2.5-7B-Think-KTO-v0.1}}
}
```
## Acknowledgments
This model builds on the Qwen2.5-7B base model and implements the KTO approach developed by Ethayarajh et al. Special thanks to the authors of the KTO paper and the broader AI research community for their contributions to model alignment techniques. |
vojtam/gengpt2_1024_medium | vojtam | 2025-01-28T09:06:37Z | 17 | 0 | null | [
"safetensors",
"gpt2",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-01-28T09:05:49Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
sercetexam9/roberta-large-finetuned-augmentation-LUNAR-TAPT | sercetexam9 | 2025-01-28T09:06:34Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-28T07:53:22Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: roberta-large-finetuned-augmentation-LUNAR-TAPT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-augmentation-LUNAR-TAPT
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4897
- F1: 0.8302
- Roc Auc: 0.8696
- Accuracy: 0.6338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3371 | 1.0 | 317 | 0.3025 | 0.7356 | 0.8000 | 0.5233 |
| 0.2571 | 2.0 | 634 | 0.3055 | 0.7376 | 0.7942 | 0.5572 |
| 0.1848 | 3.0 | 951 | 0.2850 | 0.7964 | 0.8431 | 0.5912 |
| 0.124 | 4.0 | 1268 | 0.3223 | 0.7738 | 0.8164 | 0.5635 |
| 0.0701 | 5.0 | 1585 | 0.3219 | 0.8091 | 0.8597 | 0.5951 |
| 0.0491 | 6.0 | 1902 | 0.3576 | 0.8148 | 0.8547 | 0.6014 |
| 0.0432 | 7.0 | 2219 | 0.3808 | 0.8216 | 0.8665 | 0.6196 |
| 0.0352 | 8.0 | 2536 | 0.3945 | 0.8278 | 0.8721 | 0.6259 |
| 0.0282 | 9.0 | 2853 | 0.4357 | 0.8173 | 0.8580 | 0.6054 |
| 0.012 | 10.0 | 3170 | 0.4670 | 0.8208 | 0.8679 | 0.5951 |
| 0.0054 | 11.0 | 3487 | 0.4864 | 0.8177 | 0.8599 | 0.6038 |
| 0.0029 | 12.0 | 3804 | 0.4882 | 0.8289 | 0.8687 | 0.6259 |
| 0.0011 | 13.0 | 4121 | 0.4897 | 0.8302 | 0.8696 | 0.6338 |
| 0.0012 | 14.0 | 4438 | 0.5079 | 0.8273 | 0.8680 | 0.6251 |
| 0.0008 | 15.0 | 4755 | 0.5146 | 0.8285 | 0.8688 | 0.6227 |
| 0.0007 | 16.0 | 5072 | 0.5100 | 0.8282 | 0.8693 | 0.6338 |
| 0.0008 | 17.0 | 5389 | 0.5158 | 0.8282 | 0.8673 | 0.6330 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
upb-nlp/RoGEC-mt0-xl | upb-nlp | 2025-01-28T09:04:31Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-01-23T14:32:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
We fine-tuned an encoder-decoder model directly on pairs of incorrect and correct sentences to serve as a baseline.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
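A minimal inference sketch, assuming the checkpoint loads as a standard Hugging Face seq2seq model and that the input is simply the ungrammatical Romanian sentence (both are assumptions; the example sentence is made up):
```python
# Hypothetical usage sketch; the plain sentence-in/sentence-out format is an assumption.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("upb-nlp/RoGEC-mt0-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("upb-nlp/RoGEC-mt0-xl")
incorrect_sentence = "Ei merge la școală în fiecare zi."  # hypothetical ungrammatical input
inputs = tokenizer(incorrect_sentence, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```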
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso06/913da51c-41d4-49d1-ac44-e61e00ac8aa4 | lesso06 | 2025-01-28T09:03:26Z | 7 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:adapter:EleutherAI/pythia-160m",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T09:02:03Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 913da51c-41d4-49d1-ac44-e61e00ac8aa4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- baff8fc3dcf369b2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/baff8fc3dcf369b2_train_data.json
type:
field_instruction: premise
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso06/913da51c-41d4-49d1-ac44-e61e00ac8aa4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/baff8fc3dcf369b2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0144a1a9-447e-492a-8b56-028895fbacbc
wandb_project: multi
wandb_run: your_name
wandb_runid: 0144a1a9-447e-492a-8b56-028895fbacbc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 913da51c-41d4-49d1-ac44-e61e00ac8aa4
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.6682 | 0.0236 | 200 | 2.9878 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |